id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---|
2305.05624 | Spontaneous Mutations from Terahertz Proton Tunneling | Protons in the gap between base pairs of the double helix store the code of
life by breaking the chiral symmetry that swaps the sense strand with its
complementary partner. When these hydrogen bonds break during replication and
transcription, pairs of protons switch sides restoring chiral symmetry and
destroying genetic information. Using time-independent second-order
perturbation theory, we show that the observed rate of such spontaneous
mutations follows in the sudden approximation for bond breaking provided
protons in bonds between bases tunnel across the gap with terahertz
frequencies. | Noah Bray-Ali | 2023-04-13T23:13:40Z | http://arxiv.org/abs/2305.05624v3 | # Spontaneous Mutations from Terahertz Proton Tunneling
###### Abstract
Protons in the gap between base pairs of the double helix store the code of life by breaking the chiral symmetry that swaps the sense strand with its complementary partner. When these hydrogen bonds break during replication and transcription, pairs of protons switch sides restoring chiral symmetry and destroying genetic information. The observed rate of such spontaneous mutations follows in the sudden approximation for bond breaking provided protons in bonds between bases tunnel across the gap with terahertz frequencies.
## I Introduction
Looking down the helical axis of deoxyribonucleic acid (DNA), we find a winding stack of pairs of nucleotide bases linking the two strands of its double helix structure [1]. Remarkably, the molecule has a chiral symmetry axis passing through the gap between base pairs and running roughly from the minor groove of the double helix to its major groove: Half a turn about the chiral axis swaps the two helices. Nevertheless, the code of life breaks this chiral symmetry and stores genes in the sequence of bases along just one helix: the sense strand.
Spontaneous mutations restore the chiral symmetry of the double helix and let genes change by shifting the tautomeric form of bases (Fig. 1). Following such a shift, hydrogen bonds form between identical atoms. In wild-type bonds, by contrast, oxygen provides the lone pair of electrons and nitrogen accepts the proton in the hydrogen bond nearest the major groove of the double helix. In mutant bonds, the pair of protons tunnels back and forth coherently across the gap like the pair of electrons in the covalent chemical bond [2]. And just as the covalent bond is spin symmetric, so also the mutant hydrogen bonds are chiral symmetric: When we split the mutant pair, there is an equal chance of finding either tautomeric form on a given base provided only that the other base has the complementary form. The genetic information is lost [3].
The chance of spontaneous mutation \(P=2.0\times 10^{-10}\) gives roughly one tautomeric shift for every few billion base pairs [4]. In the sudden approximation, the wild-type proton pair wavefunction for a base pair between adenine \(A\) and thymine \(T\) takes the form [5; 6]
\[|W\rangle=|AT\rangle+\epsilon\,|A^{*}T^{*}\rangle\,, \tag{1}\]
where, \(|AT\rangle\) is the dominant tautomeric form, \(|A^{*}T^{*}\rangle\) has tautomeric shifts in both bases, and \(\epsilon=\sqrt{2P}=2.0\times 10^{-5}\) is the probability amplitude for this to take place.
During replication and transcription, the base pair splits quickly compared to the time-scale for tautomeric shifts. From the wild-type wavefunction in Eq. (1), the chance of finding the shifted form is then simply \(\epsilon^{2}=2P\), the square of the probability amplitude for the shift. This is twice the chance of spontaneous mutation \(P\), since the shifted adenine \(A^{*}\) has equal chance of being found in either tautomeric form once it binds to wild-type cytosine \(C\). Chiral symmetry then gives the mutant proton pair wavefunction
\[|M\rangle=\frac{1}{\sqrt{2}}\,|A^{*}C\rangle+\frac{1}{\sqrt{2}}\,|AC^{*} \rangle\,. \tag{2}\]
The equal probability amplitude for the two tautomeric forms in the mutant wavefunction follows from the fact that both bases use nitrogen atoms to make hydrogen bonds.
In this Article, we show that terahertz frequency proton tunneling between base pairs creates the wild-type and mutant proton pair wavefunctions responsible for spontaneous mutation. The Methods section shows how to express the wild-type tautomeric shift probability amplitude \(\epsilon=2t^{2}/(U\Delta)\) in terms of the energy-scale \(t=h\times 1.7\) THz \(=7.0\times 10^{-3}\) eV for this proton tunneling, the energy splitting between tautomers \(\Delta=h\times 24\) THz \(=0.10\) eV, and the charge transfer energy \(U=h\times 1200\) THz \(=4.9\) eV for putting both protons on the same base [7]. The Results section argues that the stability of the gene follows from the emergence of the radio-frequency scale \(J=2t^{2}/U=h\times 480\) MHz for the protons to swap sides. The slowness of the corresponding proton swap time \(h/J=2.1\times 10^{6}\) fs compared to the femto-second time-scale \(h/\Delta=40\) fs set by the tautomeric energy splitting ensures the wild-type tautomeric shift has small probability amplitude \(\epsilon=J/\Delta=2.0\times 10^{-5}\). Conversely, the bonds between bases break during replication and transcription on the femto-second time-scales corresponding to changes in electronic structure. Charge transfer \(h/U=0.83\) fs gives a roughly similar time-scale. This leaves little time for the protons in the wild-type wavefunction to adjust to the changes. Instead, we get the chance for spontaneous mutation \(P=\epsilon^{2}/2=2.0\times 10^{-10}\) in the sudden approximation to bond breaking. We end with a Discussion of the meaning of these results for evolution and for aging [8].
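These scales can be cross-checked numerically. The short Python sketch below (illustrative only, not part of the original analysis) recomputes the amplitude \(\epsilon\), the mutation chance \(P\), and the quoted time-scales directly from the frequencies given above.

```python
# Cross-check of the energy and time scales quoted above.
# Only frequency ratios enter epsilon and P, so Planck's constant h cancels there.
h_eV_s = 4.135667696e-15      # Planck constant (eV s)

nu_J = 480e6                  # proton-pair swap frequency J/h (Hz)
nu_Delta = 24e12              # tautomer splitting Delta/h (Hz)
nu_U = 1200e12                # charge-transfer energy U/h (Hz)

epsilon = nu_J / nu_Delta     # wild-type tautomeric-shift amplitude J/Delta
P = epsilon**2 / 2            # chance of spontaneous mutation

print(f"J       = {h_eV_s * nu_J:.1e} eV")       # ~2e-6 eV
print(f"epsilon = {epsilon:.1e}")                # 2.0e-05
print(f"P       = {P:.1e}")                      # 2.0e-10
print(f"h/J     = {1e15 / nu_J:.1e} fs")         # ~2.1e6 fs, proton swap time
print(f"h/Delta = {1e15 / nu_Delta:.0f} fs")     # ~40 fs
print(f"h/U     = {1e15 / nu_U:.2f} fs")         # ~0.83 fs, charge transfer
```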
## II Methods
The wild-type proton pair wavefunction \(\ket{W}\) is the lowest energy state of the effective Hamiltonian [5; 9]
\[H_{\rm eff}=-\frac{\Delta}{2}\left(\ket{AT}\bra{AT}-\ket{A^{*}T^{*}}\bra{A^{*}T^{*}}\right)-J\left(\ket{AT}\bra{A^{*}T^{*}}+\ket{A^{*}T^{*}}\bra{AT}\right), \tag{3}\]
where, the first term splits the tautomers and the second term swaps the protons. The swap energy \(J=2\times 10^{-6}\) eV is small compared to the tautomeric splitting \(\Delta=0.10\) eV. This means we can find the wild-type wavefunction \(\ket{W}=\ket{AT}+\epsilon\ket{A^{*}T^{*}}\) by time-independent first-order perturbation theory.
Dropping terms beyond first-order in the small quantities \(J/\Delta\) and \(\epsilon\), we simply act on \(\ket{W}\) with the effective Hamiltonian \(H_{\rm eff}\ket{W}=-(\Delta/2)\ket{AT}+(\epsilon\Delta/2-J)\ket{A^{*}T^{*}}\). We then match this with the product \(-(\Delta/2)\ket{W}=-(\Delta/2)\ket{AT}-(\epsilon\Delta/2)\ket{A^{*}T^{*}}\). Physically, the matching means the energy of the proton pairs does not change at first-order in \(J/\Delta\) from the value it would have without proton swaps. Carrying out the matching, we find that the probability amplitude for proton swap \(\epsilon=J/\Delta=2\times 10^{-5}\) is small enough that we can simply stop at first-order in perturbation theory for the wild-type wavefunction.
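As a check on the first-order result, the two-level effective Hamiltonian of Eq. (3) can also be diagonalized exactly in a few lines (a minimal sketch, using the energy values quoted in the text):

```python
import numpy as np

# Two-level effective Hamiltonian of Eq. (3) in the {|AT>, |A*T*>} basis,
# using the energy values quoted in the text.
Delta = 0.10     # tautomer splitting (eV)
J = 2.0e-6       # proton-pair swap energy (eV)

H_eff = np.array([[-Delta / 2, -J],
                  [-J,          Delta / 2]])

evals, evecs = np.linalg.eigh(H_eff)
ground = evecs[:, np.argmin(evals)]
ground = ground / ground[0]            # normalize the |AT> amplitude to 1

eps_exact = ground[1]                  # exact amplitude of |A*T*>
eps_first_order = J / Delta            # first-order perturbation theory

print(f"epsilon (exact diagonalization) = {eps_exact:.3e}")        # ~2.0e-05
print(f"epsilon (first order, J/Delta)  = {eps_first_order:.3e}")
print(f"mutation chance P = eps^2/2     = {eps_exact**2 / 2:.1e}")  # ~2.0e-10
```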
Terahertz proton tunneling generates the effective Hamiltonian \(H_{\rm eff}\) at second-order in time-independent perturbation theory from the full Hamiltonian [10]
\[H=\begin{pmatrix}-\Delta/2&0&-t&-t\\ 0&\Delta/2&-t&-t\\ -t&-t&U&0\\ -t&-t&0&U\end{pmatrix} \tag{4}\]
where, the matrix acts on the column vector \((a,b,c,d)^{T}\) for the proton pair wave function \(a|AT\rangle+b|A^{*}T^{*}\rangle+c|A^{+}T^{-}\rangle+d|A^{-}T^{+}\rangle\). The zwitterion states \(|A^{+}T^{-}\rangle\) and \(|A^{-}T^{+}\rangle\) have charge transfer energy \(U=4.9\) eV compared to the tautomers \(|AT\rangle\) and \(|A^{*}T^{*}\rangle\). They arise from the tautomers when protons hop. The base on which the proton lands has positive charge from the extra proton. At the same time, the base from which the proton came is left with negative charge from the extra lone pair of electrons. The small size of the tunneling energy \(t=U/700=7\times 10^{-3}\) eV compared to the charge transfer energy means we can effectively eliminate the high-energy zwitterion states from the low-energy tautomeric physics through a simple change of basis.
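The elimination can be verified numerically by diagonalizing the full Hamiltonian of Eq. (4) and comparing the ground-state admixture of the shifted tautomer with the second-order formula \(\epsilon=2t^{2}/(U\Delta)\). The sketch below uses illustrative dimensionless test values chosen only so that \(t\ll\Delta\ll U\); they are not the physical base-pair parameters quoted in the text.

```python
import numpy as np

# Full Hamiltonian of Eq. (4) in the basis {|AT>, |A*T*>, |A+T->, |A-T+>}.
# Illustrative dimensionless test values with t << Delta << U (NOT the physical values).
Delta, t, U = 1.0, 0.05, 50.0

H = np.array([[-Delta / 2, 0.0,        -t,  -t],
              [0.0,        Delta / 2,  -t,  -t],
              [-t,         -t,          U,  0.0],
              [-t,         -t,         0.0,  U]])

evals, evecs = np.linalg.eigh(H)
ground = evecs[:, np.argmin(evals)]
ground = ground / ground[0]                  # |AT> amplitude set to 1

eps_numeric = ground[1]                      # admixture of the shifted tautomer |A*T*>
eps_formula = 2 * t**2 / (U * Delta)         # second-order result, epsilon = 2t^2/(U Delta)

print(f"epsilon (numeric)        = {eps_numeric:.3e}")   # agrees to leading order in Delta/U
print(f"epsilon (2t^2/(U Delta)) = {eps_formula:.3e}")
print(f"zwitterion amplitudes    = {ground[2]:.2e}, {ground[3]:.2e}")  # ~ t/U each
```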
In the language of quantum optics, the zwitterions have a "bright" state \(|+\rangle\) that mixes with the tautomers and a "dark" state \(|-\rangle\) that does not mix with them [11]. The dark state \(|-\rangle=(|A^{+}T^{-}\rangle-|A^{-}T^{+}\rangle)/\sqrt{2}\) has equal probability to land in either zwitterion but opposite probability amplitude. It retains the energy \(U\) that it would have without proton tunneling. Destructive interference from the two zwitterions decouples the dark state from the tautomers.
Meanwhile, the zwitterion bright state \(|+\rangle=-(\sqrt{2}t/U)(|AT\rangle+|A^{*}T^{*}\rangle)+(|A^{+}T^{-}\rangle+|A ^{-}T^{+}\rangle)/\sqrt{2}\) has equal probability amplitude for both zwitterions. It lowers its energy by \(-2J\) through constructive interference, where, \(J=2t^{2}/U=h\times 480\) MHz is the radio-frequency energy scale for proton swaps. This new scale emerges from the terahertz-frequency \(t=h\times 1.7\) THz for proton tunneling when we change basis to the zwitterion bright and dark states.
Figure 1: Pairing arrangements of adenine before (above) and after (below) it has undergone tautomeric shift. The major groove of the double helix lies above the base pairs while the minor groove lies below them. We are looking down the axis of the helix. The chiral symmetry axis runs from the minor to the major groove along a line roughly half way between the bases in each pair. After the shift, the bases pair using the same atom (N for nitrogen) on either side of the bond closest to the major groove, restoring the chiral symmetry which interchanges the sense strand and its complement. (Adapted with permission from Ref. [3]).

Coming back to the tautomers, the dominant and rare forms mix with the zwitterion states to make a new pair of tautomers whose dynamics are governed by the effective Hamiltonian in Eq. (3). The role of the dominant tautomer in the effective Hamiltonian is played by the state \(\left|AT\right\rangle+\left(t/U\right)\left(\left|A^{+}T^{-}\right\rangle+\left|A^{-}T^{+}\right\rangle\right)\). It is easy to check that this state has no overlap with the bright and dark zwitterion states [12]. Similarly, for the rare tautomer, we must take \(\left|A^{*}T^{*}\right\rangle+\left(t/U\right)\left(\left|A^{+}T^{-}\right\rangle+\left|A^{-}T^{+}\right\rangle\right)\) to ensure it too has no overlap with the states of the new zwitterion basis.
With this simple change of basis, the zwitterion bright and dark states split off from the low frequency dynamics of the tautomers relevant for spontaneous mutation. The full Hamiltonian simplifies to the new matrix [13]
\[V^{T}HV=\begin{pmatrix}-\Delta/2&J&0&0\\ J&\Delta/2&0&0\\ 0&0&U-2J&0\\ 0&0&0&U\end{pmatrix} \tag{5}\]
where, \(V\) (and \(V^{T}\)) are the matrices whose columns (rows) are the vectors that give the wavefunctions for the new basis states in terms of the original basis. The matrix now acts on column vectors that give the proton pair wavefunction in the new basis of states. The new basis is simply the new dominant tautomer, the new rare tautomer, the bright zwitterion, and the dark zwitterion.
The effective Hamiltonian from Eq. (3) appears in the upper left corner and creates proton swaps that exchange the new tautomer basis states. Meanwhile, the zwitterion bright and dark states are stationary states of the full Hamiltonian with energies \(U-2J\) and \(U\), respectively. The change of basis and the new form for the full Hamiltonian in that basis are exact up to and including terms of order \((t/U)^{2}=2.0\times 10^{-6}\). The energy \(t=U/700\) gained by terahertz proton tunneling between base pairs is small compared to the ultraviolet-scale energy \(U=4.9\) eV needed to excite electrons in the resonating rings of the nucleotide bases and accommodate the charge transfer. This means we can simply stop at second-order in the time-independent perturbation theory for the effective Hamiltonian of the tautomeric shifts that generate spontaneous mutation [14].
## III Results
The chance of spontaneous mutation \(P=(J/\Delta)^{2}/2=2.0\times 10^{-10}\) emerges from the wild-type proton pair wavefunction \(|W\rangle\) of nucleotide base pairs in DNA given in Eq. (1). The wavefunction has amplitude \(J/\Delta\) to swap protons between the bases, where, \(J=2t^{2}/U=h\times 480\) MHz is the radio-frequency energy scale that emerges from terahertz frequency \(t=h\times 1.7\) THz proton tunneling and the ultraviolet energy \(U=4.9\) eV needed to accommodate the charge transfer. When bonds between bases break during replication and transcription, the femto-second time scale for bond-breaking is fast compared to the pico-seconds needed for protons to tunnel between bases. In the sudden approximation, the protons land in the rare tautomer with probability \((J/\Delta)^{2}\approx 3\times 10^{-10}\), given by the square of the probability amplitude for the proton swap within the wild-type proton pair wavefunction.
The tautomeric shift induced by bond-breaking forces the new base pairs that form to have the mutant wavefunction \(|M\rangle\) given in Eq. (2). It has equal amplitude for a given base to be found in either tautomeric form. This restores the chiral symmetry between the two strands of the double helix and destroys the genetic information that was stored within the old base pair.
The physical picture for the proton swap proceeds in two coherent steps. First, the proton closer to the major groove, say, tunnels across the gap between bases. This costs charge transfer energy but is not forbidden by any symmetry principle. Next, the other proton, located closer to the minor groove, tunnels across the gap to gain back the charge transfer energy. At the end of the coherent second-order process, the protons have swapped.
This physical picture for spontaneous mutation coincides with that of "super-exchange" in quantum magnetism [15; 16]. In place of proton pairs, consider a pair of electrons in the partially filled \(d\)-shell (\(f\)-shell) of two transition-metal (rare-earth) ions separated by a closed shell ligand such as fluorine or oxygen in an electrically insulating, transparent salt such as manganese fluoride (MnF\({}_{2}\)). Instead of the location relative to the groove (major or minor) used by the proton pair, the electrons use their spin state, up or down. The difference in spin permits a pair of electrons to sit in the same orbital on one or the other of the two neighboring transition metal ions. This costs a large charge transfer energy but is not forbidden by any symmetry principle. Meanwhile, the spin of the electron that hops back need not match the spin of the one that hops first. In this way, the spin of the electron on neighboring sites can exchange on an energy scale that exceeds the thermal energy available at room temperature [17].
## IV Discussion
Terahertz proton tunneling in the hydrogen bonds between nucleotide base pairs within DNA generates radio-frequency proton pair swaps that drive spontaneous mutation. Yet both processes are slow compared to the femto-second scale bond-breaking during replication and transcription. The sudden splitting makes the pair choose at random which tautomer to take: the low-energy form found in free nucleotides within the cytoplasm or the high-energy form that generates mutations. Remarkably, the chance they land in the rare tautomer gives roughly twice the observed spontaneous mutation rate. The factor of two reflects the restoration of the chiral symmetry between the two strands of the double helix structure when the rare tautomer binds to a free nucleotide.
The spontaneous mutation rate acts as a molecular clock driving evolution. Each generation must replicate its genome to pass along genetic information to its descendants. However, the molecular process of replication itself generates changes in the sequence of bases within DNA. These spontaneous mutations restore the symmetry between the sense strand and its complementary partner. They destroy genetic information at random within the genome at a regular rate. The spontaneous mutation rate can be expressed in terms of the energy splitting between tautomers and the emergent energy scale for proton pair swap between nucleotides.
Spontaneous mutation also sets the life span of organisms by making cells age [8]. Each time the cell uses the code of life to live its life, it must break bonds between base pairs to read the base sequence on the sense strand. Like replication, the transcription process itself generates spontaneous mutations. The sudden splitting of the strands makes base pairs choose at random which tautomer to take.
During transcription, when a base pair in DNA lands in the rare form, the base on the sense strand then must form a mutant bond with the "wrong" base in the complementary messenger ribonucleic acid (mRNA) molecule. This mutant bond then breaks to release the single-stranded mRNA. By the chiral symmetry created within the mutant bond, though, the base on the sense strand of the DNA has an even chance of coming back as either the dominant or the rare tautomer.
To complete transcription, the sense strand base returns to bind its partner. If it returns in the dominant form, however, the bases do not bind: Both bases now have their protons close to the same groove (major or minor) of the double helix and the electrostatic repulsion between protons keeps the bases apart. To bind the strands, the transcription machinery must remove one of the bases and replace it with a free nucleotide in the dominant form.
Since the sense strand makes and breaks twice as many bonds as its partner does, the machinery prefers to use the complementary base as its template for repair. This locks in the spontaneous mutation as a mutant bond with chiral symmetry between the two strands and complete loss of genetic information. In this way, the act of using the code of life to live in fact destroys the code itself. The rate of loss of genetic information due to transcription is set by the same balance of energy scales that sets the spontaneous mutation rate. The balance sets the pace of aging and fixes the life span of a given generation.
By way of conclusion, it is worth recalling the key steps. The analysis began by recognizing that protons tunnel with terahertz frequency between the nucleotides in the base pairs that bind the double helix structure of DNA and that store the code of life. The proton that hops back across the gap, however, need not be the same one that hopped first. The resulting proton swap between base pairs leaves both bases in the pair in a rare, high-energy tautomer form. The emergent radio-frequency proton pair swap energy-scale competes with the tautomer energy splitting to set the spontaneous mutation rate. During replication and transcription, the hydrogen bonds between base pairs break fast and force the base pair to choose which form to take. This random choice drives the genetic drift between generations and leads to evolution. It also leads to the aging of organisms and sets the length of each generation.
## Acknowledgements
This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958, by the Department of Energy under grant No. DE-FG02-00ER41132, and by the Mainz Institute of Theoretical Physics within the Cluster of Excellence PRISMA+ (Project ID 39083149).
|
2306.08678 | A Luminous Red Supergiant and Dusty Long-period Variable Progenitor for
SN 2023ixf | We analyze pre-explosion near- and mid-infrared (IR) imaging of the site of
SN 2023ixf in the nearby spiral galaxy M101 and characterize the candidate
progenitor star. The star displays compelling evidence of variability with a
possible period of $\approx$1000 days and an amplitude of $\Delta m \approx
0.6$ mag in extensive monitoring with the Spitzer Space Telescope since 2004,
likely indicative of radial pulsations. Variability consistent with this period
is also seen in the near-IR $J$ and $K_{s}$ bands between 2010 and 2023, up to
just 10 days before the explosion. Beyond the periodic variability, we do not
find evidence for any IR-bright pre-supernova outbursts in this time period.
The IR brightness ($M_{K_s} = -10.7$ mag) and color ($J-K_{s} = 1.6$ mag) of
the star suggest a luminous and dusty red supergiant. Modeling of the
phase-averaged spectral energy distribution (SED) yields constraints on the
stellar temperature ($T_{\mathrm{eff}} = 3500_{-1400}^{+800}$ K) and luminosity
($\log L/L_{\odot} = 5.1\pm0.2$). This places the candidate among the most
luminous Type II supernova progenitors with direct imaging constraints, with
the caveat that many of these rely only on optical measurements. Comparison
with stellar evolution models gives an initial mass of $M_{\mathrm{init}} =
17\pm4 M_{\odot}$. We estimate the pre-supernova mass-loss rate of the star
between 3 and 19 yr before explosion from the SED modeling at $\dot M \approx
3\times10^{-5}$ to $3\times10^{-4} M_{\odot}$ yr$^{-1}$ for an assumed wind
velocity of $v_w = 10$ km s$^{-1}$, perhaps pointing to enhanced mass loss in a
pulsation-driven wind. | Jacob E. Jencson, Jeniveve Pearson, Emma R. Beasor, Ryan M. Lau, Jennifer E. Andrews, K. Azalee Bostroem, Yize Dong, Michael Engesser, Sebastian Gomez, Muryel Guolo, Emily Hoang, Griffin Hosseinzadeh, Saurabh W. Jha, Viraj Karambelkar, Mansi M. Kasliwal, Michael Lundquist, Nicolas E. Meza Retamal, Armin Rest, David J. Sand, Melissa Shahbandeh, Manisha Shrestha, Nathan Smith, Jay Strader, Stefano Valenti, Qinan Wang, Yossef Zenati | 2023-06-14T18:00:06Z | http://arxiv.org/abs/2306.08678v2 | # A Luminous Red Supergiant and Dusty Long-period Variable Progenitor for SN 2023ixf
###### Abstract
We analyze pre-explosion near- and mid-infrared (IR) imaging of the site of SN 2023ixf in the nearby spiral galaxy M101 and characterize the candidate progenitor star. The star displays compelling evidence of variability with a possible period of \(\approx\)1000 days and an amplitude of \(\Delta m\approx 0.6\) mag in extensive monitoring with the Spitzer Space Telescope since 2004, likely indicative of radial pulsations. Variability consistent with this period is also seen in the near-IR \(J\) and \(K_{s}\) bands between 2010 and 2023, up to just 10 days before the explosion. Beyond the periodic variability, we do not find evidence for any IR-bright pre-supernova outbursts in this time period. The IR brightness (\(M_{K_{s}}=-10.7\) mag) and color (\(J-K_{s}=1.6\) mag) of the star suggest a luminous and dusty red supergiant. Modeling of the phase-averaged spectral energy distribution (SED) yields constraints on the stellar temperature (\(T_{\rm eff}=3500^{+800}_{-1400}\) K) and luminosity (\(\log L/L_{\odot}=5.1\pm 0.2\)). This places the candidate among the most luminous Type II supernova progenitors with direct imaging constraints, with the caveat that many of these rely only on optical measurements. Comparison with stellar evolution models gives an initial mass of \(M_{\rm init}=17\pm 4\)\(M_{\odot}\). We estimate the pre-supernova mass-loss rate of the star between 3 and 19 yr before explosion from the SED modeling at \(\dot{M}\approx 3\times 10^{-5}\) to \(3\times 10^{-4}\)\(M_{\odot}\) yr\({}^{-1}\) for an assumed wind velocity of \(v_{w}=10\) km s\({}^{-1}\), perhaps pointing to enhanced mass loss in a pulsation-driven wind.
Supernovae(1375) -- Massive stars(732) -- Stellar mass loss(1613) -- Evolved stars(481) -- Circumstellar dust(236)
## 1 Introduction
The direct identification of core-collapse (CC) supernova (SN) progenitors in archival, pre-explosion imaging provides a vital test of our understanding of stellar evolution. To date, detections of \(\sim\)25 progenitor candidates have been reported (see, e.g., Smartt, 2015; Van Dyk, 2017 for recent reviews and references therein), with several now confirmed to have disappeared in late-time imaging (e.g., Van Dyk et al., 2023a and references therein). A great success of this decades-long effort has been the confirmation that the progenitors of the most common class of CC SNe, the hydrogen-rich Type II-plateau and II-linear SNe (SNe II-P and II-L), are massive (\(>\)8 \(M_{\odot}\)) red supergiants (RSGs), in excellent agreement with predictions from stellar evolutionary theory.
Still, unexpected questions have emerged as the sample of SN II-P and II-L progenitors has grown. Specifically, the sample seems to consist of RSGs of only modest initial masses \(\lesssim\)18 \(M_{\odot}\)(e.g., Smartt et al., 2009; Smartt, 2015) even though observed populations of RSGs extend to \(\gtrsim\)25 \(M_{\odot}\)(Humphreys and Davidson, 1979; Davies and Beasor, 2018; McDonald et al., 2022). The apparent lack of higher-mass progenitors, dubbed the "RSG problem," remains controversial. Numerous solutions have been proposed, including the direct collapse of higher-mass progenitors to black holes, effects related to the uncertain environmental or circumstellar extinction, the difficulties of connecting limited observations to uncertain stellar models, and questions regarding the statistical validity of the upper-mass limit itself (e.g., Davies et al., 2007; Kochanek et al., 2008; Smith et al., 2011; Walmswell and Eldridge, 2012; Davies and Beasor, 2018, 2020, 2020; Kochanek, 2020).
An important component of tying CC SNe to their massive progenitors is an understanding of stellar mass loss during the final evolutionary phases. Material surrounding the star as it approaches CC can dramatically affect both the appearance of the progenitor and the observable properties of the SN (e.g., Kochanek et al., 2012; Smith, 2014; Davies et al., 2022). Growing evidence from SN observations, namely early "flash" spectroscopy (Gal-Yam et al., 2014; Yaron et al., 2017; Bruch et al., 2021; Tartaglia et al., 2021), numerical light-curve modeling (Morozova et al., 2017, 2018; Subrayan et al., 2023), and instances of observed pre-SN activity (e.g., Kilpatrick and Foley, 2018; Jacobson-Galan et al., 2022; Matsumoto and Metzger, 2022 though see also evidence for progenitor stability in, e.g., Johnson et al., 2018; Tinyanont et al., 2019), all point to dense circumstellar material (CSM) around SN II progenitors. Several mechanisms for enhanced mass loss have been proposed to explain the presence of this material. These include nuclear burning instabilities, enhanced pulsation-driven winds, wave-driven mass loss, and neutrino-driven mass loss (Heger et al., 1997; Yoon and Cantiello, 2010; Arnett and Meakin, 2011; Quataert and Shiode, 2012; Shiode et al., 2013; Moriya, 2014; Shiode and Quataert, 2014; Smith and Arnett, 2014; Smith, 2014; Woosley and Heger, 2015; Fuller, 2017; Wu and Fuller, 2021).
Here, we analyze extensive pre-explosion near- and mid-infrared (IR) imaging and characterize a candidate progenitor star of SN 2023ixf. Discovered on 2023 May 19.73 (all dates UT) by Itagaki (2023) and located in M101 (\(D=6.85\pm 0.15\) Mpc; \(\mu=29.18\) mag; Riess et al., 2022), SN 2023ixf is one of the nearest SNe II of the last decade. As an exceptionally well-studied nearby galaxy, M101 has a rich archival data set, allowing us to study the photometric evolution of the progenitor in the final years before explosion--a vitally important phase for which it is rarely possible to obtain direct constraints. Intensive early monitoring of SN 2023ixf already shows evidence of dense CSM (Berger et al., 2023; Bostroem et al., 2023; Grefenstette et al., 2023; Jacobson-Galan et al., 2023; Smith et al., 2023; Teja et al., 2023; Yamanaka et al., 2023). The Milky Way extinction toward SN 2023ixf is \(E(B-V)_{\rm MW}=0.0077\) mag (Schlafly and Finkbeiner, 2011), and we adopt a foreground, host galaxy extinction of \(E(B-V)_{\rm host}=0.03\)(Smith et al., 2023). The value for the host extinction is consistent with that reported by Lundquist et al. (2023, \(E(B-V)_{\rm host}=0.031\pm 0.006\) mag). We correct for these using the extinction law of Fitzpatrick (1999) with \(R_{V}=3.1\).
## 2 Data and Observations
### Spitzer Imaging
The location of SN 2023ixf (R.A., decl.: \(14^{\rm h}03^{\rm m}38\fs 56,+54^{\circ}18^{\prime}41\farcs 9\), J2000.0) was imaged by the Spitzer Space Telescope (Werner et al., 2004; Gehrz et al., 2007) during the cold mission in all four channels (3.6, 4.5, 5.8, and 8.0 \(\mu\)m; [3.6], [4.5], [5.8], and [8.0], respectively) of the Infrared Array Camera (IRAC; Fazio et al., 2004) on 2004 March 8.32 (PI: G. Rieke, PID 60). These images were stacked into Super Mosaics1 along with images taken on 2007 December 31, but as only the 2004 images cover the SN position, we consider that date to be the effective time of these observations for our photometric measurements. As part of a previous study on IR variability in nearby galaxies, PSF-fitting photometry source catalogs were made for M101 using the Super Mosaics in all four bands (Karambelkar et al., 2019). An empirical model of the PSF for each Mosaic was made using the DAOPHOT/ALLSTAR package (Stetson, 1987), with corrections for the finite radius of the model PSF (following the method of Khan, 2017, and see Karambelkar et al., 2019 for more details). As shown in Figure 1, a source is clearly visible at the location at [3.6] and [4.5] and is detected in the PSF catalogs at [3.6] \(=17.5\pm 0.1\) and [4.5] \(=16.75\pm 0.04\) mag.2 This source was previously reported by Szalai and Van Dyk (2023), and its position was confirmed to be consistent with that of the SN by (Kilpatrick et al., 2023). We consider this star as a strong candidate progenitor of SN 2023ixf.
Footnote 1: Super Mosaics are available as Spitzer Enhanced Imaging Products through the NASA/IPAC Infrared Science Archive (SSC And IRSA, 2020).
Footnote 2: All photometry is reported on the Vega system.
There is no clear point-like source visible in the longer wavelength channels, and no detections were recovered at the position in the respective photometry catalogs. As the detection limit is likely dominated by the significant background emission, we adopt upper limits as the measured surface brightness at the position integrated over the size of the PSF plus 2 times the image rms at a position of blank sky. This yields limiting magnitudes of [5.8] \(>14.1\) and [8.0] \(>11.8\) mag, using the Vega system
zero-magnitude fluxes defined in the IRAC Instrument Handbook (IRAC Instrument And Instrument Support Teams, 2021).
The SN position was then imaged numerous times at [3.6] and [4.5] since 2012 during the warm mission by multiple programs (PI: P. Garnavich, PID 80126; PI: M. Kasliwal, PIDs 80196, 90240), including with frequent monitoring of M101 between 2014 and the end of 2019 by the SPitzer Infrared Intensive Transients Survey (SPIRITS; PI: M. Kasliwal, PIDs 10136, 11063, 13053, 14089). The post-basic calibrated data level images were downloaded from the Spitzer Heritage Archive (IRSA, 2022) and Spitzer Early Release Data Service3 and processed through an automated image-subtraction pipeline (for survey and pipeline details, see Kasliwal et al., 2017; Jencson et al., 2019). The Super Mosaics were used as template images for the subtractions. An example difference image is shown in Figure 1, demonstrating the variability of the source. We performed aperture photometry on our difference images adopting the appropriate aperture corrections from the IRAC Handbook and following the method for a robust estimate of the photometric uncertainties as described in Jencson (2020). We sum our difference flux measurements with the reference PSF-photometry measurement on the Super Mosaics, again using the handbook-defined values to convert to Vega-system magnitudes, to produce our final light curves of the source shown in Figure 2.
Footnote 3: [https://irsa.ipac.caltech.edu/data/SPITZER/Early_Release/](https://irsa.ipac.caltech.edu/data/SPITZER/Early_Release/)
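A compressed sketch of this difference-imaging photometry step is given below. The file name, pixel position, reference flux, aperture correction, and pixel solid angle are placeholders (the real values come from the pipeline, the image headers, and the IRAC Instrument Handbook); the [4.5] zero-magnitude flux of 179.7 Jy is the handbook value.

```python
import numpy as np
from astropy.io import fits
from photutils.aperture import CircularAperture, aperture_photometry

# Placeholder inputs -- file name, position, and calibration numbers are illustrative only.
diff_image = fits.getdata("m101_ch2_epoch_diff.fits")   # [4.5] difference image, MJy/sr
x_sn, y_sn = 123.4, 567.8                               # pixel position of the progenitor candidate
pix_sr = 8.46e-12          # solid angle of a 0.6" mosaic pixel in sr (check the image header)
ap_corr = 1.21             # aperture correction for the chosen radius (IRAC Instrument Handbook)
f_ref_ujy = 36.0           # reference-epoch PSF flux on the Super Mosaic (uJy), placeholder
f0_jy = 179.7              # IRAC [4.5] Vega zero-magnitude flux (Jy)

# Difference flux in a small fixed aperture at the SN position.
aper = CircularAperture([(x_sn, y_sn)], r=3.0)
phot = aperture_photometry(diff_image, aper)
f_diff_ujy = phot["aperture_sum"][0] * pix_sr * 1e12 * ap_corr   # MJy/sr -> Jy -> uJy

# Total flux = difference flux + reference flux, converted to a Vega magnitude.
mag = -2.5 * np.log10((f_diff_ujy + f_ref_ujy) * 1e-6 / f0_jy)
print(f"[4.5] = {mag:.2f} mag (Vega)")
```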
### Ground-based, Near-IR Imaging
We obtained imaging of M101 in the near-IR \(J\) and \(K_{s}\) bands with the MMT and Magellan Infrared Spectrograph (MMIRS, 0\(\farcs\)2 pixels; McLeod et al., 2012) on the 6.5 m MMT Observatory telescope on Mt. Hopkins in Arizona at multiple epochs in 2021-2023. These images were taken as part of an ongoing program to monitor the unusual variable star and failed SN candidate M101-OC1 reported by Neustadt et al. (2021) and serendipitously covered the location of SN 2023ixf prior to the explosion. The last images were taken on 2023 May 9.37, just 10.36 days before the discovery of the SN.
Each observation consisted of dithered sequences alternating between the target position on M101 and an offset blank-sky field every few minutes to allow for accurate subtraction of the bright near-IR sky background. We reduced the images using a custom pipeline4 that performs standard dark-current subtraction, flat-fielding, sky background estimation and subtraction, astrometric alignments, and final stacking of the individual exposures.
Footnote 4: Adapted from the MMIRS imaging pipeline developed by K. Paterson, available here: [https://github.com/CIERA-Transients/Imaging_pipelines](https://github.com/CIERA-Transients/Imaging_pipelines)
We also downloaded \(J\)- and \(K_{\rm cont}\)-band imaging with the Near-Infrared Imager5 (NIRI) on the 8 m Gemini-N Telescope on Maunakea from the Gemini Observatory Archive. The images were taken with the f/6 camera (0\(\farcs\)117 pixels) on 2010 April 18 (PI: Bosch; PID GN-2010A-Q-27). We reduced all the images using DRAGONS(Labrie et al., 2023), a Python-based platform for reducing Gemini data, and following the procedures for extended sources outlined in the NIRI imaging-reduction tutorial.6
Footnote 5: [http://www.gemini.edu/instrumentation/niri](http://www.gemini.edu/instrumentation/niri)
Footnote 6: [https://dragons.readthedocs.io/projects/niriimg-drttutorial/en/stable/](https://dragons.readthedocs.io/projects/niriimg-drttutorial/en/stable/)
As shown in Figure 1, a bright, point-like source is visible at the position of the SN in both \(J\) and \(K_{s}\). The star is consistent with the location of the Spitzer source, and again, the same star was identified in the NIRI imaging by Kilpatrick et al. (2023). The field-of-view (FOV) of the MMIRS imager (6\(\farcs\)9\(\times\)6\(\farcs\)9) is sufficient to calibrate the photometric zero-points using aperture photometry of relatively isolated stars in images with cataloged \(JHK_{\rm s}\)-band magnitudes in the Two Micron All Sky Survey (2MASS; Skrutskie et al., 2006). We then derived a model of the effective (e)PSF for each image by fitting bright, isolated stars using the EPSFBuilder tool of the photutils package in Astropy. We performed PSF-fitting photometry at the location of the candidate progenitor as well as for a set of approximately 60 stars spread across the images with varying degrees of crowding and galaxy-background emission. We include a low-order, two-dimensional polynomial in the fit to account for the spatially varying background for each star, taking care to avoid overfitting the data. We adopt the rms error of the fit residuals, scaled by a factor of the square root of the reduced \(\chi^{2}\) (typically \(\gtrsim 1\)) for the fit, as the nominal statistical uncertainty per pixel, and multiply by the effective footprint, or a number of "noise pixels," of the ePSF7 to obtain an estimate of the statistical uncertainty for each flux measurement. We used the set of 2MASS calibration stars to derive aperture corrections (\(\lesssim 0.1\) mag in all three filters) to place the PSF-fitting magnitudes on the scale of the image photometric zeropoints. We adopt the statistical flux uncertainty, summed in quadrature with the rms error of the stars used in estimations of the zero-point and ePSF aperture correction, as the total uncertainty in our final magnitudes. Owing to the limited number of isolated 2MASS stars, even with the large FOV of MMIRS, the zero-point rms (typically \(\approx\)0.1 mag) dominates the error budget.
Footnote 7: A derivation of this quantity is provided by F. Masci here: [http://web.ipac.caltech.edu/staff/fmasci/home/mystats/noisepix_specs.pdf](http://web.ipac.caltech.edu/staff/fmasci/home/mystats/noisepix_specs.pdf)
The FOVs of the NIRI (\(\approx\)2\({}^{\prime}\times\)2\({}^{\prime}\)) images are smaller, and there were not enough isolated 2MASS stars in
the field to do a direct calibration. We instead cross-calibrated our PSF photometry of stars in these images, performed in the same manner as described above, to a set of \(\approx\)10 common stars with the corresponding MMIRS image in the same filter (the \(K_{\rm cont}\) NIRI image is calibrated to MMIRS \(K_{s}\)). We then adopted the statistical uncertainty from the PSF fitting (as above), summed in quadrature with the zero-point uncertainty (from the standard deviation of the individual stars used in the cross-calibration) as our measurement uncertainty. All of our near-IR photometry are shown in Figure 2.
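The ePSF construction and fitting just described can be sketched roughly as follows (the file name, star list, and progenitor position are hypothetical; the background is a first-order polynomial here, and the conversion to calibrated magnitudes via the 2MASS zero-point and aperture correction is omitted):

```python
import numpy as np
from astropy.io import fits
from astropy.nddata import NDData
from astropy.table import Table
from astropy.modeling import models, fitting
from photutils.psf import extract_stars, EPSFBuilder

data = fits.getdata("mmirs_Ks_stack.fits")                                    # hypothetical stacked image
stars_tbl = Table({"x": [512.3, 611.8, 702.1], "y": [488.9, 530.2, 415.6]})   # bright, isolated stars

# Build an effective PSF (ePSF) from cutouts of the selected stars.
stars = extract_stars(NDData(data=data), stars_tbl, size=25)
epsf, _ = EPSFBuilder(oversampling=4, maxiters=10, progress_bar=False)(stars)

# Fit the ePSF plus a low-order 2D polynomial background at the candidate position.
x0, y0, half = 843.2, 905.7, 12                  # hypothetical position and cutout half-size
yy, xx = np.mgrid[int(y0) - half:int(y0) + half + 1,
                  int(x0) - half:int(x0) + half + 1]
cut = data[int(y0) - half:int(y0) + half + 1, int(x0) - half:int(x0) + half + 1]

psf = epsf.copy()
psf.x_0, psf.y_0, psf.flux = x0, y0, float(cut.sum())    # initial guesses
model = psf + models.Polynomial2D(degree=1)
fitted = fitting.LevMarLSQFitter()(model, xx, yy, cut.astype(float))
print(f"fitted flux = {fitted[0].flux.value:.1f} (image units)")
```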
## 3 Pre-Explosion Light Curves and Variability
As shown in Figure 2, the IR pre-explosion light curves extend back almost two decades prior to the explosion of SN 2023ixf. The Spitzer light curves display clear variability with an apparent periodicity of \(\approx\)3 yr and full amplitudes of 0.6 mag at [3.6] and 0.45 mag at [4.5] (as also recently reported by Kilpatrick et al., 2023). Between 2021 and 2023 May (less than two weeks before the SN), the near-IR \(J\) and \(K_{s}\)-band light curves also appear to brighten by 0.5 and 0.6 mag, respectively. It is not immediately clear whether this is part of the same periodic variability observed by Spitzer or indicative of a small outburst in the final few years before the explosion.
Figure 1: IR pre-explosion imaging of the site of SN 2023ixf. The source identified as the progenitor candidate is indicated by the white crosshairs at the center of each panel. In the left column, we show the Spitzer/IRAC [3.6] (top) and [4.5] (bottom) Super Mosaics. In the bottom center and right panels, respectively, we show the [4.5] image from 2015 June 18 and its corresponding difference image, where the Super Mosaic was used as the template for subtraction. The negative (white) flux at the position in the difference image indicates the source was fainter in 2015 than in 2004. The top center and right panels are the MMIRS \(J\)- and \(K\)-band images from 2023 April 2. The orientation and scale of each image are the same as indicated in the upper left panel.

To test for periodicity, we simultaneously fit the [3.6] and [4.5] light curves using the Lomb-Scargle method (Lomb, 1976; Scargle, 1982) implemented in Astropy, restricting our search to sinusoidal signals. The resulting periodogram peaks at a best-fitting period of \(P=1119.4\) days with Lomb-Scargle power of 0.75. The peak is split, with a nearby secondary period of \(P=967.6\) days at a lower score of 0.67. No other peak in the power spectrum has a score higher than 0.35. The sinusoidal fit (reduced \(\chi^{2}=1.02\)) provides a significantly better fit to the Spitzer data over a null hypothesis of a constant flux in each band (reduced \(\chi^{2}=3.3\)), supporting the possibility of periodic variability in the light curve. We adopt the periods at half the maximum power as upper and lower bounds for the uncertainty on the possible period, giving \(P=1119.4^{+132.4}_{-233.3}\) days. This result is consistent with that reported by Soraisam et al. (2023b) from an independent analysis of the Spitzer data.
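A simplified version of this period search can be written with Astropy's LombScargle. The sketch below mean-subtracts and concatenates the two bands, which forces a common amplitude and phase, whereas the fit described above treats the bands simultaneously with their own mean levels; the light-curve file names are placeholders.

```python
import numpy as np
from astropy.timeseries import LombScargle

# Hypothetical light-curve files with columns (MJD, mag, mag_err) for each IRAC channel.
t36, m36, e36 = np.loadtxt("ch1_lightcurve.txt", unpack=True)
t45, m45, e45 = np.loadtxt("ch2_lightcurve.txt", unpack=True)

# Subtract each band's mean and concatenate, then search for a common sinusoidal period.
t = np.concatenate([t36, t45])
m = np.concatenate([m36 - m36.mean(), m45 - m45.mean()])
e = np.concatenate([e36, e45])

ls = LombScargle(t, m, e)
freq, power = ls.autopower(minimum_frequency=1 / 3000.0, maximum_frequency=1 / 300.0)

best = np.argmax(power)
print(f"best period = {1 / freq[best]:.1f} d, Lomb-Scargle power = {power[best]:.2f}")
```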
In Figure 3, we fold all of the IR light curves with the same best-fitting period, where the phase-weighted average magnitude has been subtracted out for each band. The ground-based near-IR data agree remarkably well with the periodic cycle derived for the Spitzer light curves without any additional tuning of the parameters. This provides strong evidence that the brightening seen just prior to the explosion in the \(J\) and \(K_{s}\)-bands is part of a normal pulsation cycle of the star. We see no clear evidence for any outbursts or eruptive variability up to just 10 days before the SN.
RSGs commonly exhibit periodic light-curve variations attributed to radial pulsations (Stothers, 1969; Stothers & Leung, 1971; Guo & Li, 2002), with more luminous RSGs typically exhibiting longer periods and larger optical amplitudes (Kiss et al., 2006; Yang & Jiang, 2011, 2012; Soraisam et al., 2018). In the IR, Karambelkar et al. (2019) extended the known period-luminosity correlations for long-period variable stars (e.g., Riebel et al., 2015; Goldman et al., 2017) to higher luminosities (\(M_{[4.5]}<-12\) mag) and longer periods (\(>\)1000 days). They postulated that the brightest of these sources may be dusty RSGs or the so-called super-AGB stars from massive (\(\approx\)8-12 \(M_{\odot}\); Siess, 2007; Doherty et al., 2015, 2017) progenitors. Super-AGBs may be expected to exhibit the reddest IR colors (\([3.6]-[4.5]\gtrsim 1\) mag), longest periods (\(\gtrsim\)1500 days) and largest amplitudes \(\Delta m\gtrsim 1.5\). Given its high luminosity (\(M_{[4.5]}\approx-12.3\)), the relatively more modest color (\([3.6]-[4.5]=0.8\pm 0.1\) mag), 1000 day period, and amplitudes (\(\approx\)0.6 mag) of the SN 2023ixf progenitor candidate are likely more consistent with a dusty RSG (see also Section 3.1 below).
### IR Photometric Classification and Bolometric Correction
Figure 2: Mid- and near-IR pre-explosion light curves of the SN 2023ixf progenitor candidate. Mid-IR [3.6] and [4.5] Spitzer light curves are based on image subtraction (filled gray circles and black squares, respectively) relative to PSF photometry on the 2004 Super Mosaics (open symbols). The \(J\)- and \(K_{s}/K_{\rm cont}\)-band light curves (orange pentagons and red diamonds, respectively) from ground-based NIRI and MMIRS imaging extend up to just 10.35 days before the SN discovery, indicated by the purple dashed line. The epoch of archival 2002 HST imaging reported by Soraisam et al. (2023a), Pledger & Shara (2023), and Kilpatrick et al. (2023) is indicated by the blue downward arrow. (The data used to create this figure are available in the published article.)

Based on our periodicity analysis above, we compute foreground-extinction-corrected, phase-weighted mean magnitudes in each band of \(J=20.18\pm 0.20\) mag, \(K_{s}=18.52\pm 0.19\) mag, \([3.6]=17.67\pm 0.18\) mag, and \([4.5]=16.82\pm 0.16\) mag. To account for the uncertainty in the period and the variability amplitude in each band, we incorporate a 15% uncertainty summed in quadrature with that of the best individual measurement (the 2023 April 2 measurements from MMIRS and the 2004 Spitzer Super Mosaics). At \(M_{K_{s}}=-10.7\) mag, the star is firmly above the tip of the red giant branch (TRGB) and, moreover, is brighter than nearly all asymptotic giant branch (AGB) stars identified in nearby galaxies including the Large and Small Magellanic Clouds (L/SMCs), M31, and M33 (see, e.g., Cioni et al., 2006; Boyer et al., 2011; Massey et al., 2021, and references therein). Its near-IR color of \(J-K_{\rm s}=1.6\pm 0.28\) mag is redder than the range typically used to discriminate RSGs from luminous AGBs. As noted in Boyer et al. (2011), however, it is essentially impossible to distinguish a dusty RSG, which will be very red in \(J-K_{s}\), from an AGB star, with IR photometry alone. Massey et al. (2021) argue that stars brighter than \(M_{K_{s}}=-10\) mag are likely RSGs even at redder colors, as they are more luminous than expected for the brightest AGB stars. Based on this, and because of its likely association with the Type II SN 2023ixf, we find that the progenitor candidate is most likely an RSG that suffers additional reddening from a dense molecular wind or circumstellar dust.
The \(K_{s}\) band is useful as a luminosity indicator for RSGs, both because the effects of extinction are reduced compared to optical bands and because the bolometric correction, \(BC_{K}\), is found empirically to be constant across early-to-late M-type supergiants in Milky Way and LMC star clusters (Davies and Beasor, 2018). Assuming an M-type spectrum (\(T_{\rm eff}\lesssim 3700\) K) and adopting their value of \(BC_{K}=m_{bol}-m_{K}=3.0\) mag, we obtain bolometric luminosities of \(\log(L/L_{\odot})\approx 5.0\). Given the red colors of the star described above, the true bolometric correction may be smaller if there is excess circumstellar extinction. Still, this value can likely be viewed as a robust lower limit on the luminosity. We discuss the possible locations of the star in a Hertzsprung-Russell diagram (HRD) based on modeling of the spectral energy distribution (SED) below in Section 4.
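The arithmetic behind this luminosity estimate is compact enough to spell out (a sketch using the values quoted above; the solar absolute bolometric magnitude \(M_{\rm bol,\odot}=4.74\) mag is assumed):

```python
# Luminosity from the phase-weighted K_s magnitude and a constant K-band
# bolometric correction (BC_K = m_bol - m_K = 3.0 mag; Davies & Beasor 2018).
mu = 29.18        # distance modulus of M101 (mag)
Ks = 18.52        # phase-weighted mean K_s (mag), extinction corrected
BC_K = 3.0        # bolometric correction for M-type supergiants (mag)
M_bol_sun = 4.74  # assumed solar absolute bolometric magnitude

M_Ks = Ks - mu                    # ~ -10.7 mag
M_bol = M_Ks + BC_K               # ~ -7.7 mag
logL = (M_bol_sun - M_bol) / 2.5  # ~ 5.0
print(f"M_Ks = {M_Ks:.1f} mag, log(L/Lsun) = {logL:.2f}")
```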
## 4 SED Modeling
In Figure 4, we construct an SED of the progenitor candidate from the phase-averaged magnitude measurements in the ground-based near-IR and Spitzer bands. The photometric magnitudes were converted to luminosities, \(\lambda L_{\lambda}\), using the filter transmission curves compiled by the Spanish Virtual Observatory (SVO) Filter Profile Service8 to compute zero-point fluxes and effective wavelengths for each filter. We also show Hubble Space Telescope (HST) measurements for the progenitor candidate reported by Kilpatrick et al. (2023), which, based on our best-fitting period, may have been timed near the bottom of the pulsation cycle (See Figure 3). Given the significant uncertainty in the amplitude of any optical variability, we do not include these points in the fitting procedure described below but note that luminosity estimates based primarily on the HST data may be underestimates.
Footnote 8: Documentation for the SVO Filter Profile Service is available at [http://ivoa.net/documents/Notes/SVOFPSDAL/index.html](http://ivoa.net/documents/Notes/SVOFPSDAL/index.html)
Figure 3: IR light curves (symbols are the same as in Figure 2) folded to a best-fitting period of \(P=1119.4\) days from a joint Lomb–Scargle analysis of the [3.6] and [4.5] data. A phase-weighted average magnitude has been subtracted out for each band. The ground-based near-IR data agree remarkably well with the pulsation cycle derived from the Spitzer measurements, without any additional fine-tuning of the parameters. The inferred phases of the explosion epoch and archival HST observations are indicated by the purple dashed line and blue downward arrow, as in Figure 2.

The SED is very red, peaking in the near-IR between the \(J\) and \(K_{s}\) bands. To estimate the physical parameters of the star, we fit the SEDs with the Grid of Red supergiant and Asymptotic Giant Branch ModelS (GRAMS; Sargent et al., 2011; Srinivasan et al., 2011) using a similar procedure to that described in Jencson et al. (2022). This suite of radiative transfer models consists of a base grid of 1225 spectra from spherically symmetric shells of varying amounts of silicate dust (Ossenkopf et al., 1992, appropriate for RSGs) around stars of constant mass-loss rates computed using the dust radiative transfer code 2-Dust (Ueta and Meixner, 2003). The grid employs input PHOENIX model photospheres (Kucinskas et al., 2005, 2006) for 1 \(M_{\odot}\) stars (model spectra can be scaled for more luminous and massive, i.e., supergiant, stars) with effective temperatures, \(T_{\rm eff}\), between 2100 and 4700 K, and at a fixed subsolar metallicity9 \(\log(Z/Z_{\odot})=-0.5\) and a fixed surface gravity \(\log g=-0.5\). The amount of circumstellar dust is characterized in terms of the optical depth at 1 \(\mu\)m, \(\tau_{1}\), from which a dust mass-loss rate, \(\dot{M}_{\rm d}\) is inferred assuming a wind speed of \(v_{w}=10\,{\rm km\,s^{-1}}\). The inner radius of the dust shell, \(R_{\rm in}\), takes values of 3, 7, 11, and 15 times the stellar radius, \(R_{*}\).
Footnote 9: The metallicity of the input stellar models in GRAMS was chosen to be similar to the LMC, while a value closer to solar may be more appropriate for the environment in a large spiral galaxy of SN 2023ixf. We do not expect this to significantly affect the shape of broad-band SEDs, however, and we believe our estimates of stellar parameters (\(L\), \(T_{\rm eff}\)) will not depend strongly on this choice (see discusions in, e.g., Beasor & Davies, 2016; Van Dyk et al., 2019; Jencson et al., 2022).
For each model in the grid, we compute the scale factor that minimizes the value of \(\chi^{2}\) between the IR data points and synthetic photometry derived from the model for each filter. The 20 best models with the lowest \(\chi^{2}\) values are compared to the data in Figure 4. These models span \(T_{\rm eff}=2100\)-4300 K and \(\log L/L_{\odot}=5.01\)-5.22, while the single best-fitting model (minimum \(\chi^{2}=7.5\)) has \(T_{\rm eff}=2300\) K and \(\log L/L_{\odot}=5.08\). As a function of \(T_{\rm eff}\), the \(\chi^{2}\) distribution has a secondary local minimum (\(\chi^{2}=8.7\)) at 3500 K and a corresponding luminosity of \(\log L/L_{\odot}=5.12\).
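The scale-factor fit for each grid model has a closed form, sketched below (the GRAMS spectra, filter convolutions, and data arrays are omitted; `fit_scaled_model` is an illustrative helper, not part of the GRAMS distribution):

```python
import numpy as np

def fit_scaled_model(f_obs, sig_obs, f_mod):
    """Best multiplicative scale factor and chi^2 for one grid model.

    f_obs, sig_obs : observed band fluxes and their uncertainties
    f_mod          : synthetic photometry of the model in the same bands
    """
    # Minimizing chi^2 over the linear scale s has a closed-form solution.
    s = np.sum(f_obs * f_mod / sig_obs**2) / np.sum(f_mod**2 / sig_obs**2)
    chi2 = np.sum((f_obs - s * f_mod)**2 / sig_obs**2)
    return s, chi2

# Example usage over a grid of models (arrays not shown):
#   chi2_all = np.array([fit_scaled_model(f_obs, sig_obs, fm)[1] for fm in grid_fluxes])
#   best20 = np.argsort(chi2_all)[:20]
```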
### Position in the HRD and Initial Mass
In Figure 5, we place the progenitor candidate in the HRD based on the results of our SED modeling described above in Section 4. Several models, including the single best-fitting model at \(T_{\rm eff}=2300\) K, are colder (\(T_{\rm eff}<3000\) K) than expected for end-stage RSGs, appearing far to the right of the terminal points of the stellar tracks derived from the Mesa Isochrones and Stellar Tracks models (MIST; Choi et al., 2016, 2017, nonrotating, solar metallicity). This may be an effect of extended CSM mimicking the appearance of a cooler star, i.e., from strong molecular opacity from enhanced winds (e.g., Davies & Plez, 2021), additional extinction and excess IR emission from dust (e.g., Scicluna et al., 2015; Massey et al., 2006; Haubois et al., 2019), or a combination of both. We discuss these possibilities and the inferred mass-loss rates in more detail below in Section 4.2.
Figure 4: The SED of the SN 2023ixf progenitor candidate is shown in both panels, including phase-weighted average flux measurements from Spitzer and MMIRS (black diamonds) and the HST measurements reported by Kilpatrick et al. (2023, gray diamonds). Downward arrows indicate upper limits. In the left panel, the 20 best-fitting GRAMS models to the IR data points are shown as curves, where the color indicates \(\chi^{2}\) for the fit as indicated by the color bar. The best-fitting single model (\(T_{\rm eff}=2300\) K, \(\log L/L_{\odot}=5.07\)) is indicated as the thick dashed curve, and the corresponding synthetic photometry in the four IR bands included in the fitting are shown as the red open square symbols. A warmer model (\(T_{\rm eff}=3500\) K, \(\log L/L_{\odot}=5.12\)) at a secondary, relative minimum in the \(\chi^{2}\) distribution is shown as the thick dotted curve. In the right panel, we show two superwind models from Davies et al. (2022), with the same mapping of color to \(\chi^{2}\).

Considering this, we adopt the best-fitting warmer model (\(T_{\rm eff}>3000\) K) with \(T_{\rm eff}=3500\) K and \(\log L/L_{\odot}=5.12\) as our preferred model though the luminosity is well constrained regardless of the temperature. As shown in Figure 5, the progenitor candidate of SN 2023ixf is among the most luminous RSG progenitors of a Type II SN with direct-detection constraints (Smartt, 2015; Van Dyk, 2017; Kilpatrick et al., 2017; Kochanek et al., 2017; Kilpatrick and Foley, 2018; O'Neill et al., 2019; Rui et al., 2019; Van Dyk et al., 2019; Sollerman et al., 2021; Van Dyk et al., 2023b). We emphasize though that many of these previous estimates were derived primarily using only a few (usually HST) optical bands and may systematically underestimate the luminosities of the stars without IR constraints on the SED.
Our inferred luminosity for the SN 2023ixf progenitor candidate is a factor of \(\approx\)2 higher than that recently reported by Kilpatrick et al. (2023). We suspect this difference is attributable to two main sources: (1) modest discrepancies in the NIRI \(K_{\rm cont}\) and [4.5] magnitudes, coupled with our use of phase-averaged measurements to construct the SED, resulting in fluxes that are \(\approx\)30-40% higher in those bands and (2) our use of O-rich SED models, which contain significant flux in the silicate "bump" near 10 \(\mu\)m that is absent from the graphitic models used by Kilpatrick et al. (2023). In contrast, Soraisam et al. (2023b) infer an even higher luminosity (\(\log L/L_{\odot}=5.2\) to 5.5) by applying the period-luminosity relation for RSGs of Soraisam et al. (2018).
Comparing our range of luminosities to the end points of MIST evolutionary tracks (accounting for an additional \(\approx\)2.3% uncertainty in the distance from Riess et al., 2022), we infer an initial mass in the range \(M_{\rm init}=17\pm 4\)\(M_{\odot}\).
### Constraints on Pre-SN Mass Loss
All but one of the 20 best models have \(\tau_{1}=2.2\) and resulting dust mass-loss rates between \(\dot{M}_{\rm d}=1.6\times 10^{-7}\) and \(1.4\times 10^{-6}\)\(M_{\odot}\) yr\({}^{-1}\). A single model with \(\tau_{1}=4.45\) is the reddest model shown in Figure 4, which significantly underpredicts the \(J\)-band flux. Assuming a gas-to-dust ratio of 200 (appropriate for Milky Way RSGs; van Loon et al., 2005), this corresponds to mass-loss rates in the range \(\dot{M}=3\times 10^{-5}\) to \(3\times 10^{-4}\left(\frac{v_{w}}{10\rm~{}km~{}s^{-1}}\right)\)\(M_{\odot}\) yr\({}^{-1}\) (\(1.6\times 10^{-3}\left(\frac{v_{w}}{10\rm~{}km~{}s^{-1}}\right)\)\(M_{\odot}\) yr\({}^{-1}\) for the \(\tau_{1}=4.45\) model).
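For reference, the conversion from the fitted dust mass-loss rates to the total rates quoted above is just multiplication by the assumed gas-to-dust ratio:

```python
# Dust-to-total mass-loss conversion used above (gas-to-dust ratio of 200,
# v_w = 10 km/s as assumed by the GRAMS grid).
gas_to_dust = 200.0
mdot_dust = [1.6e-7, 1.4e-6]                          # Msun/yr, range of the 20 best models
mdot_total = [gas_to_dust * m for m in mdot_dust]
print(f"Mdot = {mdot_total[0]:.1e} to {mdot_total[1]:.1e} Msun/yr")   # ~3e-5 to 3e-4
```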
These values are elevated compared to modern estimates of mass-loss rates for normal RSGs in this luminosity range; the mass and luminosity dependent prescription of Beasor et al. (2020), for example, gives \(\dot{M}\sim 10^{-6}\)\(M_{\odot}\) yr\({}^{-1}\) for our preferred stellar parameters (\(M_{\rm init}=17\)\(M_{\odot}\), \(\log L/L_{\odot}=5.1\)). They are more in line with those of dusty, OH-IR stars (massive AGB and RSG stars exhibiting circumstellar maser emission and IR excesses) in the LMC and Galactic center/bulge (Goldman et al., 2017). The red \(J-K_{s}\) color (see Section 3.1) points to a dusty RSG with significant CSM, so the enhanced mass-loss rates inferred here are perhaps unsurprising.
Our estimates are largely consistent with those derived from early X-ray observations (\(3\times 10^{-4}\)\(M_{\odot}\) yr\({}^{-1}\) for \(v_{w}=50\) km s\({}^{-1}\); Grefenstette et al., 2023) but somewhat lower than those for confined CSM (\(\lesssim\)10\({}^{15}\) cm) from early observations of "flash" features in the spectra of SN 2023ixf (\(\dot{M}\sim 10^{-3}\)-\(10^{-2}\)\(M_{\odot}\) yr\({}^{-1}\) for \(v_{w}=50\) km s\({}^{-1}\); Bostroem et al., 2023; Jacobson-Galan et al., 2023). The exact values depend strongly on multiple assumptions (e.g., gas-to-dust ratio, in addition to \(v_{w}\)). We note as well that a larger mass of self-shielding cold dust, emitting beyond the shorter wavelength [3.6] and [4.5] IRAC channels, could also be hidden if the CSM is highly aspherical or clumpy, leading to underestimates of the mass-loss rate in the SED fitting.
Figure 5: HRD showing possible locations of the SN 2023ixf progenitor candidate. The result for the single best-fitting model from our SED analysis is shown as the large red star symbol, while the range of the 20 best models are shown by the small circles. The orange four-pointed star represents a secondary, local minimum in the \(\chi^{2}\) distribution at a more typical temperature for RSGs. The color of each point represents the \(\chi^{2}\) for the fit, as indicated by the color bar. The "\(\times\)" symbols show the locations of the star assuming early, mid, and late M-type spectra and applying the \(K\)-band bolometric corrections of Davies and Beasor (2018) to the average \(K_{s}\)-band measurement. We show stellar evolutionary tracks from MIST (nonrotating, solar metallicity) for a set of massive stars in the range \(M=8\)–\(26\)\(M_{\odot}\) as black curves for comparison. We also show the collection of directly detected SN progenitors of Types II (light gray squares) and IIb (dark gray circles; see text in Section 4.1).

Our analysis of the IR variability of the progenitor candidate over the last \(\approx\)13 yr (Section 3) indicates that the star was likely undergoing steady, long-period pulsations. Importantly, there is no indication of any outbursts between 3 and 11 yr pre-explosion in Spitzer monitoring, nor are there any large changes in the near-IR fluxes or colors up to only 10.4 days before the SN. Optical imaging also places stringent limits on any long-lasting outbursts up to \(\approx\)400 days pre-explosion (Neustadt et al., 2023). This apparent stability is inconsistent with predictions from early observations of the SN for increased activity of the progenitor over the last \(\sim\)3 yr (Jacobson-Galan et al., 2023). Instead, our findings point to a steady but enhanced wind that develops over the final \(\gtrsim\)decade of the star's life. For our assumed velocity, the extent of the CSM from such a wind lasting at least 13 yr would be \(R_{\rm CSM}>4\times 10^{14}\) cm.
We explore a "superwind" scenario (as described in Forster et al., 2018) using the models presented in Davies et al. (2022). These models use a 3800 K MARCS stellar atmosphere (Gustafsson et al., 2008) and attach a wind following a \(\rho\propto r^{-2}\) density profile. The velocity of the wind is taken to be a \(\beta\) law as a function of radius (see Eq. 2 in Davies et al., 2022). The chosen wind parameters are taken from Forster et al. (2018) and assume a superwind where \(\dot{M}=10^{-3}\)\(M_{\odot}\) yr\({}^{-1}\), \(v_{w}=10\) km s\({}^{-1}\) and \(\beta=3\). The wind is propagated out to distances of \(r_{w}=3\), 3.5, 5, 10 and 20 \(R_{*}\), where 20 \(R_{*}\) roughly corresponds to 260 yr after the onset of the superwind.
Unlike an outburst, a superwind can take hundreds of years before it begins to substantially affect the observed spectrum (see Fig. 2 in Davies et al., 2022). In Figure 4, we find that the red IR colors of the progenitor candidate are best matched by the models with \(r_{w}=10\) and 20 \(R_{*}\) (\(\chi^{2}=4.7\) and 5.7, respectively), while the other models give significantly worse fits (\(\chi^{2}\gtrsim 30\)). This would imply the superwind was launched \(\sim\)200 yr prior to SN. We note, however, that these models are dust free. The inclusion of dust, in combination with enhanced molecular opacity in the wind, may be able to produce redder spectra similar to that of the progenitor candidate without invoking these longer timescales.
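To make the geometry of such a wind concrete, the sketch below evaluates a generic \(\beta\)-law velocity profile with the superwind parameters quoted above (\(\dot{M}=10^{-3}\)\(M_{\odot}\) yr\({}^{-1}\), terminal speed 10 km s\({}^{-1}\), \(\beta=3\)) and the density that follows from mass continuity. This is not the Davies et al. (2022) implementation: the exact functional form of their Eq. 2, the launch velocity \(v_{0}\), and the use of the Stefan-Boltzmann relation to set \(R_{*}\) from our quoted \(T_{\rm eff}\) and \(L\) are assumptions made here purely for illustration.

```python
# Minimal sketch (not the Davies et al. 2022 code): a generic beta-law wind
# attached to an RSG photosphere, with density from mass continuity,
# rho(r) = Mdot / (4 pi r^2 v(r)). The launch velocity v0 and the beta-law
# functional form are assumptions; Mdot, v_inf = v_w, and beta follow the
# superwind parameters quoted in the text.
import numpy as np

MSUN, YR, KM, RSUN = 1.989e33, 3.156e7, 1e5, 6.957e10   # cgs conversions
SIGMA_SB = 5.6704e-5                                    # erg cm^-2 s^-1 K^-4

# Stellar radius from the quoted T_eff = 3500 K and log L/Lsun = 5.1
# (Stefan-Boltzmann; gives roughly 7e13 cm, i.e. ~1000 Rsun).
L = 10**5.1 * 3.828e33
r_star = np.sqrt(L / (4.0 * np.pi * SIGMA_SB * 3500.0**4))

def beta_law_velocity(r, r_star, v0, v_inf, beta):
    """Generic beta-law wind speed [cm/s] at radius r [cm]."""
    return v0 + (v_inf - v0) * (1.0 - r_star / r) ** beta

mdot = 1e-3 * MSUN / YR                                  # superwind Mdot [g/s]
r = np.linspace(1.05, 20.0, 500) * r_star                # out to 20 R*
v = beta_law_velocity(r, r_star, v0=0.1 * KM, v_inf=10.0 * KM, beta=3.0)
rho = mdot / (4.0 * np.pi * r**2 * v)                    # mass continuity

i10 = np.argmin(np.abs(r - 10.0 * r_star))
print(f"R* = {r_star/RSUN:.0f} Rsun;  rho(10 R*) = {rho[i10]:.2e} g cm^-3")

# Extent of a steady wind at the assumed v_w over the >= 13 yr of monitoring,
# consistent with the R_CSM > 4e14 cm quoted above.
print(f"R_CSM(13 yr, 10 km/s) = {10.0 * KM * 13.0 * YR:.1e} cm")
```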
## 5 Summary and Conclusions
We have identified a candidate progenitor of SN 2023ixf as a bright IR source in archival Spitzer/IRAC and serendipitous ground-based near-IR imaging of M101 with MMT/MMIRS and Gemini-N/NIRI. In Spitzer, the star displays evidence of a long \(\approx\)1000 day period since 2012, likely indicative of radial pulsations. Variations seen in the near-IR \(J\) and \(K_{s}\) between 2010 and 2023--extending just 10 days before the SN discovery--are fully consistent with the Spitzer-derived pulsation period. There is no evidence for dramatic brightening due to eruptive, pre-SN outbursts, in tension with predictions from early SN observations (e.g., Jacobson-Galan et al., 2023) and with those for instabilities on the timescales of the final nuclear burning stages (e.g. Woosley et al., 2002). The IR colors of the star are consistent with a luminous, highly evolved, and dusty RSG. Modeling of the phase-averaged SED of the star yields constraints on the stellar temperature (\(T_{\rm eff}=3500^{+800}_{-1400}\) K) and luminosity (\(\log L/L_{\odot}=5.1\pm 0.2\)), placing the candidate among the most luminous Type II SN progenitors with direct imaging constraints. Comparison with stellar evolution models indicates an initial mass of \(M_{\rm init}=17\pm 4\)\(M_{\odot}\). We estimate the pre-SN mass-loss rate of the star from the SED modeling at \(\dot{M}\approx 3\times 10^{-5}\) to \(3\times 10^{-4}\left(\frac{v_{w}}{10\ {\rm km\ s^{-1}}}\right)\)\(M_{\odot}\) yr\({}^{-1}\).
Given the inferred high initial mass and long pulsation period, detailed comparisons with late-stage RSG pulsation models (see, e.g., recent work on Betelgeuse by Saio et al., 2023) are a promising avenue to further constrain the fundamental stellar properties and inform their connection to enhanced pre-SN mass loss. Furthermore, as one of the nearest and brightest Type II SNe of the last decade, SN 2023ixf will be exceptionally well observed. Already, early multi-wavelength data sets indicate clear signatures of interaction of the SN shock wave with dense, nearby CSM (Berger et al., 2023; Bostroem et al., 2023; Grefenstette et al., 2023; Hosseinzadeh et al., 2023; Jacobson-Galan et al., 2023; Smith et al., 2023; Teja et al., 2023; Yamanaka et al., 2023; E. Zimmerman et al., in prep.). This will enable a rare chance to compare constraints on the surrounding CSM from SN observations directly to those from archival observations of the progenitor star and to recent theoretical predictions for the effects of enhanced CSM on observable properties of Type II SNe (e.g., Goldberg et al., 2020). Continued monitoring to late times will be vital to both confirm the disappearance of the candidate progenitor and constrain any late-time interaction signatures that will extend our understanding of the full pre-SN mass-loss history (as in, e.g., Rizzo Smith et al., 2023). Altogether, we expect SN 2023ixf to be a keystone object for interpreting early interaction signatures in CC SNe and connecting observations of their progenitors to the theory of massive star evolution and end-stage mass loss.
## 6 Acknowledgements
We thank B. Davies for sharing the superwind models and for illuminating discussions. We thank S. Points for advice regarding some of the near-IR photometry. We appreciate the efforts of the observing support staff of the MMT and thank them for their help in planning and obtaining the observations presented in this work. We also thank the anonymous referee for helpful comments.
Time-domain research by the University of Arizona team and D.J.S. is supported by NSF grants AST-1821987, 1813466, 1908972, & 2108032, and by the Heising-Simons Foundation under grant #20201864. S.V. and the UC Davis time-domain research team acknowledge support by NSF grant AST-2008108. J.E.A. is supported by the international Gemini Observatory, a program of NSF's NOIRLab, which is managed by the
Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation, on behalf of the Gemini partnership of Argentina, Brazil, Canada, Chile, the Republic of Korea, and the United States of America. J.S. acknowledges support from the Packard Foundation. This publication was made possible through the support of an LSSTC Catalyst Fellowship to K.A.B., funded through Grant 62192 from the John Templeton Foundation to LSST Corporation. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of LSSTC or the John Templeton Foundation. This work is based in part on archival data obtained with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. Observations reported here were obtained at the MMT Observatory, a joint facility of the University of Arizona and the Smithsonian Institution. Based on observations obtained at the Gemini Observatory (Programs GN-2010A-Q-27), which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), National Research Council (Canada), CONICYT (Chile), Ministerio de Ciencia, Tecnologia e Innovacion Productiva (Argentina), Ministerio da Ciencia, Tecnologia e Inovacao (Brazil), and Korea Astronomy and Space Science Institute (Republic of Korea). This research made use of Photutils, an Astropy package for detection and photometry of astronomical sources (Bradley et al., 2021).
Facilities: Gemini:Gillett (NIRI), MMT (MMIRS), Spitzer (IRAC)
Software: DAOPHOT/ALLSTAR (Stetson, 1987), DRAGONS (Labrie et al., 2023), Astropy ([https://www.astropy.org/](https://www.astropy.org/); Astropy Collaboration et al., 2013, 2018, 2022), photutils (Bradley et al., 2021), 2-Dust (Ueta & Meixner, 2003)
|
2306.10037 | Legal and ethical considerations regarding the use of ChatGPT in
education | Artificial intelligence has evolved enormously over the last two decades,
becoming mainstream in different scientific domains including education, where
so far, it is mainly utilized to enhance administrative and intelligent
tutoring systems services and academic support. ChatGPT, an artificial
intelligence-based chatbot, developed by OpenAI and released in November 2022,
has rapidly gained attention from the entire international community for its
impressive performance in generating comprehensive, systematic, and informative
human-like responses to user input through natural language processing.
Inevitably, it has also rapidly posed several challenges, opportunities, and
potential issues and concerns raised regarding its use across various
scientific disciplines. This paper aims to discuss the legal and ethical
implications arising from this new technology, identify potential use cases,
and enrich our understanding of Generative AI, such as ChatGPT, and its
capabilities in education. | Fereniki Panagopoulou, Christina Parpoula, Kostas Karpouzis | 2023-06-09T14:54:09Z | http://arxiv.org/abs/2306.10037v1 | # Legal and ethical considerations regarding the use of ChatGPT in education
###### Abstract
Artificial intelligence has evolved enormously over the last two decades, becoming mainstream in different scientific domains including education, where so far, it is mainly utilized to enhance administrative and intelligent tutoring systems' services and academic support. ChatGPT, an artificial intelligence-based chatbot, developed by OpenAI and released in November 2022, has rapidly gained attention from the entire international community for its impressive performance in generating comprehensive, systematic, and informative human-like responses to user input through natural language processing. Inevitably, it has also rapidly posed several challenges, opportunities, and potential issues and concerns raised regarding its use across various scientific disciplines. This paper aims to discuss the legal and ethical implications arising from this new technology, identify potential use cases, and enrich our understanding of Generative AI, such as ChatGPT, and its capabilities in education.
Artificial intelligence; ChatGPT; education; ethical issues; legal issues.
## 1 Introduction
A new technological tool is now available to us, under the guise of an application for compiling complex scientific answers with the assistance of artificial intelligence. But is this a blessing for learners and a curse for educators? As we all know, nothing in life is ever solely black or white, as things are usually a shade of grey. Thus, when it comes to this matter, too, attention and deliberation are required before making any aphorisms. What is clearly emerging, however, is a discernible change in the rules of the game [14, 20], as well as a valuable opportunity to provide a truly adaptive and meaningful learning experience. Therefore, this contribution aims to discuss the legal and ethical implications arising from this matter and propose ways in which the education community could use this emerging technology.
The rest of the paper is organized as follows. In Section 2, some clarifications on terminology are made. In Section 3, a brief literature review related to the use of ChatGPT in education is presented. In Section 4, a number of scenarios in which the underlying technology behind ChatGPT can improve the teaching and learning experience are discussed. In Section 5, the main legal issues that ChatGPT tool has posed are discussed in detail. In Section 6, multidisciplinary perspectives on opportunities, challenges and implications of ChatGPT for scientific research, practice and policy are presented. Finally, in Section 7, some concluding remarks are made. The bibliographic references are listed at the end of the paper in Section 8.
## 2 Some clarifications on terminology
Generative pre-trained transformer (GPT) technology is part of the family of Large Language Models that are used, inter alia, to compose/generate text by successively predicting words from other words, but without specifying the datasets it creates [1]. It is not, in fact, a new technology: it has been around for some years, but it is now being made available to the public for the first time, free of charge, and mature enough to be deployed in commercial applications and in disciplines outside natural language processing or content creation. ChatGPT is, in essence, a chatbot, to which the user enters a _prompt_, i.e., textual input which provides the context for the required response and additional instructions on writing style. ChatGPT then composes its response based on its training, the context of interaction and, more recently, information retrieved from the Web in real time. Its answers can be extensive and personalized, storing and integrating the history of the conversation,
and offering users the illusion of having interacted with a natural person. According to the definition provided by the software itself in response to a relevant question, ChatGPT is an artificial intelligence program that can chat with people and answer questions. Until now, the answers have been provided without reference to the sources. Hence, it could be compared to a student who has read the course material but lacks critical thinking skills (Karpouzis, 2023).
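As a concrete illustration of the prompt-plus-history interaction described above, the sketch below sends a short conversation to OpenAI's chat-completions endpoint via the official Python client. It is a minimal sketch only: the model name, the wording of the messages, and the client-interface details are illustrative assumptions rather than part of the text, and the exact calls may differ between library versions.

```python
# Minimal sketch of the prompt + conversation-history interaction described
# above, using OpenAI's Python client (interface details may vary by version;
# the model name and message wording are illustrative assumptions).
from openai import OpenAI

client = OpenAI()  # expects the OPENAI_API_KEY environment variable

messages = [
    # The "system" turn supplies context and writing-style instructions (the prompt's role).
    {"role": "system", "content": "You are a study assistant. Answer briefly and plainly."},
    # Earlier turns are resent on every call: the model itself is stateless,
    # so the caller supplies the conversation history it should integrate.
    {"role": "user", "content": "Explain what a large language model is in two sentences."},
    {"role": "assistant", "content": "A large language model predicts likely next words from context..."},
    {"role": "user", "content": "Now rewrite that explanation for a 12-year-old."},
]

response = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(response.choices[0].message.content)
```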
## 3 ChatGPT in education: selective literature review
Within five months of its public release on November 30, 2022, ChatGPT has experienced rapid growth and widespread adoption, becoming one of the most popular artificial intelligence user applications in history and so far reaching over 173 million active users. This unprecedented success has posed new challenges and possibilities to a plethora of scientific domains such as finance, healthcare, medicine, materials science and engineering, and customer management. The role of this cutting-edge piece of technology in the global education field also constitutes one of the main areas of interest and contention among academics, researchers, practitioners, and teachers worldwide, with a significant portion of them viewing ChatGPT as an alternative vehicle to improve and promote learning and to manage heavy workloads in education, while others view it as a threat to integrity that opens the door to artificial intelligence-assisted cheating and/or plagiarism (Kasnecki, 2023).
ChatGPT's impact on the sector of education and lifelong learning was first systematically explored in a review by Mhlanga (2023). Mhlanga adopted a document-analysis method for his research, and 8 ChatGPT-related articles were ultimately selected for inclusion in his investigation in order to outline the concerns and opportunities regarding ChatGPT's use in education. According to his findings, educators expressed serious concerns that students are likely to outsource their work to ChatGPT because of its capacity for content creation and rapid generation of humanlike, convincing, and comprehensible texts. Further, he emphasized the importance of taking steps to ensure that ChatGPT is used responsibly and ethically in education, and highlighted that privacy, fairness, non-discrimination, and transparency should be guaranteed. Recently, in a systematic review preprint examining 60 ChatGPT-related articles, the potential limitations and future perspectives regarding ChatGPT's use in healthcare education were investigated by Sallam (2023). The author's findings indicated that ChatGPT's benefits were noted in 85% of the records, the most frequent being its usefulness in writing academic assignments and scientific papers, while possible risks of ChatGPT's use were cited in almost 97% of the records, the most prevalent being a plethora of ethical issues (such as bias, lack of originality, and inaccurate responses) and citation/referencing errors. Further, Lo (2023) reviewed 50 ChatGPT-related articles investigating how ChatGPT is utilized across various scientific fields, including education. His research findings highlighted that ChatGPT is capable of revolutionizing the educational landscape if it is adopted as an instructors' assistant and as a students' virtual tutor; however, he also expressed serious concerns regarding the threats posed to academic and teaching ethics and integrity, and raised misinformation, disinformation, and mal-information issues related to ChatGPT's artificial intelligence-generated content.
It can therefore be seen that ChatGPT represents a transformational tipping point in the evolution of education and requires a more comprehensive investigation and deeper understanding of the benefits, challenges, and implications of ChatGPT-assisted learning for both educators and learners. From this perspective, it is necessary to adopt an Explainable Artificial Intelligence (XAI) approach in education, since XAI addresses four traditional moral principles: beneficence, non-maleficence, autonomy, and justice; this thereby seems to be the best way to improve trust and ethical practice in "algorithmic" educational contexts, as also discussed by Farrow (2023). Since artificial intelligence technologies have already been widely used in educational institutions, serving various learning, research, and practice purposes, it is of crucial importance to understand the nature of XAI in education, determine what might make it effective, and identify any ethical or practical limits in teaching and learning processes.
Undoubtedly, artificial intelligence black box technologies, such as ChatGPT, create mistrust, thus we need to bridge the gap in artificial intelligence explainability and understanding in order to comprehend artificial intelligence bias and drift, and further shape and plan the delivery of education using such technologies. An open, transparent and explainable Artificial Intelligence is the key to opening the ChatGPT's black box (Parpoula, 2023) and has the potential to improve ChatGPT's performance and provide insight into how the model is making decisions and learning. Through greater accountability and legibility, training of the involved stakeholders in ethical and legal perspectives, qualification programmes in artificial intelligence-related ethics, and greater public awareness of artificial intelligence, XAI can be a retort to the black box problem which responds with transparency to foster trust allowing users understand ChatGPT's reasoning and learning process. Since other discernible ChatGPT-related changes and implications in education have yet to emerge, continued monitoring of ChatGPT's automation, security and performance is warranted in order to ensure that ChatGPT's advantages in education are
optimized, while its drawbacks are minimized.
## 4 Generative AI as an opportunity for education
The immediate response from the education community, as soon as ChatGPT was introduced, pointed to it being a risk or a threat, allowing students to plagiarize, and limiting their creativity, while reducing the individual differences between different authors (Dwivedi, 2023). However, educators and researchers also identified a number of scenarios in which Generative AI, the underlying technology behind ChatGPT, can greatly improve the teaching and learning experience. For example, educators can utilise ChatGPT to create role-playing exercises or simulate the writing style of famous authors; in this manner, the text generated can be used to attract students who are not interested in the mainstream teaching style but find, for instance, contemporary music more relatable. By adapting a generated or existing text to the style of, for example, a rap singer or a K-Pop artist, educators manage to retain the scientific integrity of their educational content, while increasing its relevance.
Another option is that of generating pros and cons with respect to a specific issue; ChatGPT has the potential to "humanize" web search, i.e., help users locate and retrieve information in the same manner as asking a fellow or colleague. A set of pros and cons can be used either as part of a more general research project or as part of a debate exercise, where students are asked to support or find weaknesses to a specific argument. Besides the actual scientific value of this generated content, a well-structured debate can be used towards improving social and soft skills, such as citizenship.
A third use which is very popular among educators is that of adapting an existing or generated text to a specific audience. In this case, educators can adapt their content with respect to scientific depth or language, making it more relevant to students with different skills and competencies. This approach is very popular with language learning (Tsatiris, 2021), and recently found its way to commercial applications, such as Duolingo. In this context, educators, either humans or an application, can select the suitable content with respect to the learning objectives of a particular module, the individual learning needs and preferences of a student, and the means of presentation and testing predicted to be more interesting for them. In this manner, students spend more time with the learning application and focus on the aspects needed for them to improve. Since ChatGPT was trained with an abundance of text from Wikipedia, books, and blog posts, its ability to generate textual content is a perfect match for language learning applications.
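A prompt of the kind described above can be assembled programmatically, which makes it easy to regenerate the same material for different audiences, styles, or reading levels. The template below is an illustrative sketch only; its wording is an assumption made here and is not a validated pedagogical prompt from the literature.

```python
# Minimal sketch: a reusable prompt template for adapting existing course
# material to a target audience, style, and reading level, as discussed above.
# The template wording is an illustrative assumption.
PROMPT_TEMPLATE = (
    "Rewrite the passage below for {audience}, keeping every factual statement intact.\n"
    "Write in the style of {style}, at a {level} reading level.\n\n"
    "Passage:\n{passage}"
)

def build_adaptation_prompt(passage: str, audience: str, style: str, level: str) -> str:
    """Fill the template; the result is what would be sent to the chat model."""
    return PROMPT_TEMPLATE.format(passage=passage, audience=audience, style=style, level=level)

print(build_adaptation_prompt(
    passage="Photosynthesis converts light energy into chemical energy stored in glucose.",
    audience="first-year secondary-school students",
    style="a sports commentator",
    level="beginner",
))
```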
## 5 The emerging legal issues
This new artificial intelligence tool gives rise to several legal issues. The main ones are outlined below:
### Issues related to plagiarism
In the case at hand, the person signing the text appears to have drafted something that is not the product of his or her intellectual property, but rather that of a third party who, in this case, is no longer a natural person, but a digital technology. Consequently, it appears that some form of cheating is taking place (Karabatzos, 2023), even though the boundaries between compositional work and the examination of sources undertaken by search engines are clearly permeable. Submitting a paper under these circumstances is against the rules of academic ethics. Indeed, it should be noted that, in accordance with Article 197(2)(b) of Law No. 4957/2022, it is a disciplinary offence "to plagiarize or conceal the direct or indirect contribution of other persons to the subject of scientific work or research". Having said that, such plagiarism can now be checked electronically and by using artificial intelligence methods, such as, for example, the Turnitin application ([https://www.turnitin.com/solutions/ai-writing](https://www.turnitin.com/solutions/ai-writing)), although there is other software available that can 'trick' Turnitin.
In view of the above, one may rightly wonder whether what we are faced with, in a wider sense, is the art of deception. The reality of the matter is that the currently available copying methods exhaust the imagination of the examinees, who resort to all possible means for doing so, ranging from cribbing, and having other parties compose their coursework, to the use of technical means, such as mobile phones and the software in question (Koulouri 2023). And, if we were to go a little deeper, we might end up concluding that our entire culture is a copycat, the requirement being that it be a good one.
### Issues concerning copyright
Another question that arises in this context is who should be considered the final work's author. The possible answers can be summarized as follows (Panagopoulou-Koutnatzi, 2023; Chiou, 2022, 2021, 2017):
* The work is the property of the creator of the artificially intelligent software. This position is countered by the fact that the application of the ideas of a creation does not constitute a derivative work, as ideas do not fall within the scope of copyright protection under Art. 2 of Law 2121/1993.
* The work that is generated belongs to the creator of the artificially intelligent software infrastructure as intellectual property, but not as copyright. In this sense, the work created may be considered intellectual property belonging to the creator of the artificial intelligence, e.g., as an invention, but not his or her copyright. In this sense, the intellectual creation itself might belong to the user of the creative artificial intelligence and not to the creator of the artificial intelligence. Since work suffices as a criterion for the acquisition of intellectual creation, following Art. 1(3) of Directive 91/250/EEC and Article 6 of Directive 2006/116/EEC, the secondary achievements would be the property of the user of the original software, since it was the user who put the device in question into operation to produce them (Christodoulou, 2019: 122-123).
* The produced work comprises the joint creation of the creator and the user of the creative artificial intelligence. This approach takes into consideration the fact that the final, jointly created work is the co-creation of both parties (Christodoulou, 2019: 122-123).
* The created work becomes a free good, which is now in the public domain, since machines cannot create intellectual works (Christodoulou, 2018: 54, note 119). In this case, we are dealing with the so-called zero-sum solution. In this context, the secondary creation is not the product of a natural person and, as such, it is not a work, but rather a free good belonging to the public domain. Even so, ownership of the product of the artificial intelligence will be acquired by its owner through the processing of material belonging to a third party or as fructus (Panagopoulou-Koutnatzi, 2023: 54, note 122). This solution is founded on the argument that what we have at hand is not a human creation that is tangible and original. At the same time, this approach has the disadvantage of lacking any motivation for the manufacturer of the artificial intelligence (Igglezakis, 2022: 214). Moreover, it is maintained that free distribution is inconsistent with the Berne Convention, from which the law of copyright derives, and which also establishes the principle of the author.
* The work generated is the product of the creative software, meaning that the artificial intelligence becomes a creator from the position of the creation (Zekos, 2022: 80). In this way, the legal personality of the artificial intelligence device is acknowledged either by analogy or by virtue of legislation. The European Parliament (EP) resolution of 16 February 2017 on "Civil Law Rules on Robotics", which was rejected by the EP in October 2020, moves towards the direction of establishing a special legal framework for robotics in the long term. The aim is to consider more sophisticated, autonomous robots as electronic persons, with the obligation to rectify any potential damage caused. A further aim is to apply (legal) electronic personhood in cases where robots make autonomous decisions or otherwise interact independently with other persons (EP resolution on "Civil Law Rules on Robotics" 2017). This position entails the risk of limiting responsibility for potential damage to the benefit of the devices' manufacturers and has not been endorsed by the European legal order (Panagopoulou-Koutnatzi, 2023: 123). In the Thaler v. Comptroller judgment issued by the England and Wales Court of Appeal (EWCA) in 2021, it was held that a machine cannot be an inventor within the meaning of the law, as a machine is not a natural person (EWCA 2021). In China, by contrast, it has been ruled that an article generated by a robot is protected by copyright (Sawers, 2020).
* In view of the weaknesses entailed in the above positions, the solution of unjust enrichment is proposed, under the lens of civil law, pursuant to the provisions of Art. 904 et seq. of the Civil Code. In this case, ownership is acquired as unjust enrichment deriving from a lawful cause (by virtue of a contract) or even without lawful cause.
None of the above solutions can be said to wear the crown of absolute rightness, and the answer to the question of copyright must be given based on the particular facts of each case. It is anticipated, however, that artificial intelligence will necessitate the transformation of the law on copyright, which may end up having to attribute rights to non-human creators (Chiou, et al., 2016).
### Issues of legal responsibility
If the drafted document is adopted, it is only reasonable that the issue of liability should arise. Who will be responsible if the created work contains false statements? Quite evidently, this is a question that does not lend itself to an obvious answer.
The first position is that responsibility must be borne, but also managed, by the manufacturers of artificial intelligence products. This could be achieved by establishing a rebuttable presumption providing that, in case of doubt, manufacturers shall be deemed responsible. This would strengthen responsibility and foresight on their part. To this end, it would seem appropriate that an impact assessment should be required before activating any artificial intelligence application (see, for example, Article 5 of Law No. 4961/2022). This model of responsibility seems to be largely adopted by the Draft Regulation on Artificial Intelligence, which assigns a great deal of
responsibility to the software designer and emphasizes the importance of forethought at the design stage. Furthermore, AI companies are under obligation to exercise rigorous after-sales control over their products and to conduct continuous upgrades that will prevent unforeseen impacts (Kowert, 2017: 203). Therefore, AI companies must devise ways to prevent the misuse of their products in order to protect themselves but also to avoid depriving society of the great benefits that they will offer. They must take appropriate measures to minimize the risks that may arise while, in doing so, they will also reduce their potential responsibility and ensure that their products are suitable for the society in which we live (Kowert, 2017: 203). No one should develop artificial intelligence systems without having a sense of responsibility for them, even if they are autonomous machine learning systems, since responsibility can now also be introduced as information (Winfield & Jirotka, 2018). Strict liability on the part of the creator should play a key role in terms of compensating for damage caused by defective products and their components, whether they come in tangible or digital form (European Commission, 2019: 8).
The second position lies in the view that responsibility should be attributed to the user of the technology that the intelligence involves, i.e. the researcher who applies the technology. This does not, however, resolve all the issues raised by artificial intelligence: still, it remains a prima facie honest solution as far as the researcher seeking a proposal for the problem at hand is concerned. Rather than embracing the proposal without question, the user of the program in question ought to check that the proposal is fully adapted to the facts of the case under consideration and consider the possible scenario that the algorithm may be biased. In this direction, we could adopt the rebuttable presumption that human judgement prevails over the algorithm's decision in case of doubt (cf: Article 22(3) GDPR). Of course, the risk of the user being carried away by the proposal and being led to misguided reasoning when it comes to the final text should not be underestimated.
The third position is based on the sharing of responsibility between the manufacturer or developer of the artificial intelligence technology and its user. Each of them will be responsible for his or her share of responsibility: the developer for the manufacturing defect and the user for the failure in handling it or for not taking into account the facts in the case under consideration. Even though this system of responsibility seems appealing and appears to be the most prevalent one, it also comes with its own problems and controversies. Attribution of responsibility may in many cases be rendered an issue that is difficult to solve and prove. If there are two or more actors, in particular (a) the person who primarily makes the decision on the use of the relevant technology and benefits from it (frontend operator); and (b) the person who continuously determines the characteristics of the relevant technology and provides substantial and ongoing support to the backend (backend operator), objective responsibility should rest with the person who has greater control over the risks of the operation (European Commission, 2019: 8).
The fourth position involves the attribution of responsibility to the technology itself. But, if the technology is to be held responsible, it must first be granted legal personality (Papakonstantinou & De Hert, 2020). In 2015, the EP adopted a resolution inviting the Commission to consider the possibility of creating a special legal status for robots in the long term (Zornoza, et al., 2017; Borenstein & Arkin, 2016; Deng, 2015; Lin & Bekey, 2015; Veruggio & Abney, 2011). The aim of this would be to have at least the most sophisticated, autonomous robots acquire the status of electronic persons responsible for rectifying any damage they may cause, and possibly also the recognition of electronic personhood in cases where robots make autonomous decisions or otherwise interact with third parties independently.
This solution was rejected in October 2020 by the EP committee (2016) which adopted three resolutions on the ethical and legal aspects of artificial intelligence software systems, namely a) Resolution 2020/2012 (INL) on a framework of ethical aspects of artificial intelligence, robotics and related technologies; b) Resolution 2020/2014(INL) on a civil liability regime for artificial intelligence; and c) Resolution 2020/2015(INI) on intellectual property rights for the development of artificial intelligence technologies. All three resolutions acknowledge that artificial intelligence will have significant benefits across various sectors (businesses, the labour market, public transport, and the health sector).
Even so, as pointed out in the resolution on the ethical aspects of artificial intelligence, there do exist concerns that the current legal framework of the European Union, including consumer law, labour law and social acquis, data protection legislation, product safety and market surveillance legislation, as well as anti-discrimination legislation, may no longer be adequate to effectively address the risks posed by artificial intelligence, robotics, and related technologies. All three resolutions are unequivocal in not granting legal personality to artificial intelligence software systems. Consequently, tempting as it may be, it appears that this solution will not be adopted in the near future, although it is not ruled out for a little later when the concept of digital personality will have matured.
In any event, the proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence is a step in the right direction, which will create a rebuttable "presumption of causation" to ease the burden of proof placed on victims, who must prove the damage caused by an artificial intelligence system. Additionally, it provides national courts with the power to order the disclosure of evidence relating to high-risk artificial intelligence systems that are suspected to have caused damage.
### Issues regarding the freedom of expression
A question that arises is whether the software is covered under the freedom of expression (Massaro & Norton, 2016: 1169). More specifically, does the machine have the discretion to characterize someone as a famous or obscure professor or a notable scholar? If we, as ordinary citizens, ask questions to a journalist and the journalist answers them, it is indisputable that the journalist is covered by the constitutionally guaranteed freedom of speech. Similarly, when we submit a question to the software it must decide, at that moment, which "answers" it should give us and in what order. If those answers are regarded as an expression of the software, then any governmental attempt to regulate the technology should be regarded as censorship (Wu, 2013: 161). To the extent that the developer of the software incorporates his or her opinion and attempts to influence the public, freedom of speech is assumed to apply in this case (Wu, 2013: 1533). There are considerable concerns about the misinformation (Tsakarestou, 2023) of citizens through the re-dissemination of false news.
### Issues pertaining to the protection of personal data
The use of large language models in education raises concerns over privacy and data security, as learner data are often sensitive (special categories). The indiscriminate collection and processing of our personal data resulting from the operation of artificial intelligence gives rise to intense questions about the compatibility of the technology with the right to personal data protection and that of informational self-determination. A large amount of data used in artificial intelligence constitutes personal data (Igglezakis, 2022: 175) and much of it falls into special categories.
By way of explanation, the operation of artificial intelligence requires the collection and processing of large data sets that are difficult to put under the control of the data subject. As the algorithm often outperforms its creator, due to the latter's inability to comprehend the way in which it operates, it is not always possible to inform the data subject on how the algorithm works and, by implication, on the data being collected and its wider processing. As a result of this, the principle of transparency is not adhered to.
At the same time, inaccuracies concerning persons give rise to questions concerning the violation of the principle of data accuracy. Other usual risks lie in the unauthorized access to learners' data and the use of such data for purposes beyond those related to education. In this respect, the Italian Data Protection Authority has ordered the temporary restriction of the processing of Italian users' data against OpenAI, the American company that developed and manages the ChatGPT platform. In parallel with this, the Authority has also initiated a related investigation (GPDP, 2023).
The Italian Data Protection Authority has highlighted the lack of information for users and all interested parties whose data are being collected by OpenAI, and especially the absence of a legal basis justifying the mass collection and storage of personal data for the purposes of "training" the algorithms underpinning the operation of the platform. Despite the fact that, according to the terms published by OpenAI, the service is intended for users aged 13 and over, the Italian Authority noted that the lack of any mechanism for verifying the age of users exposes minors to responses that are wholly inappropriate for their level of development and understanding. This clearly raises the issue of the responsibility of the controller to take appropriate technical and organizational measures to prevent children from having access to this type of software (Panagopoulou-Koutnatzi, 2017: 51). OpenAI, which does not have an establishment in the European Union but has appointed a representative in the European Economic Area, must notify the measures it has taken to implement Garante's request within 20 days, subject to a fine of up to EUR 20 million or up to 4% of its global annual turnover.
### Risks posed against the liberal character of the democratic political system
The imposition of a "dictatorship" of the average in science, the provision of specific, premeditated, tested knowledge, poses the danger of undermining the liberal character of our constitution, in the sense of imposing the average in science and in our thinking in general (Foundethaki, 2023), of establishing a common understanding of things, but also of spreading misleading or false news. It is reasonable to wonder whether we should intervene legislatively to preserve the core of liberal democracy. This question is a multifaceted one, as the establishment of a particular perception leads, in turn, to the formulation of a specific electoral preference and, in this instance, to the indirect manipulation of voting.
## 6 What would be the appropriate response on the part of the scientific community?
It is true that this new technology has been a source of great concern in the educational community. Have we arrived at the death of the author (Barthes, 1968) or the reader (Vamvakas, 2023)? There is talk of the depletion of the knowledge ecosystem, and this is because the development of knowledge has, as its starting point, the matters that the scholar is immersed in, such as published scientific papers and books that build on previous knowledge (Spinelli, 2023). Education stems from what has educated the educator who, after the "ordeal" of intellectual pursuit, can pass on the knowledge acquired to his or her students. The "un-educated" response to any question posed seems to deprive learners of the necessary interactions with the knowledge ecosystem (Spinelli, 2023). The automatic answer deprives us of the journey of knowledge. Fears are expressed about the impairment of original intellectual creativity and critical thinking (Karabatzos & Skevi, 2023). Could it be said that we have come to the end of the age of conventional writing skills and education at large? Is this an intellectual revolution (Parpoula, 2023), a hoax, or a commonplace of evil (Chomsky, 2023)? It would be best if we did not hasten to rash conclusions.
The question, however, remains a vexing one: how should we approach the issue of software? A solution that has been put forward is to discard technology, essentially turning the clock back to the 20th century, and have students take their exams with pen and paper, without the use of electronic devices connected to the internet (Villasenor, 2023). For instance, the University of California in Los Angeles is looking into the possibility of making it a violation of its honour code of ethics to use ChatGPT to take exams or write papers (Villasenor, 2023). Likewise, in Germany, the University of Tubingen has decided to restrict the use of this software for students and researchers (Universitat Tubingen, 2023).
The reasons for this hesitation are not unfounded. Learners may rely excessively on this model, and information that is generated effortlessly could adversely affect critical thinking and problem-solving skills. This is because the model simplifies the acquisition of answers or information--something that could reinforce learners' laziness, as well as limit their interest in conducting their own research to reach their own conclusions or solutions (Enkelejda, et al., 2023).
Nevertheless, banning technology would be a technophobic approach if it were to be taken indiscriminately. The voices of those who urge their students to use ChatGPT in their written assignments also entail certain concerns. Instead of banning learners from using artificial intelligence writing tools that can save them time and effort, we should teach them how to use them ethically and productively (Villasenor, 2023), to enable them to comprehend the complex matter of diversity of sources and the educational process in general. In this sense, choosing to exercise strict control over the system would be preferable.
To remain competitive throughout their careers, learners need to be trained on how to prompt an artificial intelligence writing tool to generate a meaningful output and assess its quality, accuracy, and originality (Villasenor, 2023). For this reason, the software should be a tool that will support each related course (Pedis & Karpouzis, 2023). Learners must be taught how to write well-structured, coherent essays incorporating a combination of artificial intelligence-generated text, along with traditional writing. As professionals, they need to learn how to work productively with artificial intelligence systems, utilizing them to complement and enhance human creativity with the extraordinary potential that they promise to bring to the table (Villasenor, 2023).
In addition to pedagogical reasons for approaching ChatGPT as an opportunity, and not as a threat, there are also practical reasons for doing so: apart from concerns regarding academic freedom (Parpoula, 2023), it is also utopian to attempt to effectively prohibit access to this technology (Villasenor, 2023). This software is freely available, and learners cannot be monitored in the free space of their private life (Karabatzos & Skevi, 2023). The reasoning for supporting a total ban clearly does not solve the problem at hand (Floros, 2023).
The imposition of a total ban on the use of ChatGPT would also inevitably result in the injustice of false positive and false negative results in the course of monitoring the use of the software. Some learners who use ChatGPT despite the ban could, either by chance or thanks to the rather thorough processing of the text that is generated by artificial intelligence, avoid having their text flagged as being assisted by it. In a worse scenario, some learners could be falsely accused of using ChatGPT, causing immense anxiety and leading to penalties for a transgression they did not actually commit (Villasenor, 2023).
It would be presumptuous to ignore an application that offers personalized instruction and feedback to learners
based on their individual learning needs and their progress. For instance, the application could provide personalized math instruction to learners, resulting in improved learning outcomes (Baidoo-Anu & Owusu Anash, 2023). The application could assist in the development of reading and writing skills (for example, by suggesting syntactical and grammatical corrections), as well as in the development of writing and critical thinking skills (Enkelejda et al., 2023). These models can also be used in the creation of questions and as prompts that will encourage learners to think critically about what they read and write, and to analyze and interpret the information they are presented with (Enkelejda et al., 2023). This does not mean that the software can replace teachers, but rather that it can assist them (Baidoo-Anu & Owusu Anash, 2023).
Furthermore, we should also not overlook the potential of the software to empower learners with disabilities. Language models can be used to develop inclusive learning strategies with adequate support for tasks such as adaptive writing, translation, and the identification of important content in various formats (Enkelejda et al., 2023). Still, it is important to note that the use of large language models should be supplemented with the assistance of professionals, such as speech therapists, teachers, and other specialists who will be able to adapt the technology to the specific needs of the learner's disabilities (Enkelejda et al., 2023).
Writing a good essay from scratch requires careful, and often painstaking thinking about its structure, flow, and delivery. This can be developed as a skill in early education classes. Learning to write without the use of artificial intelligence does, indeed, promote focused, disciplined thinking. But learning how to successfully combine conventional writing with the support of artificial intelligence to create really good essays also requires these skills (Villasenor, 2023). It is like turning our backs to the future, having an application at our disposal and saying that it is forbidden to use it. By the same reasoning, we could prohibit finding sources through search engines and only allow searching in conventional libraries. Is that what we wish to do? It would appear better to allow the creative use of artificial intelligence, with a view to finding ways of combining it with traditional education.
#### How could this be achieved?
1. Educators ought not to look down on the issues of new technologies, but to be trained and comprehend, in this respect, what ChatGPT is and how it works (Karabatzos & Skevi, 2023), as well as explore technological applications through the lens of academic integrity (Parpoula, 2023).
2. Learners must be prepared for a future in which artificial intelligence will be just another technological tool.
3. Learners should be solely and exclusively responsible for the texts that they hand in under their names. If they contain inaccuracies, they will be responsible for finding the truth. If their structure is problematic, they will also have the responsibility of their signature. If the text is stylistically or logically inconsistent, it will be their responsibility. If there is partial plagiarism, they will also be legally responsible for it (Villasenor, 2023). They will, thus, be responsible for checking and evaluating their sources (Karabatzos & Skevi, 2023) and, above all, they must add references to their text, which is something that the software does not provide at this time.
4. In this sense, learners should be encouraged to be responsible, informed users of artificial intelligence technologies that will play an extremely important role during their careers. This responsibility entails an obligation to report on the use of the software in the text.
5. Educators must shift their focus and place more importance on how they teach, adopting a different approach in their teaching method.
6. Without this meaning that we will have to renounce or ban technology, we could think of some alternative, complementary ways of studying and testing, such as:
   1. Cultivating conventional writing starting in the lower grades, so that students are not cut off from this very useful skill.
   2. The performance of learners could be (co-)assessed through conventional examinations that will not involve the use of mobile phones.
   3. It is recommended that learners should be tested orally on critical questions concerning their coursework (Lakassas, 2023).
   4. It would be better if assessment questions were to require more critical thinking to make it difficult for machines to compose them. We have a duty to educate people so that we have citizens and scientists engaged in critical thinking (Pitta, 2023) and do away with the "dictatorship" of the average.
   5. Examinations could be conducted remotely, online, using artificial intelligence systems that monitor all suspicious actions on the part of examinees, including the use of software. In fact, this is how the examinations for selecting senior civil servants in public administration were conducted. This type of examination requires a prior study of technology's impact on the examinees' rights.
   6. Examiners are advised to submit the assessment questions to the software to enable them to be aware of the answers it generates, even though the software produces different answers each time.
## 7 In lieu of an epilogue
As is the case with every new technology, ChatGPT also poses a new challenge. It is up to us not to demonize it and, instead, proceed to the ethical use of this technology by adhering to the fundamental principles of logic, moral philosophy, and aesthetics. We ought to make use of the countless advantages of technology, assessing and managing any risks it may entail (Panagopoulou-Koutnatzi, 2023; Chiou, 2022, 2021, 2017). We have the ability to control technology with the help of technology itself, and the transparency and comprehensibility of technology will be crucial in this respect.
|
2305.09036 | Anisoplanatic Optical Turbulence Simulation for Near-Continuous $C_n^2$
Profiles without Wave Propagation | For the simulation of anisoplanatic optical turbulence, split-step
propagation is the gold standard. Within the context of the degradations being
limited to phase distortions, one instead may focus on generating the phase
realizations directly, a method which has been utilized in previous so-called
multi-aperture simulations. Presently, this modality assumes a constant $C_n^2$
profile. This work presents an alternative derivation for Zernike correlations
under anisoplanatic conditions. Multi-aperture simulation may easily
incorporate these correlations into its framework and achieve a significantly
higher degree of accuracy with a minimal increase in time. We additionally use
our developed methodology to explain previously reported discrepancies in an
empirical implementation of split-step with the analytic tilt correlation.
Finally, we outline a major limitation for Zernike-based simulation which still
remains. | Nicholas Chimitt, Stanley H. Chan | 2023-05-15T21:46:18Z | http://arxiv.org/abs/2305.09036v1 | Anisoplanatic Optical Turbulence Simulation for Near-Continuous \(C_{n}^{2}\) Profiles without Wave Propagation
###### Abstract
For the simulation of anisoplanatic optical turbulence, split-step propagation is the gold standard. Within the context of the degradations being limited to phase distortions, one instead may focus on generating the phase realizations directly, a method which has been utilized in previous so-called multi-aperture simulations. Presently, this modality assumes a constant \(C_{n}^{2}\) profile. This work presents an alternative derivation for Zernike correlations under anisoplanatic conditions. Multi-aperture simulation may easily incorporate these correlations into its framework and achieve a significantly higher degree of accuracy with a minimal increase in time. We additionally use our developed methodology to explain previously reported discrepancies in an empirical implementation of split-step with the analytic tilt correlation. Finally, we outline a major limitation for Zernike-based simulation which still remains.
Zernike polynomials, atmospheric turbulence, phase distortions, simulation
## 1 Introduction
Optical wave propagation through a turbulent medium produces a complicated form of wave distortions, affecting both the amplitude and phase. In this work, as in many others we describe here, we focus on the phase distortions generated by the atmosphere. Much of the classical literature focuses on the case of a single point source which has led to a considerable body of work dedicated to the understanding of the statistics of a single wavefront [30, 8, 9, 20, 7]. Among these, Noll [20] decomposed the turbulent wavefront distortions by the Zernike polynomials, which is critical to this work. Beyond a single wavefront, the correlation statistics for two wavefront tilts as a function of their separation, otherwise known as angle of arrival correlations [6, 10, 1], have been investigated. These correlations have a long correlation length and contain much of the energy of the distortions. For the sake of anisoplanatic modeling, the angle-of-arrival correlation is an important marker of accuracy.
In this work, we focus on the problem of correlation for _all_ Zernike coefficients as a function of their separation in the object plane (or equivalently, their angular separation). Within this space are works [12, 19, 33] in which the spatial correlation of Zernike coefficients (or, more generally, of any basis representation) is considered in a variety of situations. Most directly related to our problem is the work of Whiteley et al. [36]; we choose to comment on this paper once the major concepts have been suitably described. In contrast to these, we use a comparatively simple approach to arrive at similar results, and demonstrate the applicability of these results to the generation of turbulent phase statistics. This has been approximately achieved by Chimitt and Chan [4], with the limitation of their approach having a simple interpretation within our analysis. With our results, we can match the predicted tilt results of Fried [6] as a special case of our general result.
Beyond theoretical understanding, our results apply directly to recently proposed multi-aperture simulation approaches by Chimitt et al. [4, 5] and Mao et al. [16]. These methods use Zernike-based generation concepts to simulate anisoplanatic turbulence on an image, though with two significant limitations. One of these is that they are only applicable to a constant \(C_{n}^{2}\) profile, which significantly limits their usage. However, with adoption of the theoretical
groundwork described in this work, these Zernike-based approaches can now generate realizations for near-continuous \(C_{n}^{2}\) profiles in both constant and arbitrarily varying cases. It is important to mention that in spite of the removal of this significant limitation, there still remain some challenges for the utilization of these results for a full-frame image, which we detail towards the end of this paper.
The most common approach towards simulating these effects on an image is the classical method of split-step propagation [2, 11]. While the split-step modality is more general than the discussed Zernike-based simulation, within the context of phase distortion modeling Zernike-based models are more suitable for large dataset generation, as has been highlighted in previous works [4, 16, 5]. Split-step is based upon numerical wave simulation, which is largely neglected by the computer vision/image processing communities due to its slow generation speed [16]. By contrast, Zernike-based simulation's lack of wave propagation while maintaining a high degree of accuracy highlights the core benefit of this modality. The cost to be paid for these benefits is (i) the restriction to phase distortions and (ii) a complicated correlation expression - though one that is simple to evaluate numerically.
We outline our main contributions as follows:
1. **Alternative derivation of Zernike correlations:** An approach towards deriving the Zernike coefficient correlations is provided. While the derivation has some similarity to Whiteley et al. [36], our approach is comparatively simple and additionally allows for more general numerical evaluation techniques. The derivation provided also has a visual interpretation of the correlations;
2. **Introduction of a \(C_{n}^{2}\)-slice:** With the expression for Zernike correlations, we discretize and interpret this result to empower Zernike-based methods [4, 16] for varying turbulence profile simulation, leading us to introduce a \(C_{n}^{2}\)-slice;
3. **Zernike-based simulation with \(C_{n}^{2}\)-slices:** The \(C_{n}^{2}\)-slice concept, facilitated by the general nature of our result, can be used to replace the correlations of previous methods [4, 16, 5]. This leads us to describe various aspects of the state of Zernike-based simulation's accuracy and speed. Additionally, we outline remaining limitations in Zernike-based simulation methodologies.
## 2 Background
We begin with some definitions of the Zernike polynomials using Noll's conventions as well as some additional notations for tracking source positions. After discussing two simulation approaches and their limitations in more detail, we then present the angle-of-arrival correlations described in Fried [6] using our notation. Afterwards, we turn to a seemingly unrelated problem: the correlation of the same point source viewed by two _separate_ imaging systems. Ultimately, we will use the results of the two-aperture correlations to analyze and directly compare with Fried [6].
### Preliminary Definitions and Single Point Source Statistics
We define the object plane coordinates to be \(\mathbf{x}=(x,y)\), while we use normalized polar coordinates \(\boldsymbol{\rho}=(\rho,\theta)\) for the coordinates within the aperture plane, with \(\rho=1\) on the edge of the aperture. We denote the propagation distance as \(L\), with \(z\) as the distance _from_ the imaging system (thus \(z=0\) is in the aperture plane and \(z=L\) in the object plane). We further define \(D\) and \(R\) to be the aperture diameter and radius, respectively. Finally, we adopt Noll's indexing conventions for mapping the radial and angular components to a single index, \((n,m)\to i\).
The Zernike polynomials \(\{Z_{i}\}\) were famously applied to the problem of turbulent phase distortions by Noll [20]. Noll chose to define the Zernike polynomials such that
\[\frac{1}{\pi}\int d\boldsymbol{\rho}P(\rho)Z_{i}(\boldsymbol{\rho})Z_{j}( \boldsymbol{\rho})=\delta_{ij}, \tag{1}\]
with \(P(\rho)=1\) when \(\rho\leq 1\). This accordingly defines the Zernike polynomials over a unit circle. The Zernike polynomials can be used to represent the phase component \(\phi(\boldsymbol{\rho})\) of a wave originating at point \(\mathbf{x}\) that has propagated through a turbulent medium through basis decomposition,
\[\phi_{\mathbf{x}}(R\boldsymbol{\rho})=\sum_{i}a_{\mathbf{x},i}Z_{i}( \boldsymbol{\rho}). \tag{2}\]
We add the subscript \(\mathbf{x}\) to track the position of the point source in the object plane. Noll [20] investigates the correlation of two Zernike coefficients for a single point source,
\[\mathbb{E}[a_{\mathbf{x},i}a_{\mathbf{x},j}]=\frac{1}{\pi^{2}}\iint d \boldsymbol{\rho}d\boldsymbol{\rho}^{\prime}P(\rho)P(\rho^{\prime})Z_{i}( \boldsymbol{\rho})Z_{j}(\boldsymbol{\rho}^{\prime})\mathbb{E}[\phi_{\mathbf{x} }(R\boldsymbol{\rho})\phi_{\mathbf{x}}(R\boldsymbol{\rho}^{\prime})]. \tag{3}\]
This covariance may be re-written using the phase structure function for a spherical wave as
\[\mathcal{D}(R\boldsymbol{\rho}-R\boldsymbol{\rho}^{\prime})=2.91k^{2}\int_{0}^{L} dzC_{n}^{2}(z)\left|R(\boldsymbol{\rho}-\boldsymbol{\rho}^{\prime})\left(\frac{L-z}{L} \right)\right|^{5/3}. \tag{4}\]
We may write the structure function as \(\mathcal{D}(R\boldsymbol{\rho}-R\boldsymbol{\rho}^{\prime})=\mathbb{E}[(\phi_{\mathbf{x}}(R\boldsymbol{\rho})-\phi_{\mathbf{x}}(R\boldsymbol{\rho}^{\prime}))^{2}]\), which is independent of position \(\mathbf{x}\). Combined with \(\mathbb{E}[\phi_{\mathbf{x}}(R\boldsymbol{\rho})]=0\), we may substitute this in (3), giving us
\[\mathbb{E}[a_{\mathbf{x},i}a_{\mathbf{x},j}]=\frac{-2.91k^{2}}{2\pi^{2}}\int dzC _{n}^{2}(z)\iint d\boldsymbol{\rho}d\boldsymbol{\rho}^{\prime}P(\rho)P(\rho^{ \prime})Z_{i}(\boldsymbol{\rho})Z_{j}(\boldsymbol{\rho}^{\prime})\left|R( \boldsymbol{\rho}-\boldsymbol{\rho}^{\prime})\left(\frac{L-z}{L}\right) \right|^{5/3}. \tag{5}\]
The evaluation of this integral is given in the appendix of Noll's paper[20]. This result allows for the description and simulation of a single wavefront distorted by atmospheric turbulence using the Zernike polynomials; this result, however, has no direct implication for the statistics beyond a single wavefront or point source. For anisoplanatic simulation based upon the Zernike polynomials we will need a more general expression.
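As a concrete reference for these conventions, the following Python sketch evaluates Noll-indexed Zernike polynomials on the unit disk and reconstructs a phase map from a coefficient vector as in (2); the helper names, the grid size, and the random placeholder coefficients (which are not drawn from the turbulence statistics of (5)) are illustrative choices only.

```python
import numpy as np
from math import factorial

def noll_to_nm(j):
    """Map 1-based Noll index j to radial order n and signed azimuthal order m
    (m < 0 encodes the sine term, m >= 0 the cosine term)."""
    n, j1 = 0, j - 1
    while j1 > n:
        n += 1
        j1 -= n
    m = (-1) ** j * ((n % 2) + 2 * ((j1 + ((n + 1) % 2)) // 2))
    return n, m

def zernike(j, rho, theta):
    """Noll-normalized Zernike polynomial Z_j on the unit disk, matching the convention of (1)."""
    n, m = noll_to_nm(j)
    am = abs(m)
    R = np.zeros_like(rho)
    for k in range((n - am) // 2 + 1):
        c = (-1) ** k * factorial(n - k) / (
            factorial(k) * factorial((n + am) // 2 - k) * factorial((n - am) // 2 - k))
        R = R + c * rho ** (n - 2 * k)
    if m == 0:
        return np.sqrt(n + 1) * R
    ang = np.cos(am * theta) if m > 0 else np.sin(am * theta)
    return np.sqrt(2 * (n + 1)) * R * ang

# Reconstruct a phase realization from coefficients a_i as in (2).
N = 128
y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
rho, theta = np.hypot(x, y), np.arctan2(y, x)
pupil = rho <= 1.0
a = 0.5 * np.random.randn(36)        # placeholder coefficients, NOT drawn from (5)
phase = sum(a[i - 1] * zernike(i, rho, theta) for i in range(1, 37)) * pupil
```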
### Simulation Approaches: Summary and Current Limitations
There exist a handful of methods to simulate atmospheric turbulence in the literature, with the two main focuses of this work being the split-step [2, 11, 28] and multi-aperture [4, 16, 5] simulations. These methods operate from two fundamentally different perspectives, though split-step is currently the more theoretically justified approach. There is a gap in accuracy between the two methods due to inherent assumptions that were made to simplify the analysis within the multi-aperture method. This work, in part, attempts to minimize this gap.
#### 2.2.1 Split-Step Simulation
The traditional approach to generating optical phase statistics is that of split-step simulation. Split-step models the forward process of nature directly, making it considerably accurate and intuitive. A split-step simulation can be described by three main steps:
1. **Generate phase screens:** First, discrete phase screens representing the turbulent distortions along the path of propagation are generated. These are typically few in number, with [11] effectively using 9 for their simulations;
2. **Numerical wave propagation:** A source field is generated and propagated via evaluation of the Fresnel integral [28] to the first phase screen. The phase is imparted into the wave, which is then propagated to the next phase screen. The process is then repeated until landing upon the aperture;
3. **Image generation:** A point spread function (PSF) is then formed by the incident wave, which can then be applied to the source object. Steps 2 and 3 are then repeated for every point on the object.
We include the final step for completeness, though the core of the split-step simulation is in the phase screen generation and numerical wave propagation. Additionally, sub-sampling of the object plane is typically performed to reduce the simulation time. We give a visualization of the split-step method in Figure 1. Split-step gains the ability to generate anisoplanatic samples with minimal theoretical effort; the phase screens can be made large enough so that every point in the object plane will pass through the large phase screens. By virtue of the sharing of phase screen components during the wave propagation, as in nature, the phase realizations at the aperture will be correlated according to the theory.
Split-step has a few limitations. From the perspective of generating training data, the most notable is the speed at which the method is able to generate samples. This limitation arises from both (i) the generation of multiple phase screens and (ii) numerical wave propagation. For a complicated \(C_{n}^{2}\) profile with sudden peaks and valleys, split-step will require a large number of phase screens to numerically propagate through, a point we shall elaborate on later in this work. This generation and propagation step will need to be done _per realization_. For the usage of split-step in generating data for the purposes of numerical analysis, this will take a potentially infeasible amount of time. Additionally, the number of phase screens has an upper limit imposed by the necessity of their independence.
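A minimal sketch of steps 1 and 2 is given below, assuming paraxial (Fresnel) angular-spectrum propagation and equally spaced screens; the random screens are placeholders rather than Kolmogorov phase screens, and the grid, wavelength, and path length are illustrative values.

```python
import numpy as np

def fresnel_propagate(u, wavelength, dx, dz):
    """Paraxial angular-spectrum (Fresnel) propagation of a complex field u over distance dz."""
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(u.shape[0], d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    H = np.exp(1j * k * dz) * np.exp(-1j * np.pi * wavelength * dz * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(u) * H)

def split_step(u0, screens, wavelength, dx, dz):
    """Alternate propagation and phase-screen application, then a final leg to the aperture."""
    u = u0.copy()
    for screen in screens:
        u = fresnel_propagate(u, wavelength, dx, dz)
        u = u * np.exp(1j * screen)
    return fresnel_propagate(u, wavelength, dx, dz)

# Toy usage (structure only): M equally spaced placeholder screens over a path of length L.
N, dx, wavelength, L, M = 256, 5e-3, 0.525e-6, 1000.0, 9
u0 = np.ones((N, N), dtype=complex)                        # plane-wave stand-in for the source field
screens = [0.3 * np.random.randn(N, N) for _ in range(M)]  # placeholders, not Kolmogorov screens
u_ap = split_step(u0, screens, wavelength, dx, L / (M + 1))
psf = np.abs(np.fft.fftshift(np.fft.fft2(u_ap))) ** 2      # step 3 sketch: PSF (pupil mask omitted)
```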
#### 2.2.2 Zernike-Based Simulation
Zernike-based simulation is fundamentally different from split-step propagation: it does not directly simulate the wave propagation process. Instead, it pulls statistics directly at the aperture plane for each pixel in the image. Therefore, it is limited by the theoretical understanding of the spatial statistics of the Zernike coefficients (or alternatively some other basis representation). Previous multi-aperture simulations differ from other Zernike-based simulations such as [25] by virtue of their ability to approximately model anisoplanatism. The multi-aperture variety of Zernike-based simulations can be described by two major steps:
1. **Generate Zernike coefficients:** A Zernike coefficient vector is formed for a subset of pixels in the image. This uses Noll's results as a starting point, with some additional modifications proposed by [4];
2. **Image Generation:** A PSF is then formed from the Zernike representation (either analytically or via the numerical approach of [16]) for every pixel in the image, and is then applied to the image.
The core of the simulation rests in the generation of the Zernike coefficients. Drawing samples of the Zernike vectors, correlated both spatially and inter-modally, is the primary focus of this type of simulation, of which we give a visualization in Figure 2. This also highlights the main reason for the dramatic improvement in speed over split-step: with knowledge of the correlations, there is no need to numerically propagate a wave. The trade-off for Zernike-based simulation is that there must be additional effort in ensuring the spatial correlations are generated according to theory. Additionally, it is important to note there is no straightforward path towards a principled incorporation of amplitude effects. Therefore, split-step outperforms Zernike-based simulation with respect to scintillation effects.
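A minimal sketch of these two steps for a single pixel follows, assuming a coefficient covariance `cov` (e.g., from Noll's result or the correlations derived later in this work), a precomputed stack `zernike_basis` of Noll-ordered Zernike maps, and a matching `pupil` mask; the small jitter term is only a numerical convenience.

```python
import numpy as np

def draw_coefficients(cov, rng):
    """Step 1 for one pixel: draw a Zernike coefficient vector a ~ N(0, cov) via Cholesky.
    `cov` is an assumed (N_z x N_z) covariance; `rng` is, e.g., np.random.default_rng()."""
    Lc = np.linalg.cholesky(cov + 1e-12 * np.eye(cov.shape[0]))  # jitter for numerical safety
    return Lc @ rng.standard_normal(cov.shape[0])

def psf_from_zernike(a, zernike_basis, pupil, pad=2):
    """Step 2: weight the basis by the coefficients, form the pupil function, and take the
    squared magnitude of its Fourier transform as the PSF.
    `zernike_basis` is an assumed (N_z, N, N) array of Noll-ordered Zernike maps."""
    phase = np.tensordot(a, zernike_basis, axes=1)
    P = pupil * np.exp(1j * phase)
    N = P.shape[0]
    padded = np.zeros((pad * N, pad * N), dtype=complex)
    padded[:N, :N] = P
    psf = np.abs(np.fft.fftshift(np.fft.fft2(padded))) ** 2
    return psf / psf.sum()
```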
There are two major limitations specific to these simulations, one of which this work seeks to mitigate. The multi-aperture approach cannot perfectly match the theoretical statistics for even a constant \(C_{n}^{2}\) profile. This is due to a Taylor series approximation at the center of its theoretical analysis [4]. Within their analysis, it is _impossible_ to use the multi-aperture simulation for a varying \(C_{n}^{2}\) profile. These limitations in accuracy and the inability to simulate varying turbulent profiles are what we overcome in this work. By swapping their correlation statistics for ours within their simulation, both constant and path-varying turbulence profiles can be matched both in theory and empirically. The core benefit here is that multi-aperture simulation is orders of magnitude faster than split-step, allowing for large amounts of data generation.
#### 2.2.3 Alternative Simulation Approaches
There are many other approaches that seek to model the effects generated by the atmosphere on a wave or image. One of these alternatives is known as the brightness function simulation developed through a series of works [35, 13, 14]. The brightness function model is faster than split-step, instead propagating "bundles" of rays through a perturbing medium. These bundles of rays are then distributed across the imaging plane for each pixel as a function of the medium, resulting in spatially varying effects as a function of the phase screens. More traditional ray tracing approaches beyond the brightness function model have been applied for the simulation and modeling of turbulent effects. Voelz et al. [34]
Figure 1: A visualization of split-step propagation. A point (optionally a grid of points) is propagated through a series of phase screens by numerical wave propagation. The result is a collection of phase realizations for each point propagated, which can then be used to form PSFs.
provides an analysis of standard ray tracing approaches, with carefully performed ray tracing matching wave optics simulations to a suitable degree of accuracy for most applications. Additionally, a comprehensive work on a similar simulation modality is described by [21] and made publicly available.
In addition to these methods, there exist simulation approaches by Repasi and Weiss [23, 24], Leonard et al. [15], or Potvin et al. [22] which use a blend of analytic and empirical properties (empirically based on the NATO RTG-40 dataset [32, 31]) to simulate PSFs and the subsequent images directly. These simulation methodologies have been revisited more recently by Miller et al. [18, 17]. These methods share some similarity to Zernike-based methods, with a main difference being that the coefficients drawn in order to simulate the effects on an image describe various quantities in the spatial domain as opposed to the phase domain.
### Angle-of-Arrival Correlations
The primary theoretical comparison used for verification of our approach is that of angle-of-arrival correlations as performed by Fried [6]. Tilt has a long correlation range, and is an important marker for accurate generation. Fried analyzed the correlation of distortions of two separate point sources in the object plane. As separation between the two points increases, we expect them to be less correlated as they are propagating through increasingly different regions of the atmosphere. In this context, Fried specifically analyzed the tilt vector, the vector normal to the plane of best fit for phase distortion \(\phi_{\mathbf{x}}(\boldsymbol{\rho})\). The tilt vector is defined to be
\[\boldsymbol{\alpha}_{\mathbf{x}}=\frac{2\lambda}{R}\int d\boldsymbol{\rho}P( \boldsymbol{\rho})\phi_{\mathbf{x}}(R\boldsymbol{\rho})\boldsymbol{\rho}. \tag{6}\]
Preferring to write this consistently with our usage of the Zernike polynomials, we may write
\[\boldsymbol{\alpha}_{\mathbf{x}}=\frac{\lambda}{R}\int d\boldsymbol{\rho}P( \boldsymbol{\rho})\phi_{\mathbf{x}}(R\boldsymbol{\rho})[Z_{2}(\boldsymbol{\rho })\hat{\mathbf{i}}+Z_{3}(\boldsymbol{\rho})\hat{\mathbf{j}}], \tag{7}\]
Figure 2: A visualization of the multi-aperture simulation. For each pixel in an image, a Zernike vector is generated, which weights the Zernike polynomials to create a phase realization per-pixel. These phase realizations may then be used to form PSFs.
with unit vectors \(\hat{\mathbf{i}}\) and \(\hat{\mathbf{j}}\) in the \(x\) and \(y\) directions, respectively. The problem of finding the tilt correlation can then be written compactly using our notation as
\[\mathbb{E}[\boldsymbol{\alpha}_{\mathbf{x}}^{T}\boldsymbol{\alpha}_{\mathbf{x}^ {\prime}}]=\left(\frac{\lambda}{R}\right)^{2}\left(\mathbb{E}[a_{\mathbf{x},2 }a_{\mathbf{x}^{\prime},2}]+\mathbb{E}[a_{\mathbf{x},3}a_{\mathbf{x}^{\prime },3}]\right), \tag{8}\]
where we note the \(x\)-tilt and \(y\)-tilt terms are independent; therefore, there is no cross correlation to account for. We visualize this problem in Figure 3(a). We note this may also be written using the phase structure function, however with the added consideration of the separation of the point sources,
\[\mathcal{D}(R\boldsymbol{\rho}-R\boldsymbol{\rho}^{\prime},\mathbf{x}- \mathbf{x}^{\prime})=2.91k^{2}\int_{0}^{L}dzC_{n}^{2}(z)\left|R(\boldsymbol{ \rho}-\boldsymbol{\rho}^{\prime})+\left(\frac{z}{L-z}\right)(\mathbf{x}- \mathbf{x}^{\prime})\right|^{5/3}. \tag{9}\]
The magnitude term in this expression may be viewed as the varying distance between the two difference vectors as a function of \(z\).
Fried's results are limited to the two tilt Zernike functions. For our application, if one desires to pull statistics for the higher order aberrations, the spatial correlation functions are required for generation. Additionally, this result only provides a description of the joint behavior of the Zernike coefficients. Therefore, additional work will be required to empower multi-aperture methods.
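As a small illustration, the discrete analogue of (7) below projects a sampled pupil-plane phase map onto the two tilt Zernikes, assuming the phase is sampled on the normalized pupil grid defined earlier and that `lam` and `R` are the corresponding wavelength and aperture radius.

```python
import numpy as np

def tilt_vector(phase, lam, R):
    """Discrete version of (7): project a phase map, sampled on a square grid spanning
    [-1, 1]^2 in normalized pupil coordinates, onto the Noll x- and y-tilt polynomials."""
    N = phase.shape[0]
    y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    pupil = rho <= 1.0
    Z2 = 2.0 * rho * np.cos(theta)          # Noll Z_2 (x-tilt)
    Z3 = 2.0 * rho * np.sin(theta)          # Noll Z_3 (y-tilt)
    dA = 4.0 / N ** 2                       # area element of the [-1, 1]^2 grid
    alpha_x = (phase * Z2 * pupil).sum() * dA
    alpha_y = (phase * Z3 * pupil).sum() * dA
    return (lam / R) * np.array([alpha_x, alpha_y])
```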
### Two-Aperture Correlations
We now turn to a slightly different problem, though as we will show, closely related. The works of Chanan [3] and more generally Takato and Yamaguchi [29] analyze the correlation of two _different_ imaging systems both imaging the same point. These are inherently different from angle-of-arrival as they are imaging the same point source. These works were inspired by astronomical imaging situations in which two apertures can give additional useful information on the correlation of the wavefront distortions allowing for improved distortion correction.
Takato and Yamaguchi as well as Chanan consider apertures separated by a vector \(D\mathbf{s}\) measured from center to center, with \(\mathbf{s}\) expressed in units of the aperture diameter \(D\). For example, \(|\mathbf{s}|=1\) corresponds to two apertures that are tangential. In our notation, their problem can be written as
\[\mathbb{E}[a_{\mathbf{x},i}(\mathbf{0})a_{\mathbf{x},j}(\mathbf{s})]=\frac{1}{ \pi^{2}}\iint d\boldsymbol{\rho}d\boldsymbol{\rho}^{\prime}P(\rho)P(\rho^{ \prime})Z_{i}(\boldsymbol{\rho})Z_{j}(\boldsymbol{\rho}^{\prime})\mathbb{E}[ \phi_{\mathbf{x}}(R\boldsymbol{\rho})\phi_{\mathbf{x}}(R\boldsymbol{\rho}^{ \prime}+D\mathbf{s})]. \tag{10}\]
Figure 3: Visualization of the geometries of the two types of problems. In (a) we show the problem analyzed by Fried [6], where two points in the object plane located at positions \(\mathbf{x},\mathbf{x}^{\prime}\). In (b) we show the problem of Takato and Yamaguchi [29] where two apertures, with separation \(D\mathbf{s}\), are viewing a single point source in the object plane.
We provide a visualization of this problem in Figure 3(b). We note that this problem is equivalent to Noll's in the case of \(\mathbf{s}=\mathbf{0}\). This formulation may be used in accordance with the structure function (4) with the additional displacement \(D\mathbf{s}\). The resulting expression from the analysis of [29] is rather cumbersome; therefore, we take some care to simplify the notation for the sake of clarity, which we leave to Appendix A. Their final result is given by
\[\mathbb{E}[a_{\mathbf{x},i}(\mathbf{0})a_{\mathbf{x},j}(\mathbf{s})]=0.00969k^{ 2}2^{14/3}\pi^{8/3}(D/2)^{5/3}\sqrt{(n_{i}+1)(n_{j}+1)}f_{ij}(\mathbf{s},k_{0}) \int dzC_{n}^{2}(z), \tag{11}\]
giving the correlation for two apertures separated by a vector \(\mathbf{s}\), with \((n_{i},m_{i})\to i\), and similarly for \(j\). We note that this may be written in terms of \(D/r_{0}\) with \(r_{0}\) as the Fried parameter[9, 26], defined as
\[r_{0}=0.185\left[\frac{4\pi^{2}}{k^{2}\int_{0}^{L}\left(\frac{L-z}{L}\right)^{5/3}C_{n}^{2}(z)dz}\right]^{3/5}, \tag{12}\]
where we have chosen to write the spherical form (the planar form drops the \(((L-z)/L)^{5/3}\) term in the integral). The expression describing \(f_{ij}\) is given in Appendix A.
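For concreteness, (12) can be evaluated numerically with a simple quadrature, as sketched below; the constant profile in the example matches the one used later for Figure 5, while the 1 km path length and 0.525 \(\mu\)m wavelength are illustrative assumptions.

```python
import numpy as np

def fried_parameter(cn2, L, wavelength, spherical=True, n=10000):
    """Numerical evaluation of (12); with spherical=False the path weighting is dropped."""
    k = 2 * np.pi / wavelength
    z = np.linspace(0.0, L, n)
    w = ((L - z) / L) ** (5.0 / 3.0) if spherical else np.ones_like(z)
    f = w * cn2(z)
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z))   # trapezoidal rule
    return 0.185 * (4 * np.pi ** 2 / (k ** 2 * integral)) ** (3.0 / 5.0)

# Example: constant profile of 2e-15 m^(-2/3) (as in Figure 5) over an assumed 1 km path.
r0 = fried_parameter(lambda z: 2e-15 * np.ones_like(z), L=1000.0, wavelength=0.525e-6)
```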
## 3 Deriving the Spatial Zernike Correlations
The two groups of Zernike spatial correlations considered, angle-of-arrival and two-aperture correlations, address two separate groups of problems. However, to enable the generation of the optical statistics by the methods of [4, 16], we merge the two ideas to describe Zernike correlations for all coefficients. We begin by stating the problem at hand as
\[\mathbb{E}[a_{\mathbf{x},i}a_{\mathbf{x}^{\prime},j}]=\frac{1}{\pi^{2}}\iint d \boldsymbol{\rho}d\boldsymbol{\rho}^{\prime}P(\rho)P(\rho^{\prime})Z_{i}( \boldsymbol{\rho})Z_{j}(\boldsymbol{\rho}^{\prime})\mathbb{E}[\phi_{\mathbf{x }}(R\boldsymbol{\rho})\phi_{\mathbf{x}^{\prime}}(R\boldsymbol{\rho}^{\prime})]. \tag{13}\]
This differs from (10) by consideration of two point sources located at points \(\mathbf{x},\mathbf{x}^{\prime}\). As before, this may be written using the phase structure function as
\[\mathbb{E}[a_{\mathbf{x},i}a_{\mathbf{x}^{\prime},j}]=\frac{-1}{2\pi^{2}} \iint d\boldsymbol{\rho}d\boldsymbol{\rho}^{\prime}P(\rho)P(\rho^{\prime})Z_{ i}(\boldsymbol{\rho})Z_{j}(\boldsymbol{\rho}^{\prime})\mathcal{D}(R \boldsymbol{\rho}-R\boldsymbol{\rho}^{\prime},\mathbf{x}-\mathbf{x}^{\prime}). \tag{14}\]
The formulation of our problem is then most in accordance with Fried's approach, though notably we have changed from the case of the tilt vector to any arbitrary Zernike polynomial.
This section begins by presenting the main theoretical result of this work: the correlation of the Zernike coefficients for the case of a continuous turbulence profile. This will consist of using (11) to solve (13). We will then discretize the main continuous result, leading us to define a \(C_{n}^{2}\)-slice, which draws some analogy to a phase screen from split-step. This second perspective will be the one taken for the sake of numerical evaluation. We finish with how the approximation of Chimitt and Chan [4] fits into this framework and discuss its limitations.
### Continuous Case: Varying \(C_{n}^{2}\)
Before carrying out the main derivation, we must mention that our approach will be applied to the case of spherical waves. However, the results of Takato and Yamaguchi are developed for planar waves. This creates a potential conflict: applying results from planar waves to spherical ones. To remedy this, we exploit the general nature of their results, which place no restriction on the \(C_{n}^{2}\) profile. As shown in (11), the turbulence profile enters their final result only through an unevaluated integral. We may therefore choose to write the turbulence profile to satisfy our requirements via
\[C_{n}^{2}(z)=\left(\frac{L-z}{L}\right)^{5/3}\tilde{C}_{n}^{2}(z)[u(z)-u(z-L)], \tag{15}\]
where \(u(z-a)\) is the unit step function, which is unity for \(z>a\) and \(0\) otherwise, and \(\tilde{C}_{n}^{2}(z)\) is the original, unmodified turbulence profile used in the planar analysis (physically, of course, the two profiles are the same). Substitution of this turbulence profile into (11) then satisfies our requirement for spherical waves. Our later comparisons will be done for spherical wave statistics, which lends credence to this approach.
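In code form this remapping is only a weighting and windowing of the profile; a minimal sketch, assuming the unmodified profile is supplied as a callable, is

```python
import numpy as np

def spherical_profile(cn2_tilde, L):
    """Return the weighted, windowed profile of (15), given the unmodified profile cn2_tilde(z)."""
    def cn2(z):
        z = np.asarray(z, dtype=float)
        inside = (z >= 0.0) & (z <= L)
        w = np.where(inside, (L - z) / L, 0.0) ** (5.0 / 3.0)
        return w * cn2_tilde(z) * inside
    return cn2
```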
With our approach to using Takato and Yamaguchi's results for spherical waves in mind, we first begin with rewriting (9) as
\[\mathcal{D}(R\boldsymbol{\rho}-R\boldsymbol{\rho}^{\prime},\mathbf{x}-\mathbf{ x}^{\prime})=2.91k^{2}\int_{0}^{L}dz\left(\frac{L-z}{L}\right)^{5/3}C_{n}^{2}(z) \left|R(\boldsymbol{\rho}-\boldsymbol{\rho}^{\prime})+\left(\frac{z}{L-z} \right)(\mathbf{x}-\mathbf{x}^{\prime})\right|^{5/3}. \tag{16}\]
This allows us to write (14) as
\[\mathbb{E}[a_{\mathbf{x},i}a_{\mathbf{x}^{\prime},j}]=\frac{-2.91k^{2 }}{2\pi^{2}}\int dz\left(\frac{L-z}{L}\right)^{5/3}C_{n}^{2}(z)\iint d\rho d \rho^{\prime}\] \[\times P(\rho)P(\rho^{\prime})Z_{i}(\mathbf{\rho})Z_{j}(\mathbf{\rho}^{ \prime})\left|R(\mathbf{\rho}-\mathbf{\rho}^{\prime})+\left(\frac{z}{L-z}\right)( \mathbf{x}-\mathbf{x}^{\prime})\right|^{5/3}. \tag{17}\]
If we seek to leverage the results of the two-aperture statistics, we must relate the magnitude term to some two-aperture separation in accordance with Takato and Yamaguchi. We therefore define
\[\mathbf{s}(z)=\left(\frac{z}{D(L-z)}\right)(\mathbf{x}-\mathbf{x}^{\prime}), \tag{18}\]
to be a displacement that is changing with distance along the path of propagation. With this substitution, we can write
\[\mathbb{E}[a_{\mathbf{x},i}a_{\mathbf{x}^{\prime},j}]=\frac{-2.9 1k^{2}}{2\pi^{2}}\int dz\left(\frac{L-z}{L}\right)^{5/3}C_{n}^{2}(z)\iint d \mathbf{\rho}d\mathbf{\rho}^{\prime}\] \[\times P(\rho)P(\rho^{\prime})Z_{i}(\mathbf{\rho})Z_{j}(\mathbf{\rho}^{ \prime})\left|R(\mathbf{\rho}-\mathbf{\rho}^{\prime})+D\mathbf{s}(z)\right|^{5/3}. \tag{19}\]
We then recognize the inner double integral to be of the same form as [3, 29]. Defining \(\mathscr{A}=0.00969k^{2}2^{14/3}\pi^{8/3}R^{5/3}\) to match the constant in (11), this allows us to simply leverage the results of Takato and Yamaguchi using a weighted integration of their solutions, resulting in
\[\mathbb{E}[a_{\mathbf{x},i}a_{\mathbf{x}^{\prime},j}]=\mathscr{A}_{i,j}\int_{ 0}^{L}\left(\frac{L-z}{L}\right)^{5/3}C_{n}^{2}(z)f_{ij}\left(\mathbf{s}(z),k_ {0}\right)dz, \tag{20}\]
where \(\mathscr{A}_{i,j}=\mathscr{A}\sqrt{(n_{i}+1)(n_{j}+1)}\). This result, however, has an additional visual interpretation. Turning to (18), we can write this in terms of a "virtual" aperture which varies with distance, which we define to be
\[\hat{D}(z)=D\left(\frac{L-z}{z}\right). \tag{21}\]
We may then write the displacement as \(\mathbf{s}(z)=(\mathbf{x}-\mathbf{x}^{\prime})/\hat{D}(z)\). We provide a visualization of this virtual aperture in Figure 4, which illustrates two points in an object forming two cones with diverging radii. The overlap of the cross sections at each individual infinitesimal slice is the problem analyzed by Takato and Yamaguchi. However, our result differs by being a _sum_ of these solutions; each infinitesimal slice contributes a correlation which is dictated by the results of [29]. Intuitively, the slice closest to the aperture will contribute global correlation. Physically this is due to the fact that every point source on an object will pass through this slice. Mathematically, this final slice results in \(\hat{D}(z)\rightarrow\infty\), which causes \(\mathbf{s}(z)\to 0\) for all finite \(\mathbf{x}-\mathbf{x}^{\prime}\), implying perfect global correlation. To present the main result completely, we substitute (18) into (20), giving us
\[\mathbb{E}[a_{\mathbf{x},i}a_{\mathbf{x}^{\prime},j}]=\mathscr{A}_{i,j}\int_{ 0}^{L}\left(\frac{L-z}{L}\right)^{5/3}C_{n}^{2}(z)f_{ij}\left(\left(\frac{z}{ D(L-z)}\right)(\mathbf{x}-\mathbf{x}^{\prime}),k_{0}\right)dz. \tag{22}\]
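A direct way to use (22) numerically is a one-dimensional quadrature over \(z\), sketched below under the assumption that a callable `f_ij(s, k0)` implementing (39) for the chosen Noll pair and the prefactor \(\mathscr{A}_{i,j}\) are available; the open grid simply avoids the endpoint \(z=L\), where \(\mathbf{s}(z)\) diverges.

```python
import numpy as np

def zernike_correlation(dx, D, L, cn2, f_ij, k0, A_ij, n=200):
    """Trapezoidal quadrature of (22). `dx` is the 2-vector x - x'; `cn2`, `f_ij`, and `A_ij`
    are assumed callables/constants supplied by the caller."""
    z = np.linspace(0.0, L, n + 2)[1:-1]                      # open grid: s(z) diverges at z = L
    g = ((L - z) / L) ** (5.0 / 3.0) * cn2(z)
    g = g * np.array([f_ij((zz / (D * (L - zz))) * np.asarray(dx), k0) for zz in z])
    return A_ij * np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(z))  # trapezoidal rule
```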
### Discrete Case: \(C_{n}^{2}\)-slices
A unique perspective given in this work is the direct application to simulation of turbulent optics statistics. The expression for the correlation of two points in the object plane (22) is desirable for the sake of simulation. These results can be directly used in previous multi-aperture simulations [4, 16] with only minor modification. In addition to this, (22) is preferable to the results of Whiteley et al. [36] for numerical evaluation of the integral. This is because their derivation used a particular Riemann summation rule as one of its components. Equation (22) assumes no such rule. The result is an expression which can be adapted to suit the needs of the application without needing to re-derive for a separate integration rule. In particular, the way in which one performs a summation over \(C_{n}^{2}(z)\) is the primary knob one may tune to utilize the results of (22).
Since one may choose the way in which the integral is represented as a Riemann sum, one possibility would be to average the turbulence profile along the interval of propagation. There is some analogy here to the phase screens used in split-step propagation, though they are not directly equivalent. The main difference is that within this framework, one
does not actually generate phase screens; rather, one just evaluates the statistical expression. To differentiate the two, we denote them as \(C_{n}^{2}\)-slices. We begin by defining a collection of \(C_{n}^{2}\)-slices to be
\[C_{n}^{2}(z;M)=\sum_{m=1}^{M}\delta\left(z-\frac{Lm}{M+1}\right)\int_{L(m-1)/M}^ {Lm/M}C_{n}^{2}(v)dv, \tag{23}\]
where we are integrating along the path of propagation using \(v\) as a dummy variable. For some simplicity in notation, we define the locally collapsed \(m\)th \(C_{n}^{2}\)-slice to be
\[\overline{C_{n}^{2}}(z_{m})=\int_{L(m-1)/M}^{Lm/M}C_{n}^{2}(v)dv. \tag{24}\]
Using the representation of \(C_{n}^{2}\)-slices in place of the turbulence profile in (22), we can arrive at
\[\mathbb{E}[a_{\mathbf{x},i}a_{\mathbf{x}^{\prime},j};M]=\mathscr{A}_{i,j}\sum _{m=1}^{M}\left(\frac{M+1-m}{M+1}\right)^{5/3}\overline{C_{n}^{2}}(z_{m})f_{ ij}\left(\frac{m(\mathbf{x}-\mathbf{x}^{\prime})}{D(M+1-m)},k_{0}\right). \tag{25}\]
We use (25) as a basis for our numerical testing. Due to the generality of the expression provided in (22), one may use an alternative integration rule in order to decide each \(C_{n}^{2}\) value as previously stated. Furthermore, instead of focusing on numerical integration, one may instead optimize the discrete \(C_{n}^{2}\) values for objective functions which optimize quantities such as the Fried parameter, isoplanatic angle, and log amplitude variance as in Hardie et al. [11]. In this case, individual \(C_{n}^{2}\) values which describe the phase screen parameters were optimized in a fashion similar to that by Schmidt [28]. Thus (25) can also be used to evaluate the impact of simulation parameters on the various Zernike correlations if one replaces \(\overline{C_{n}^{2}}\) with other values.
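A compact sketch of this two-step recipe is given below: the slice values are obtained by locally integrating the profile as in (23)-(24) and then reused inside the weighted sum (25); `f_ij` and `A_ij` are the same assumed inputs as before, and the trapezoidal rule within each sub-interval is merely one possible choice.

```python
import numpy as np

def cn2_slices(cn2, L, M, oversample=10):
    """Collapse the profile into M slice weights, Eq. (24), by integrating each sub-interval."""
    edges = np.linspace(0.0, L, M + 1)
    slices = np.empty(M)
    for m in range(M):
        v = np.linspace(edges[m], edges[m + 1], oversample)
        f = cn2(v)
        slices[m] = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(v))   # trapezoidal rule
    return slices

def correlation_from_slices(dx, D, L, cn2, f_ij, k0, A_ij, M=200):
    """Discrete correlation of (25): a weighted sum of two-aperture solutions, one per slice."""
    w = cn2_slices(cn2, L, M)
    total = 0.0
    for m in range(1, M + 1):
        scale = ((M + 1 - m) / (M + 1)) ** (5.0 / 3.0)
        s = m * np.asarray(dx) / (D * (M + 1 - m))
        total += scale * w[m - 1] * f_ij(s, k0)
    return A_ij * total
```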
### Comparison to Earlier Multi-Aperture Methods
The fundamental work that enabled the first iteration of the multi-aperture simulation [4] imposed two main restrictions to achieve their results. The first is the assumption of a constant \(C_{n}^{2}\) profile. However, the more limiting restriction is an approximation upon the structure function, which they simplify with a first-order Taylor series. The following consideration achieves the same results, shedding some light on the limitations of these previous results.
To arrive at these same results, we first choose to define a single \(C_{n}^{2}\)-slice along the entire path of propagation,
\[C_{n}^{2}(z;1)=\int_{0}^{L}\delta\left(z-\frac{L}{2}\right)C_{n}^{2}(v)dv. \tag{26}\]
Figure 4: Visualization of the virtual aperture. In (a) we show in 3D space how the virtual aperture changes with position along the length of propagation. In (b) we show the increase in the virtual aperture diameter as a function of position.
With the additional assumption of a constant \(C_{n}^{2}\) profile, then
\[C_{n}^{2}(z;1)=LC_{n}^{2}\delta\left(z-\frac{L}{2}\right). \tag{27}\]
The resulting substitution of this \(C_{n}^{2}\) profile results in the same correlation function as in [4],
\[\mathbb{E}[a_{\mathbf{x},i}a_{\mathbf{x}^{\prime},j};1]=\mathscr{A}_{i,j} \left(\frac{1}{2}\right)^{5/3}LC_{n}^{2}f_{ij}\left(\frac{(\mathbf{x}-\mathbf{ x}^{\prime})}{D},k_{0}\right). \tag{28}\]
This demonstrates the limiting assumption inherent to previous multi-aperture simulators more clearly. Specifically, these simulation methodologies assume a single \(C_{n}^{2}\)-slice at the halfway point of the propagation path. Beyond previously discussed limitations, even the case of constant \(C_{n}^{2}\) profiles will experience moderate deviations given certain camera configurations, which has been observed [4]. To comment more directly on the usage of our results within theirs, the more general results in this work only require the replacement of (28) with (25), keeping the rest of the simulation the same. This offers a considerable increase in accuracy with only a small initial loss in speed: after the generation of the theoretical spatial correlation, the generation method (and therefore the speed) is identical.
### Comparison to Existing Zernike Correlations
The work most closely aligned with ours is that of Whiteley et al. [36]. Whiteley et al. provide an expression for spatial and temporal Zernike correlations, and similarly utilize Takato and Yamaguchi. The core differences lie in the approach taken and in the form of the result. On the approach side, [36] uses an assumption on the independence of neighboring atmospheric slices. Ours also uses such a fundamental assumption, as it underlies the standard Markov approximation; however, our inclusion of this independence is "built into" the result of Takato and Yamaguchi [29]. Therefore, it is useful to make the comment that both works ultimately rely on this same layered atmospheric property, though Whiteley et al. [36] assume it earlier in their derivation. The result is that if one wishes to change the Riemann sum approach (such as max/min vs. Simpson's rule), one must carry the derivation out again. With ours, one may start at the result of (22) and discretize as desired following Section 3.2 directly. Furthermore, the visual interpretation of the integration process via the virtual aperture, along with the subjectively more convenient form of (22), are two added benefits of our approach.
It is important to note that Whiteley et al. [36] does offer a more general framework with regards to aperture motion and temporal correlations. This is done primarily through use of Taylor's frozen flow hypothesis. To illustrate how these concepts may be incorporated into our framework, the temporal effects can be modeled in a similar fashion via
\[\mathbf{s}(z)=\left(\frac{z}{D(L-z)}\right)(\mathbf{x}-\mathbf{x}^{\prime})+ \frac{\mathbf{v}(z)\tau}{D}, \tag{29}\]
where \(\mathbf{v}(z)\) is the mean transverse wind velocity at position \(z\) along the path of integration. We note the division by \(D\) is a requirement to match the form of Takato and Yamaguchi's expression [29]. We may substitute (29) into (20), along with potential modifications of (29) as outlined in Sasiela's book [27].
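In code, this substitution amounts to a one-line change of the separation function; a minimal sketch, assuming \(\mathbf{v}(z)\) is supplied as a callable returning the transverse wind vector, is

```python
import numpy as np

def separation_with_wind(z, dx, D, L, v, tau):
    """Eq. (29): geometric separation of (18) plus a frozen-flow displacement v(z)*tau,
    both normalized by the aperture diameter D."""
    return (z / (D * (L - z))) * np.asarray(dx) + v(z) * tau / D
```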
## 4 Numerical Comparisons and Discussions
Our numerical results can be divided into two sections: theoretical comparisons and empirical statistics. On the side of theoretical comparisons, we first numerically compare the angle-of-arrival results with our expression. We present a wide variety of situations, all of which we are able to match sufficiently closely. We accept this as a strong suggestion of their equivalence, though the generality of the expression obtained in this work makes a direct analytical equivalence difficult. Furthermore, we discuss some difficulties that arise in split-step when simulating slant paths and other complicated \(C_{n}^{2}\) profiles and how this is described within our model. All of the following computations were performed on an AMD Ryzen 5 3600 6-core CPU with 16 GB of RAM on a 64-bit OS. Our implementation specifically utilizes Python and the numpy/scipy libraries.
On the empirical side, we compare the speeds of split-step and the multi-aperture method for simulation of a grid of point sources. We then demonstrate the limitation in resolution for Zernike-based simulations and its reason, as well as outlining general principles towards a solution, with more details in Chimitt et al.[5].
### Comparisons to Angle-of-Arrival Results
To begin our comparisons, we first look at the comparison between previously reported angle-of-arrival results [1, 6] with our expression (25). Recalling (8), we note that the tilt expression in our framework is proportional to the sum
of the Zernike tilt coefficient variances. Therefore, we may write the result for angle-of-arrival using our results by summation of the terms corresponding to the tilt Zernike terms within (25). We contrast the analytical form of our correlation with those given in [1, 6] and observe a nearly identical match between the expressions provided in these two works. These expressions are developed for the case of spherical waves, matching our development.
To compare the two, we can evaluate the angle-of-arrival integral directly using the Python library scipy's integration class. Specifically, we use the triple integration method 'tplquad', writing the angle-of-arrival expression with lambda functions. Therefore, no elements of our \(C_{n}^{2}\)-slice concept are a component of the angle-of-arrival integral evaluation. For generating the curves predicted by our analysis, we may instead evaluate (25). To save on a bit of time, we find the values for the \(C_{n}^{2}\)-slices by taking \(10\times\) the number of points needed and averaging over groups of ten (thus an approximation of (23)). For example, if we require 10 \(C_{n}^{2}\)-slices, these values are estimated from 100 samples of the \(C_{n}^{2}\) profile via local averaging.
The results of the constant \(C_{n}^{2}\) profile are presented in Figure 5, while the path-varying turbulence profile results can be seen in Figure 6. Here we normalize by the isoplanatic angle on the x-axis, given as
\[\theta_{0}=58.1\times 10^{-3}\lambda^{6/5}\left[\int_{0}^{L}z^{5/3}C_{n}^{2}(z )dz\right]^{-3/5}. \tag{30}\]
We observe a convincing match for all parameters within our tests. That is, the angle-of-arrival integral appears to be similar to the results predicted by this analysis. We further note that all evaluations of (25) were performed with the number of \(C_{n}^{2}\)-slices kept constant at 200. We have noticed some minor improvement with an increase of slices (i.e. our curves match the angle-of-arrival curves more closely), but we find 200 to be sufficient for the purposes of this comparison. We again note that the angle-of-arrival results from Ref. [1] come from a separate analysis and are evaluated with methods separate from ours.
### The Problem with Small Numbers of Phase Screens
When analyzing or simulating propagation through turbulence using the discrete phase screen approach [26], the number of phase screens can be chosen to match various requirements of the application. For analysis which utilizes phase screens, the number of phase screens is typically left unspecified so long as the assumption of phase screen independence holds. In simulation, 10 phase screens are used in Hardie et al. [11], which we deem to be relatively standard, though one may use more or fewer if the situation dictates. This leads to a difference between analysis and simulation: analysis assumes a large number of phase screens, whereas simulation uses a small number.
Figure 5: Theoretical comparison of our expression for tilt correlation compared to Basu et al. (now Bose-Pillai) [1] for varying aperture diameters. These results are for a constant \(C_{n}^{2}\) profile of \(C_{n}^{2}=2\times 10^{-15}\) m\({}^{-2/3}\).
To quantify this difference in analysis and simulation, one aspect we may study is the tilt correlation. We may ask: Does a simulation with a small amount of phase screens match the analytic prediction with a larger amount of independent phase screens? To answer this question without reliance on empiricism, we first note that a properly performed simulation should perform identically to its analytic counterpart. Therefore, we may analyze simulation by a representative analytical model.
The model we choose for our purposes is similar to the \(C_{n}^{2}\)-slice model, though the values of the Riemann sum terms are chosen to optimize the objective function provided in Hardie et al. [11]. We note that this was done to choose phase screen parameters to closely match isoplanatic angle, Fried parameter, and log amplitude variance in a least-squares fashion. Thus, the model we choose is
\[C_{n}^{2}(z)=\sum_{m=1}^{M}\delta\left(z-\frac{Lm}{M+1}\right)\tilde{C}_{n,m}^ {2}, \tag{31}\]
where \(\tilde{C}_{n,m}^{2}\) is the optimally chosen \(C_{n}^{2}\) value of the \(m\)th phase screen (optimized according to Hardie et al. [11]). We will refer to (31) as the phase screen model.
Therefore, the impact of \(M\) on the accuracy of the tilt correlation may be studied analytically, as we have the tilt correlation integral (22) and the phase screen model (31) for this case. We present the results for two somewhat challenging cases in Figure 7 and Figure 8. In these cases, we provide both the angle-of-arrival correlations [1] (via direct scipy integration) for reference along with the case of 200 \(C_{n}^{2}\)-slices using (25). In this case, we observe that if we are interested in a narrow field of propagation (corresponding to a small value on the x-axis of Figure 7 and Figure 8) then we may use a small number of phase screens to model the situation. However, if we are concerned with proper correlations in the more anisoplanatic case, a larger number of phase screens may be required depending on the desired accuracy.
In addition to these more complicated cases, we also use simpler profiles which match those of Hardie et al. [11]. We find there to be a match with their reported empirical results when using their reported values for \(\tilde{C}_{n,m}^{2}\) within the phase screen model (31). We show these examples for two reasons: (1) we use this as supporting evidence that (31) correctly models a properly performed split-step simulation; (2) errors in tilt correlation are not a fault of split-step, but rather a function of either the optimization chosen to select the \(C_{n}^{2}\) parameters or the number of phase screens. With respect to (2), we note this particularly because Hardie et al. [11] state that the mismatch in their reported tilt correlation was to be investigated in future work. Along similar lines as previously, the number of phase screens may be increased in order to match the tilt correlation function more closely.
Figure 6: Theoretical comparison of the expression for our tilt correlation compared to [1]. The results shown is for a path-varying turbulence profile that is high at the aperture \(C_{n}^{2}(z)=2(z/L)\times 10^{-15}\) m\({}^{-2/3}\) and with varying aperture sizes.
Figure 8: [Left] For the Hufnagel-Valley \(C_{n}^{2}\) profiles (shown in the log domain) we can plot the [Right] tilt correlation for each using (1) angle-of-arrival correlations, (2) \(C_{n}^{2}\)-slice correlations, (3) the phase screen model correlations. We note that the phase screen model’s \(C_{n}^{2}\) values are optimized as described in Hardie et al.[11]. We can see that with increasing the number of phase screens, the phase screen model approaches the angle-of-arrival and high \(C_{n}^{2}\)-slice curves.
Figure 7: [Top] For two different \(C_{n}^{2}\) profiles (shown in the log domain) we can plot the [Bottom] tilt correlation for each using (1) angle-of-arrival correlations, (2) \(C_{n}^{2}\)-slice correlations, (3) the phase screen model correlations. We note that the phase screen model’s \(C_{n}^{2}\) values are optimized as described in Hardie et al.[11]. We can see that with increasing the number of phase screens, the phase screen model approaches the angle-of-arrival and high \(C_{n}^{2}\)-slice curves.
### Zernike-based Simulation with \(C_{n}^{2}\)-slices
#### 4.3.1 Limitations and Approximations for Zernike-based Simulation
To begin, we feel it important to discuss the limitations which one should keep in mind when interpreting the Zernike-based results. This variety of simulation is mainly motivated by speed: the ability to sample points using FFT-based random sampling techniques [4, 16, 5] allows for a high degree of speed. The Zernike coefficient realizations, however, cannot directly be sampled with such FFT-based methods. The primary reason for this is that FFT-based sampling techniques require the assumption of wide-sense stationarity (WSS). It may seem that the previous results (i.e., Equation (25)) are indeed WSS. The WSS property requires that the correlation be a function of a difference, which is satisfied for \(i=j\). To see this, one may note that (25) is a function of spatial separation \(\mathbf{x}-\mathbf{x}^{\prime}\). However, when \(i\neq j\), (25) cannot be said to be WSS.
This imposes a serious restriction on Zernike-based simulation. We need _multiple_ coefficients to represent the turbulent phase distortions, of which 36 is the number chosen in some methods [16, 5]. If we are only interested in simulating a handful of points (say, a \(4\times 4\) spatial grid) then this is not too serious of a restriction, as Cholesky decomposition is possible in this case. Cholesky decomposition only requires the covariance matrix to be positive semi-definite, thus there is no issue with a lack of WSS. However, for a grid of spatial points in the object plane of size \(H\times W\), one will require a correlation matrix of size \(36HW\times 36HW\) if using 36 Zernike coefficients [5]. This will often cause us to encounter memory issues, a point we shall return to later.
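The severity of this restriction follows from simple arithmetic on the matrix size quoted above; the snippet below tabulates the float64 storage of the full covariance alone (ignoring factorization workspace and any structure one might exploit) and shows that a 16 GB budget is exceeded between a \(32\times 32\) and a \(64\times 64\) grid.

```python
# Storage for the full (N_z*H*W) x (N_z*H*W) covariance with 36 Zernike coefficients per point.
N_z = 36
for H in (4, 8, 16, 32, 64):
    dim = N_z * H * H
    gigabytes = dim ** 2 * 8 / 1e9      # float64 entries only
    print(f"{H:>2} x {H:<2} grid -> covariance {dim} x {dim}, about {gigabytes:.3g} GB")
```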
At present, the way to circumvent this limitation is to utilize the fact that the covariance structure is near-diagonal [20]. If we may settle for an approximation, FFT-based generation may be allowed. This is a problem tackled in Chimitt et al. [5] in which the approximation takes on the form of a 2-stage FFT-based generation: independent coefficient generation
Figure 10: [Left] A set of phase screen \(r_{0}\) values which model the \(C_{n}^{2}\) profile, \(C_{n}^{2}(z)=2(1-z/L)\) as given by Hardie et al.[11]. [Right] The deviation in evaluation of our correlation integral with the phase screen model matches previously reported results[11].
Figure 9: [Left] A set of phase screen \(r_{0}\) values which model the \(C_{n}^{2}\) profile, \(C_{n}^{2}=0.25\times 10^{-15}\) m\({}^{-2/3}\) as given by Hardie et al.[11]. [Right] The deviation in evaluation of our correlation integral with the phase screen model matches previously reported results[11].
followed by a mixing step. This approach is inspired by the fact that for a single coefficient the random process is WSS. The result is an approximation of the complete correlation structure of the Zernike coefficients. We wish to highlight this for the following reason: in the speed comparisons we shall make, this approximation will be utilized. The loss in accuracy is quantified in Ref. [5], with speed improving by a factor of nearly \(1000\times\) for certain configurations. Depending on the goal of the simulation, this may be an acceptable trade-off for a minimal drop in accuracy.
Finally, we wish to again remind the reader that aside from these sampling issues, the issue of generalization to amplitude effects is unclear. Therefore, split-step should still be regarded as more general with respect to the types of effects that may be incorporated into the methodology.
#### 4.3.2 Accuracy
The accuracy of Zernike-based simulation depends on (1) the validity of the approximation of Chimitt et al. [5] and (2) whether or not the correlation kernels themselves are positive definite. The curves shown in previous figures (such as Figure 5 and Figure 6) will be empirically replicated upon averaging over random draws. The reason for this is that FFT-based generation is utilized: as long as the functions are positive semi-definite, the empirical curves will match their analytic counterparts. We refer the interested reader to Chimitt et al. [5] for more details regarding the approximation and its impact on the accuracy. That being said, for higher order Zernike correlations, one may expect slight deviation from the predicted curves due to the approximation (assuming one is using the described approximation).
Because previous versions of the Zernike-based simulation were shown to correspond to a single \(C_{n}^{2}\)-slice at the halfway point of propagation, we may replace the correlation kernel in these previous simulations with the correlation kernel derived in this paper. The result maintains the high degree of speed while dramatically improving the accuracy.
#### 4.3.3 Speed
Comparing the speed of Zernike-based simulation and split-step leads us to consider what is a fair comparison. For the purposes of comparisons of speed, we consider only the time to actually generate the phase distortions, assuming all phase screens and integrals have been evaluated. In the case of split-step, this translates to the time to perform numerical wave propagation, while for multi-aperture we measure the time to generate the random fields using FFT-based generation and the approximation by Chimitt et al. [5]. The integral (22) may be computed _once_ offline at a very high resolution. This admittedly takes a good amount of computation. However, the result can then be stored and utilized repeatedly. Therefore, the time to compute the integral is unreasonable to include in such a comparison. The loading time for the integrals (and the subsequent evaluation of (25)) could be included; however, this is done only once per configuration. One could still generate an unlimited amount of data (all with different realizations) from one evaluation of (25). That being said, we still include this timing measurement for completeness (though the current operation is sequential and could certainly be sped up through vectorized computation). This leaves the question as to whether or not phase screen generation time for split-step should be included. This would be unfair to a degree, as the same could be said for the phase screens, which could be computed offline and stored. To make the comparison fair in this sense, we only consider the time it takes once everything has been precomputed. That being said, our method offers a benefit over phase screen generation due to the fact that the integral need only be evaluated once; the phase screens must be generated for every single independent run of split-step.
We feel this is the fairest comparison, as the two methods do not have a simple one-to-one correspondence of what they must initially generate. We present the results of this in Figure 11. We have primarily used the Python library 'PyTorch' for both implementations, as we have found in practice its FFT package to be more reliably fast than numpy. Note that we do not include PSF formation or application in this comparison. There are two reasons for this: (1) the methods both effectively generate phase realizations, thus the PSF generation and application would be identical, adding an equal upward shift to both methods; (2) the series of publications which describes the multi-aperture method [4, 16, 5] utilizes engineering tricks to speed up exactly this process of PSF generation and application. The same tricks may not apply to split-step exactly; however, it is reasonable to expect some similar variety of these tricks may benefit split-step in terms of speed. Being that this is not the focus of the present paper, we do not consider them to be important parts of the comparison and thus do not include them.
#### 4.3.4 Resolution and Memory Issues
Zernike-based simulations such as [4, 16] are harshly limited in the number of point sources they may simulate. This is due to the entire set of Zernike correlations being non-WSS as previously described. This large correlation matrix, and its decomposition, enforces the upper limit on the generation of high resolution spatial statistics. With \(N_{z}\) Zernike coefficients and a grid of point sources of size \(H\times W\), the required matrix will be of size \((N_{z}HW\times N_{z}HW)\). With high accuracy in the phase domain being a common requirement, \(N_{z}\) will be large, significantly limiting the size of the spatial grid \(H\times W\).
For the purposes of generating a single Zernike coefficient field, or several independent fields, one may use FFT-based methods. When extending to an image and multiple Zernike coefficients, this limit in sampling must be taken into account. To quantify the trade-off that exists within the current state of Zernike-based simulations for high-resolution images, we present the results in Figure 12, which shows the trade-off in accuracy that may be retained for an \(N\times N\) grid of point sources. Accuracy here is measured relative to the total energy in the first 1275 Zernike coefficients. We then compare the energy of Zernike representations of \(N_{z}\) coefficients to this total energy. The maximum resolution \(N\times N\) is dictated by memory limitations, in our case 16 GB; we set the maximum resolution according to a decomposition and generation that we are capable of performing on the described PC. This suggests Zernike-based simulations must rely on some additional approximation or a sampling methodology which does not require the covariance matrix to be used explicitly or which does not rely on the WSS assumption. In the case of Mao et al. [16], a grid of \(64\times 64\) with 36 Zernike coefficients each is then interpolated to the resolution of the image.
Figure 11: [Left] The time to load the precomputed integrals and form (25) in the case of 36 Zernike polynomials is shown (utilizing approximation in Chimitt et al.[5]). Specifically, this is done for a grid of \(64\times 64\) points (representing the maximum case of the plot on the right.) [Right] The time comparison between our implementation between split-step and Zernike-based simulation for varying point source grids. This implementation of split-step uses 9 non-zero phase screens while the Zernike-based simulation again uses the approximation listed in Chimitt et al.[5] and 36 Zernike coefficients.
Figure 12: The proportion of total energy represented by the maximum allowable Zernike coefficients at the given grid size. The principal limitation here is memory, which is 16 GB in our case.
## 5 Conclusion
In this work, we have presented an alternative derivation for the correlations of Zernike coefficients for anisoplanatic turbulence. This enables previous versions of Zernike-based simulations to achieve a higher degree of accuracy while minimally compromising on speed. The correlations presented in this work have been shown to match the known angle-of-arrival correlations as well as explain some of the behavior in the mismatch of split-step's empirical correlations. Finally, we have outlined a problem facing Zernike-based simulations which restricts them from being directly used with FFT-based sampling.
## Appendix A Definition of Zernike Correlation Functions.
Here we detail the function \(f_{ij}\) which characterizes the correlations of the Zernike polynomials. The form of their equation is rather cumbersome and somewhat difficult to interpret for certain values of Noll indices. The purpose of this discussion is to simplify the resulting equation for the ease of interpretation and further study of these results. Following Takato and Yamaguchi [29], we first define the function
\[I_{a,b,c}(s,k_{0})=\int dx\frac{J_{a}(sx)J_{b}(x)J_{c}(x)}{x(x^{2}+k_{0}^{2})^{11/6}}, \tag{32}\]
with \(J_{k}\) as the \(k\)th order Bessel function of the first kind. With Noll indices \((n_{i},m_{i})\to i\), we then define
\[n^{+} =n_{i}+n_{j}, \tag{33}\] \[n^{-} =n_{i}-n_{j},\] (34) \[m^{+} =m_{i}+m_{j},\] (35) \[m^{-} =m_{i}-m_{j}. \tag{36}\]
We also define an indicator function,
\[h(i,j)=\begin{cases}1&m_{i}\neq 0;m_{j}\neq 0;i+j\text{ even}\\ 2&m_{i}\neq 0;m_{j}\neq 0;i+j\text{ odd}\\ 3&m_{i}=0;j\text{ even}\oplus m_{j}=0;i\text{ even}\\ 4&m_{i}=0;j\text{ odd}\oplus m_{j}=0;i\text{ odd}\\ 5&m_{i}=0;m_{j}=0\end{cases}, \tag{37}\]
with \(\oplus\) denoting the XOR function. First, we will present a form in line with that of [29], though using some of this notation and the appropriate simplifications. For a displacement \(\mathbf{s}=(s,\varphi)\) written in polar form, we can write the expression in [29] as in (38)
\[f_{ij}(\mathbf{s},k_{0})=\begin{cases}\pm(-1)^{(n^{+}-m^{+})/2}\cos(m^{+}\varphi)I_{m^{+},n_{i}+1,n_{j}+1}(2s,2\pi Rk_{0})\\ \quad+(-1)^{(n^{+}+2m_{i}+|m^{-}|)/2}\cos(m^{-}\varphi)I_{|m^{-}|,n_{i}+1,n_{j}+1}(2s,2\pi Rk_{0})&h(i,j)=1\\ (-1)^{(n^{+}-m^{+})/2}\sin(m^{+}\varphi)I_{m^{+},n_{i}+1,n_{j}+1}(2s,2\pi Rk_{0})\\ \quad+(-1)^{(n^{+}+2m_{i}+|m^{-}|)/2}\sin(m^{-}\varphi)I_{|m^{-}|,n_{i}+1,n_{j}+1}(2s,2\pi Rk_{0})&h(i,j)=2\\ (-1)^{(n^{+}-m^{+})/2}\sqrt{2}\cos(m^{+}\varphi)I_{m^{+},n_{i}+1,n_{j}+1}(2s,2\pi Rk_{0})&h(i,j)=3\\ (-1)^{(n^{+}-m^{+})/2}\sqrt{2}\sin(m^{+}\varphi)I_{m^{+},n_{i}+1,n_{j}+1}(2s,2\pi Rk_{0})&h(i,j)=4\\ (-1)^{(n^{+}-m^{+})/2}I_{m^{+},n_{i}+1,n_{j}+1}(2s,2\pi Rk_{0})&h(i,j)=5\end{cases}, \tag{38}\]
with the \(\pm\) corresponding to \(+\) if both \((i,j)\) are even, and \(-\) if they are both odd.
However, our notation highlights further possible simplification. We can therefore write the function as
\[f_{ij}(\mathbf{s},k_{0}) =(-1)^{(n^{+}-m^{+})/2}\Theta^{(1)}(i,j)I_{m^{+},n_{i}+1,n_{j}+1} (2s,2\pi Rk_{0})\] \[+(-1)^{(n^{+}+2m_{i}+|m^{-}|)/2}\Theta^{(2)}(i,j)I_{|m^{-}|,n_{i} +1,n_{j}+1}(2s,2\pi Rk_{0}), \tag{39}\]
with functions
\[\Theta^{(1)}(i,j)=\begin{cases}(-1)^{j}\cos(m^{+}\varphi)&h(i,j)=1\\ \sin(m^{+}\varphi)&h(i,j)=2\\ \sqrt{2}\cos(m^{+}\varphi)&h(i,j)=3\\ \sqrt{2}\sin(m^{+}\varphi)&h(i,j)=4\\ 1&h(i,j)=5\end{cases} \tag{40}\]
and,
\[\Theta^{(2)}(i,j)=\begin{cases}\cos(m^{-}\varphi)&h(i,j)=1\\ \sin(m^{-}\varphi)&h(i,j)=2\\ 0&h(i,j)=3\\ 0&h(i,j)=4\\ 0&h(i,j)=5\end{cases}, \tag{41}\]
contributing the angular terms.
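As a numerical sketch of these expressions, the code below evaluates (32) with a finite upper limit and specializes (39)-(41) to the tilt pairs \((i,j)=(2,2)\) and \((3,3)\); if the indicator and sign rules have been applied correctly, \(f_{22}+f_{33}=2I_{0,2,2}(2s)\), so the total tilt correlation in (8) depends only on the separation magnitude. The integration cutoff and quadrature settings are illustrative choices for this oscillatory, slowly decaying integrand.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def I_abc(a, b, c, s, kappa0, xmax=400.0):
    """Numerical sketch of (32) with a finite upper limit; kappa0 plays the role of 2*pi*R*k_0."""
    integrand = lambda x: jv(a, s * x) * jv(b, x) * jv(c, x) / (
        x * (x ** 2 + kappa0 ** 2) ** (11.0 / 6.0))
    value, _ = quad(integrand, 1e-8, xmax, limit=2000)
    return value

def f_tilt(s, phi, kappa0):
    """(39)-(41) specialized to (i, j) = (2, 2) and (3, 3), i.e. n_i = n_j = 1, m_i = m_j = 1."""
    i0 = I_abc(0, 2, 2, 2 * s, kappa0)
    i2 = I_abc(2, 2, 2, 2 * s, kappa0)
    f22 = i0 + np.cos(2 * phi) * i2        # h = 1 branch with the "+" sign (both indices even)
    f33 = i0 - np.cos(2 * phi) * i2        # h = 1 branch with the "-" sign (both indices odd)
    return f22, f33

f22, f33 = f_tilt(s=0.5, phi=0.3, kappa0=1e-3)   # f22 + f33 is independent of phi
```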
#### Funding
The research is based upon work supported in part by the Intelligence Advanced Research Projects Activity (IARPA) under Contract No. 2022-21102100004, and in part by the National Science Foundation under the grants CCSS-2030570 and IIS-2133032. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
|
2305.06305 | Self-Supervised Instance Segmentation by Grasping | Instance segmentation is a fundamental skill for many robotic applications.
We propose a self-supervised method that uses grasp interactions to collect
segmentation supervision for an instance segmentation model. When a robot
grasps an item, the mask of that grasped item can be inferred from the images
of the scene before and after the grasp. Leveraging this insight, we learn a
grasp segmentation model to segment the grasped object from before and after
grasp images. Such a model can segment grasped objects from thousands of grasp
interactions without costly human annotation. Using the segmented grasped
objects, we can "cut" objects from their original scenes and "paste" them into
new scenes to generate instance supervision. We show that our grasp
segmentation model provides a 5x error reduction when segmenting grasped
objects compared with traditional image subtraction approaches. Combined with
our "cut-and-paste" generation method, instance segmentation models trained
with our method achieve better performance than a model trained with 10x the
amount of labeled data. On a real robotic grasping system, our instance
segmentation model reduces the rate of grasp errors by over 3x compared to an
image subtraction baseline. | YuXuan Liu, Xi Chen, Pieter Abbeel | 2023-05-10T16:51:36Z | http://arxiv.org/abs/2305.06305v1 | # Self-Supervised Instance Segmentation by Grasping
###### Abstract
Instance segmentation is a fundamental skill for many robotic applications. We propose a self-supervised method that uses grasp interactions to collect segmentation supervision for an instance segmentation model. When a robot grasps an item, the mask of that grasped item can be inferred from the images of the scene before and after the grasp. Leveraging this insight, we learn a grasp segmentation model to segment the grasped object from before and after grasp images. Such a model can segment grasped objects from thousands of grasp interactions without costly human annotation. Using the segmented grasped objects, we can "cut" objects from their original scenes and "paste" them into new scenes to generate instance supervision. We show that our grasp segmentation model provides a 5x error reduction when segmenting grasped objects compared with traditional image subtraction approaches. Combined with our "cut-and-paste" generation method, instance segmentation models trained with our method achieve better performance than a model trained with 10x the amount of labeled data. On a real robotic grasping system, our instance segmentation model reduces the rate of grasp errors by over 3x compared to an image subtraction baseline.
## I Introduction
Instance segmentation is often the basis of many robotic applications, including object grasping, manipulation, and placement. Given an image, the goal of instance segmentation is to predict the set of pixels that belong to each object. After objects are detected, a robot can use segmentation masks to execute more generalizable grasping and manipulation policies. A robust robotic application must be able to recognize thousands of new objects on an ongoing basis.
Many of the recent advances in instance segmentation [1, 2, 3, 4] assume that large-scale labeled datasets of objects from known classes are available [5, 6]. This assumption, however, does not hold for many robotics applications that must handle a constant stream of new objects. Collecting and annotating such datasets is also costly and time-consuming. How can a robot learn to segment a diverse range of objects with only limited labeled data?
An object can be defined as a contiguous group of pixels that move together [7]. When the robot successfully grasps an object, the pixels of that object are removed from the scene. The mask of the grasped object can then be inferred from the grasp location, before image, and after image of the scene. Leveraging this insight, we propose a grasp segmentation model that predicts the mask of the grasped object. A grasp segmentation model differs from an instance segmentation model in that it only needs to predict one object (the grasped one), and has additional information (before/after images and the grasp location) that enables it to generalize better even when trained on a small dataset.
Traditional methods of segmenting grasp interactions often use image and background subtraction which are not robust to occlusions, reflections, and other objects moving [8]. We find that our learned model overcomes these limitations and can robustly segment grasped objects, even when other objects in the scene have shifted during grasping. In our experiments, we show that our grasp segmentation model is significantly more accurate at segmenting grasped objects than traditional image subtraction approaches.
Once we have a grasp segmentation model, we can run this on thousands of unlabeled grasps to get self-supervised object masks. To reduce noise, we use a suction gauge to keep only successful grasps and propose an uncertainty-aware filtering method to keep high-accuracy masks. With these filtered object masks, we can then "cut" objects from their original scenes and "paste" them into new scenes with random augmentations to generate instance supervision [9, 10]. We combine this generation scheme with inpainting to generate a diverse set of photo-realistic scenes with infinite variation of scale, rotation, and occlusion. This allows our self-supervised robotic system to continually learn to segment new objects, and improve on known objects, without human annotation. Figure 1 shows an overview of our self-supervised instance segmentation method.
Fig. 1: Overview of our method: the grasp segmentation model takes before and after images of the grasp to predict the mask of the grasped object. We use the grasp segmentation model to get object masks for thousands of grasps in a self-supervised manner. These grasped objects can be “cut” and “pasted” to generate diverse training supervision for the instance segmentation model.
The key contributions of this paper are as follows:
1. We propose a self-supervised robotic grasping system that can continually learn and improve its instance segmentation, on new and known objects, without human annotation.
2. Our novel grasp segmentation model uses before and after grasp images to segment grasped objects with 5x less error than traditional approaches, while being robust to occlusions, reflections, and other object moving in the scene.
3. We introduce a "cut-and-paste" and inpaint method to generate supervision for instance segmentation models that outperforms the same model trained with 10x the amount of labeled data.
4. On a robotic grasping task, we show that models trained with our method can reduce the rate of grasping failures by over 3x compared to an image subtraction baseline.
## II Related work
### _Instance Segmentation_
There have been significant recent advances in the field of instance segmentation, with many approaches focusing on supervised learning on large datasets such as COCO [5]. Detect-then-segment is among the earliest learned approaches, using a two-stage object detection and segmentation architecture [2]. More recently, single-stage models that use transformer attention mechanisms, such as Mask2Former, have demonstrated strong performance [11, 12, 13, 1]. However, all of these methods rely on the availability of large-scale annotated datasets, which may be costly to obtain for robotics applications that must handle a constant stream of new objects.
### _Self-Supervised Segmentation_
To address this issue, there have been several approaches that propose self-supervised or semi-supervised methods for instance segmentation. Cut and paste methods have been shown to improve instance segmentation performance [9, 10]. Prior work [8, 14] proposed self-supervised approaches that use image subtraction on before and after grasps to provide instance segmentation supervision. However, masks recovered from naive image subtraction are imperfect, resulting in worse performance at higher IOU thresholds, which is insufficient for high-performing robotic applications. Optical flow methods can also infer contiguous groups of pixels that move together as the robot is pushing them around [15, 7, 16]. This may be an impractical approach if continuous high-bandwidth video is not available from the robot camera, or if the robot is expected to be grasping objects instead of pushing them in a production environment.
### _Moving Object Detection_
Another class of methods can detect moving objects in video sequences, such as traffic and surveillance footage. These methods can use a background subtraction approach by modeling a static background scene with a mixture of Gaussians [17, 18]. Pixels that are noticeably different from the background model, as determined at the pixel level or with local features [19], are segmented as moving objects. More recently, convolutional neural networks have been applied to learn this background subtraction with 2D and 3D convolutions [20, 21]. Moving object detection models, however, are not directly applicable for grasp segmentation since they segment all objects that have moved instead of the one that was grasped. If the robot moves an adjacent object to the grasped object, both objects would be segmented as one under moving object detection, providing incorrect instance segmentation supervision. Moreover, methods that learn a background model will have limited data since the background scene changes with every grasp.
### _Representation Learning_
Other approaches have proposed learning object _representations_ for robotics instead of instance segmentation directly. In Grasp2Vec [22], representations of the object and scene are learned to satisfy arithmetic consistency. These representations can then be used to learn policies that manipulate and grasp objects. Similarly, pixel-wise descriptors can be learned for each object using a contrastive loss [23]. While learned object representations can be used for robot manipulation, they have yet to be proven effective for learning instance segmentation. The clear semantics of instance segmentation may be desirable for some robotic applications such as counting the number of objects, or ensuring that grasps only occur on a single object.
## III Instance Segmentation by Grasping
Our method uses grasp interactions to collect segmentation data for objects in a self-supervised manner. We propose a grasp segmentation model that can robustly segment grasped objects from before and after grasp images. We show how our model can predict masks that are robust to occlusions, reflections, and other objects in the scene changing. Then, by combining our grasp segmentation model with our uncertainty-aware filtering method, we can collect a dataset of grasped object masks from unlabeled grasp images.
### _Grasp Segmentation Model_
Our grasp segmentation model is inspired by prior work that uses grasp interactions to infer object masks and representations based on sets of pixels that move together [8, 22]. Given a before grasp image \(i_{b}\), an after grasp image \(i_{a}\), and a grasp mask \(g\), our model predicts the visible mask of the grasped object \(m_{v}\). This is similar to image subtraction which subtracts the before and after images \(|i_{b}-i_{a}|>t\) and
uses a threshold \(t\) to determine the object mask. However, image subtraction is a very brittle segmentation method with very few tunable parameters, which limits its robustness to occlusions, reflections, and other moving objects.
Our model, on the other hand, uses a neural network to predict both the visible mask \(m_{v}\) and the amodal mask \(m_{a}\) (including occlusions) [24] of the grasped object. By explicitly reasoning about occlusions, we can filter out objects that are only partially visible and would otherwise provide incorrect supervision for downstream instance segmentation. A trained model will also learn to ignore inputs, such as reflections and other moving objects, that are not relevant to segmenting the grasped object.
We base the grasp segmentation model architecture on a Resnet-50 [25] with a Feature Pyramid Network (FPN) [26], which has been shown to be effective at segmenting objects at multiple scales. To initialize the model with features that are amenable for object detection, we load weights from a Mask-RCNN [2] pre-trained on COCO [5]. We include image subtraction \(i_{b}-i_{a}\) as an input to the model since it provides a good inductive bias for which pixels have changed. Altogether, we concatenate the inputs \([i_{b},i_{a},i_{b}-i_{a},g]\) along the channel dimension before passing them into the Resnet-50 backbone. Since the input dimensions are different than the usual RGB inputs for Resnet, we randomly initialize the first layer of the Resnet-50 while initializing all other weights from the pre-trained Mask-RCNN.
To make predictions at the original input resolution, we take the highest resolution, stride-4 feature layer from the FPN, and apply a series of convolutions and transposed convolutions to upsample the features. We use two blocks of gated residual convolutions followed by a transposed convolution for upsampling. Similar to regular residual blocks \(y=x+C_{1}(x)\), gated residual blocks use an additional learned gating operation \(y=x+\sigma(C_{2}(x))C_{1}(x)\) where \(\sigma\) is the sigmoid function and \(C\) are learned convolution operators, like in LSTMs [27]. For our application, this enables the model to easily ignore irrelevant features such as using the reflective object features to mask out the predicted object mask. Finally, we make a 2-channel prediction \(\hat{m}_{v}\), \(\hat{m}_{a}\) corresponding to the visible and amodal object masks respectively. Figure 2 illustrates our model architecture overview along with sample inputs and predictions on how our model can be more robust than image subtraction.
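The gated residual block and upsampling head described above admit a compact PyTorch sketch. The channel width, kernel sizes, and layer counts below are placeholders rather than the exact configuration of our model; only the structure \(y=x+\sigma(C_{2}(x))C_{1}(x)\) and the stride-4-to-full-resolution upsampling follow the text.

```python
import torch
import torch.nn as nn

class GatedResidualBlock(nn.Module):
    """y = x + sigmoid(C2(x)) * C1(x), as described above."""
    def __init__(self, channels):
        super().__init__()
        self.c1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.c2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        return x + torch.sigmoid(self.c2(x)) * self.c1(x)

class UpsampleHead(nn.Module):
    """Two gated residual blocks, each followed by a transposed convolution, mapping
    stride-4 FPN features to a 2-channel (visible, amodal) mask at input resolution."""
    def __init__(self, in_channels=256):
        super().__init__()
        self.block = nn.Sequential(
            GatedResidualBlock(in_channels),
            nn.ConvTranspose2d(in_channels, in_channels, kernel_size=2, stride=2),
            GatedResidualBlock(in_channels),
            nn.ConvTranspose2d(in_channels, in_channels, kernel_size=2, stride=2),
            nn.Conv2d(in_channels, 2, kernel_size=1),
        )

    def forward(self, fpn_features):
        return torch.sigmoid(self.block(fpn_features))  # visible and amodal mask probabilities
```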
We supervise all predictions using a binary cross entropy loss with weighting \(w\)
\[CE(\hat{m},m,w)=-\frac{1}{n}\sum_{i=1}^{n}w_{i}(m_{i}\log\hat{m}_{i}+(1-m_{i})\log(1-\hat{m}_{i})) \tag{1}\]
Due to the imbalanced nature of the classification task, we use multiple weightings to ensure the neural network gives the appropriate attention to the relevant pixels. The first two weights, \(w^{(1)}=m,w^{(2)}=\neg m\), provide a balanced weighting. Since it is important for the pixels near the mask boundary to be accurate, we use \(w^{(3)\ldots(6)}=\mathrm{maxpool}(m,k)\) with kernel sizes \(k=(11,51,101,201)\). This enlarges the region around the object and focuses the model's loss on predicting an accurate object boundary.
Fig. 2: Our grasp segmentation model takes before, after, grasp, and subtraction images as input to predict the mask of the grasped object. We build upon a Resnet-50 backbone with a Feature Pyramid Network (FPN) and two additional learned upsampling convolution blocks to predict masks at the image resolution. Traditional image subtraction methods (on the right) fail to properly handle reflections and other objects moving in the scene. Our learned approach can correctly segment the grasped object and express uncertainty via the entropy of the prediction.
We train with the total loss
\[L(\hat{m},m)=\sum_{j=1}^{6}CE(\hat{m},m,w^{(j)}) \tag{2}\]
using Adam with a learning rate of 5e-6 and a batch size of 8 until convergence. We train the grasp segmentation model using supervised learning on a small dataset of 100-200 labeled grasp images. To improve generalization, we use augmentations during training including random crops, resizes, blurs, and color adjustments. Even with such a small training set, the model can generalize to provide accurate segmentation masks for thousands of grasps on new objects.
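A minimal PyTorch sketch of the loss in Eqs. (1)-(2) is shown below, assuming masks and predictions are \((B,1,H,W)\) tensors with values in \([0,1]\); the helper names are illustrative.

```python
import torch
import torch.nn.functional as F

def weighted_bce(pred, mask, weight, eps=1e-6):
    # Eq. (1): weighted binary cross entropy, averaged over all pixels
    ce = -(mask * torch.log(pred + eps) + (1 - mask) * torch.log(1 - pred + eps))
    return (weight * ce).mean()

def grasp_seg_loss(pred, mask):
    weights = [mask, 1 - mask]                      # w^(1), w^(2): balanced weighting
    for k in (11, 51, 101, 201):                    # w^(3)...w^(6): boundary emphasis
        weights.append(F.max_pool2d(mask, kernel_size=k, stride=1, padding=k // 2))
    return sum(weighted_bce(pred, mask, w) for w in weights)  # Eq. (2)
```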
### _Supervising Instance Segmentation_
Once our grasp segmentation model is trained, how can we learn an instance segmentation model? We propose a 4-step approach: first, we use the grasp segmentation model to collect and filter object masks across a large variety of grasped objects. Then, we "cut-and-paste" object masks using augmentations to generate cluttered scenes of objects. We use inpainting on object boundaries to reduce pasting artifacts, and use the inpainted images to train a state-of-the-art instance segmentation model. Figure 3 illustrates an overview of our approach.
#### Iii-B1 Collecting Accurate Object Masks
First, we use the grasp segmentation model on unlabeled grasp image pairs to predict the mask of the grasped object. To collect a reliable set of unoccluded object crops, we apply several filters to the prediction. We compute the sums of the visible and amodal masks, \(S_{v}=\sum\hat{m}_{v},S_{a}=\sum\hat{m}_{a}\), and compare the ratio \(\frac{S_{v}}{S_{a}}>t_{occ}\) against a threshold. We use a threshold \(t_{occ}=0.95\) to filter out objects the model predicts as not fully visible.
The grasp segmentation model could also predict masks that are wrong or uncertain. To filter out uncertain masks, we can use entropy thresholding where
\[\mathcal{H}(\hat{m})=-\hat{m}\log\hat{m}-(1-\hat{m})\log(1-\hat{m}) \tag{3}\]
is the binary entropy. We can threshold average relative entropy of the visible mask prediction \(\frac{\sum\mathcal{H}(\hat{m}_{v})}{\sum\hat{m}_{v}}<t_{ent}\) to filter out uncertain predictions. We found \(t_{ent}=0.1\) to provide a reasonable trade-off between precision and recall.
Finally, predicting discontinuous masks can indicate that more than one object was detected or the model is predicting spurious blobs. To avoid this kind of prediction error, we use OpenCV's findContours function to find contiguous groups of contours. Then we count the number of contours with at least 1000 pixels and only keep objects with 1-2 contours.
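The three filters described above can be sketched as follows; the thresholds (\(t_{occ}=0.95\), \(t_{ent}=0.1\), 1000-pixel contours) come from the text, while the array conventions and the use of contour area as a proxy for pixel count are assumptions.

```python
import numpy as np
import cv2

def keep_mask(m_v, m_a, t_occ=0.95, t_ent=0.1, min_area=1000, eps=1e-6):
    """m_v, m_a: predicted visible/amodal mask probabilities as float arrays in [0, 1]."""
    # 1) occlusion filter: visible-to-amodal area ratio must exceed t_occ
    if m_v.sum() / max(m_a.sum(), eps) <= t_occ:
        return False
    # 2) uncertainty filter: average relative entropy of the visible mask prediction
    entropy = -(m_v * np.log(m_v + eps) + (1 - m_v) * np.log(1 - m_v + eps))
    if entropy.sum() / max(m_v.sum(), eps) >= t_ent:
        return False
    # 3) contour filter: keep masks with 1-2 sufficiently large contiguous regions
    binary = (m_v > 0.5).astype(np.uint8)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    n_large = sum(cv2.contourArea(c) >= min_area for c in contours)
    return 1 <= n_large <= 2
```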
#### Iii-B2 Image Generation Augmentation
Once we have filtered out uncertain predictions and obtained a set of accurate object masks, we can use these masks to generate supervision for instance segmentation. We use object masks to "cut" images of the objects and "paste" them onto annotated images to generate instance supervision [9, 10].
To train the model to be robust to a diverse set of objects, we randomly select 0-25 objects to be pasted onto a random training image. We apply different geometric transformations to the objects, such as randomly rotating up to 360 degrees, scaling between 0.75x-1.25x, and randomly selecting a position. This helps the model learn to better handle variations in object orientations, positions, and occlusions. With a small amount of labeled data, we can thus augment with self-supervised grasp segmentation to generate a large dataset of instance segmentation supervision.
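A sketch of this cut-and-paste generation step is given below; the object count, rotation, and scale ranges follow the text, while the PIL-based helper and mask bookkeeping are illustrative assumptions.

```python
import random
import numpy as np
from PIL import Image

def paste_objects(scene, object_crops, max_objects=25):
    """scene: PIL RGB image. object_crops: list of (rgb, mask) PIL image pairs cut out with
    the grasp segmentation model; mask is mode 'L' with values in {0, 255}."""
    pasted_masks = []
    k = random.randint(0, min(max_objects, len(object_crops)))
    for rgb, mask in random.sample(object_crops, k):
        angle = random.uniform(0.0, 360.0)
        scale = random.uniform(0.75, 1.25)
        size = (max(1, int(rgb.width * scale)), max(1, int(rgb.height * scale)))
        rgb = rgb.resize(size).rotate(angle, expand=True)
        mask = mask.resize(size).rotate(angle, expand=True)
        x = random.randint(0, max(scene.width - rgb.width, 0))
        y = random.randint(0, max(scene.height - rgb.height, 0))
        scene.paste(rgb, (x, y), mask)
        full = Image.new("L", scene.size, 0)
        full.paste(mask, (x, y))
        new = np.array(full) > 0
        # earlier instances are occluded wherever the newly pasted object covers them
        pasted_masks = [m & ~new for m in pasted_masks] + [new]
    return scene, pasted_masks
```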
#### Iii-B3 In-Painting With Diffusion
While naive cut-and-paste can generate diverse supervision, the augmented images may contain artifacts such as wrong object boundaries and unrealistic shadows. These artifacts can lead a model to learn features that are not present in natural images. To bring the generated images closer to the natural image space, we use a pre-trained Stable Diffusion in-painting model that is trained on a large-scale image dataset [28].
Fig. 3: By applying the grasp segmentation model to thousands of grasps, we can collect a large dataset of grasp segmentation objects in a self-supervised manner. Then we can generate instance segmentation supervision by taking a random subset of segmented grasped objects and “pasting” them onto any training image. We apply augmentations such as rotation, scale, and offset to generate cluttered scenes of objects. Then, we use a pre-trained diffusion inpaint model to smooth out the pasted object boundaries to be more photorealistic. The resulting inpainted image and pasted masks can be used to train any off-the-shelf instance segmentation model.
First, we take the visible boundary of the objects that are pasted and dilate the boundary by 5 pixels on all sides to cover the region on the object boundary. We use this mask as the in-paint mask input to a pre-trained Stable Diffusion in-painting model and run the denoising process for 4 steps on the masked region. This provides a more photo-realistic boundary between the background and the object.
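The boundary in-painting step can be sketched as follows. The 5-pixel dilation and the 4 denoising steps follow the text; the checkpoint name, the empty prompt, and the image sizing behaviour of the pipeline are assumptions of this sketch.

```python
import cv2
import numpy as np
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# any Stable Diffusion in-painting checkpoint works; this identifier is illustrative
pipe = StableDiffusionInpaintPipeline.from_pretrained("runwayml/stable-diffusion-inpainting")

def inpaint_boundaries(image, pasted_masks):
    """image: PIL RGB image of the pasted scene; pasted_masks: list of (H, W) bool arrays."""
    boundary = np.zeros(image.size[::-1], np.uint8)
    for m in pasted_masks:
        m = m.astype(np.uint8)
        edge = m - cv2.erode(m, np.ones((3, 3), np.uint8))          # visible boundary
        boundary |= cv2.dilate(edge, np.ones((11, 11), np.uint8))   # roughly 5 px on all sides
    return pipe(
        prompt="",                          # unconditional in-painting
        image=image,
        mask_image=Image.fromarray(boundary * 255),
        num_inference_steps=4,              # 4 denoising steps, as in the text
    ).images[0]
```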
#### Iii-B4 Instance Segmentation Model
Since our method directly generates instance segmentation supervision, it's compatible with any off-the-shelf instance segmentation model. This enables greater flexibility of our method since we can take advantage of any past and future advances in model architecture. We chose a state-of-the-art model, Mask2Former [1], as the main instance segmentation model for all of our experiments. Mask2Former uses a multi-scale, masked-attention transformer decoder with learned queries and a Hungarian matching loss on the masks. We use the official public implementation of Mask2Former with a Resnet-50 as the backbone and default parameters (100 queries).
## IV Evaluating Grasp Segmentation
First, we'll evaluate the performance of our grasp segmentation model compared to traditional methods such as image subtraction. Since grasp segmentation provides object masks for instance segmentation supervision, the accuracy of the grasp segmentation will influence the performance of downstream instance segmentation.
### _Grasp Data Collection_
For our grasping experiments, we use an ABB-1200 robot with a 5 suction cup end-effector as shown in Figure 1. The robot uses RGB-D cameras to plan pick and place motions that cycle the objects between two bins. We record the image (640x960 pixels) before the grasp \(i_{b}\), the image after the grasp \(i_{a}\), and the grasp mask \(g\) corresponding to the pixels of the active suction cups. After grasping, we use suction gauges of each cup to determine whether the object was indeed picked and whether the grasp was successful. While this data collection method requires an instance segmentation model with decent performance to generate grasps, we use the suction gauge to keep only successful grasps in the dataset which can bootstrap even a poorly performing model.
We collect 110k grasp image pairs across a variety of training and test objects. We label 1k images from the training set and 6k images from the test objects with instance segmentation labels. This enables us to train and evaluate both grasp and instance segmentation. The test objects and the training objects are distinct and have no overlap.
### _Background and Image Subtraction_
In order to evaluate the performance of our grasp segmentation model, we will compare it to traditional methods such as image and background subtraction [8, 17, 18, 19]. Image subtraction is a simple method that subtracts the before-grasp image from the after-grasp image to obtain the grasp mask. This method assumes that changed pixels correspond to the grasped object. To make the image subtraction more robust, we apply greyscale and Gaussian blur before the subtraction, \(G(I)=GaussianBlur(Greyscale(I))\). Then we use the thresholded difference as the segmentation mask
\[\hat{m}=|G(I_{a})-G(I_{b})|>t\]
We also consider background subtraction methods such as mixture of gaussians MOG [17], MOG2 [18] and Local SVD binary pattern (LSBP) [19]. MOG models the background with a mixture of gaussians while LSBP uses local image features to detect changes. With all image and background subtraction methods, we can apply the same OpenCV contour approximation from Section III-B1 as a filter.
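For reference, these baselines reduce to a few lines of OpenCV; the blur kernel and threshold below are illustrative, and the MOG and LSBP constructors assume the opencv-contrib package.

```python
import cv2

def image_subtraction_mask(before, after, blur=21, t=30):
    # greyscale + Gaussian blur, then thresholded absolute difference
    g = lambda im: cv2.GaussianBlur(cv2.cvtColor(im, cv2.COLOR_BGR2GRAY), (blur, blur), 0)
    return (cv2.absdiff(g(after), g(before)) > t).astype("uint8")

# background subtraction baselines (MOG and LSBP require opencv-contrib-python)
mog2 = cv2.createBackgroundSubtractorMOG2()
mog = cv2.bgsegm.createBackgroundSubtractorMOG()
lsbp = cv2.bgsegm.createBackgroundSubtractorLSBP()

def background_subtraction_mask(subtractor, before, after):
    subtractor.apply(before)          # model the pre-grasp scene as background
    return subtractor.apply(after)    # changed pixels are flagged as foreground
```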
### _Evaluation Results_
We train the Grasp Segmentation model on subsets of 100 and 200 grasps from the training set, until convergence as described in Section III-A. Then we evaluate on the test set of held-out object grasps, using mean intersection over union (mIOU) as the evaluation metric. We also report the relative error rate, which is the number of incorrectly predicted pixels divided by the number of pixels in the ground truth mask. For methods that use the filtering described in Section III-B1, we calculate the recall, which is the percentage of mask predictions not filtered out. When using filtering, the mIOU and error rates are calculated only on the kept data after filtering. If a method makes no pixel predictions for a grasp, this prediction will be filtered as well.
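The per-grasp metrics can be computed as sketched below; mIOU is then the mean IOU over the test set, and the array conventions are assumptions.

```python
import numpy as np

def mask_metrics(pred, gt):
    """pred, gt: (H, W) boolean masks for one grasp."""
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = intersection / max(union, 1)
    # relative error: incorrectly predicted pixels divided by ground-truth pixels
    error = np.logical_xor(pred, gt).sum() / max(gt.sum(), 1)
    return iou, error
```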
#### Iv-C1 How Do Our Methods Compare With Baselines?
The results of our evaluation are summarized in Table I. All grasp segmentation models perform better than image and background subtraction baselines (Subtract, MOG, MOG2, LSBP). As shown in Figure 2, these traditional approaches are unable to account for other objects moving in the scene and will confuse pixels that change with similar intensity.
#### Iv-C2 How Does Performance Vary With Training Set Size?
We can see that the grasp segmentation model with filtering trained on 200 grasps, Grasp-200-Filter, performs the best. Training with 100 grasps and filtering (Grasp-100-Filter) achieves slightly worse mIOU and error rate, and less than half the recall, compared to Grasp-200-Filter. This suggests that increasing the amount of training data improves both the accuracy and the number of scenes the model can confidently segment.
#### Iv-C3 What's The Impact Of Filtering And Augmentations?
Filtering improves the mIOU and error rates while decreasing recall for all methods. We also see that not using data augmentation, Grasp-100-NoAug-Filter, achieves lower accuracy metrics but higher recall. This suggests that fewer objects are filtered out and the model is confidently wrong since it has overfit to the small training set. Data augmentation is important to training a robust grasp segmentation model.
## V Evaluating Instance Segmentation
To evaluate instance segmentation performance, we use the Mask2Former model with R50 backbone and default hyperparameters as described in Section III-B4. We train and evaluate on the dataset from Section IV-A and use early stopping to prevent overfitting. We use 100 labeled training images for most evaluations, while benchmarking on 1000 labeled training images as a reference. We evaluate instance segmentation models using standard metrics, such as overall Average Precision (AP), IOU thresholded AP ([email protected], [email protected]), and object size breakdowns (AP\({}^{L}\), AP\({}^{M}\)).
For Paste methods, we take the corresponding model trained in Section IV-C to generate object crops. We apply the same inference and filtering for each method on the 110k unlabeled grasp dataset. Using the filtered object crops, we then paste them randomly onto the training set following the procedure described in Section III-B2. As an ablation, we also paste from training image crops to evaluate our augmentation scheme without grasp data (Paste-Train).
### _Baseline Methods_
#### V-A1 Single Object Supervision
One intuitive way to use grasp segmentation data for instance segmentation is to supervise only on the grasped object. With the Mask2Former loss, this can be done by using only the predicted grasp segmentation in the Hungarian matching and ignoring the "no object" loss term. We combine single-object supervision from the Grasp-100 model with full supervision from the training set at a 50:50 ratio (Single-Object-100).
#### V-A2 Robust Set Loss
To overcome errors in noisy masks generated by image subtraction, prior work [8] proposed a robust set loss, which requires the predicted mask to be within only a margin of the ground truth mask. Given a predicted mask, the robust set loss will use a discrete optimization to find the closest target mask that is within some IOU margin with the ground truth. We use the publicly available implementation of robust set loss with IOU threshold 0.7 and replace the Mask2Former mask loss with the robust set loss.
### _Evaluation Results_
#### V-B1 How Do Baseline Methods Compare?
Table II summarizes the results of our evaluation. Training with single object supervision, Single-Object-100, actually results in worse performance than just training on the full supervision dataset (Mask2Former-100). We suspect that there isn't enough negative supervision from the single-object data since there are only labels for a single object in each image. We find that pasting with objects from filtered image subtraction (Paste-Subtract), provides a performance improvement over just supervised learning (Mask2Former-100). Adding the robust set loss (Paste-Subtract-Robust) further improves the segmentation performance which is consistent with [8]. Pasting from the labeled training objects, Paste-Train, outperforms Paste-Subtract-Robust on AP. This suggests that the accuracy of cropped objects in the training set outweighs the diversity of the less correct crops from image subtraction. Moreover, pasting provides a strong augmentation for learning rotational, scale, and occlusion invariant instance segmentation.
#### V-B2 Does Robust Set Loss Always Improve Performance?
Using the robust set loss with object masks from the more accurate, grasp segmentation model (Paste-Grasp-Robust) actually hurts performance, unlike in the image subtraction case. This suggests that the robust set loss is beneficial only when there is a significant amount of error in the grasp segmentation, such as with image subtraction's 44.5% error. With the 9.05% error rates of our grasp segmentation model, supervising with a normal cross entropy loss is better.
#### V-B3 What's The Impact Of Filtering?
Using object masks from our learned Grasp Segmentation model (Paste-Grasp) can further improve instance segmentation performance. Paste-Grasp and Paste-Grasp-Filter have similar results
which suggests there is some trade-off between the error of the object mask and how many objects are filtered out. Recall from Table I that Grasp-100 achieves 22.6% error with 99.8% recall, while Grasp-100-Filter has 9.05% error with 29.4% recall. The non-filtered model (Paste-Grasp) performs slightly better, which suggests that the diversity of the objects seen outweighs the accuracy of the object segmentation here.
#### V-B4 How Does Inpainting Affect Performance?
Finally, we see that using inpainting with the grasp segmentation model (Paste-Grasp-Inpaint), achieves the best performance, outperforming a Mask2Former model trained on 10x the amount of labeled images on all but one metric. This suggests that inpainting the pasted objects can produce more realistic supervision for learning instance segmentation features.
## VI Robot Grasping Evaluation
### _Experimental Setup_
To evaluate the effectiveness of our method in a real robotic application, we use our trained instance segmentation models to detect objects for grasping. Using the same grasping setup from Section IV-A and the models trained in Section V-B, we compare grasping success with different instance segmentation models. A grasp attempt is counted as a success if it picks up exactly one object and places it into the other bin without dropping. If the predicted mask is not an actual object, or contains multiple objects, this would be a grasp failure. The grasping system will try to land as many cups as possible on a detected object mask and plan a collision-free path to reach that object. By keeping all other parts of the grasping system constant and only changing the segmentation model, we can isolate the effects of the segmentation model on grasp performance.
### _Evaluation Results_
We compare the top segmentation models from each method: supervised learning (Mask2Former-1000), robust image subtraction (Paste-Subtract-Robust), and our grasp segmentation model with inpainting (Paste-Grasp-Inpaint). We perform 700 grasps on the same object set for each segmentation model. The results of our robotic grasping evaluation are shown in Table III.
Overall we found the grasping evaluation to be consistent with the instance segmentation evaluation in Section V-B. Paste-Subtract-Robust performs the worst with most of its grasp failures due to predicting masks that don't belong to any object, such as reflections in the wall or empty areas of the bin. Our Paste-Grasp-Inpaint model achieves the lowest grasp error rate that is over 3x better than the image subtraction baseline and comparable to a model trained with 10x the amount of labeled data, Mask2Former-1000.
### _Failure Analysis_
Figure 4 highlights three failure categories caused by incorrect instance segmentation that we will discuss here.
Fig. 4: Visualizations of different types of grasp failures from real robot execution caused by wrong segmentation. The first column shows the executed grasp with active cups colored in blue and inactive cups colored in red. Columns 2-4 show the predicted segmentation from each of the 3 models used in the evaluation. The first row shows an example of a grasp on a non-object that was incorrectly predicted by the segmentation model. The second row is a grasp on an unstable part of the object that leads to a drop; the segmentation model splits the object into two masks, causing an unstable grasp to be executed. The third row is a grasp on two objects; this is caused by the segmentation model grouping two objects into one mask.
#### Vi-B1 Grasp On Non-Objects
In the first row, the robot grasps on the reflection in the bottom left corner of the bin, which fails since it's not on an object. Grasping on non-objects such as reflections and empty areas of the bin is a common failure of the Paste-Subtract-Robust model. We suspect image subtraction may have incorrectly segmented the reflection or bin as an object, which then provided misguided supervision to the instance segmentation model. Our learned model, on the other hand, is robust to these errors and does not make the same mistakes.
#### Vi-B2 Splitting Objects Into Two
In the second row, an unstable grasp on the top part of the bottle leads to the robot dropping the item. Here the segmentation model incorrectly split one object into two masks, resulting in a grasp that is not centered on the object. All three models fail to segment the bottle correctly; however, our learned Paste-Grasp-Inpaint model makes the fewest errors on the other objects.
#### Vi-B3 Grouping Two Object As One
Finally, when the segmentation model incorrectly groups two objects as one, this can lead to a grasp on two objects. Only the Paste-Subtract-Robust model suffers from this failure case, which suggests that image subtraction incorrectly grouped two objects that moved as one object. This incorrectly grouped object was then used to supervise the model, leading to this grasp error.
## VII Conclusion
In this work, we proposed a novel method for instance segmentation that utilizes self-supervised grasp images to generate object masks for training. We showed that our grasp segmentation model can accurately detect objects with high mIOU and low error rate, even when trained on a small number of labeled images. We then used the grasped object masks to train an instance segmentation model with an inpainting augmentation method, which outperforms a model trained with 10x the amount of labeled data. We have also shown that our method leads to improved grasping performance in a real-world robotic application. This work highlights the potential of using self-supervised grasp images for learning instance segmentation models, and opens up new possibilities for training such models in a wide range of robotic applications.
|
2307.08551 | On the Fly Neural Style Smoothing for Risk-Averse Domain Generalization | Achieving high accuracy on data from domains unseen during training is a
fundamental challenge in domain generalization (DG). While state-of-the-art DG
classifiers have demonstrated impressive performance across various tasks, they
have shown a bias towards domain-dependent information, such as image styles,
rather than domain-invariant information, such as image content. This bias
renders them unreliable for deployment in risk-sensitive scenarios such as
autonomous driving where a misclassification could lead to catastrophic
consequences. To enable risk-averse predictions from a DG classifier, we
propose a novel inference procedure, Test-Time Neural Style Smoothing (TT-NSS),
that uses a "style-smoothed" version of the DG classifier for prediction at
test time. Specifically, the style-smoothed classifier classifies a test image
as the most probable class predicted by the DG classifier on random
re-stylizations of the test image. TT-NSS uses a neural style transfer module
to stylize a test image on the fly, requires only black-box access to the DG
classifier, and crucially, abstains when predictions of the DG classifier on
the stylized test images lack consensus. Additionally, we propose a neural
style smoothing (NSS) based training procedure that can be seamlessly
integrated with existing DG methods. This procedure enhances prediction
consistency, improving the performance of TT-NSS on non-abstained samples. Our
empirical results demonstrate the effectiveness of TT-NSS and NSS at producing
and improving risk-averse predictions on unseen domains from DG classifiers
trained with SOTA training methods on various benchmark datasets and their
variations. | Akshay Mehra, Yunbei Zhang, Bhavya Kailkhura, Jihun Hamm | 2023-07-17T15:31:58Z | http://arxiv.org/abs/2307.08551v1 | # On the Fly Neural Style Smoothing for Risk-Averse Domain Generalization
###### Abstract
Achieving high accuracy on data from domains unseen during training is a fundamental challenge in domain generalization (DG). While state-of-the-art (SOTA) DG classifiers have demonstrated impressive performance across various tasks, they have shown a bias towards domain-dependent information, such as image styles, rather than domain-invariant information, such as image content. This bias renders them unreliable for deployment in risk-sensitive scenarios such as autonomous driving where a misclassification could lead to catastrophic consequences. To enable risk-averse predictions from a DG classifier, we propose a novel inference procedure, Test-Time Neural Style Smoothing (TT-NSS), that uses a "style-smoothed" version of the DG classifier for prediction at test time. Specifically, the style-smoothed classifier classifies a test image as the most probable class predicted by the DG classifier on random re-stylizations of the test image. TT-NSS uses a neural style transfer module to stylize a test image on the fly, requires only black-box access to the DG classifier, and crucially, abstains when predictions of the DG classifier on the stylized test images lack consensus. Additionally, we propose a neural style smoothing (NSS) based training procedure that can be seamlessly integrated with existing DG methods. This procedure enhances prediction consistency, improving the performance of TT-NSS on non-abstained samples. Our empirical results demonstrate the effectiveness of TT-NSS and NSS at producing and improving risk-averse predictions on unseen domains from DG classifiers trained with SOTA training methods on various benchmark datasets and their variations.
## 1 Introduction
The objective of Domain Generalization (DG) [75] is to develop models that demonstrate remarkable resilience to domain shifts during testing, even without prior knowledge of the test domain during training. This represents a challenging problem, as it is impractical to train a model to be robust to all potential variations that may arise at test time. For example, previous works [27, 30, 2, 7, 11] have demonstrated that variations in styles/textures, weather changes, etc., unseen during training can drastically reduce the classifier's performance. Recent works [5, 27, 35, 56] brought to light the fact that predictions from state-of-the-art (SOTA) neural networks are biased towards information that is unrelated to the content of the images and instead depends on the image styles, a characteristic that can vary across domains. Due to its vast practical implications, this problem has been studied both analytically [8, 9, 10, 41, 51, 62, 84] and empirically [24, 28, 54, 59, 78, 85, 1, 28]. However, in scenarios such as autonomous driving, medical diagnoses, or rescue operations involving drones, where misclassifications can have severe consequences, it becomes essential to augment classifiers with abstaining mechanisms or involve humans in the decision-making process [19, 61]. In this work, we focus on the problem of image classification under distribution shifts that comprise differences in image styles.
To safeguard the classifier against risky misclassification (and enable risk-averse predictions) we augment the classifier with a capability to defer making a prediction on samples, when it lacks confidence. However, since the softmax score of the classifier is known to be uncalibrated [32, 34, 29] on data from unseen domains, we propose a novel test-time method that uses neural style information to estimate classifier's confidence in its prediction under style changes. Our inference procedure, Test-Time Neural Style Smoothing (TT-NSS), depicted in Fig. 1, first transforms a classifier (base classifier) into a style-smoothed classifier and then uses it to either predict the label of an incoming test sample or abstain on it. Specifically, the prediction of the style smoothed classifier, \(\psi\), constructed from a base classifier \(f\), on a test input \(x\) is defined as the class that the base classifier \(f\) predicts most frequently on stylized versions of the input. TT-NSS uses a style transfer network based on AdaIN [36] to produce stylized versions of the test input in real-time. While AdaIN can transform the style of \(x\) to any arbitrary style, we specifically transform it into the style of the data from the domains used for training. This choice is based on the assumption that \(f\) can be made agnostic to the
styles of the data from domains used for training. Moreover, changing the styles of \(x\) to arbitrary styles, unknown to \(f\), can worsen the classifier's performance due to a widened distribution shift.
TT-NSS can be used to evaluate any DG classifier with only black-box access to it, i.e., it does not require the knowledge of weights, architecture, or training procedure used to train the classifier and only needs its predictions on stylized test samples. However, computing the prediction of a style-smoothed classifier requires computing the probability with which the base classifier classifies the stylized images of \(x\). Following works in Randomized Smoothing [18], we propose a Monte Carlo algorithm to estimate this probability. When this estimated probability exceeds a set threshold it implies that the predictions of the classifier \(f\) on stylized images of \(x\) achieve a desired level of consensus and the prediction is reliable. In other cases, TT-NSS abstains due to a lack of consensus among the predictions of the base DG classifier. Recently, test-time adaptation [39, 83] (TTA) approaches have been shown to be effective in the DG setup which adapts some or all parameters of the classifier using multiple incoming data samples from the unseen domains. However, our work differs significantly from these since we consider a black-box setting where parameters of the classifier are not accessible at test time making our approach much more practically useful compared to TTA approaches.
Furthermore, we propose a novel training procedure based on neural style smoothing (NSS) to improve the consistency of the predictions of the DG classifier on stylized images. The improved consistency leads to improved performance of the DG classifier on non-abstained samples at lower abstaining rates making them more reliable. Our training method creates a style-smoothed version of the soft base DG classifier and uses stylized versions of the source domain data (generated by stylizing the source domain images into random styles of other source domain images) to train the base DG classifier. Similar to previous works [40, 65, 66], we incorporate consistency regularization during training to further boost the performance of the classifier on non-abstained samples at various abstaining rates. Similar to TT-NSS which can be used with any classifier, our NSS-based training losses can be combined with any training method and can help improve the reliability of the classifier's predictions without significantly degrading their accuracy or requiring access to auxiliary data from unseen domains [16, 33]. We present results of using our inference and training procedures on PACS [47], VLCS [22], Office-Home [72] and their variations generated by applying style changes and common corruptions, in both single and multiple source domain settings. Our results show the effectiveness of our proposed methods at enabling and improving risk-averse predictions from classifiers trained with SOTA DG methods on data from unseen domains. Our main contributions are summarized below:
* We focus on the problem of obtaining risk-averse predictions in a DG setup with black-box access to the classifier. We propose an efficient inference procedure relying on AdaIN-based style transfer and a style-smoothed classifier for classification and abstaining.
* To improve the quality of risk-averse predictions, we propose losses that enforce prediction consistency on the random stylization of the source data and can be seamlessly combined with losses of any DG method.
* We demonstrate the effectiveness of our inference and training methods on benchmark datasets and their variations generated by stylizing and using corruptions.
Figure 1: Overview of our Test-Time Neural Style Smoothing (TT-NSS) inference procedure for obtaining risk-averse predictions. TT-NSS works by stylizing a test sample into source domain styles and classifies the sample as the most probable class assigned by the base DG classifier to the stylized samples if that class is much more likely than the other classes. Otherwise, it abstains from making a prediction and refers the sample to an expert thereby avoiding a risky misclassification.
## 2 Related work
**Domain generalization:** The goal of domain generalization (DG) is to produce classifiers whose accuracy remains high when faced with data from domains unseen during training. Many works have proposed to address this problem by capturing invariances in the data by learning a representation space that reduces the divergence between multiple source domains thereby promoting the use of only domain invariant features for prediction [1, 24, 28, 59, 78, 85]. Another line of work learns to disentangle the style and content information from the source domains and trains the classifier to be agnostic to the styles of the source domains [3, 20, 55, 81]. Yet another line of research focuses on diversifying the source domain data to encompass possible variations that may be encountered at test time [12, 34, 44, 66, 74]. Unlike previous works which focus on improving classifier accuracy on unseen domains, we focus on making DG risk-averse on data from unseen domains.
**Certified robustness via randomized smoothing:** Many works have demonstrated the failure of SOTA machine learning classifiers on adversarial examples [14, 15, 38, 68, 77]. In response, many works proposed to provide empirical [4] and provable [18, 45, 52, 60, 80, 46] robustness to these examples. Among them, Randomized Smoothing (RS) [18, 45, 46] is a popular method which considers a smoothed version of the original classifier and certifies that no adversarial perturbation exists within a certified radius (in \(\ell_{2}\) norm) around a test sample that can change the prediction of the classifier. RS uses Gaussian noise to produce a smoothed version of the base classifier and classifies a test sample to be the class most likely to be predicted by the base classifier on Gaussian perturbations of the test sample. While RS was proposed to certify the robustness to additive noise, the idea has been extended to certify robustness to parameterized transformations of the data such as geometric transformation [23, 48] where the noise is added to the parameters of the transformations. Our neural style smoothing procedure is similar to RS with crucial differences. Firstly, we use neural styles for smoothing (which cannot be parameterized) instead of adding Gaussian noise to the input or parameters of specific transformations. Secondly, our goal is not to provide certified robustness guarantees against style changes but to provide a practical method to produce reliable predictions on test samples and an abstaining mechanism to curb incorrect predictions.
**Neural style transfer:** Following [25], which demonstrated the effectiveness of using the convolutional layers of a convolutional neural network for style transfer, several ways have been proposed to improve style transfer [21, 26, 70, 71, 76, 42]. AdaIN [36] is a popular approach that allows style transfer by changing only the mean and variance of the convolutional feature maps. Other ways of generating stylized images include mixing [89] or exchanging [69, 86] styles, or using adversarial learning [63, 88].
**Test-time adaptation (TTA):** Recent works have demonstrated the effectiveness of using TTA for improving generalization to unseen domains, where the classifier is updated partially or fully using incoming batches of test samples [67, 73, 83]. This approach has also been shown to be effective in the DG setup [39]. Our approach is different from these methods since we do not assume access to the parameters of the DG classifier or assume that data from unseen domains arrive in batches.
**Classification with abstaining:** A learning framework allowing a classifier to abstain on samples has been studied extensively [13, 17, 57, 6, 19]. Two main approaches in these works include a confidence-based rejection where the classifier's confidence is used to abstain based on a predefined threshold and a classifier-rejector approach where the classifier and rejector are trained together. Our work is closer to the former since we do not train a rejector and abstain when the top class is not much more likely than other classes.
## 3 Neural style smoothing
### Background
**Domain Generalization (DG) setup:** Given data samples \(\mathcal{D}^{i}_{\mathrm{source}}=\{(x_{j}^{i},y_{j}^{i})\}_{j=1}^{N^{i}}\), with \(N^{i}\) samples, from \(N_{S}\) source domains each following a distribution \(P^{i}_{S}(X,Y)\), the goal of DG is to learn a classifier \(f(X)\) whose performance does not degrade on a sample from an unseen test domain with distribution \(P_{T}(X,Y)\neq P^{i}_{S}(X,Y)\), for all \(i\in\{1,\cdots,N_{S}\}\). Depending on the number of source domains available during training the setup can be termed as single or multi-domain. The lack of information about the target domain makes the problem setup challenging and many previous works have proposed training methods focusing on capturing domain invariant information from source domain data to improve performance on unseen domains at test time. In the multi-domain setup, learning a classifier by minimizing its empirical risk on all available source domains achieves competitive performance on various benchmark datasets [28].
**Neural style transfer with AdaIN [36]:** Given a content image, \(x_{c}\) and a style image \(x_{s}\), AdaIN generates an image having the content of \(x_{c}\) and style of \(x_{s}\). AdaIN works by first extracting the intermediate features (output of block4_conv1) of the style and content image by passing them through a VGG-19 [64] encoder, \(g\), pretrained on Imagenet. Using these features AdaIN aligns the mean (\(\mu\)) and variance (\(\sigma\)) of the two feature maps using
\[t =\mathrm{AdaIN}(g(x_{c}),g(x_{s})) \tag{1}\] \[=\sigma(g(x_{s}))\left(\frac{g(x_{c})-\mu(g(x_{c}))}{\sigma(g(x_{ c}))}\right)+\mu(g(x_{s})).\]
A decoder, \(h\), is then used to map the AdaIN-generated
feature back to the input space to produce a stylized image \(x_{\mathrm{stylized}}=h(t)\). We follow the design of the decoder as proposed in [36] and train the decoder to minimize the content loss between the features of the stylized image, \(g(x_{\mathrm{stylized}})\) and the AdaIN transformed features of the content image, i.e.
\[\mathcal{L}_{\mathrm{content}}=\|g(x_{\mathrm{stylized}})-t\|_{2}^{2}, \tag{2}\]
along with a style loss that measures the distance between the feature statistics of the style and the stylized image using \(L\) layers of the pretrained VGG-19 network, \(\phi\). In particular, the style loss is computed as
\[\mathcal{L}_{\mathrm{style}}=\sum_{i=1}^{L}\|\mu(\phi_{i}(x_{s}))-\mu(\phi_{i}(x_{\mathrm{stylized}}))\|_{2}^{2}+\sum_{i=1}^{L}\|\sigma(\phi_{i}(x_{s}))-\sigma(\phi_{i}(x_{\mathrm{stylized}}))\|_{2}^{2}. \tag{3}\]
We measure the style loss using the block1_conv1, block2_conv1, block3_conv1, and block5_conv1 layers of the VGG-19 network. We pre-train the decoder with MS-COCO [49] images as content and Wikiart [58] images as style.
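A compact PyTorch sketch of the AdaIN operation and the two training losses in Eqs. (1)-(3) is given below; the encoder \(g\), the set of VGG layers \(\phi_{i}\), and the reduction over the batch are assumptions of this sketch.

```python
import torch

def adain(content_feat, style_feat, eps=1e-5):
    # Eq. (1): align channel-wise mean and std of content features to the style features
    c_mean, c_std = content_feat.mean((2, 3), keepdim=True), content_feat.std((2, 3), keepdim=True) + eps
    s_mean, s_std = style_feat.mean((2, 3), keepdim=True), style_feat.std((2, 3), keepdim=True) + eps
    return s_std * (content_feat - c_mean) / c_std + s_mean

def content_loss(g, x_stylized, t):
    # Eq. (2): match the stylized image's encoder features to the AdaIN target t
    return ((g(x_stylized) - t) ** 2).sum(dim=(1, 2, 3)).mean()

def style_loss(phi_layers, x_s, x_stylized):
    # Eq. (3): match channel-wise feature statistics across the chosen VGG layers
    loss = 0.0
    for phi in phi_layers:
        fs, fg = phi(x_s), phi(x_stylized)
        loss = loss + ((fs.mean((2, 3)) - fg.mean((2, 3))) ** 2).sum(dim=1).mean()
        loss = loss + ((fs.std((2, 3)) - fg.std((2, 3))) ** 2).sum(dim=1).mean()
    return loss
```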
### Neural style smoothing-based inference
Consider a classification problem from \(\mathbb{R}^{d}\) to the label space \(\mathcal{Y}\). Neural style smoothing produces an output, for a test image \(x\), that a base DG classifier, \(f:\mathbb{R}^{d}\rightarrow\mathcal{Y}\) is most likely to return when \(x\) is stylized into the style of the source domain data, i.e., the data used for training \(f\). Formally, given a base DG classifier \(f\), we construct a style-smoothed classifier \(\psi:\mathbb{R}^{d}\rightarrow\mathcal{Y}\), whose prediction on a test image \(x\) is the most probable output of \(f\) on \(x\) converted into the style of the source domain data, i.e.,
\[\psi(x):=\arg\max_{y\in\mathcal{Y}}\ \mathbb{P}(f(h(t))=y), \tag{4}\]
where \(t=\mathrm{AdaIN}(g(x),g(x_{s}))\), \(x_{s}\sim P_{S}\), and \(P_{S}\) is the distribution of the source domain. When data from multiple source domains are available we combine the data from all the domains and use the combined data as source domain data. If the base DG classifier, \(f\), correctly classifies the test image \(x\) when stylized into the styles of the source domain, then the style-smoothed classifier also correctly classifies that sample. However, computing the actual prediction of the style-smoothed classifier requires computing the exact probabilities with which the base DG classifier classifies the stylized test samples into each class. Thus, following [18], we propose a Monte Carlo algorithm to estimate these probabilities and the prediction of the style-smoothed classifier. The first step in estimating the prediction of the style-smoothed classifier on a test image \(x\) is to generate stylized versions of the image using the styles from the source domain. To achieve the style conversion in real-time, we use the AdaIN framework described previously with the content image as the test image \(x\) and \(n\) randomly chosen images from the dataset used for training the DG classifier as style images. The style transfer network then transforms \(x\) into \(n\) stylized images, each having the style of the source domain data, as illustrated in Fig. 1. The stylized images are then passed through the \(f\) and the class that is predicted the most often (majority class) is returned as the prediction of the test image. This procedure of Test-Time Neural Style Smoothing (TT-NSS) is detailed in Alg. 1.
```
Input: Test image x, base DG classifier f, VGG-19 encoder g, AdaIN decoder h,
       number of source style images n, D_styles = {x_s^1, ..., x_s^n}, threshold alpha.
Output: Prediction for x or ABSTAIN.

Initialize class-wise counts class_counts to zeros
# Generate n stylized images from x using D_styles
for i = 1, ..., n do
    t = AdaIN(g(x), g(x_s^i))
    x_stylized = h(t)
    prediction = f(x_stylized)
    class_counts[prediction] += 1
end for
# Get the top predicted class on stylized images
c_max = index of class_counts with the highest count
n_max = class_counts[c_max]
# Predict or ABSTAIN
if n_max / n < alpha then
    return ABSTAIN
else
    return c_max
end if
```
**Algorithm 1** Test-Time Neural Style Smoothing (TT-NSS)
**Algorithm 1** Test-Time Neural Style Smoothing (TT-NSS)
To ascertain that the prediction returned by TT-NSS is reliable, we estimate the confidence of the style-smoothed classifier in its prediction. In particular, we compute the proportion of the re-stylized test images that are classified as a particular class by the base DG classifier and obtain the counts of how often each class is predicted. Based on these counts, we compute the class which has the highest occurrence and if the proportion of the highest class exceeds a threshold \(\alpha\), TT-NSS classifies the test image as this class. However, if the proportion remains less than the threshold, then TT-NSS abstains due to a lack of consensus among the predictions. The abstained samples can then be sent for further processing to experts and save the system from returning a potentially incorrect prediction. A high value of \(\alpha\) in
TT-NSS improves the accuracy on non-abstained samples but it also increases the number of abstained samples. On the other hand, a low value of \(\alpha\) leads to decreased abstaining with an increased chance that the DG classifier may not be confident in its prediction, leading to a risky misclassification. In our empirical analysis in Sec. 4, we use various values of \(\alpha\) ranging from \(0\) to \(1\) and show how the accuracy on non-abstained samples and the proportion of abstained samples change as the value of \(\alpha\) is varied.
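As a concrete illustration, below is a minimal PyTorch-style sketch of the predict-or-abstain step of TT-NSS (Alg. 1). The handles `f`, `g`, `h`, and `adain` are hypothetical stand-ins for the base classifier, the VGG-19 encoder, the AdaIN decoder, and the AdaIN statistics-alignment step; the sketch is illustrative rather than the exact implementation used in our experiments.

```
from collections import Counter
import torch

def tt_nss_predict(x, f, g, h, adain, style_images, alpha):
    """Predict the class of a single test image x, or abstain."""
    preds = []
    with torch.no_grad():
        for x_s in style_images:                 # n source-style images
            t = adain(g(x), g(x_s))              # align content features to the style statistics
            x_stylized = h(t)                    # decode back to image space
            preds.append(int(f(x_stylized).argmax()))
    counts = Counter(preds)
    c_max, n_max = counts.most_common(1)[0]      # majority class and its count
    if n_max / len(style_images) < alpha:
        return "ABSTAIN"                         # no sufficient consensus among stylized predictions
    return c_max
```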
### Neural style smoothing-based training
The performance of our inference procedure, TT-NSS, relies on the assumption that the base classifier, \(f\), can classify the test image stylized into the source domain styles correctly and consistently. This requires that the base classifier be accurate on the images generated by the decoder used in the AdaIN-based neural style transfer network. However, our empirical evaluation of using TT-NSS on classifiers trained with existing DG methods on benchmark datasets shows a relatively low accuracy on non-abstained samples at smaller abstaining rates. This suggests that the base classifier cannot accurately classify the stylized images generated through the AdaIN decoder. Thus, we propose a new training procedure based on neural style smoothing (NSS) that enables consistent and accurate predictions from the classifiers when evaluated using TT-NSS. The proposed loss functions can be combined with any DG training algorithm and can be used to improve the reliability of the predictions from classifiers when evaluated with TT-NSS. To achieve this, we propose to augment the losses of an existing DG method with two additional loss functions. The first loss penalizes misclassification of the stylized images w.r.t. the label of the content image i.e., given a sample \((x,y)\sim\mathcal{D}_{\mathrm{source}}\), the stylized misclassification loss is
\[\mathcal{L}_{stylized\_aug}=\mathbb{E}_{x_{s}\sim P_{S}}[\ell(f(h(t)),y)], \tag{5}\]
where \(t=\mathrm{AdaIN}(g(x),g(x_{s}))\) and \(\ell\) is the cross-entropy loss. Specifically, we first stylize a sample \(x\) from the source domain using multiple randomly sampled style images from the source domain and then penalize the misclassification loss of the classifier \(f\) on these stylized images. For a single source domain problem, even though all images from a domain may be considered as being in the same broad set of styles such as Art or Photos, individually the images have different non-semantic information such as textures, colors, patterns, etc., and thus stylizing an image into the styles of other source domain images is still effective and meaningful. The second loss, which helps improve the trustworthiness of the predictions, enforces consistency among the predictions of the stylized versions of the content image, generated using AdaIN. Previous works [40, 65, 66, 87] have also demonstrated that enforcing consistency among the predictions of the classifier is helpful in various setups such as semi-supervised learning and randomized smoothing. To define the style consistency loss, let \((x,y)\sim\mathcal{D}_{\mathrm{source}}\), \(F:\mathbb{R}^{d}\rightarrow\Delta^{K-1}\) be the softmax output of the classifier such that the prediction of the base classifier \(f(x)=\arg\max_{k\in\mathcal{Y}}F(x)\), \(\Delta^{K-1}\) be the probability simplex in \(\mathbb{R}^{K}\), \(\overline{F}(x)=\mathbb{E}_{x_{s}\sim P_{S}}[F(h(t))]\) with \(t=\mathrm{AdaIN}(g(x),g(x_{s}))\) be the average softmax output of the classifier on stylized images, \(\mathrm{KL}(\cdot\|\cdot)\) be the Kullback-Leibler divergence (KLD) [43] and \(\mathrm{H}(\cdot)\) be the entropy. Then the style consistency loss is given by
\[\begin{split}\mathcal{L}_{consistency}=\mathbb{E}_{x_{s}\sim P_{ S}}[\mathrm{KL}(\overline{F}(x)\|F(h(t)))]\\ +\mathrm{H}(\overline{F}(x),y).\end{split} \tag{6}\]
In practice, we minimize the empirical version of the two losses using multiple style images sampled randomly from the available source domain data. The trained classifier can then be evaluated using TT-NSS as in Alg. 1 to gauge the reliability of its predictions on unseen domains.
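The following is a minimal PyTorch sketch of the two empirical losses in Eqs. (5) and (6), assuming hypothetical handles `f` (classifier returning logits), `g` (VGG-19 encoder), `h` (AdaIN decoder), and `adain` (feature statistics alignment); in practice the encoder and decoder are typically frozen.

```
import torch
import torch.nn.functional as F_nn

def nss_losses(f, g, h, adain, x, y, style_batch):
    """Return the stylized misclassification loss (Eq. 5) and style consistency loss (Eq. 6)."""
    ce_losses, probs = [], []
    for x_s in style_batch:                        # randomly sampled source-style images
        t = adain(g(x), g(x_s))                    # align content features to style statistics
        logits = f(h(t))                           # classify the stylized image
        ce_losses.append(F_nn.cross_entropy(logits, y))
        probs.append(F_nn.softmax(logits, dim=-1))
    l_stylized = torch.stack(ce_losses).mean()     # Eq. (5)
    probs = torch.stack(probs)                     # (n_styles, batch, K)
    avg_prob = probs.mean(dim=0)                   # \bar{F}(x): average softmax over stylized versions
    kl = (avg_prob * (avg_prob.log() - probs.log())).sum(-1).mean()
    h_term = F_nn.nll_loss(avg_prob.log(), y)      # cross-entropy of averaged prediction with y
    l_consistency = kl + h_term                    # Eq. (6)
    return l_stylized, l_consistency
```

These two terms are then added to the losses of the underlying DG method (ERM in our experiments) during training.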
## 4 Experiments
In this section, we present the evaluation results of using our inference and training procedures for obtaining and improving the risk-averse predictions from DG classifiers. We present evaluations and comparisons with three popular DG methods, namely Empirical Risk Minimization (ERM), Style Agnostic Networks (SagNet) [56], and networks trained with Representation Self-Challenging (RSC) [37]. Our evaluation includes three popular benchmark datasets, namely PACS [47], VLCS [22] and OfficeHome [72], all of which contain four domains (see Appendix B). We also create and present evaluations on variations of these datasets generated by stylizing the images into the styles of Wikiart [58] and changing styles based on changes in weather, lighting, blurring, and addition of noise by using common corruptions [31] including {frost, fog, brightness, contrast, gaussian blur, defocus blur, zoom blur, gaussian noise, shot noise, impulse noise}. These variations allow us to evaluate the performance of DG classifiers on realistic changes that do not affect the semantic content of the images. To generate images from benchmark datasets stylized into the style of Wikiart, we use an AdaIN decoder pre-trained using images from MS-COCO [49] as content images and images from Wikiart [58] as style images. To create corrupted versions, we follow [31] and use corruptions with severity levels 3 and 5. For reporting results on corrupted versions, we use a subsample of the test set described in App. B.2, whereas for original/Wikiart styles we report results on the entire test set.
Following previous works [28], we used ResNet50 pre-trained on the ImageNet dataset as our backbone network augmented with a fully connected layer with softmax activation. We use this network for training ERM and for neural style smoothing (combined with ERM as the DG method). For other baselines, we train the classifiers using the source codes from the official repositories of RSC [37] and SagNet [56]. For all experiments in the single source domain setup, we train the classifiers with a single source domain and evaluate the performance on the remaining three domains. For the multi-domain setup, we train the classifiers with three domains and test on the fourth unseen domain.
We compare the performance of TT-NSS (Alg. 1) with an abstaining mechanism that uses the classifier's max confidence on the original test sample for abstaining. In this method, we abstain if the highest softmax score for a sample is below a set threshold. We note that, compared to TT-NSS, which only requires the classifier's prediction on a sample, the confidence-based mechanism additionally requires the classifier's confidence in the prediction and hence has access to more information than that available to TT-NSS, making TT-NSS more practically viable. For TT-NSS, we use 10 randomly sampled style images (\(n=10\)) for the single source domain setup and 15 for the multiple source domain setup (see Sec. 4.4). We present the accuracy of the DG classifier on non-abstained samples as a function of the proportion of abstained samples and the area under this curve (AUC) to demonstrate the effectiveness of TT-NSS (Alg. 1) and the confidence-based abstaining mechanism for producing risk-averse predictions. A higher AUC is desired since it indicates that the accuracy of the DG classifier at different abstaining rates remains high, suggesting that whenever the inference procedure does not abstain, it is likely that the prediction is correct. This improves the reliability of the predictions from a DG classifier. We present additional experimental results in App. A followed by dataset and implementation details in App. B. Our code is available at [https://github.com/akshaymehra24/RiskAverseDG](https://github.com/akshaymehra24/RiskAverseDG)
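For reference, the confidence-based baseline amounts to the following short sketch, assuming `f` returns logits; it is included only to make the comparison concrete.

```
import torch

def confidence_abstain(x, f, threshold):
    """Abstain when the maximum softmax score on the original sample is below a threshold."""
    with torch.no_grad():
        probs = torch.softmax(f(x), dim=-1)
    conf, pred = probs.max(dim=-1)
    return "ABSTAIN" if conf.item() < threshold else int(pred)
```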
### TT-NSS improves the reliability of the predictions from existing DG classifiers
In this section, we demonstrate the effectiveness of TT-NSS at producing reliable predictions from classifiers trained with ERM, RSC, and SagNet when evaluated on domains unseen during training. The results in Fig. 2 and Figs. 7, 6 (in the Appendix) show the advantage of using the style-smoothed classifier over the confidence of the original classifier for producing risk-averse predictions on a test sample on PACS and VLCS datasets in both single and multiple source domain settings. This superiority of TT-NSS is also evident from the results in Tables 3, 5, 4, 6 (in the Appendix), which show the area under the curve for accuracy versus percentage of abstained samples for different settings. The high accuracy of the classifiers with TT-NSS at the same abstaining rates compared to the confidence-based strategy shows the advantage of TT-NSS at producing better risk-averse predictions. This advantage of TT-NSS becomes more apparent on stylized and corrupted variants of the PACS dataset, where the standard accuracy of the classifier drops significantly and necessitates abstaining for safeguarding against risky misclassifications. The classifier's high-confidence incorrect predictions on unseen domains are the primary reason that the confidence-based strategy fails to produce risk-averse predictions. This is in line with the findings from previous works which have shown that a classifier can produce high-confidence misclassifications on samples from unseen domains [29, 32, 50, 79, 82]. On the other hand, using the confidence of the style-smoothed classifier, by stylizing the test sample into source domain styles, can mitigate the classifier's bias to non-semantic information in the test samples and produce better quality predictions even without abstaining. This is evident from Fig. 2 and Figs. 7, 6 (in the Appendix), where TT-NSS (solid lines) achieves higher accuracy even at an abstaining rate of 0%.

Figure 2: Comparison of TT-NSS (solid lines) and confidence-based abstaining method (dashed lines) at producing risk-averse predictions in a **single** source domain setup on classifiers trained with SOTA DG methods. The graphs show accuracy vs abstained points on different variants of the **PACS** dataset ((a) original, (b) wikiart, (c,d) corrupted). In most domains, the accuracy of TT-NSS is higher than the corresponding accuracy of the confidence-based method for most of the range of the percentage of abstained samples, demonstrating the superiority of TT-NSS at producing risk-averse predictions. (Note: The source domain from PACS used for training is denoted in the title.)
Another crucial insight obtained from our evaluation on variations of benchmark datasets created by style changes is the significant decrease in the performance of the DG classifiers compared to the evaluation on original styles of the benchmark datasets, both with confidence-based abstaining and with TT-NSS. This suggests that classifiers trained with existing DG methods are susceptible to non-semantic variations in the data, and improving the performance on these benchmark datasets, while important, may not be enough to achieve the goal of DG. However, while data augmentation and style diversification methods have been shown to be effective at improving the performance of DG methods on potential variations, it is not practical to train classifiers to be robust to all possible variations. Due to this limitation, improving test-time methods such as TT-NSS, which either adapt the classifier to unseen domains or abstain from making predictions by explicitly transforming the test sample into known styles, is essential for DG.
### Effectiveness of NSS at improving risk-averse predictions from DG classifiers
Here we demonstrate the advantage of using the NSS training procedure for improving the reliability of the classifier's predictions. Specifically, we use the NSS losses with that of the ERM-based DG method and minimize the misclassification loss on source domain samples along with minimizing the style misclassification and style consistency losses. For training NSS with ERM, we used four randomly sampled style images to compute the style-smoothed losses in our experiments since we did not observe any significant performance difference with using more images. The use of a small number of style-transformed images during NSS training allows us to train DG classifiers without significantly increasing the computational cost compared to that of training with ERM. The stylized images were generated by using the AdaIN-based decoder pre-trained using data from MS-COCO [49] as content and Wikiart as style. Our results in Table 1 and Table 7 (in the Appendix) show that classifiers trained with NSS achieve a significantly better area under the curve compared to classifiers trained with ERM on PACS, VLCS and OfficeHome datasets in both single and multiple source domain settings. The improvements in AUC become more evident on variations of these datasets generated by changing to Wikiart style or using common corruptions. This boost in the AUC is attributed to the style randomization and consistency losses used during NSS training, which act as regularizers and prevent the classifiers from overfitting to specific image styles.
Results in Fig. 3 and Figs. 8, 9, 10 (in the Appendix) show that classifiers trained with NSS, when evaluated with TT-NSS, achieve better accuracy on non-abstained samples for different abstaining rates and in most cases achieve competitive performance with classifiers trained with RSC and SagNet. While in our work we used NSS with ERM, it can be combined with any other DG method such as RSC or SagNet to improve their accuracy on non-abstained samples at different abstaining rates. Moreover, training the classifiers with NSS improves the performance of the confidence-based abstaining mechanism as shown in Tables 8 and 9 (in the Appendix) but even then TT-NSS remains superior in case of severe shifts (such as severity 5 corruptions).
### Predictions on abstained samples
Here we evaluate the effectiveness of TT-NSS in correctly abstaining on samples that could lead to misclassifications. We show this by reporting the accuracy of the DG classifier on the test samples that were abstained.
\begin{table}
\begin{tabular}{|l|c c c c|c c c c|c c c c|} \hline & \multicolumn{4}{c|}{PACS} & \multicolumn{4}{c|}{VLCS} & \multicolumn{4}{c|}{OfficeHome} \\ \hline Alg. & A & C & P & S & C & L & S & V & A & C & P & R \\ \hline \multicolumn{11}{|c|}{Original Style} \\ \hline ERM & 0.875 & 0.878 & 0.662 & 0.702 & 0.567 & **0.724** & 0.851 & 0.751 & 0.689 & 0.553 & 0.549 & 0.685 \\ NSS & **0.884** & **0.911** & **0.694** & **0.745** & **0.619** & 0.685 & 0.853 & **0.796** & **0.727** & **0.683** & **0.675** & **0.767** \\ \hline \multicolumn{11}{|c|}{Wikart Style} \\ \hline ERM & 0.854 & 0.816 & 0.643 & 0.626 & 0.477 & 0.682 & 0.785 & 0.704 & 0.552 & 0.344 & 0.321 & 0.5 \\ NSS & 0.855 & **0.888** & **0.71** & **0.706** & **0.528** & **0.673** & **0.845** & **0.788** & **0.696** & **0.643** & **0.625** & **0.725** \\ \hline \multicolumn{11}{|c|}{Corrupted with severity 3} \\ \hline ERM & 0.886 & 0.812 & 0.622 & 0.545 & 0.468 & 0.551 & 0.689 & 0.471 & 0.573 & 0.358 & 0.312 & 0.54 \\ NSS & **0.901** & **0.853** & **0.717** & **0.683** & **0.573** & **0.686** & **0.775** & **0.608** & **0.625** & **0.576** & **0.56** & **0.67** \\ \hline \multicolumn{11}{|c|}{Corrupted with severity 5} \\ \hline ERM & 0.834 & 0.708 & 0.519 & 0.468 & 0.411 & 0.439 & 0.567 & 0.415 & 0.445 & 0.235 & 0.196 & 0.383 \\ NSS & **0.871** & **0.792** & **0.682** & **0.606** & **0.512** & **0.61** & **0.722** & **0.537** & **0.545** & **0.478** & **0.466** & **0.565** \\ \hline \end{tabular}
\end{table}
Table 1: Effectiveness of NSS at producing a better AUC score compared to classifiers trained with ERM in a **single** source domain setting on PACS, VLCS, and OfficeHome datasets and their variations when evaluated with TT-NSS. The source domain used for training is denoted in the columns. (In all tables, the best result is marked in bold if the difference in the AUC is at least 0.01.)
Results in Fig. 4 show that for a small value of the threshold \(\alpha\), where TT-NSS abstains on few samples, the accuracy on abstained samples is significantly lower for classifiers trained with ERM and NSS in both single and multiple source domain settings on the PACS dataset (original style). This is in comparison to the standard accuracy of the classifier (recovered at 100% abstaining rate). The low accuracy on abstained samples suggests that TT-NSS correctly refrains from making predictions on ambiguous samples. Moreover, the accuracy on abstained samples decreases for most test domains for classifiers trained with NSS compared to classifiers trained with ERM, suggesting that NSS improves the ability of TT-NSS to identify risky samples.
### Effect of number of styles
Here we evaluate the effect of using different numbers of re-stylizations of a single test image, \(n\), in TT-NSS using a subsample (see App. B.2) of the PACS dataset (original style). Results in Fig. 5 show that in both single and multi-source domain settings, using a large value of \(n\) leads to only a small improvement in the accuracy on non-abstained samples at higher abstaining rates, whereas performance at lower abstaining rates remains similar for different values of \(n\). Since using a larger value of \(n\) can slow down the inference, we set \(n\) to 10 and 15 (5 per domain) in the single and multiple source domain settings. Evaluating a single test sample with TT-NSS using 15 styles increases the inference cost by a mere 0.26 seconds on our hardware, showing the potential of TT-NSS at producing risk-averse predictions without sacrificing inference efficiency.
## 5 Discussion and conclusion
Our work proposed and demonstrated the effectiveness of incorporating an abstaining mechanism based on NSS to improve the reliability of a DG classifier's predictions on data from unseen domains. Using advances in neural style transfer, our inference procedure uses the prediction consistency of the classifier on stylized images to predict or abstain on a test sample and requires only black-box access to the DG classifier. Moreover, we proposed a training procedure to improve the reliability of a classifier's prediction at different abstaining rates and demonstrated its effectiveness on various datasets and their variations.
Figure 4: Accuracy on samples abstained from a prediction by TT-NSS in single (SD) (a, b) and multiple (MD) (c,d) domain settings on the PACS dataset. (Test domains are denoted in the legend.)
Figure 5: The performance of TT-NSS is not significantly affected by the value of \(n\) beyond \(n=10\) for single (SD) (a, b) and \(n=15\) in multiple (MD) (c, d) source domain settings. For the SD setting, the classifier is trained on the Cartoon domain and evaluated on the remaining domains in PACS, and for the MD setting, the classifier is evaluated on the Cartoon domain after training on the rest.
Figure 3: Effectiveness of using NSS (with ERM) (solid lines) at producing better risk-averse predictions when evaluated with TT-NSS in comparison to that of other DG methods (dashed lines) in a **single** domain setup. NSS-trained classifiers achieve significantly better accuracy on non-abstained samples compared to classifiers trained with ERM and achieve competitive performance to classifiers trained with RSC and SagNet at different abstaining rates on variants of the **PACS** dataset. (See Fig. 2 for the explanation of setting.)
We note that while NSS is effective at gauging the reliability of a classifier's prediction on test samples, ascertaining the robustness of this prediction to arbitrary style changes is an important open problem and will be the focus of future works.
## 6 Acknowledgment
This work was supported by the NSF EPSCoR-Louisiana Materials Design Alliance (LAMDA) program #OIA-1946231 and was performed under the auspices of the U.S. Department of Energy by the Lawrence Livermore National Laboratory under Contract No. DE- AC52-07NA27344 and was supported by the LLNL-LDRD Program under Project No. 23-ERD-030.
|
2305.19225 | Learning Decision-Focused Uncertainty Sets in Robust Optimization | We propose a data-driven technique to automatically learn the uncertainty
sets in robust optimization. Our method reshapes the uncertainty sets by
minimizing the expected performance across a family of problems subject to
guaranteeing constraint satisfaction. Our approach is very flexible and can
learn a wide variety of uncertainty sets while preserving tractability. We
solve the constrained learning problem using a stochastic augmented Lagrangian
method that relies on differentiating the solutions of the robust optimization
problems with respect to the parameters of the uncertainty set. Due to the
nonsmooth and nonconvex nature of the augmented Lagrangian function, we apply
the nonsmooth conservative implicit function theorem to establish convergence
to a critical point, which is a feasible solution of the constrained problem
under mild assumptions. Using empirical process theory, we show finite-sample
probabilistic guarantees of constraint satisfaction for the resulting
solutions. Numerical experiments show that our method outperforms traditional
approaches in robust and distributionally robust optimization in terms of
out-of-sample performance and constraint satisfaction guarantees. | Irina Wang, Cole Becker, Bart Van Parys, Bartolomeo Stellato | 2023-05-30T17:18:05Z | http://arxiv.org/abs/2305.19225v4 | # Learning for Robust Optimization
###### Abstract
We propose a data-driven technique to automatically learn the uncertainty sets in robust optimization. Our method reshapes the uncertainty sets by minimizing the expected performance across a family of problems while guaranteeing constraint satisfaction. We learn the uncertainty sets using a novel stochastic augmented Lagrangian method that relies on differentiating the solutions of the robust optimization problems with respect to the parameters of the uncertainty set. We show sublinear convergence to stationary points under mild assumptions, and finite-sample probabilistic guarantees of constraint satisfaction using empirical process theory. Our approach is very flexible and can learn a wide variety of uncertainty sets while preserving tractability. Numerical experiments show that our method outperforms traditional approaches in robust and distributionally robust optimization in terms of out of sample performance and constraint satisfaction guarantees. We implemented our method in the open-source package LROPT.
## 1 Introduction
Over the past years, robust optimization (RO) has become a widely adopted efficient tool for decision-making under uncertainty. The idea behind RO is to define an uncertainty set where the uncertainty lives and, then, optimize against the worst-case realizations of the uncertainty in this set; see [1, 1] and survey papers [1, 2] for a thorough review. The choice of the uncertainty sets is a crucial component of RO, and can have a significant impact on the solution quality and robustness. While well-chosen uncertainty sets can lead to optimal solutions with high performance and robustness, poorly chosen sets may result in solutions that are overly conservative or that have low probability of constraint satisfaction. For this reason, a large amount of existing literature studies how to bound the probability of constraint satisfaction based on the size and shape of the uncertainty sets [12, 1, 1]. In this vein, many approaches to designing uncertainty sets assume structural information of the unknown distributions, and rely on these a priori assumptions to build guarantees of constraint satisfaction [1, 2, 13]. However, these assumptions may be unrealistic or difficult to verify in practice, and the resulting theoretical guarantees may be too conservative experimentally [14, 15].
In contrast, the recent explosion in the availability of data has led to new paradigms where uncertainty sets are designed directly from data [1]. By combining a priori assumptions with the confidence regions of statistical hypothesis tests on the data [1], or by approximating a high-probability region using quantile estimation [14], recent techniques construct data-driven uncertainty sets that yield less conservative solutions while retaining robustness. However, these methods still rely on strong a priori assumptions on the probability distribution of the uncertainty, such as finite support or independent marginals. In addition, the majority of the RO literature relies on building uncertainty sets that contain the vast majority of the probability mass. However, this is only a sufficient (and often restrictive) condition to guarantee high probability of constraint satisfaction [1, page 33]. While Bertsimas et al. [1] avoid such restrictions by exploiting the dependence of the uncertain constraint on the uncertain parameter set, their technique can only deal with joint chance constraints via union-bounds, which can be conservative. Instead, we turn our attention to RO techniques that can directly tackle joint chance constraints using direct hyperparameter tuning.
While cross-validation has been long-standing in machine learning and related fields for hyperparameter selection [15], recently, implicit differentiation techniques have led to the possibility of tuning high-dimensional hyperparameters through gradient-based approaches, which are much more efficient than grid search or manual tuning [1]. However, the RO community has seen limited efforts in automating the choice of the uncertainty sets [1, 14], most of which do not exploit recent hyperparameter tuning approaches. For this reason, most techniques manually calibrate the uncertainty sets using grid search and cross-validation by varying a single parameter, often the size of the uncertainty set. In this work, we propose a technique to learn multiple parameters of the uncertainty sets at the same time, including shape and size.
Finally, most RO formulations suffer from the fact that we calibrate uncertainty sets for a _specific problem instance_, while we often solve a _family_ of similar optimization problems with varying parameters. This is a common scenario in many applications, including inventory management, where we solve similar optimization problems with varying initial inventory levels while satisfying uncertain demand with high probability.
In this work, we propose an automatic technique to learn the RO uncertainty sets from data. Our method minimizes an expected objective over a _family_ of parametrized problems, while ensuring data-driven constraint satisfaction guarantees. We tune the uncertainty sets with a stochastic augmented Lagrangian approach relying on implicit differentiation techniques [1, 2] to compute the derivative of the solution of robust optimization problems with respect to key parameters of the uncertainty sets. By incorporating the problem objective and the data-driven constraint satisfaction into the learning problem, we control the tradeoff between performance and robustness. Lastly, to show finite sample probabilistic guarantees of constraint satisfaction, we use empirical process theory and a covering number argument, which have been common in distributionally robust optimization (DRO) literature [13, 14, 15].
### Our contributions
In this work, we present a new approach to automatically learn the uncertainty sets in RO to obtain high-quality solutions across a family of problems while ensuring probabilistic guarantees of constraint satisfaction.
* We formulate the problem of finding the uncertainty set using bi-level optimization. At the outer level, we minimize the expectation of the objective function over the family of parametric optimization problems, while ensuring an appropriate probability level of constraint satisfaction. At the lower level, we represent the decisions as the solution of the resulting RO problem.
* To solve the above problem and find the optimal parameters of the uncertainty set, we develop a stochastic augmented Lagrangian approach. We also show its convergence and finite sample probabilistic guarantees of constraint satisfaction.
* We implement our technique in the Python package learning for robust optimization (LROPT), which allows users to easily model and tune uncertainty sets in RO. The code is available at [https://github.com/stellatogrp/lropt](https://github.com/stellatogrp/lropt).
* We benchmark our method on various examples in portfolio optimization, multi-product newsvendor, and multi-stage inventory management, outperforming data-driven robust and distributionally robust approaches in terms of out-of-sample objective value, probabilistic guarantees, and execution time after training.
### Related work
Data-driven robust optimization. With the recent explosion in the availability of data, data-driven robust optimization has gained wide popularity. These techniques often approximate the unknown data-generating distribution in order to construct data-driven uncertainty sets. Using hypothesis testing, Bertsimas et al. [1] pair a priori assumptions on the distribution with different statistical tests, and obtain various uncertainty sets with different shapes, computational properties, and modeling power. Similar approaches construct data-driven sets using quantile estimation [14] and deep learning clustering techniques [1]. All these approaches rely on a two-step procedure which separates the construction of the uncertainty set from the resulting robust optimization problem. Because of the possible suboptimality caused by this separation, Costa and Iyengar [15] propose an end-to-end distributionally robust approach in the context of portfolio construction, where the solution to the robust optimization problem and the selection of the ambiguity set are trained together in an end-to-end fashion. They integrate a _prediction layer_ that predicts asset returns with data on financial features and historical returns, and use these predictions in a _decision layer_ that incorporates the robust optimization problem. Our approach follows this end-to-end idea, but provides a more general framework for a larger class of robust optimization problems and is able to adjust the shape of the uncertainty sets.
Modeling languages for robust optimization. Several open-source packages facilitate building models for RO, including ROME [11] in Matlab, RSOME [12] in Python, JuMPeR [13] in Julia, and ROmodel [24] for the Pyomo modeling language in Python. These packages mostly support RO problems with affine uncertain constraints, which are solved via reformulations and/or cutting plane procedures. Schiele et al. [14] recently introduced a Python extension of CVXPY [15] for solving the larger class of convex-concave saddle point problems, by automating a conic dualization procedure reformulating min-max type problems into min-min problems which can then be solved efficiently [16]. In the paper, they describe a set of rules termed disciplined saddle programming (DSP) detailing when a saddle point optimization problem can be equivalently formulated as a convex optimization problem and solved efficiently. In this work, we build a Python extension to CVXPY tailored to formulating and solving RO problems. While some RO problems can be described in terms of the DSP framework, our library augments DSP capabilities in RO by supporting max-of-concave uncertain constraints and providing structures to intuitively describe well-known uncertainty sets for ease of use. Finally, our method automatically learns the uncertainty set through differentiable optimization via a tight integration with [1, 2].
Differentiable convex optimization. There has been extensive work on embedding differentiable optimization problems as layers within deep learning architectures [1]. The authors of [1] propose an approach to differentiating through disciplined convex programs and implement their methodology in CVXPY, and additionally implement differentiable layers for disciplined convex programs in PyTorch and TensorFlow 2.0, Cvxpylayers. These developments have enabled the popular _end-to-end_ approach of training a predictive model to minimize loss on a downstream optimization task [1, 2, 1, 1, 2, 3, 4], for which we also point to the survey [13]. These approaches with the Smart _Predict, then Optimize_ framework [1] have been shown to improve performance as opposed to traditional _two-stage_ approaches, where the model is trained separately from the optimization problem [1]. Our approach thus adopts the idea of end-to-end learning by linking the process of constructing the uncertainty set to the inner robust optimization problem.
Automatic hyperparameter selection. Implicit differentiation techniques can greatly accelerate tuning the regularization parameters in regression and classification problems [1, 2, 3]. However, most literature focuses on tuning hyperparameters of the objective function in regression and classification tasks, such as hyperparameters for Lasso regularization [3]. In this work, we tune various hyperparameters of the uncertainty sets in RO using implicit differentiation techniques.
### Layout of the paper
In Section 2, we introduce the notion of the reshaping parameters with a motivating example, and in Section 3, we describe the data-driven problem and the algorithm for learning reshaping parameters. In Section 4, we discuss the probabilistic guarantees implied by the aforementioned formulations. In Section 5, we give a high-level overview of our LROPT package, and in Section 6, we present various numerical example problems. For completeness, we give example convex reformulations of robust optimization problems for common uncertainty sets in the appendices.
## 2 The robust optimization problem
We consider a _family_ of RO problems parametrized by a _family parameter_\(y\in\mathbf{R}^{p}\) with distribution \(\mathbf{P}_{y}\). Each problem has the form
\[x(\theta,y)\in\begin{array}{ll}\operatorname*{argmin}&f(x,y)\\ \operatorname*{subject\ to}&g(x,u,y)\leq 0\quad\forall u\in\mathcal{U}(\theta), \end{array} \tag{1}\]
where \(x\in\mathbf{R}^{n}\) is the optimization variable, \(u\in\mathbf{R}^{m}\) is the uncertain parameter, \(f:\mathbf{R}^{n}\times\mathbf{R}^{p}\to\mathbf{R}\) is the objective function, and \(g:\mathbf{R}^{n}\times\mathbf{R}^{m}\times\mathbf{R}^{p}\to\mathbf{R}^{d}\) is the uncertain constraint which we assume to be the maximum of concave functions, _e.g._, \(g(x,u,y)=\max_{l=1,\ldots,L}g_{l}(x,u,y)\). This structure allows us to express \(L\) joint uncertain constraints using the maximum of each component \(l\). The uncertain parameter \(u\) takes values from \(\mathcal{U}(\theta)\subseteq\mathbf{R}^{m}\), a convex uncertainty set parametrized by \(\theta\in\mathbf{R}^{q}\). The family parameter \(y\) differs from the uncertain parameter \(u\) in that it is known when we make the decision. We refer to _instances_ of the family as the realization of problem (1) where \(y\) takes a specific value. Note that, if \(y\) is supported on a single value, we obtain the usual RO formulation.
Our goal is to find a parameter \(\theta\) such that the solutions \(x(\theta,y)\) perform well in terms of objective value and constraint satisfaction guarantees across the family of problems. As learning an uncertainty set is costly, we don't learn a separate uncertainty set for every instance of the family. Instead, we construct a set that works for all members of the family simultaneously. More formally, we choose the uncertainty set such that the solution \(x(\theta,y)\) from (1) implies that the uncertain constraint is satisfied with high probability across instances of the problem family:
\[\mathbf{P}_{(u,y)}(g(x(\theta,y),u,y)\leq 0)\geq 1-\eta, \tag{2}\]
for a given \(\eta>0\), where \(\mathbf{P}_{(u,y)}\) is the joint distribution of \(u\) and \(y\). We can formulate this constraint in terms of the corresponding _value at risk_ being nonpositive, _i.e._,
\[\mathbf{VaR}(g(x(\theta,y),u,y),\eta)=\inf\{\gamma\mid\mathbf{P}_{(u,y)}(g(x( \theta,y),u,y)\leq\gamma)\geq 1-\eta\}\leq 0.\]
Unfortunately, except in very special cases, the value at risk function is intractable [20]. To get a tractable approximation, we can adopt the _conditional value at risk_[20, 20],
defined as
\[\mathbf{CVaR}(g(x(\theta,y),u,y),\eta)=\inf_{\alpha}\{\mathbf{E}_{(u,y)}((1/\eta) (g(x(\theta,y),u,y)-\alpha)_{+})+\alpha\}, \tag{3}\]
where \((a)_{+}=\max\{a,0\}\). It is well known from [20] that, for any \(x\in\mathbf{R}^{n}\), the relationship between these probabilistic guarantees of constraint satisfaction is
\[\mathbf{CVaR}(g(x,u,y),\eta)\leq 0\ \implies\ \mathbf{VaR}(g(x,u,y),\eta)\leq 0 \iff\ \mathbf{P}(g(x,u,y)\leq 0)\geq 1-\eta. \tag{4}\]
Therefore, if the solution \(x(\theta,y)\) of (1) satisfies \(\mathbf{CVaR}(g(x(\theta,y),u,y),\eta)\leq 0\), we have the desired probabilistic guarantee (2).
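As a small aside, the empirical counterpart of the \(\mathbf{CVaR}\) in (3) is easy to compute from samples: the minimizing \(\alpha\) is the \((1-\eta)\)-quantile of the sampled constraint values (the Rockafellar-Uryasev representation). A minimal NumPy sketch follows; `g_vals` holds the values \(g(x,u_i,y_j)\) on the data.

```
import numpy as np

def empirical_cvar(g_vals, eta):
    """Empirical CVaR of the constraint values at level eta, as in (3)."""
    alpha = np.quantile(g_vals, 1 - eta)                     # minimizer over alpha
    return alpha + np.mean(np.maximum(g_vals - alpha, 0.0)) / eta
```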
By constructing uncertainty sets that contain at least \(1-\eta\) probability mass, _i.e._, \(\mathbf{P}(u\in\mathcal{U}(\theta))\geq 1-\eta\), we can ensure that any feasible solution of (1) implies a probabilistic guarantee (2). However, this condition is only sufficient, and may lead to overly conservative solutions [1, page 33]. In this work, instead, we avoid such conservatism by taking into account the cost of the robust solutions \(x(\theta,y)\) while constructing the uncertainty sets.
### Motivating example
We consider a newsvendor problem where, at the beginning of each day, the vendor orders \(x\in\mathbf{R}^{n}\) products at price \(k\in\mathbf{R}^{n}\), with \(n=2\). These products will be sold at the prices \(p\in\mathbf{R}^{n}\), where \(p>k\), until either the uncertain demand \(u\) or inventory \(x\) is exhausted. The objective function to minimize is the sum of the ordering cost minus the revenue:
\[k^{T}x-p^{T}\min\{x,u\}.\]
To account for uncertainty in the demand \(u\), we introduce a new variable \(\tau\) to write the objective in epigraph form, obtaining the RO problem
\[\begin{array}{rl}\text{minimize}&\tau\\ \text{subject to}&k^{T}x+\max\{-p^{T}x,-p^{T}u\}\leq\tau\quad\forall u\in \mathcal{U}(\theta)\\ &x\geq 0.\end{array} \tag{5}\]
The uncertain parameter \(u\) is distributed as a log-normal distribution, where the underlying normal distribution has parameters
\[\mu=\left[\begin{array}{c}0.9\\ 0.7\end{array}\right],\quad\Sigma=\left[\begin{array}{cc}0.6&-0.4\\ -0.3&0.1\end{array}\right].\]
We would like to construct uncertainty sets without knowing the distribution. Instead, we have access to \(N=50\) realizations of \(u\).
Parametric family of problems. Let the parameter \(y=(k,p)\). The problem is parametrized by the buying and selling prices of the products, which the vendor knows before making the orders. We consider the parameter \(y\) with finite support; in particular, we consider \(8\) possible values of \(y\) defined as \(y_{j}=(k_{j},p_{j})\) for \(j=1,\ldots,8\). We generate these values as follows. Each component of \(k_{j}\) is drawn from a uniform distribution on \([2,6]\), and each component of \(p_{j}\) is equal to \(k_{j}+r\), where \(r\) is drawn from a uniform distribution on \([2,4]\).
Standard uncertainty set. Standard methods in RO construct uncertainty sets based on the empirical mean \(\hat{\mu}\) and covariance \(\hat{\Sigma}\) of the uncertainty,
\[\mathcal{U}(\theta)=\{\hat{\mu}+\hat{\Sigma}^{1/2}u\mid\|u\|_{2}\leq\rho\}=\{u \mid\|A^{\text{st}}u+b^{\text{st}}\|_{2}\leq\rho\}, \tag{6}\]
where the parameter \(\rho\) represents the size of the uncertainty set, and,
\[A^{\text{st}}=\hat{\Sigma}^{-1/2}=\left[\begin{array}{cc}0.35&0.16\\ 0.16&1.03\end{array}\right],\quad b^{\text{st}}=-\hat{\Sigma}^{-1/2}\hat{\mu}= \begin{bmatrix}-1.65\\ -2.63\end{bmatrix}.\]
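A minimal NumPy sketch of how these standard-set parameters can be obtained from the samples is given below; it simply whitens the data with the empirical mean and covariance, assuming the sample covariance is positive definite.

```
import numpy as np

def standard_set_params(U):
    """Compute A_st = Sigma_hat^{-1/2} and b_st = -Sigma_hat^{-1/2} mu_hat from samples U (N x m)."""
    mu_hat = U.mean(axis=0)
    Sigma_hat = np.cov(U, rowvar=False)
    evals, evecs = np.linalg.eigh(Sigma_hat)          # eigendecomposition of the sample covariance
    A_st = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T
    b_st = -A_st @ mu_hat
    return A_st, b_st
```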
Figure 1: Top: each plot shows the uncertainty set corresponding to the labeled empirical probability of constraint violation, \(\hat{\eta}\). White dots: training data points. Green dot: training data mean. Red ellipsoid: standard uncertainty set. Purple lines: lines corresponding to a constraint function value of 0; for each plot, there is a line for each data driven solution \(x(\theta,y_{j})\). Middle: out-of-sample tradeoff curves for the objective value vs. empirical probability of constraint violation, averaged across all problem instances. The shaded regions represent the 0.25 to 0.75 quantiles. Each marker corresponds to an \(\rho\) value. The uncertainty sets depicted above and below are denoted by the green and black markers, respectively, and achieves the probabilities of constraint violation \(\hat{\eta}\) given by the red dotted lines. Bottom: the same plots as the ones on top, for reshaped uncertainty sets.
Reshaped uncertainty set. Consider now the same set as in (6) with a different shape, _i.e._, \(\mathcal{U}(\theta)=\{u\mid\|A^{\mathrm{re}}u+b^{\mathrm{re}}\|_{2}\leq\rho\}\), where
\[A^{\mathrm{re}}=\left[\begin{array}{cc}0.79&-0.07\\ 0.30&0.98\end{array}\right],\quad b^{\mathrm{re}}=\left[\begin{array}{c}-1.8 4\\ -2.80\end{array}\right].\]
This set is obtained by reshaping the standard set. In the following sections, we detail the learning procedure to obtain this set.
Comparison. We now compare the two uncertainty sets where we vary \(\rho\) while \(A^{\mathrm{st}},b^{\mathrm{st}}\) and \(A^{\mathrm{re}},b^{\mathrm{re}}\) are fixed. For each value of \(\rho\), we compare the average out-of-sample objective value at the optimizers \(x(\theta,y)\), where the average is taken across problem instances \(y\), against the out-of-sample empirical probability of constraint violation, also averaged across instances. Figure 1 shows that the reshaped uncertainty set achieves data-driven solutions that give, on average, better tradeoffs between the out-of-sample objective value and empirical probability of constraint violation. While the standard uncertainty set conforms to the shape of the data, it might be conservative because it does not take into account the structure of the optimization problem. In contrast, the reshaped uncertainty set gives better (lower) worst-case costs for the same probability of constraint satisfaction, even though it does not conform as well to the shape of the data. For some target values of empirical probability of constraint violation, \(\hat{\eta}\), we plot the constraint values evaluated at \(x(\theta,y)\) for all \(y\), _i.e._ the contour lines \(k^{T}x(\theta,y)-p^{T}\min\{x(\theta,y),u\}-\tau=0\), for both the standard and reshaped sets. In the top and bottom plots of Figure 1, we notice that although the reshaped set is much smaller than the standard set, it is still tangent to all contour lines corresponding to the constraint value of \(0\), ensuring that the constraints are satisfied across all instances of \(y\). In this way, the optimal uncertainty set is not biased towards a single optimization problem, but instead trained for the entire family of problems parametrized by \(y\).
To this end, in this work we propose an automatic technique to obtain the shape and size of the uncertainty sets, parametrized as \(\theta\), such that the problem objective is minimized while guaranteeing constraint satisfaction across all members of our problem family.
## 3 Learning the uncertainty set
### The data-driven problem and bi-level formulation
Suppose we are given an \(\mathtt{RO}\) problem (1) and a dataset \(U^{N}=\{d_{i}\}_{i=1}^{N}\subseteq\mathcal{D}_{u}\) of \(N\) independent samples of the uncertain parameter \(u\), governed by \(\mathbf{P}^{N}\), its product distribution. We are also given a dataset \(Y^{J}=\{y_{j}\}_{j=1}^{J}\subseteq\mathcal{D}_{y}\) of \(J\) independent samples of the family parameter \(y\), governed by \(\mathbf{P}^{J}\), its product distribution. With these datasets, we formulate combined samples \(w_{ij}=(d_{i},y_{j})\), which leads to the combined dataset \(W_{N\times J}=U^{N}\times Y^{J}\), with product distribution \(\mathbf{P}^{N\times J}\). We use these datasets to determine the optimal \(\theta\) for which the corresponding \(\mathcal{U}(\theta)\) implies a certain _finite-sample probabilistic guarantee_
\[\mathbf{P}^{N\times J}\left\{\mathbf{P}_{(u,y)}(g(x(\theta,y),u,y)\leq 0) \geq 1-\eta\right\}\geq 1-\beta. \tag{7}\]
Here, \(u\) are realizations of the uncertain parameter, \(y\) are the family parameters, and \(\mathbf{P}_{(u,y)}\) denotes the joint distribution of \((u,y)\). In order to formalize the learning task, we define an outer problem minimizing the expected value of the objective function \(f(x(\theta,y),y)\) of (1), subject to the following constraint,
\[\mathbf{CVaR}(g(x(\theta,y),u,y),\eta)=\kappa,\]
where \(\kappa\leq 0\) is a target value, and the \(\mathbf{CVaR}\) is defined as in (3). As mentioned in Section 2, this then implies a probabilistic guarantee of constraint satisfaction (4). The training problem becomes the \(\mathbf{CVaR}\) constrained bi-level problem,
\[\begin{array}{ll}\text{minimize}&\mathbf{E}_{w}[\ell(z,w)]\\ \text{subject to}&\mathbf{E}_{w}[h(z,w)]=0,\end{array} \tag{8}\]
where \(z=(\theta,\alpha)\), \(w=(u,y)\), and,
\[\ell(z,w)=f(x(\theta,y),y),\ \ h(z,w)=\frac{(g(x(\theta,y),u,y)-\alpha)_{+}}{ \eta}+\alpha-\kappa,\]
and \(x(\theta,y)\) are solutions of the lower level problems, defined as
\[\begin{array}{ll}x(\theta,y)\in\Phi(\theta,y)=&\underset{x}{\text{argmin}}&f (x,y)\\ &\text{subject to}&g(x,u,y)\leq 0\ \ \ \forall u\in\mathcal{U}(\theta).\end{array} \tag{9}\]
The expectation constraint corresponds to the \(\mathbf{CVaR}\) constraint where \(\alpha\) is shifted to become a minimization variable. When the function \(\ell\) in the objective is convex, this shift in \(\alpha\) has no impact on the optimal solution [10]. With this formulation, we tune \(\theta\) and \(\alpha\) in the outer level problem depending on the results of the lower level problem, and any set of optimal solutions \(x(\theta,y)\in\Phi(\theta,y)\) implicitly depends on the uncertainty set parameterization \(\theta\). A _good_ \(\theta\), then, should yield solutions \(x(\theta,y)\) which perform well with respect to the outer problem. Due to the nonconvex nature of the constraints and the stochasticity of the problem, (8) is difficult to solve. We therefore tackle it using a stochastic augmented Lagrangian method.
### Training the uncertainty set
#### 3.2.1 Augmented Lagrangian method
In order to learn the optimal \(z\), we would like to transform the constrained outer function in (8) using the augmented Lagrangian method [14, Section 17.3]. For simplicity of notation, from here onwards we adopt the shorthands
\[F(z)=\mathbf{E}_{w}[\ell(z,w)],\ \ \ H(z)=\mathbf{E}_{w}[h(z,w)].\]
We, then, create the unconstrained function
\[L(z,\lambda,\mu)=F(z)+\lambda(H(z))+\frac{\mu}{2}\|H(z)\|^{2},\]
where \(\lambda\) and \(\mu\) are parameters that we update throughout the iterations of the learning algorithm. Our procedure estimates the derivative of \(L(z,\lambda,\mu)\) using a subgradient \(G(W,z,\lambda,\mu)\) computed over a subset \(W\) of the data points. A key component to evaluating this subgradient lies in computing \(\nabla_{\theta}x(\theta,y)\). For this, we make use of the results of [1], which enable us to find gradients of optimal solutions for convex problems by differentiating through the KKT optimality conditions.
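A minimal sketch of evaluating the augmented Lagrangian on a mini-batch follows, assuming hypothetical callables `obj_fn(z, w)` and `constr_fn(z, w)` that return the per-sample values of \(\ell\) and \(h\) from (8) (each call internally solves the lower level problem (9) for the corresponding \(y\)).

```
import numpy as np

def augmented_lagrangian(z, lam, mu, batch, obj_fn, constr_fn):
    """Mini-batch estimate of L(z, lambda, mu) = F(z) + lambda*H(z) + (mu/2)*H(z)^2."""
    F = np.mean([obj_fn(z, w) for w in batch])      # empirical E[l(z, w)]
    H = np.mean([constr_fn(z, w) for w in batch])   # empirical E[h(z, w)]
    return F + lam * H + 0.5 * mu * H ** 2
```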
The procedure to learn the optimal \(z\) is described in Algorithms 1 and 2. Algorithm 1 details the outer level Lagrangian updating procedure for convergence to an \(\epsilon\)-KKT point in expectation to (8), while Algorithm 2 details obtaining \(\epsilon\)-stationary points in expectation of \(L\). An \(\epsilon\)-KKT point in expectation is defined as follows.
**Definition 3.1** (\(\epsilon\)-KKT point in expectation [21]).: _Given \(\epsilon>0\), a point \(z=(\theta,\alpha)\in\mathbf{R}^{m\times m}\times\mathbf{R}^{m}\times\mathbf{R}\) is an \(\epsilon\)-KKT point in expectation to (8) if there is a vector \(\gamma\in\mathbf{R}\) such that_
\[\mathbf{E}[\|H(z)\|^{2}]\leq\epsilon^{2},\quad\mathbf{E}[\mathbf{dist}(0, \partial F(z)+J_{H}(z)^{T}\gamma)^{2}]\leq\epsilon^{2},\]
_where \(J_{H}(z)\) is the Jacobian of function \(H\) at \(z\)._
While these true expectations cannot be computed, we can guarantee the conditions above at convergence [21].
```
1: given \(z^{0}=(A^{\mathrm{init}},b^{\mathrm{init}},\alpha^{0})\), \(\lambda^{0},\mu^{0},\kappa,\sigma,\gamma_{\mathrm{max}},\epsilon\)
2: for \(k=1,\ldots,k_{\mathrm{max}}\) do
3: obtain \(z^{k}\) satisfying \(\mathbf{E}[\mathbf{dist}(0,\partial_{z}L(z^{k},\lambda^{k-1},\mu^{k-1}))^{2}] \leq\epsilon^{2}\)\(\triangleright\) using Algorithm 2
4:\(\lambda^{k}\leftarrow\lambda^{k-1}+\min\{\mu^{k-1}(H(z^{k})),\gamma_{\mathrm{ max}}\}\)\(\triangleright\) update Lagrange multipliers, with all data
5:\(\mu^{k}\leftarrow\sigma\mu^{k-1}\)
   end for
6: return \(z^{k_{\mathrm{max}}}\)
```
**Algorithm 1** Stochastic augmented Lagrangian algorithm to solve (8)
For each outer iteration \(k\), we call Algorithm 2 with dataset \(W^{k}=U^{k}\times Y^{k}\subseteq W_{N\times J}\), where \(U^{k}\subseteq U^{N}\) and \(Y^{k}\subseteq Y^{J}\), sampled uniformly and with overall size \(|W^{k}|=M_{1}\). We initialize \(\lambda=\lambda^{k-1}\), \(\mu=\mu^{k-1}\). The values of \(\gamma\), and \(\delta\) are initialized as detailed in Appendix A.1. For each subsequent inner iteration \(t\), we use subsets \(W^{t}\) of the corresponding outer iterations' datasets, with overall size \(|W^{t}|=M_{2}\).
```
1: given \(\hat{z}^{0},\lambda,\mu,\kappa,\gamma,\delta,W,Y\)
2:\(x^{0}\leftarrow\) solve inner problem (9) over \(x\) for \(y\in Y\)
3: \(v^{0}\gets G(W,\hat{z}^{0},\lambda,\mu)\)
4: for \(t=1,\ldots,t_{\max}\) do
5: sample uniformly datasets \(W^{t}\subset W\) and \(Y^{t}\subset Y\)
6:\(\hat{z}^{t}\leftarrow\hat{z}^{t-1}-\gamma v^{t-1}\)
7:\(x^{t}\leftarrow\) solve inner problem (9) over \(x\) for all \(y\in Y^{t}\)
8:\(v_{1}^{t}\gets G(W^{t},\hat{z}^{t-1},\lambda,\mu)\)
9:\(v_{2}^{t}\gets G(W^{t},\hat{z}^{t},\lambda,\mu)\)
10:\(v^{t}\gets v_{2}^{t}+(1-\delta)(v^{t-1}-v_{1}^{t})\)
   end for
11: Choose \(\hat{z}\) uniformly at random from \(\{\hat{z}^{1},\ldots,\hat{z}^{t_{\max}}\}\)
12:return\(\hat{z}\)
```
**Algorithm 2** Subroutine to obtain \(\epsilon\)-stationary points in expectation, \(z^{k}\), for all \(k\)
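The recursive update in steps 8-10 of Algorithm 2 is a variance-reduced (momentum-style) subgradient estimator. The sketch below mirrors that inner loop in plain Python, assuming a hypothetical routine `subgrad(batch, z)` that returns a mini-batch subgradient of the augmented Lagrangian (solving the inner RO problems (9) is folded into this routine) and a hypothetical `sample_batch()` that draws the mini-batches \(W^t\).

```
import numpy as np

def inner_loop(z0, subgrad, sample_batch, full_batch, gamma, delta, t_max, seed=0):
    """Variance-reduced stochastic subgradient loop, in the spirit of Algorithm 2."""
    rng = np.random.default_rng(seed)
    z_prev = z0
    v = subgrad(full_batch, z_prev)            # v^0 computed on the full outer dataset W
    iterates = []
    for t in range(1, t_max + 1):
        W_t = sample_batch()                   # mini-batch W^t
        z = z_prev - gamma * v                 # step 6: gradient step with the current estimate
        v1 = subgrad(W_t, z_prev)              # step 8
        v2 = subgrad(W_t, z)                   # step 9
        v = v2 + (1 - delta) * (v - v1)        # step 10: recursive variance-reduced update
        iterates.append(z)
        z_prev = z
    return iterates[rng.integers(len(iterates))]   # step 11: uniformly random iterate
```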
#### 3.2.2 Convergence of the Augmented Lagrangian method
Our formulation fits in the framework of the nonconvex expectation constrained problem of [22, 23] and [24]. Under standard bounded gradient, smoothness, and regularity conditions, detailed further in Appendix A.1, Algorithm 1 converges to an \(\epsilon\)-KKT point in expectation to (8). In particular, we require the following regularity condition.
**Assumption 3.1** (regularity condition, [22, 21]).: _Given the finite and bounded domain \(\Theta\) of the decision variable \(z=(\theta,\alpha)\), there exists a constant \(v>0\) such that for any \(z\in\Theta\),_
\[v\|F(z)\|\leq\mathbf{dist}(-J_{H}(z)^{T}H(z),\mathcal{N}_{\Theta}(z)),\]
_where \(\mathcal{N}_{\Theta}(z)\) is the normal cone of \(\Theta\) at \(z\), defined as_
\[\mathcal{N}_{\Theta}(z)=\{s\mid s^{T}(y-z)\leq 0,\quad\forall y\in\Theta\}.\]
This condition ensures that when the penalty \(\mu\) is large, a near-stationary point of the augmented Lagrangian function is nearly feasible for (8). This assumption is neither stronger nor weaker than other common regularity conditions such as Slater's condition and the MFCQ condition, and has been proven for many applications [21, 22].
We can now give an informal theorem on the convergence of Algorithm 1.
**Theorem 3.1** (algorithm convergence (informal)).: _Let \(\epsilon\) be a small enough positive number. Then, under standard smoothness conditions and regularity Assumption 3.1, Algorithm 1 needs at most \(k_{\max}\) outer iterations to find an \(\epsilon\)-KKT point in expectation of (8), where_
\[k_{\max}=O(\log_{\sigma}(1/\epsilon)).\]
_In addition, the number of inner iterations \(t_{\max}^{k}\) needed by Algorithm 2 is_
\[t_{\max}^{k}=O(1/\epsilon^{3}).\]
See Appendix A.1 for the exact terms. This result is an application of [22, Theorem 1, Theorem 2] and [24, Corollary 2].
## 4 Probabilistic guarantees of constraint satisfaction
To satisfy the probabilistic guarantee (7) for out-of-sample data, we make use of empirical process theory and a covering number argument. We begin with a boundedness assumption on the function \(h\), defined in (8).
**Assumption 4.1**.: _The function \(h(z,w)\) maps to values in \((-C_{1},C_{2})\), where \(0\leq C_{1},C_{2}<\infty\), for all \(z,w\) in its support._
We normalize function \(h\) by \(C=C_{1}+C_{2}\) and define
\[\psi_{s}(u,y)=\frac{1}{C}\left(C_{1}+\frac{(g(x,u,y)-\alpha)_{+}}{\eta}+\alpha \right), \tag{10}\]
where \(s=(x,\alpha)\in\mathbf{R}^{n+1}\). The function \(\psi_{s}\) then maps to values between \(0\) and \(1\), and we consider the entire function class
\[\Psi=\{\psi_{s}\}_{s\in\mathcal{S}},\]
where we also impose the following assumption on \(\mathcal{S}\). These conditions lead to our finite sample probabilistic guarantee in Theorem 4.1.
**Assumption 4.2**.: _The set \(\mathcal{S}\) is bounded, i.e. \(\mathcal{S}=\{s=(x,\alpha)\in\mathbf{R}^{n+1}:\|s\|\leq R\},\;0<R<\infty.\)_
**Theorem 4.1** (Finite sample probabilistic guarantee).: _For function class \(\Psi\) defined above, if the following condition on the covering number \(N\) holds for some constants \(V\) and \(K\), for all probability measures \(Q\) and the \(L_{2}(Q)\) norm,_
\[\sup_{Q}N(\delta,\Psi,L_{2}(Q))\leq\left(\frac{K}{\delta}\right)^{V}, \tag{11}\]
_for every \(0<\delta<K,\) then when Algorithm 1 converges to an \(\epsilon\)-KKT point, we have the finite sample probabilistic guarantee_
\[\mathbf{P}^{N\times J}\left(\mathbf{P}_{(u,y)}(g(x,u,y)\leq 0)\geq 1-\eta \right)\geq 1-\beta, \tag{12}\]
_where \(\beta=(D\sqrt{N}C\tau/\sqrt{V})^{V}\exp(-2NC^{2}\tau^{2})\), \(D\) is a constant that depends only on \(K\), and \(\tau\leq-(\kappa+\epsilon)/C\). This is stronger than and implies the probabilistic guarantee given in (7), as it holds for all \(x\)._
Proof.: By the definition of the function class \(\Psi\) and Assumptions 4.2, 4.1, we can apply [23, Theorem 2.14.9] to get
\[\mathbf{P}^{N\times J}\left(\sup_{\psi_{s}\in\Psi}\left\|\frac{1}{J}\frac{1}{ N}\sum_{j=1}^{J}\sum_{i=1}^{N}\psi_{s}(d_{i},y_{j})-\mathbf{E}_{(u,y)}[\psi_{s}(u,y)] \right\|\leq\tau\right)\geq 1-\left(\frac{D\sqrt{N}C\tau}{\sqrt{V}} \right)^{V}\exp(-2NC^{2}\tau^{2}).\]
By the convergence of Algorithm 1 to an \(\epsilon\)-KKT point in expectation, we have that the empirical **CVaR** will be close to \(\kappa\), which given the Equation (10), means
\[\frac{1}{NJ}\sum_{j=1}^{J}\sum_{i=1}^{N}\psi_{s}(d_{i},y_{j})\leq(1/C)(\kappa+ \epsilon).\]
Therefore, we have
\[\mathbf{P}^{N\times J}\left(\sup_{\psi_{s}\in\Psi}\mathbf{E}_{(u,y)}[\psi_{s} (u,y)]\leq(1/C)(\kappa+\epsilon)+\tau\right)\geq 1-\beta,\]
where \(\beta=(D\sqrt{N}C\tau/\sqrt{V})^{V}\exp(-2NC^{2}\tau^{2})\). If \(\kappa\) is set such that \((1/C)(\kappa+\epsilon)\leq-\tau\), we then have
\[\mathbf{P}^{N\times J}\left(\mathbf{CVaR}(g(x,u,y),\eta)\leq 0\right)\geq 1-\beta,\]
which implies
\[\mathbf{P}^{N\times J}\left(\mathbf{P}_{(u,y)}(g(x,u,y)\leq 0)\geq 1-\eta \right)\geq 1-\beta. \tag{13}\]
Covering number for \(L\)-Lipschitz functions. When \(\psi_{s}\) is \(L\)-Lipschitz in \(s\) for all \(u\) and \(y\), _i.e._
\[|\psi_{s}(u,y)-\psi_{s^{\prime}}(u,y)|\leq L\|s-s^{\prime}\|_{2},\]
we can apply standard bounds on the covering number of Lipschitz losses [13, 14, 15]. In this case, the covering number condition (11) reduces to
\[\sup_{Q}N(\delta,\Psi,L_{2}(Q))\leq N(\delta/L,\mathcal{S},\|\cdot\|_{2})\leq \left(1+\frac{RL}{\delta}\right)^{n+1}\leq\left(\frac{K}{\delta}\right)^{n+1},\]
for every \(0<\delta<K\), where \(K=\delta+RL\). Then, \(\beta=(D\sqrt{N}C\tau/\sqrt{n+1})^{n+1}\exp(-2NC^{2}\tau^{2})\). In practice, affine and max-of-affine \(g\) functions fit in this function class.
## 5 The LROPT package
We introduce the Python package LROPT, which builds on CVXPY [13] and Cvxpylayers [1], to solve robust optimization problems while learning the optimal parametrization \(\theta\) of the uncertainty set. We automatically handle the constraint reformulation process to remove the dependency on the uncertain parameter \(u\) and set \(\mathcal{U}(\theta)\), in order to transform the robust problem (1) into an equivalent convex one. The package is available at:
[https://stellatogrp.github.io/lropt](https://stellatogrp.github.io/lropt).
In each LROPT problem, the user may define the uncertainty set \(\mathcal{U}(\theta)\) by electing to either:
1. Explicitly define the parameterization \(\theta\) upfront, or
2. Provide a dataset \(U^{N}\) of past realizations of \(u\), and a set \(Y^{J}\) of instances of \(y\), leaving \(\theta\) to be automatically learned via the procedure described in Section 3.2.1.
As long as the user-specified robust problem follows the rules defined in Section 5.3, LROPT will automatically dualize and solve it. Example formulations are given in Appendices A.2 and A.3, with proofs provided in Appendix A.5. For code details and examples, see Section 5.2.
### Uncertainty sets
For ease of use, LROPT is equipped with structure for describing an array of common uncertainty sets. For this section, we suppress the dependency on the problem parameter \(y\), abbreviating \(f(x,y)\) as \(f(x)\) and \(g(x,u,y)\) as \(g(x,u)\), since the \(y\)'s are fixed data for each problem and independent of the dualization procedure. As described, every RO problem is written with an uncertain parameter \(u\) and an uncertainty set \(\mathcal{U}(\theta)\), which together describe an uncertain constraint as in (1)
\[g(x,u)\leq 0\quad\forall u\in\mathcal{U}(\theta). \tag{14}\]
Here, we give an example set, and describe its parameterization \(\theta\). We leave the description and reformulations of the other supported uncertainty sets to Appendices A.2 and A.3 and the codebase.
**Ellipsoidal uncertainty.** Arguably the most common uncertainty set used in RO is the ellipsoidal uncertainty set, which we write as
\[\mathcal{U}_{\text{ellip}}(\theta)=\{u\mid\|Au+b\|_{2}\leq 1\},\]
where \(\theta=(A,b)\). For a simple affine constraint
\[g(x,u)=(Pu+a)^{T}x, \tag{15}\]
we obtain the following convex reformulation for the original RO problem.
\[\begin{array}{ll}\text{minimize}&f(x)\\ \text{subject to}&a^{T}x-b^{T}\gamma+\|\gamma\|_{2}\leq 0\\ &A^{T}\gamma=P^{T}x,\end{array} \tag{16}\]
where \(\gamma\in\mathbf{R}^{m}\) is an auxiliary variable.
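As a concrete illustration (plain CVXPY, not LROPT code), the convex reformulation (16) can be solved directly for a small toy instance; all problem data below are placeholders, and the simplex constraints on \(x\) are added only to keep the toy instance bounded, they are not part of (16).

```
import cvxpy as cp
import numpy as np

n, m = 5, 4
rng = np.random.default_rng(0)
c = rng.standard_normal(n)             # placeholder linear objective f(x) = c^T x
P = 0.1 * rng.standard_normal((n, m))  # g(x, u) = (P u + a)^T x
a = -2.0 * np.ones(n)
A, b = np.eye(m), np.zeros(m)          # uncertainty-set shape theta = (A, b)

x = cp.Variable(n)
gamma = cp.Variable(m)                 # auxiliary variable from the dualization
constraints = [a @ x - b @ gamma + cp.norm(gamma, 2) <= 0,
               A.T @ gamma == P.T @ x,
               cp.sum(x) == 1, x >= 0]  # simplex constraints, only to bound the toy instance
prob = cp.Problem(cp.Minimize(c @ x), constraints)
prob.solve()
print(prob.status, prob.value)
```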
#### 5.1.1 Training the sets
To learn the uncertainty set from data, the user provides the datasets \(U^{N}\) and \(Y^{J}\) to approximate the augmented Lagrangian outer function \(L(z,\lambda,\mu)\) detailed in Section 3.2.1. The learning procedure is then handled by pytorch to build a computational graph enabling _autograd_, in addition to Cvxpylayers [1] to differentiate through the KKT conditions of the inner problem (9). For more details on how Cvxpylayers embeds an optimization problem as a layer in a neural network, see [1]. Our package applies the stochastic augmented Lagrangian Algorithm 1, with all parameters user-specifiable, including learning-rate schedulers.
### Automatic canonicalization procedure
One of the core features of LROPT is the novel canonicalization process which converts the human-readable RO problem with uncertain parameters into a convex optimization problem. The procedure is depicted in Figure 2.
The reformulation of the uncertain constraints into a convex CVXPY problem is done via a canonicalization process whose steps are described in more detail in Appendix A.6. To illustrate this process, consider the same \(g(x,u)=(Pu+a)^{T}x\) from Equation (15) and the ellipsoidal uncertainty set \(\mathcal{U}_{\text{ellip}}(\theta)=\mathcal{U}_{\text{ellip}}(A,b)=\{u\ |\ ||Au+b||_{2}\leq 1\}\). In LROPT syntax, this problem is described in Listing 1, where \(A\) and \(b\) could be pre-defined and passed to the uncertainty set.
```
x = cp.Variable(n)
u = lropt.UncertainParameter(n, uncertainty_set=lropt.Ellipsoidal(A=A, b=b))
prob = lropt.RobustProblem(cp.Minimize(c @ x), [(P @ u + a) @ x <= 0])
prob.solve()
```
**Listing 1** LROPT example with pre-defined uncertainty set parameters \(\theta\)
Figure 2: LROPT’s procedure for solving robust problems
If instead we wanted to _learn_ the parameters \((A,b)\), we would import a dataset of previous realizations of the uncertain parameter \(u\), then call the train function. This procedure is given in Listing 2.
```
# data_y, data_u = ... datasets for y and u
y = Parameter(n, data=data_y)
u = UncertainParameter(n, uncertainty_set=Ellipsoidal(data=data_u))
x = cp.Variable(n)
objective = cp.Minimize(a @ x)
constraints = [x @ u + y @ x <= c]
prob = RobustProblem(objective, constraints)
prob.train()
prob.solve()
```
**Listing 2** LROPT example for training uncertainty set parameters \(\theta\)
#### 5.2.1 Procedure overview
By defining \(c(u)=||Au+b||_{2}\), we can write the uncertain constraint in Equation (1) as
\[\min_{\lambda\geq 0}\max_{u\in\mathcal{U}}g(x,u)-\lambda(c(u)-1)\leq 0.\]
This constraint can be rewritten as
\[[-g]^{*}(x,z_{1})+[\lambda c]^{*}(z_{2})+\lambda\leq 0,\quad z_{1}+z_{2}=0,\]
where \(z_{1},z_{2}\) are new decision variables, and \([-g]^{*}(x,z_{1})\) is the conjugate function of \(-g(x,u)\) in parameter \(u\) (see Appendix A.5). In terms of the toy constraint, this is equivalent to writing
\[[-g_{1}]^{*}(x,z_{1})+[\lambda c]^{*}(z_{2})+a^{T}x+\lambda\leq 0,\quad z_{1}+z_ {2}=0, \tag{17}\]
where \(g_{1}(x,u)=u^{T}P^{T}x\). In essence, LROPT works in two main steps:
1. **Separate uncertainty.** LROPT breaks down the original uncertain constraint \(g(x,u)\) from (1) into a sum of subexpressions with and without uncertain parameters. This way, LROPT can take the appropriate conjugates of the more manageable uncertain subexpressions, such as in (17), while ignoring the terms not affected by the uncertainty.
2. **Remove uncertainty.** LROPT resolves the above conjugate functions in (17) to obtain explicit constraints of the following form, \[a^{T}x-b^{T}\gamma+\lambda\leq 0\] \[z_{1}=-P^{T}x,\quad z_{2}=A^{T}\gamma\] \[z_{1}+z_{2}=0,\quad||\gamma||_{2}\leq\lambda,\] which simplifies exactly to (16).
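As a quick numerical sanity check of this dualization (a standalone numpy illustration with placeholder data, not LROPT code), one can compare the worst-case value of the toy constraint over the ellipsoid with the dualized expression from (16); when the shape matrix \(A\) is invertible, the constraint \(A^{T}\gamma=P^{T}x\) pins down \(\gamma\) and the two values coincide.

```
import numpy as np

rng = np.random.default_rng(1)
n, m = 5, 4
P, a = rng.standard_normal((n, m)), rng.standard_normal(n)
A = np.eye(m) + 0.1 * rng.standard_normal((m, m))   # invertible shape matrix (placeholder)
b = 0.1 * rng.standard_normal(m)
x = rng.standard_normal(n)                           # any fixed decision

# Worst case of (P u + a)^T x over {u : ||A u + b||_2 <= 1}, via u = A^{-1}(v - b), ||v||_2 <= 1.
A_inv = np.linalg.inv(A)
gamma = A_inv.T @ (P.T @ x)                          # the unique gamma with A^T gamma = P^T x
worst_case = a @ x + np.linalg.norm(gamma) - x @ P @ A_inv @ b

# Dualized expression from (16): a^T x - b^T gamma + ||gamma||_2.
dual_value = a @ x - b @ gamma + np.linalg.norm(gamma)
print(np.isclose(worst_case, dual_value))            # True
```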
### LROPT ruleset
Here we describe a set of rules on the types of functions \(g(x,u,y)\) that LROPT can reformulate and solve. Similar to the disciplined convex programming (DCP) and disciplined parameterized programming (DPP) rules enforced by Cvxpy [1] for solving convex optimization problems, or the DSP rules derived from [13] and implemented by [11] for solving convex-concave saddle point problems, the LROPT ruleset is not necessary for a RO problem to have a convex reformulation, but it is sufficient and indeed covers a broad range of robust problems.
We say that \(g(x,u,y)\) is LROPT-compliant if it can be written as a sum of smaller subexpressions
\[g(x,u,y)=\sum_{i=1}^{n}g_{i}(x,u,y), \tag{18}\]
where each subexpression \(g_{i}(x,u,y)\) is DPP in \(y\) and either
* DPP in the parameter \(u\), and DCP convex in variable \(x\)
* A non-negative scaling of LROPT atoms described in Section 5.3.1
* A maximum over any number of the previous expressions
Broadly speaking, a RO problem has a convex reformulation if \(g(x,u)\) is concave in \(u\) and convex in \(x\), or is represented by the maximum of expressions which are concave in \(u\) and convex in \(x\).
#### 5.3.1 LROPT atoms
The LROPT package introduces the following new atoms which are convex in \(x\) and concave in \(u\).
**Matrix-vector product.**: The matrix-vector multiplication syntax `@` is supported for writing affine expressions in \(u\), such as `x @ P @ u`.
**Quadratic form.**: The atom quad_form(u,A*x) represents the function \(g_{i}(x,u)=(u^{T}Au)x\) where \(A\in\mathbf{R}^{m\times m}\), \(A\preceq 0\), and \(x\in\mathbf{R}\) is a scalar variable multiplying the quadratic form.
**Weighted log-sum-exp.**: The atom log_sum_exp(u,x) represents the function \(\log\left(\sum_{i=1}^{m}u_{i}e^{x_{i}}\right)\). Here \(x\in\mathbf{R}^{m}\) and \(u\in\mathbf{R}^{m}\) must be of the same dimension.
**Weighted \(l_{2}\) norm.**: The atom weighted_norm2 represents the function \(\left(\sum_{i=1}^{m}u_{i}x_{i}^{2}\right)^{\frac{1}{2}}\). Again, \(x\in\mathbf{R}^{m}\) and \(u\in\mathbf{R}^{m}\) must be of the same dimension.
**Matrix quadratic form.**: The atom mquad_form(U,x) represents the function \(g(x,U)=x^{T}Ux\), where the uncertain matrix \(U\succeq 0\) is constrained by LROPT to be PSD.
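The atoms above must be convex in \(x\) and concave in \(u\); as a quick standalone check (plain numpy with placeholder data, not LROPT code), the weighted log-sum-exp satisfies the corresponding midpoint inequalities:

```
import numpy as np

def wlse(u, x):
    # weighted log-sum-exp: log(sum_i u_i * exp(x_i))
    return np.log(np.sum(u * np.exp(x)))

rng = np.random.default_rng(0)
m = 6
x1, x2 = rng.standard_normal(m), rng.standard_normal(m)
u1, u2 = rng.uniform(0.1, 1.0, m), rng.uniform(0.1, 1.0, m)

# Concave in u: the value at the midpoint dominates the average of the values.
print(wlse((u1 + u2) / 2, x1) >= (wlse(u1, x1) + wlse(u2, x1)) / 2)   # True
# Convex in x: the value at the midpoint is dominated by the average of the values.
print(wlse(u1, (x1 + x2) / 2) <= (wlse(u1, x1) + wlse(u1, x2)) / 2)   # True
```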
#### 5.3.2 Multiple uncertain constraints
LROPT allows for expressing multiple uncertain constraints as a single maximum of concave constraint. In particular, given a set of constraints
\[g_{l}(x,u,y)\leq 0,\quad l=1,\ldots,L,\]
where each \(g_{l}\) satisfies the ruleset defined above, we can write a single joint uncertain constraint,
\[g(x,u,y)=\max_{l}g_{l}(x,u,y)\leq 0, \tag{19}\]
as the joint constraint is still compliant with the ruleset. We also allow for disjoint constraints. In this case, a separate \(h\) can be defined for each uncertain constraint, and the expectation constraint in (8) and the corresponding multiplier \(\lambda\) become vector-valued.
### Default training parameters
Algorithms 1 and 2 rely on various parameters that the user can specify. We set the default values as follows:
1. The reshaping parameters \(\theta=(A,b)\) are initialized as \(A=\hat{\Sigma}^{-1/2}\), \(b=-A\hat{\mu}\), where \(\hat{\Sigma}\) is the empirical covariance matrix of the data, and \(\hat{\mu}\) is the empirical mean.
2. The **CVaR** parameters are: risk \(\eta=0.05\) and target value \(\kappa=-0.015\). The initial value of the **VaR** proxy is \(\alpha^{0}=0\).
3. The Lagrangian multipliers and penalty are initialized as \(\lambda^{0}=0\) and \(\mu^{0}=1\). The penalty update is \(\sigma=1.01\), and the maximum \(\lambda\) update value is \(\gamma_{\max}=100\).
4. The subsets of the training data used at each iteration have sizes \(M_{1}=NJ\), \(M_{2}=NJ/10\).
5. We use \(t_{\max}=10\) inner iterations and \(k_{\max}=1000\) outer iterations.
6. We set the step sizes to \(\gamma=0.0001\) and \(\delta=0.01\).
## 6 Examples
The following experiments are run on an M1 Macbook with 8 GB memory. The learning procedure relies on the Python package Cvxpylayers [1], and is implemented using the LROPT package. The code can be accessed at
[https://github.com/stellatogrp/lropt/](https://github.com/stellatogrp/lropt/).
For each experiment, we compare the following methods.
**Standard RO.**: We perform standard RO, where the shape of the uncertainty set is determined by the empirical mean and variance of the data [1]. We use grid search to find the size parameter \(\rho\) that gives an in-sample empirical probability of constraint violation of \(0.03\).
**Wasserstein DRO.**: We also perform Wasserstein DRO [1], where the uncertainty set is constructed around individual data points. We again use grid search to find the size parameter \(\rho\) that gives an in-sample empirical probability of constraint violation of \(0.03\).
**LROPT.**: For LROPT, we apply the training method given in Section 3.2.1, with parameters set as in Section 5.4. The step sizes \(\gamma\) are chosen problem-wise.
**LROPT + Fine Tuning.**: Upon completion of the LROPT training procedure, we also fine tune the size of the uncertainty set using grid search to achieve an in-sample empirical probability of constraint violation of \(0.03\).
For each setting, we repeat the experiment \(100\) times, each with a new independent dataset. For each method, we evaluate the performance metrics in Table 1.
### Portfolio management
**Problem description.** We consider the classic portfolio management problem where we select a portfolio \(x\in\mathbf{R}^{n}\) of stocks to maximize returns. Our selected portfolio should not deviate too far from our previous holdings, denoted \(x^{\text{prev}}\in\mathbf{R}^{n}\). We parametrize the optimization problem by \(y=x^{\text{prev}}\), obtaining the following RO formulation
\[\begin{array}{ll}\text{minimize}&t+\eta\|x-x^{\text{prev}}\|_{1}\\ \text{subject to}&-u^{T}x\leq t\quad\forall u\in\mathcal{U}(\theta)\\ &\mathbf{1}^{T}x=1,\quad x\geq 0.\end{array}\]
Therefore, the uncertain constraint is
\[g(t,x,u,x^{\text{prev}})=-u^{T}x-t\leq 0,\quad\forall u\in\mathcal{U}(\theta).\]
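A hedged sketch of how this problem could be written with LROPT is given below, reusing only the constructs that appear in Listings 1 and 2; the data arrays and the trade-off weight are placeholders, and the exact signatures (e.g., `lropt.Parameter` with a `data` argument) are assumptions based on those listings rather than documented API, so the package documentation should be consulted for details.

```
import cvxpy as cp
import lropt
import numpy as np

n = 15
rng = np.random.default_rng(0)
data_u = rng.standard_normal((500, n))             # placeholder past return realizations
data_prev = rng.dirichlet(np.ones(n), size=5)      # placeholder previous holdings y = x_prev

u = lropt.UncertainParameter(n, uncertainty_set=lropt.Ellipsoidal(data=data_u))
x_prev = lropt.Parameter(n, data=data_prev)        # assumed analogous to Listing 2
x, t = cp.Variable(n), cp.Variable()
trade_off = 0.01                                   # placeholder weight for the holdings penalty

objective = cp.Minimize(t + trade_off * cp.norm(x - x_prev, 1))
constraints = [-u @ x <= t, cp.sum(x) == 1, x >= 0]
prob = lropt.RobustProblem(objective, constraints)
prob.train()
prob.solve()
```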
| Name | Description |
| --- | --- |
| Obj. | Objective value estimate for problem (8) |
| \(\hat{\eta}\) | Out-of-sample empirical probability of constraint violation |
| \(\hat{\beta}\) | Estimated confidence, given as \(\mathbf{P}(\hat{\eta}>\eta)\) |
| Time | Average solution time of the final robust problem after training |

Table 1: Performance metrics in numerical examples, averaged over \(100\) runs.
**Data.** We set \(n=15\) stocks and \(N=500\) data points for both the training and testing sets. We generate uncertain returns \(u\) from 3 normal distributions, to simulate 3 uncertain market environments. For distribution \(l\), \(\mu_{i}=0.1+\gamma_{l}0.05i\) for all \(i=1,\ldots,n\), with scaling parameters \(\gamma=(1,2,3)\). The variance is \(\sigma_{i}=0.02^{2}+(0.08i)^{2}\) for all distributions. This means that the higher the index of the stock, the higher its return and variance. To enforce holdings summing up to 1, we set each instance of the previous holdings \(x^{\text{prev}}\) using the Dirichlet distribution, with parameter value \((2.5,1,.5,.5,.4,.3,.3,.2,.2,.15,.1,.1,.1,.1)\), corresponding to the 15 different stocks. The decreasing values by index indicate fewer holdings of the riskier stocks.
**Results.** In Table 2, we compare the performance of our LROPT method with standard RO and Wasserstein DRO, as described in the beginning of this section. We note that the final learned reshaped uncertainty set achieves a low empirical probability of constraint violation while also ensuring a low out-of-sample objective value. In comparison, standard RO and Wasserstein DRO provide worse trade-offs.
In Figure 3, for one of the 100 runs of the experiment, we show the trade-off between the average objective value and empirical probability of constraint violation for the standard and reshaped uncertainty sets. The graphs are obtained by varying the size \(\rho\) of the two sets. We notice that the tradeoff improves for the reshaped sets.
### Multi-product newsvendor
**Problem description.** We consider a newsvendor problem where two series of correlated products are sold in conjunction. When both series are available, they will be sold as a set, and when only series one (the main product) is available, only that series will be sold. For simplicity, we allow fractional orders. At the beginning of each day, the vendor produces \(x_{1}\in\mathbf{R}^{n}\) products of the first series, and \(x_{2}\in\mathbf{R}^{n}\) products of the second series, with respective costs \(k_{1}\in\mathbf{R}^{n}\), \(k_{2}\in\mathbf{R}^{n}\). As the supplementary products (series two) will not
| Method | LROPT | LROPT + Fine Tuning | RO | Wass. DRO |
| --- | --- | --- | --- | --- |
| Obj. | \(-1.33\times 10^{-1}\) | \(-1.92\times 10^{-1}\) | \(-1.80\times 10^{-1}\) | \(-1.78\times 10^{-1}\) |
| \(\hat{\eta}\) | \(1.65\times 10^{-2}\) | \(3.23\times 10^{-2}\) | \(2.84\times 10^{-2}\) | \(3.03\times 10^{-2}\) |
| \(\hat{\beta}\) | \(0\) | \(2.80\times 10^{-2}\) | \(1.60\times 10^{-2}\) | \(2.00\times 10^{-2}\) |
| Time | \(6.12\times 10^{-4}\) | \(5.90\times 10^{-4}\) | \(5.96\times 10^{-4}\) | \(6.62\times 10^{-2}\) |

Table 2: Portfolio management: comparing the performance of various methods, averaged across 100 iterations. LROPT: learning the shape of the uncertainty set. LROPT + Fine Tuning: tuning the size of the learned uncertainty set. RO: standard RO where only the size is tuned. Wass. DRO: Wasserstein DRO. For the last three methods, the in-sample empirical probability of constraint violation is tuned to be 0.03. Obj.: the average out-of-sample objective value. Time: final solve-time once the uncertainty sets are constructed, not including training time. For all methods, the same number of data points for testing and training is used.
be sold on their own, we must have \(x_{1}\geq x_{2}\). These products will be sold at the prices \(p_{1}>k_{1},p_{2}>k_{2}\), until either the demand \(u\) or inventory \(x\) is exhausted. We parametrize the problem with \(y=p_{1}\). For each problem instance, the total cost to minimize is then
\[c(x)=k_{1}^{T}x_{1}-p_{1}^{T}\min\{x_{1},u\}+k_{2}^{T}x_{2}-p_{2}^{T}\min\{x_{2 },u\}.\]
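For reference, the total cost above can be evaluated directly for a given pair of order quantities and a realized demand; the small numpy helper below (illustration only, with all arguments supplied by the caller) mirrors the formula:

```
import numpy as np

def newsvendor_cost(x1, x2, u, k1, k2, p1, p2):
    # c(x) = k1^T x1 - p1^T min(x1, u) + k2^T x2 - p2^T min(x2, u)
    return (k1 @ x1 - p1 @ np.minimum(x1, u)
            + k2 @ x2 - p2 @ np.minimum(x2, u))
```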
Given a historical set of uncertain demands, we solve the data-driven RO problem
\[\begin{array}{ll}\text{minimize}&\tau+\tau_{1}\\ \text{subject to}&k_{1}^{T}x_{1}-p_{1}^{T}\min\{x_{1},u\}\leq\tau&\forall u\in \mathcal{U}(\theta)\\ &k_{2}^{T}x_{2}-p_{2}^{T}\min\{x_{2},u\}\leq\tau_{1}&\forall u\in\mathcal{U}( \theta)\\ &x_{1}\geq x_{2}\geq 0.\end{array}\]
As there are two uncertain constraints, we model them as a single maximum of concave constraints as in (19). Specifically, we have
\[g(x_{1},x_{2},u,p_{1})=\max(k_{1}^{T}x_{1}-p_{1}^{T}\min\{x_{1},u\}-\tau,k_{2} ^{T}x_{2}-p_{2}^{T}\min\{x_{2},u\}-\tau_{1})\leq 0.\]
**Data.** We set \(n=15\), and \(N=300\) data points for both the training and testing sets. We generate \(k_{1}\in\mathbf{R}^{n}\) uniformly on \([2,5]\), and \(k_{2}\in\mathbf{R}^{n}\) uniformly on \([1,3]\). The retail prices \(p_{1}\in\mathbf{R}^{n}\) are set as \(k_{1}+r_{1}^{\prime}\), where \(r_{1}^{\prime}\in\mathbf{R}^{n}\) is generated uniformly on \([1,3]\), and \(p_{2}\in\mathbf{R}^{n}\) is similarly set as \(k_{2}+r_{2}^{\prime}\), where \(r_{2}^{\prime}\in\mathbf{R}^{n}\) is generated uniformly on \([0,2]\). We consider 5 problem instances, where the retail price \(p_{1}\) is slightly perturbed. For each instance \(j\), we let \(y_{j}=p_{1j}=p_{1}+s_{j}\), where \(s_{j}\) follows a normal distribution with mean 0 and standard deviation 1. The unknown demand is distributed as a log-normal distribution, where the normal distribution has \(\mu\sim U[-0.2,2]^{n}\) and \(\Sigma=0.1FF^{T}\), where \(F\in\mathbf{R}^{n\times 2}\) has elements drawn from the standard normal distribution.
**Results.** In Table 3, we again compare the performance of our LROPT methods with Wasserstein DRO, and note that the reshaped uncertainty set outperforms the others. Visualising the performance of the learned uncertainty sets for one run of the experiment,
Figure 3: Left: average out-of-sample objective values versus empirical **CVaR** for both parametrizations of the uncertainty set, for one run of the experiment. This graph is obtained by varying the \(\rho\) value (size) for both the standard and reshaped sets. The average is taken across all problem instances, and the shaded region represents the 0.1 to 0.9 quantiles. Right: average out-of-sample objective values versus empirical probability of constraint violation, for one run of the experiment.
we observe in Figure 4 that the tradeoff between the average objective value and empirical probability of constraint satisfaction is much better for the reshaped set, thus motivating our approach.
### Inventory management
We implement an adjusted version of the inventory management example from [14, Section 5.2.1], a two-stage adjustable RO problem. Instead of the deterministic polyhedral uncertainty set considered there, we assume the uncertainty set to be ellipsoidal, with parameters to be learned from data.
**Problem description.** We consider a retail network consisting of a single warehouse and \(n\) different retail points, indexed by \(i=1,\ldots,n\). For simplicity, only a single product is sold. There are a total of \(C\) units of the product available, and each retail point holds 0 inventory and is capable of stocking at most \(c_{i}\) units. The transportation costs for distributing the
| Method | LROPT | LROPT + Fine Tuning | RO | Wass. DRO |
| --- | --- | --- | --- | --- |
| Obj. | \(-9.62\times 10^{1}\) | \(-1.30\times 10^{2}\) | \(-4.41\times 10^{1}\) | \(-1.05\times 10^{2}\) |
| \(\hat{\eta}\) | \(3.00\times 10^{-4}\) | \(3.05\times 10^{-2}\) | \(3.05\times 10^{-2}\) | \(3.09\times 10^{-2}\) |
| \(\hat{\beta}\) | \(0\) | \(9.00\times 10^{-2}\) | \(6.00\times 10^{-2}\) | \(1.50\times 10^{-1}\) |
| Time | \(5.60\times 10^{-4}\) | \(5.30\times 10^{-4}\) | \(6.10\times 10^{-4}\) | \(1.53\times 10^{-1}\) |

Table 3: Newsvendor: comparing the performance of various methods, averaged across 100 iterations. LROPT: learning the shape of the uncertainty set. LROPT + Fine Tuning: tuning the size of the learned uncertainty set. RO: standard RO where only the size is tuned. Wass. DRO: Wasserstein DRO. For the last three methods, the in-sample empirical probability of constraint violation is tuned to be 0.03. Obj.: the average out-of-sample objective value. Time: final solve-time once the uncertainty sets are constructed, not including training time. For all methods, the same number of data points for testing and training is used.
Figure 4: Left: average out-of-sample objective values versus empirical **CVaR** for both parametrizations of the uncertainty set, for one run of the experiment. The average is taken across all problem instances, and the shaded region represents the 0.1 to 0.9 quantiles. Right: average out-of-sample objective values versus probability of constraint violation, for one run of the experiment.
items at the \(i\)th retail point are \(t_{i}\) currency units per unit of inventory, and the operating cost is \(h_{i}\) per unit. The revenue per unit is \(r_{i}\), and customer demand is uncertain, with a nominal demand plus an additional amount dependent on \(f\) market factors. The formula for the demand at retail point \(i\) is
\[a_{i}=\bar{a}_{i}+q_{i}^{T}u,\]
where \(\bar{a}_{i}\in\mathbf{R}\) denotes the nominal demand, and \(q_{i}\in\mathbf{R}^{f}\) denotes the exposure of demand \(a_{i}\) to the market factors. We denote with \(u\) the unknown market factors, which belong to an uncertainty set \(\mathcal{U}(\theta)\). The decision variable is the stocking decisions for all retail points, denoted \(s\in\mathbf{R}^{n}\). In addition, the problem involves a vector \(w(u)\) of realized sales at each point, which is dependent on stocking decision \(s\) and the unknown factors \(u\) affecting the demand. We have
\[w_{i}(u)=\min\{s_{i},a_{i}\},\]
for each retail point. The manager makes stocking decisions to maximize worst-case profits, which is equivalent to minimizing worst-case loss, denoted as \(P\). We parametrize the problem with \(y=r\). For each problem instance, the RO formulation is given as
\[\begin{array}{ll}\text{minimize}&P\\ \text{subject to}&-r^{T}w(u)+(t+h)^{T}s\leq P,\quad\forall u\in\mathcal{U}( \theta)\\ &w_{i}(u)\leq s_{i},\quad i=1,\ldots,n,\quad\forall u\in\mathcal{U}(\theta)\\ &w_{i}(u)\leq\bar{a_{i}}+q_{i}^{T}u,\quad i=1,\ldots,n,\quad\forall u\in \mathcal{U}(\theta)\\ &\mathbf{1}^{T}s=C\\ &0\leq s\leq c,\end{array} \tag{20}\]
where variables \(s\in\mathbf{R}^{n}\), \(w\in\mathbf{R}^{n}\), and \(P\in\mathbf{R}\). To solve this, we first replace the adjustable decisions \(w_{i}(u)\) with their affine adjustable robust counterparts \(z_{i}^{0}+v_{i}^{T}u\), where \(z_{i}^{0}\in\mathbf{R}\) and \(v_{i}\in\mathbf{R}^{f}\) are auxiliary variables introduced as part of the linear decision rules [1]. The new formulation is
\[\begin{array}{ll}\text{minimize}&P\\ \text{subject to}&-r^{T}z^{0}-r^{T}Vu+(t+h)^{T}s\leq P,\quad\forall u\in \mathcal{U}(\theta)\\ &z_{i}^{0}+v_{i}^{T}u\leq s_{i},\quad i=1,\ldots,n,\quad\forall u\in\mathcal{ U}(\theta)\\ &z_{i}^{0}+v_{i}^{T}u\leq\bar{a_{i}}+q_{i}^{T}u,\quad i=1,\ldots,n,\quad\forall u \in\mathcal{U}(\theta)\\ &\mathbf{1}^{T}s=C\\ &0\leq s\leq c,\end{array} \tag{21}\]
where \(V=[v_{1},\ldots,v_{n}]^{T}\). As there are multiple uncertain constraints, we again apply a maximum of concave formulation as in Section 5.3.2.
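To make the substitution concrete, the minimal numpy sketch below (placeholder dimensions and data, illustration only) evaluates the affine decision rule for a single realization of the market factors:

```
import numpy as np

n, f = 10, 4
rng = np.random.default_rng(0)
z0 = rng.standard_normal(n)        # intercepts z_i^0 of the linear decision rules
V = rng.standard_normal((n, f))    # rows v_i^T
u = rng.standard_normal(f)         # one realization of the market factors
w_of_u = z0 + V @ u                # adjustable sales w(u) under the affine policy
print(w_of_u)
```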
**Data.** As in [14], we consider the case where \(n=10\), and \(C=2000\). All other problem data are independently sampled from uniformly distributed random variables. Inventory capacities are between 300 and 500. Transportation and operation costs are between 1 and 3. Sales revenues \(r\) are between 20 and 40, and nominal demand values are between 100 and 200. We consider 5 instances for \(r\), perturbing each by \(r_{j}^{{}^{\prime}}\), where \(r_{j}^{{}^{\prime}}\) follows a normal
distribution with mean 0 and standard deviation 0.1. The number of market factors \(f\) is 4, and exposure parameters \(Q\) are between \(-2\) and 2. We generate uncertain market factors \(u\) from 3 normal distributions, to simulate 3 uncertain market environments. For distribution \(j\), \(\mu_{i}=\gamma_{j}0.3i\) for all \(i=1,\ldots,f\), with scaling parameters \(\gamma=(1,2,3)\). The variance is \(\sigma_{i}=0.02^{2}+(0.1i)^{2}\) for all distributions. The total training sample size is \(N=300\), with 100 data points from each distribution. The testing sample size is the same. We assume the uncertainty set to be ellipsoidal, with norm \(p=2\).
**Results.** In Table 4, we compare the performance of our LROPT method with standard RO and Wasserstein DRO, and again note the desirable tradeoff given by our learned reshaped set. In Figure 5, for one of the 100 runs of the experiment, we again show the trade-off between the average objective value and empirical probability of constraint violation for the standard and reshaped uncertainty sets, and observe an improved trade-off for the reshaped set. |
2303.05582 | Generalization analysis of an unfolding network for analysis-based
Compressed Sensing | Unfolding networks have shown promising results in the Compressed Sensing
(CS) field. Yet, the investigation of their generalization ability is still in
its infancy. In this paper, we perform generalization analysis of a
state-of-the-art ADMM-based unfolding network, which jointly learns a decoder
for CS and a sparsifying redundant analysis operator. To this end, we first
impose a structural constraint on the learnable sparsifier, which parametrizes
the network's hypothesis class. For the latter, we estimate its Rademacher
complexity. With this estimate in hand, we deliver generalization error bounds
for the examined network. Finally, the validity of our theory is assessed and
numerical comparisons to a state-of-the-art unfolding network are made, on
synthetic and real-world datasets. Our experimental results demonstrate that
our proposed framework complies with our theoretical findings and outperforms
the baseline, consistently for all datasets. | Vicky Kouni, Yannis Panagakis | 2023-03-09T21:13:32Z | http://arxiv.org/abs/2303.05582v1 | # Generalization analysis of an unfolding network for analysis-based Compressed Sensing
###### Abstract
Unfolding networks have shown promising results in the Compressed Sensing (CS) field. Yet, the investigation of their generalization ability is still in its infancy. In this paper, we perform generalization analysis of a state-of-the-art ADMM-based unfolding network, which jointly learns a decoder for CS and a sparsifying redundant analysis operator. To this end, we first impose a structural constraint on the learnable sparsifier, which parametrizes the network's hypothesis class. For the latter, we estimate its Rademacher complexity. With this estimate in hand, we deliver generalization error bounds for the examined network. Finally, the validity of our theory is assessed and numerical comparisons to a state-of-the-art unfolding network are made, on synthetic and real-world datasets. Our experimental results demonstrate that our proposed framework complies with our theoretical findings and outperforms the baseline, consistently for all datasets.
_Keywords--_ compressed sensing, deep unfolding, unfolding network, analysis sparsity, generalization error bounds, Rademacher complexity
## 1 Introduction
_Compressed Sensing_ (CS) is a modern technique for recovering signals \(x\in\mathbb{R}^{n}\) from incomplete, noisy observations \(y=Ax+e\in\mathbb{R}^{m}\), with \(A\in\mathbb{R}^{m\times n}\), \(m<n\). To date, various optimization algorithms are employed for tackling the CS problem [1], [2], [3], [4]. However, the fact that model-based methods may lack in terms of time complexity and/or reconstruction quality has led researchers to develop data-driven approaches for dealing with CS [5], [6], [7]. In a recent line of research, the merits of iterative methods and deep neural networks are combined in _deep unfolding_ [8], [9]. The latter constitutes a technique for interpreting the iterations of optimization algorithms as layers of a neural network, which reconstructs the signals of interest from their compressive measurements.
Deep unfolding networks (DUNs) for inverse problems [10], [11] are preferred to standard deep learning architectures, since they enjoy advantages like interpretability [12], prior knowledge of signal structure [13] and a relatively small number of trainable parameters [14]. The same holds true in the case of CS, where state-of-the-art (SotA) unfolding networks [15], [16], [17], [18], [19] typically learn a function called _decoder_, which reconstructs \(x\) from \(y\). In fact, unfolding networks based on the _iterative soft-thresholding algorithm_ (ISTA [2]) and the _alternating direction of multipliers method_ (ADMM [4]) seem to be the most popular classes of DUNs targeting the CS problem. Such networks can also learn - jointly with the decoder - a sparsifying transform for the data [20], [21], [22], [23], [24], [25], integrating that way a dictionary learning technique. Due to the advantages that the latter has offered when applied in model-based methods [26], [27], [28], it seems intriguing to examine its effectiveness when combined with DUNs.
Nevertheless, most of the aforementioned ISTA- and ADMM-based DUNs promote synthesis sparsity [29] in their framework, since the learnable sparsifying dictionary satisfies some orthogonality constraint. Distinct from its synthesis counterpart [30], the analysis sparsity model may be more advantageous for CS [31]. For example, it takes into account the redundancy of the involved analysis operators, leading to a more flexible sparse representation of the signals, as opposed to orthogonal sparsifiers [32] (see Section 2 for a detailed comparison between the two sparsity models). To our knowledge, only one SotA ADMM-based DUN [24] - which comprises the preliminary work of this article - solves the CS problem by entailing analysis sparsity, in terms of learning a sparsifying redundant analysis operator.
From the mathematical viewpoint, the generalization analysis of deep neural networks [33], [34] attracts significant research interest [35], [36], [37], [38]. Nevertheless, the estimation of the generalization error of DUNs is still at an early stage. Particularly, generalization error bounds are mainly provided for the class of ISTA-based unfolding networks [20], [39], [40]. To our knowledge, the generalization ability of ADMM-based DUNs is not yet explained.
In this paper, distinct from the previous methods, we leverage a "built-in" characteristic of ADMM to impose specific structure on the learnable sparsifying redundant transform of a SotA ADMM-based DUN, namely ADMM-DAD [24]. For the latter, we estimate its generalization error, in terms of the Rademacher complexity of its associated hypothesis class. In the end, we present empirical evidence supporting our derived generalization error bounds. Our contributions are summarized below.
1. Inspired by recent representatives of the classes of ISTA- and ADMM-based unfolding networks [20], [22], [23], [24] (see Section 2 for a brief description of a subset of them), we address the generalization analysis of a SotA ADMM-based DUN, namely ADMM-DAD [24], which deals with analysis-based CS. Towards that direction, we first exploit inherent structure of the original ADMM algorithm and impose a structural constraint on the learnable sparsifier of ADMM-DAD. Our proposed framework - presented in Section 3.1 - induces a frame property in the learnable redundant analysis operator, which parametrizes/"enhances" the hypothesis class of ADMM-DAD. To our knowledge, we are the first to impose such a structure on the hypothesis class of a DUN solving the analysis-based CS problem.
2. In Section 3.4, we employ chaining to estimate the Rademacher complexity of the enhanced hypothesis class of ADMM-DAD. Our novelty lies in studying the generalization ability of this ADMM-based DUN, by means of the afore-stated
Rademacher estimate. The generalization error bounds for ADMM-DAD are presented in Section 3.5.
3. We verify our theoretical guarantees in Section 4, by numerically testing ADMM-DAD on a synthetic dataset and a real-world image dataset, i.e., MNIST [41]. We also compare the performance of ADMM-DAD to that of a recent variant of ISTA-net [20]. In all experiments, ADMM-DAD outperforms the baseline and its generalization ability conforms with our theoretical results.
_Notation._ For a sequence \(a_{n}\) that is upper bounded by \(M>0\), we write \(\{a_{n}\}\leq M\). For a matrix \(A\in\mathbb{R}^{m\times n}\), we write \(\|A\|_{2\to 2}\) for its operator/spectral norm and \(\|A\|_{F}\) for its Frobenius norm. The \(l_{2}\)-norm of a vector in \(\mathbb{R}^{n}\) is represented by \(\|\cdot\|_{2}\). For a family of vectors \((\phi_{i})_{i=1}^{N}\) in \(\mathbb{R}^{n}\), \(N>n\), its associated analysis operator is given by \(\Phi x:=\{\langle x,\phi_{i}\rangle\}_{i=1}^{N}\), where \(x\in\mathbb{R}^{n}\). The adjoint of \(\Phi\), i.e. \(\Phi^{T}\), is the synthesis operator. Moreover, \(\{\phi_{i}\}_{i=1}^{N}\) is a _frame_ for \(\mathbb{R}^{n}\) if it holds
\[\alpha\|x\|_{2}^{2}\leq\sum_{i=1}^{N}\lvert\langle x,\phi_{i}\rangle\rvert^{2} \leq\beta\|x\|_{2}^{2} \tag{1}\]
for all \(x\in\mathbb{R}^{n}\), for some \(0<\alpha\leq\beta<\infty\) (frame bounds); \(\alpha\) is the _lower frame bound_ and \(\beta\) is the _upper frame bound_. We denote with \(S\) the multiplication of a synthesis with an analysis operator, i.e. \(S=\Phi^{T}\Phi\in\mathbb{R}^{n\times n}\), and call it \(S\)-operator. Moreover, if \(S\) is invertible, then \((\phi_{i})_{i=1}^{N}\) is a frame and we call \(S\) the _frame operator_ associated with that frame. For the frame operator and its inverse, it holds \(\alpha\leq\|S\|_{2\to 2}\leq\beta\) and \(\beta^{-1}\leq\|S^{-1}\|_{2\to 2}\leq\alpha^{-1}\), respectively. For matrices \(A_{1},A_{2}\in\mathbb{R}^{N\times N}\), we denote by \([A_{1};A_{2}]\in\mathbb{R}^{2N\times N}\) their concatenation with respect to the first dimension, while we denote by \([A_{1}\,|\,A_{2}]\in\mathbb{R}^{N\times 2N}\) their concatenation with respect to the second dimension. We write \(O_{N\times N}\) for a real-valued \(N\times N\) matrix filled with zeros and \(I_{N\times N}\) for the \(N\times N\) identity matrix. For \(x\in\mathbb{R}\), \(\tau>0\), the soft thresholding operator \(\mathcal{S}_{\tau}:\mathbb{R}\mapsto\mathbb{R}\) is defined as
\[\mathcal{S}_{\tau}(x)=\mathcal{S}(x,\tau)=\begin{cases}\mathrm{sign}(x)(|x|- \tau),&|x|\geq\tau\\ 0,&\mathrm{otherwise},\end{cases} \tag{2}\]
or in closed form \(\mathcal{S}(x,\tau)=\mathrm{sign}(x)\max(0,|x|-\tau)\). For \(x\in\mathbb{R}^{n}\), the soft thresholding operator acts component-wise, i.e. \((\mathcal{S}_{\tau}(x))_{i}=\mathcal{S}_{\tau}(x_{i})\), and is \(1\)-Lipschitz with respect to \(x\). For \(y\in\mathbb{R}^{n}\), \(\tau>0\), the mapping
\[P_{G}(\tau;y)=\mathrm{argmin}_{x\in\mathbb{R}^{n}}\left\{\tau G(x)+\frac{1}{2} \|x-y\|_{2}^{2}\right\}, \tag{3}\]
is the _proximal mapping associated to the convex function G_. For \(G(\cdot)=\|\cdot\|_{1}\), (3) coincides with (2). For two functions \(f,g:\mathbb{R}^{n}\mapsto\mathbb{R}^{n}\), we write their composition as \(f\circ g:\mathbb{R}^{n}\mapsto\mathbb{R}^{n}\) and if there exists some constant \(C>0\) such that \(f(x)\leq Cg(x)\), then we write \(f(x)\lesssim g(x)\). For the ball of radius \(t>0\) in \(\mathbb{R}^{n}\) with respect to some norm \(\|\cdot\|\), we write \(B_{\|\cdot\|}^{n}(t)\). The covering number \(\mathcal{N}(T,\|\cdot\|,t)\) of a space \(T\), equipped with a norm \(\|\cdot\|\), at level \(t>0\), is defined as the smallest number of balls \(B_{\|\cdot\|}^{n}(t)\), required to cover \(T\).
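For concreteness, a plain numpy implementation of the soft-thresholding operator (2), i.e., the proximal mapping of \(\tau\|\cdot\|_{1}\), can be sketched as follows (illustration only):

```
import numpy as np

def soft_threshold(x, tau):
    # S_tau(x) = sign(x) * max(|x| - tau, 0), applied component-wise
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)
```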
## 2 Background on sparsity models and unfolding networks for CS
### Synthesis-based CS: unfolding ISTA and ADMM
CS aims at recovering \(x\in\mathbb{R}^{n}\) from \(y=Ax+e\in\mathbb{R}^{m}\), \(m<n\), with \(A\) being the measurement matrix and \(e\in\mathbb{R}^{m}\), \(\|e\|_{2}\leq\epsilon\), corresponding to noise. To do so, one can impose a synthesis sparsity model on \(x\)[29], [42], i.e., assume that there exists \(D\in\mathbb{R}^{n\times p}\) (\(n\leq p\)) such that \(x=Dz\), with the coefficients' vector \(z\in\mathbb{R}^{p}\) being sparse. In fact, \(D\) is typically chosen to be an orthogonal matrix, e.g. a wavelet or cosine transform. By incorporating synthesis sparsity in CS, one is called to solve the LASSO problem:
\[\min_{z\in\mathbb{R}^{p}}\frac{1}{2}\|y-\tilde{A}z\|_{2}^{2}+\lambda\|z\|_{1}, \tag{4}\]
with \(\tilde{A}=AD\) and \(\lambda>0\) being a regularization parameter. Two broad classes of algorithms that are commonly employed to solve (4) rely on ISTA and ADMM. These methods incorporate a proximal mapping (3) and yield iterative schemes which, under mild assumptions, output a minimizer \(\hat{z}\) of (4); then the desired reconstructed signal is simply given by \(\hat{x}=D\hat{z}\). If \(D\) is regarded unknown and learned from a sequence of training samples, the iterations of ISTA and ADMM are interpreted as layers of neural networks; such DUNs are usually coined ISTA-nets [20], [43] and ADMM-nets [22, 44], respectively. They jointly learn sparsifying transforms for the data and a decoder for CS, that is, a function reconstructing \(x\) from \(y\).
### Analysis-based CS: unfolding ADMM
The algorithms and corresponding DUNs we described so far rely on the synthesis sparsity model, since their framework incorporates some orthogonality constraint for the learnable sparsifiers. A tractable counterpart of synthesis sparsity is the analysis sparsity model [45], [46], [32] (also known as the cosparse model [30], [47]).
\[\min_{x\in\mathbb{R}^{n}}\frac{1}{2}\|Ax-y\|_{2}^{2}+\lambda\|\Phi x\|_{1}. \tag{5}\]
Particularly, analysis sparsity has gained research interest, due to some advantages it may offer compared to its synthesis counterpart. For example, the redundancy of an analysis operator associated to a frame [48] can provide greater - than orthonormal bases - flexibility in the sparse representation of signals [49]. Moreover, it is computationally more efficient to use sparsifying redundant transforms instead of orthogonal ones, since the iterative algorithm for CS may need less measurements \(m\) for perfect reconstruction [32]. Especially when \(D\in\mathbb{R}^{n\times p}\), \(n<p\), so that the optimization variable in (4) lies in \(\mathbb{R}^{p}\), one can argue that it is preferable to solve (5), since the dimension of the optimization problem is smaller [31]. Now, thresholding algorithms like ISTA cannot treat analysis sparsity, since the proximal mapping associated to \(\|\Phi(\cdot)\|_{1}\) does not have a closed-form solution. Therefore, we turn to ADMM, which can efficiently
solve (5) by means of the following iterative scheme:
\[x^{k+1} =(A^{T}A+\rho\Phi^{T}\Phi)^{-1}(A^{T}y+\rho\Phi^{T}(z^{k}-u^{k})) \tag{6}\] \[z^{k+1} =\mathcal{S}_{\lambda/\rho}(\Phi x^{k+1}-u^{k})\] (7) \[u^{k+1} =u^{k}+\Phi x^{k+1}-z^{k+1}, \tag{8}\]
\(z,u\in\mathbb{R}^{N}\). Let us suppose that the redundant analysis operator \(\Phi\) is unknown and learned from a set of i.i.d. training samples, i.e. \(\mathbf{S}=\{(x_{i},y_{i})\}_{i=1}^{s}\), drawn from an unknown distribution1\(\mathcal{D}^{s}\). Then, the updates in (6) - (8) can be interpreted as a neural network with \(L\in\mathbb{N}\) layers, coined ADMM Deep Analysis Decoding (ADMM-DAD) [24]. The output of the first and the \(k\)th layer are given by
Footnote 1: Formally speaking, this is a distribution over \(x_{i}\) and for fixed \(A,e\), we obtain \(y_{i}=Ax_{i}+e\)
\[f_{1}(y) =I^{\prime}b(y)+I^{\prime\prime}\mathcal{S}_{\lambda/\rho}(b(y)), \tag{9}\] \[f_{k}(v) =\bar{\Theta}v+I^{\prime}b+I^{\prime\prime}\mathcal{S}_{\lambda/ \rho}(\Theta v+b),\quad k=2,\ldots,L, \tag{10}\]
respectively, with \(R=A^{T}A+\rho\Phi^{T}\Phi\in\mathbb{R}^{n\times n}\), \(W=\rho\Phi R^{-1}\Phi^{T}\in\mathbb{R}^{N\times N}\), \(b=b(y)=\Phi R^{-1}A^{T}y\in\mathbb{R}^{N\times 1}\), \(\Lambda=(I-W\,|\,W)\in\mathbb{R}^{N\times 2N}\), \(\Theta=(-I-W\,|\,W)\in\mathbb{R}^{N\times 2N}\), \(\tilde{\Theta}=[\Lambda;O_{N\times 2N}]\in\mathbb{R}^{2N\times 2N}\), \(I_{1}=[I_{N\times N};O_{N\times N}]\in\mathbb{R}^{2N\times N}\), \(I_{2}=[-I_{N\times N};I_{N\times N}]\in\mathbb{R}^{2N\times N}\). For more details on the unrolling procedure, we refer the interested reader to [24].
The composition of \(L\) such layers (all having the same \(\Phi\)) is denoted by
\[f_{\Phi}^{L}(y)=f_{L}\circ\cdots\circ f_{1}(y) \tag{11}\]
and constitutes an _intermediate decoder_ - realized by ADMM-DAD - that reconstructs the _intermediate variable_\(v\) from \(y\). Motivated by (6), we acquire the desired solution \(\hat{x}\) by applying an affine map \(T:\mathbb{R}^{2N\times 1}\mapsto\mathbb{R}^{n\times 1}:v\mapsto Cv+\tau\) after the final layer \(L\), so that
\[\hat{x}:=Cv+\tau, \tag{12}\]
where
\[C =[-\rho R^{-1}\Phi^{T}\,|\,\rho R^{-1}\Phi^{T}]\in\mathbb{R}^{n \times 2N}, \tag{13}\] \[\tau =R^{-1}A^{T}y\in\mathbb{R}^{n}, \tag{14}\]
with \([u^{L};z^{L}]=v_{L}\). Finally, the application of an appropriate clipping function
\[\sigma(x)=\left\{\begin{array}{cc}x,&\|x\|_{2}\leq B_{\rm out}\\ B_{\rm out}\frac{x}{\|x\|_{2}},&\mbox{otherwise}\end{array}\right., \tag{15}\]
for some fixed constant \(B_{\rm out}>0\), so that the output is pushed inside a reasonable range of values, yields the desired decoder, i.e.,
\[\mathrm{dec}_{\Phi}^{L}(y)=\sigma(T(f_{\Phi}^{L}(y))), \tag{16}\]
implemented by ADMM-DAD.
Generalization Analysis of ADMM-DAD
### Enhancing the hypothesis class of ADMM-DAD
According to [24], the hypothesis class of ADMM-DAD consists of all the decoders that ADMM-DAD can implement and is parametrized by the learnable redundant analysis operator \(\Phi\):
\[\mathcal{H}^{L}=\{\sigma\circ h:\,\mathbb{R}^{m}\mapsto\mathbb{R}^{n}:h(y)=T(f^ {L}_{\Phi}(y)),\,\Phi\in\mathbb{R}^{N\times n},N>n\}. \tag{17}\]
However, the definition of (17) does not account for any particular structure on \(\Phi\), which in turn could explain the performance of ADMM-DAD. On the other hand, the \(x\)-update (6) of ADMM incorporates the term \(S=\Phi^{T}\Phi\), which is typically assumed to be an invertible matrix [4], [50]. Similarly, typical choices for the measurement matrix \(A\) consist of a (appropriately normalized) Gaussian matrix [29], [51], [52]. Therefore, we are inspired by the aforementioned facts and make some assumptions, which will hold for the rest of the paper.
**Assumption 3.1**.: _For an analysis operator \(\Phi\in\mathbb{R}^{N\times n}\) with \(N>n\), the matrix \(S=\Phi^{T}\Phi\) is invertible._
**Assumption 3.2**.: _For an analysis operator satisfying Assumption 3.1, and for appropriately chosen measurement matrix \(A\in\mathbb{R}^{m\times n}\) and penalty parameter \(\rho>0\), it holds \(\rho\|S^{-1}\|_{2\to 2}\|A\|_{2\to 2}<1\)._
**Remark 3.1**.: _From a theoretical perspective, it is reasonable to incorporate the invertibility of \(S\) in our framework, since the set of non-invertible matrices \(S\) of the form \(S=\Phi^{T}\Phi\) has zero Lebesgue measure. Additionally, Assumptions 3.1 and 3.2 are empirically confirmed (see Section 4.1.4), since ADMM-DAD learns a \(\Phi\) with associated \(S\)-operator satisfying \(S^{-1}S=I\) and \(\rho\|S^{-1}\|_{2\to 2}\|A\|_{2\to 2}<1\)._
Due to Assumption 3.1, we can further assume that there exists some \(0<\beta<\infty\), so that \(\|S\|_{2\to 2}\leq\beta\), which leads us to introduce the following definition.
**Definition 3.2**.: _We define \(\mathcal{F}_{\beta}\) to be the class of redundant analysis operators \(\Phi\in\mathbb{R}^{N\times n}\) for which the associated \(S\)-operator is invertible and has bounded operator norm by some \(0<\beta<\infty\)._
**Remark 3.3**.: _The invertibility of \(S\) in Definition 3.2 implies that the rows of \(\Phi\) constitute a frame for \(\mathbb{R}^{n}\). Hence, \(S\) is a frame operator and for some \(0<\alpha\leq\beta<\infty\), it holds \(\alpha\leq\|S\|_{2\to 2}\leq\beta\) and \(\|\Phi\|_{2\to 2}\leq\sqrt{\beta}\)._
We enhance the hypothesis class of ADMM-DAD with the framework we described so far, in order to account for a structural constraint on \(\Phi\).
**Definition 3.4**.: _The enhanced hypothesis class \(\mathbf{H}^{L}\) of ADMM-DAD is parameterized by \(\Phi\in\mathcal{F}_{\beta}\) and is defined as the space of all the decoders that ADMM-DAD can implement, i.e.,_
\[\mathbf{H}^{L}=\{h:\mathbb{R}^{m}\mapsto\mathbb{R}^{n}:\,h(y)=\sigma(T(f^{L}_{ \Phi}(y))),\,\Phi\in\mathcal{F}_{\beta}\}. \tag{18}\]
Given the hypothesis class (18) and the training set \(\mathbf{S}\), ADMM-DAD yields \(h\in\mathbf{H}^{L}\) such that \(h(y)=\hat{x}\approx x\). For a loss function \(\ell:\mathbf{H}^{L}\times\mathbb{R}^{n}\times\mathbb{R}^{m}\mapsto\mathbb{R}_{>0}\), we define the empirical loss of a hypothesis \(h\in\mathbf{H}^{L}\) as
\[\hat{\mathcal{L}}_{train}(h)=\frac{1}{s}\sum_{i=1}^{s}\ell(h,x_{i},y_{i}). \tag{19}\]
For the rest of the paper, we work with \(\ell(\cdot)=\|\cdot\|_{2}^{2}\), so that (19) transforms into the _training mean-squared error_ (train MSE):
\[\hat{\mathcal{L}}_{train}(h)=\frac{1}{s}\sum_{j=1}^{s}\|h(y_{j})-x_{j}\|_{2}^{2}. \tag{20}\]
We also define the _true loss_ to be
\[\mathcal{L}(h)=\mathbb{E}_{(x,y)\sim\mathcal{D}}(\|h(y)-x\|_{2}^{2}). \tag{21}\]
The difference between (20) and (21), i.e.,
\[\mathrm{GE}(h)=|\hat{\mathcal{L}}_{train}(h)-\mathcal{L}(h)|, \tag{22}\]
constitutes the _generalization error_ of ADMM-DAD and informs us about how well the network performs on unseen data. Since \(\mathcal{D}\) is unknown, we estimate (22) in terms of the _empirical Rademacher complexity_[33]:
\[\mathcal{R}_{\mathbf{S}}(\ell\circ\mathbf{H}^{L})=\mathbb{E}\sup_{h\in \mathbf{H}^{L}}\frac{1}{s}\sum_{i=1}^{s}\epsilon_{i}\|h(y_{i})-x_{i}\|_{2}^{2}, \tag{23}\]
with \(\epsilon\) being a Rademacher vector, i.e, a vector with i.i.d. entries taking the values \(\pm 1\) with equal probability. The following Theorem provides exactly what we need.
**Theorem 3.5** ([33, Theorem 26.5]).: _Let \(\mathcal{H}\) be a family of functions, \(\mathcal{S}\) the training set drawn from \(\mathcal{D}^{s}\), and \(\ell\) a real-valued bounded loss function satisfying \(|\ell(h,z)|\leq c\), for all \(h\in\mathcal{H},z\in Z\). Then, for \(\delta\in(0,1)\), with probability at least \(1-\delta\), we have for all \(h\in\mathcal{H}\)_
\[\mathcal{L}(h)\leq\hat{\mathcal{L}}_{train}(h)+2\mathcal{R}_{\mathcal{S}}(\ell \circ\mathcal{H})+4c\sqrt{\frac{2\log(4\delta)}{s}}. \tag{24}\]
In order to apply the latter in \(\mathbf{H}^{L}\), we prove that \(\|\cdot\|_{2}^{2}\) is bounded by some constant \(c>0\). Towards that direction, we make two typical - for the machine learning literature - assumptions for \(\mathbf{S}\). Let us suppose that with overwhelming probability it holds:
\[\|y_{i}\|_{2}\leq\mathrm{B}_{\mathrm{in}}, \tag{25}\]
for some constant \(\mathrm{B}_{\mathrm{in}}>0\), \(i=1,2,\ldots,s\). We also assume that for any \(h\in\mathbf{H}^{L}\), with overwhelming probability over \(y_{i}\) chosen from \(\mathcal{D}\), it holds
\[\|h(y_{i})\|_{2}\leq B_{\mathrm{out}}, \tag{26}\]
by definition of \(\sigma\), for some constant \(B_{\mathrm{out}}>0\), \(i=1,2,\ldots,s\). Then, we have \(\|h(y_{i})-x_{i}\|_{2}^{2}\leq(B_{\mathrm{in}}+B_{\mathrm{out}})^{2}\), for all \(i=1,2,\ldots,s\). Hence, \(c=(B_{\mathrm{in}}+B_{\mathrm{out}})^{2}\).
We also simplify the quantity \(\mathcal{R}_{\mathbf{S}}(\|\cdot\|_{2}^{2}\circ\mathbf{H}^{L})\), by using the (vector-valued) contraction principle:
**Lemma 3.6** ([53, Corollary 4]).: _Let \(\mathcal{H}\) be a set of functions \(h:\mathcal{X}\mapsto\mathbb{R}^{n}\), \(f:\mathbb{R}^{n}\mapsto\mathbb{R}^{n}\) a \(K\)-Lipschitz function and \(\mathcal{S}=\{x_{i}\}_{i=1}^{s}\). Then_
\[\mathbb{E}\sup_{h\in\mathcal{H}}\sum_{i=1}^{s}\epsilon_{i}f\circ h(x_{i})\leq \sqrt{2}K\mathbb{E}\sup_{h\in\mathcal{H}}\sum_{i=1}^{s}\sum_{k=1}^{n}\epsilon_ {ik}h_{k}(x_{i}), \tag{27}\]
_where \((\epsilon_{i})\) and \((\epsilon_{ik})\) are Rademacher sequences._
The latter allows us to study \(\mathcal{R}_{\mathbf{S}}(\mathbf{H})\) alone. Since it is easy to check that \(\|\cdot\|_{2}^{2}\) is Lipschitz continuous, with Lipschitz constant \(\mathrm{Lip}_{\|\cdot\|_{2}^{2}}=2\mathrm{B}_{\mathrm{in}}+2\mathrm{B}_{ \mathrm{out}}\), we employ Lemma 3.6 to obtain:
\[\mathcal{R}_{\mathbf{S}}(l\circ\mathbf{H}^{L}) \leq\sqrt{2}(2\mathrm{B}_{\mathrm{in}}+2\mathrm{B}_{\mathrm{out} })\mathbb{E}\sup_{h\in\mathbf{H}^{L}}\sum_{i=1}^{s}\sum_{k=1}^{n}\epsilon_{ik} h_{k}(x_{i})\] \[=\sqrt{2}(2\mathrm{B}_{\mathrm{in}}+2\mathrm{B}_{\mathrm{out}}) \mathcal{R}_{\mathbf{S}}(\mathbf{H}). \tag{28}\]
Therefore, we are left with estimating (28). We do so in a series of steps, presented in the next subsections.
### Bounded outputs
We pass to matrix notation by accounting for the number of samples in the training set \(\mathbf{S}\). Hence, we apply the Cauchy-Schwartz inequality in (25), (26) yielding
\[\|Y\|_{F} \leq\sqrt{s}\mathrm{B}_{\mathrm{in}}, \tag{29}\] \[\|h(Y)\|_{F} =\|\psi(\phi(f_{\Phi}^{L}(Y)))\|_{F}\leq\sqrt{s}\mathrm{B}_{ \mathrm{out}}, \tag{30}\]
respectively. We also state below two results that will be needed in some of the proofs later on.
**Lemma 3.7** (Proof in the supplementary material).: _Let \(A\in\mathbb{R}^{n\times n}\) be invertible and \(B\in\mathbb{R}^{n\times n}\). For some norm \(\|\cdot\|\) in \(\mathbb{R}^{n}\), if it holds \(\|A^{-1}\|\|B\|<1\), then \(A+B\in\mathbb{R}^{n\times n}\) is invertible. Moreover, we have_
\[\|(A+B)^{-1}\|\leq\frac{\|A^{-1}\|}{1-\|A^{-1}\|\|B\|}. \tag{31}\]
**Lemma 3.8** (Proof in the supplementary material).: _For some norm \(\|\cdot\|\) in \(\mathbb{R}^{n}\), if \(A,\,B\in\mathbb{R}^{n\times n}\) are invertible, then_
\[\|B^{-1}-A^{-1}\|\leq\|B^{-1}\|\|\|A^{-1}\|\|A-B\|. \tag{32}\]
We prove that the output of the intermediate decoder (11) is bounded with respect to the Frobenius norm, after any number of layers \(k<L\).
**Proposition 3.9**.: _Let \(k\in\mathbb{N}\). For any \(\Phi\in\mathcal{F}_{\beta}\) and arbitrary \(\lambda,\,\rho>0\) in the definition of \(f_{\Phi}^{k}\), we have_
\[\|f_{\Phi}^{k}(Y)\|_{F}\leq 3\|A\|_{2\to 2}\|Y\|_{F}q\sqrt{\beta}\sum_{i=0}^{k-1} 3^{i}(1+2q\rho\beta)^{i}, \tag{33}\]
_where \(q=\frac{\rho}{\alpha-\rho\|A^{T}A\|_{2\to 2}}\), and \(\alpha\) and \(\beta\) are defined as in Remark 3.3._
Proof.: We prove (33) via induction. For \(k=1\):
\[\|f_{\Phi}^{1}(Y)\|_{F}\leq 3\|B\|_{F}\leq 3\|A\|_{2\to 2}\|Y\|_{F}\sqrt{\beta}\|(A^{T}A +\rho\Phi^{T}\Phi)^{-1}\|_{2\to 2}, \tag{34}\]
which holds by definition of (9). The invertibility of \(S=\Phi^{T}\Phi\), along with Assumption 3.2, Remark 3.3 and Lemma 3.7, imply that
\[\|(A^{T}A+\rho\Phi^{T}\Phi)^{-1}\|_{2\to 2} =\|(A^{T}A+\rho S)^{-1}\|_{2\to 2}\leq\frac{\rho\|S^{-1}\|_{2 \to 2}}{1-\rho\|S^{-1}\|_{2\to 2}\|A^{T}A\|_{2\to 2}}\] \[=\frac{\rho}{\alpha-\rho\|A^{T}A\|_{2\to 2}}:=q, \tag{35}\]
where in the last step we used the fact that \(\beta^{-1}\leq\|S^{-1}\|_{2\to 2}\leq\alpha^{-1}\), for some \(0<\alpha\leq\beta<\infty\). Substituting (35) into (34) yields \(\|f_{\Phi}^{1}(Y)\|_{F}\leq 3\|A\|_{2\to 2}\|Y\|_{F}q\sqrt{\beta}\). Suppose now that (33) holds for some \(k\in\mathbb{N}\). Then, for \(k+1\):
\[\|f_{\Phi}^{k+1}(Y)\|_{F}\leq \|\tilde{\Phi}\|_{2\to 2}\|f_{\Phi}^{k}(Y)\|_{F}+2\|\Theta\|_{2 \to 2}\|f_{\Phi}^{k}(Y)\|_{F}+3\|B\|_{F}\] \[\leq 3\left((1+2\|W\|_{2\to 2})\|f_{\Phi}^{k}(Y)\|_{F}+\|B\|_{F}\right)\] \[\leq 3\Bigg{(}(1+2q\rho\beta)\left(3\|A\|_{2\to 2}\|Y\|_{F}q \sqrt{\beta}\sum_{i=0}^{k-1}3^{i}(1+2q\rho\beta)^{i}\right)\] \[+\|A\|_{2\to 2}\|Y\|_{F}q\sqrt{\beta}\Bigg{)}\] \[= 3\|A\|_{2\to 2}\|Y\|_{F}q\sqrt{\beta}\sum_{i=0}^{k}3^{i}(1+2q \rho\beta)^{i}.\]
The proof follows.
### Lipschitzness with respect to \(\Phi\)
With the previous result in hand, we prove that the intermediate decoder (11) and the final decoder (16) are Lipschitz continuous with respect to \(\Phi\).
**Theorem 3.10** (Proof in the supplemental material).: _Let \(f_{W}^{L}\) defined as in (11), \(L\geq 2\), and analysis operator \(\Phi\in\mathcal{F}_{\beta}\). Then, for any \(\Phi_{1},\,\Phi_{2}\in\mathcal{F}_{\beta}\), it holds_
\[\|f_{\Phi_{1}}^{L}(Y)-f_{\Phi_{2}}^{L}(Y)\|_{F}\leq K_{L}\|\Phi_{1}-\Phi_{2}\| _{2\to 2}, \tag{36}\]
_where_
\[K_{L} = qG^{L}\,+\,\sum_{k=2}^{L}\Bigg{(}G^{L-k}\bigg{[}qG\,+\,36\beta q ^{2}\rho(1\,+\,\beta q\rho)\|A\|_{2\to 2}\|Y\|_{F}\sum_{i=0}^{k-2}G^{i} \bigg{]}\Bigg{)}, \tag{37}\]
_with \(G=3(1+2\beta q\rho)\), and \(q\), \(\beta\) as in Proposition 3.9._
**Corollary 3.11**.: _Let \(h\in\mathbf{H}^{L}\) defined as in (18), \(L\geq 2\), and analysis operator \(\Phi\in\mathcal{F}_{\beta}\). Then, for any \(\Phi_{1},\,\Phi_{2}\in\mathcal{F}_{\beta}\), we have:_
\[\|\sigma(T_{1}(f_{\Phi_{1}}^{L}(Y)))-\sigma(T_{2}(f_{\Phi_{2}}^{L}(Y)))\|_{F} \leq\Sigma_{L}\|\Phi_{2}-\Phi_{1}\|_{2\to 2}, \tag{38}\]
_where_
\[\Sigma_{L}=2q\rho\sqrt{\beta}\left(K_{L}+3\|A\|_{2\to 2}\|Y\|_{F}q(1+2\beta q \rho)\sum_{k=0}^{L-1}3^{k}(1+2\beta q\rho)^{k}\right), \tag{39}\]
_with \(q\), \(\beta\) as in Proposition 3.9 and \(K_{L}\) as in Theorem 3.10._
Proof.: By definition, \(\sigma\) is a \(1\)-Lipschitz function. The affine map \(T\) is also Lipschitz continuous, with Lipschitz constant satisfying
\[\operatorname{Lip}_{T}=\|T\|_{2\to 2}\leq 2q\rho\sqrt{\beta}, \tag{40}\]
due to the explicit forms of (13) and (14), with \(q\) and \(\beta\) as in Proposition 3.9. Putting everything together and applying Theorem 3.10 yields
\[\|\sigma(T_{1}(f^{L}_{\Phi_{1}}(Y)))-\sigma(T_{2}(f^{L}_{\Phi_{2}}(Y) ))\|_{F}\] \[\leq \|T_{1}(f^{L}_{\Phi_{1}}(Y))-T_{2}(f^{L}_{\Phi_{2}}(Y))\|_{F}\] \[= \|T_{1}(f^{L}_{\Phi_{1}}(Y))-T_{1}(f^{L}_{\Phi_{2}}(Y))+T_{1}(f^{L }_{\Phi_{2}}(Y))-T_{2}(f^{L}_{\Phi_{2}}(Y))\|_{F}\] \[\leq \|T_{1}\|_{2\to 2}\|f^{L}_{\Phi_{2}}(Y))-f^{L}_{\Phi_{1}}(Y))\|_{F}+ \|T_{2}-T_{1}\|_{2\to 2}\|f^{L}_{\Phi_{1}}(Y))\|_{F}\] \[\leq 2q\rho\sqrt{\beta}K_{L}\|\Phi_{2}-\Phi_{1}\|_{2\to 2}+ \left(3\|A\|_{2\to 2}\|Y\|_{F}q\sqrt{\beta}\sum_{k=0}^{L-1}G^{k}\right)\|T_{2}-T _{1}\|_{2\to 2},\]
where \(G=3(1+2q\rho\beta)\). The introduction of mixed terms and the application of Lemma 3.8 give:
\[\|T_{2}-T_{1}\|_{2\to 2}\] \[\leq 2\rho\|(A^{T}A+\rho\Phi_{2}^{T}\Phi_{2})^{-1}\Phi_{2}^{T}-(A^{T}A +\rho\Phi_{1}^{T}\Phi_{1})^{-1}\Phi_{1}^{T}\|_{2\to 2}\] \[= 2\rho\|(A^{T}A+\rho\Phi_{2}^{T}\Phi_{2})^{-1}\Phi_{2}^{T}-(A^{T}A +\rho\Phi_{2}^{T}\Phi_{2})^{-1}\Phi_{1}^{T}\] \[+(A^{T}A+\rho\Phi_{2}^{T}\Phi_{2})^{-1}\Phi_{1}^{T}-(A^{T}A+\rho \Phi_{1}^{T}\Phi_{1})^{-1}\Phi_{1}^{T}\|_{2\to 2}\] \[\leq 2\rho\bigg{(}q\|\Phi_{2}-\Phi_{1}\|_{2\to 2}+\sqrt{\beta}\|(A^{T}A +\rho\Phi_{2}^{T}\Phi_{2})^{-1}-(A^{T}A+\rho\Phi_{1}^{T}\Phi_{1})^{-1}\|_{2 \to 2}\bigg{)}\] \[\leq 2\rho\bigg{(}q\|\Phi_{2}-\Phi_{1}\|_{2\to 2}+2\beta q^{2}\rho\| \Phi_{2}-\Phi_{1}\|_{2\to 2}\bigg{)}\] \[= 2q\rho(1+2q\beta\rho)\|\Phi_{2}-\Phi_{1}\|_{2\to 2}.\]
Overall, we obtain
\[\|\sigma(T_{1}(f^{L}_{\Phi_{1}}(Y)))-\sigma(T_{2}(f^{L}_{\Phi_{2}}(Y)))\|_{F} \leq\Sigma_{L}\|\Phi_{2}-\Phi_{1}\|_{2\to 2}, \tag{41}\]
where \(\Sigma_{L}=2q\rho\sqrt{\beta}\left(K_{L}+\|A\|_{2\to 2}\|Y\|_{F}qG\sum_{k=0}^{L-1}G ^{k}\right)\).
### Chaining the Rademacher complexity
We apply the results of Sections 3.2 and 3.3 and estimate the covering numbers of an equivalent to (18) set, namely,
\[\mathbf{M}: =\{(h(y_{1})|h(y_{2})|\ldots|h(y_{s}))\in\mathbb{R}^{n\times s}:\ h \in\mathbf{H}^{L}\}\] \[=\{\sigma(T((f^{L}_{\Phi}(Y)))\in\mathbb{R}^{n\times s}:\ \Phi\in \mathcal{F}_{\beta}\}. \tag{42}\]
The columns of each \(M\in\mathbf{M}\) constitute the reconstructions produced by \(h\in\mathbf{H}^{L}\) when applied to each \(y_{i}\), \(i=1,2,\ldots,s\). Since both \(\mathbf{M}\) and \(\mathbf{H}^{L}\) are parameterized by \(\Phi\), we rewrite (28) as follows:
\[\mathcal{R}_{\mathbf{S}}(\mathbf{H})=\mathbb{E}\sup_{h\in\mathbf{H}^{L}}\sum_ {i=1}^{s}\sum_{k=1}^{n}\epsilon_{ik}h_{k}(x_{i})=\mathbb{E}\sup_{M\in\mathbf{ M}}\frac{1}{s}\sum_{i=1}^{s}\sum_{k=1}^{n}\epsilon_{ik}M_{ik}. \tag{43}\]
The latter has subgaussian increments, so we employ Dudley's inequality [29, Theorem 8.23] to upper bound it in terms of the covering numbers of \(\mathbf{M}\). A key quantity
appearing in Dudley's inequality is the radius of \(\mathbf{M}\), that is,
\[\Delta(\mathbf{M}) =\sup_{h\in\mathbf{H}^{L}}\sqrt{\mathbb{E}\left(\sum_{i=1}^{s}\sum_{ k=1}^{n}\epsilon_{ik}h_{k}(y_{i})\right)^{2}}\leq\sup_{h\in\mathbf{H}^{L}} \sqrt{\mathbb{E}\sum_{i=1}^{s}\sum_{k=1}^{n}\epsilon_{ik}(h_{k}(y_{i}))^{2}}\] \[\leq\sup_{h\in\mathbf{H}^{L}}\sqrt{\sum_{i=1}^{s}\|h(y_{i})\|_{2}^ {2}}\overset{\eqref{eq:dudley}}{\leq}\sqrt{s}B_{\mathrm{out}}. \tag{44}\]
We combine (28), (43), (44) and apply Dudley's inequality to obtain
\[\mathcal{R}_{\mathbf{S}}(l\circ\mathbf{H}^{L})\leq\frac{16(\mathrm{B}_{\mathrm{ in}}+\mathrm{B}_{\mathrm{out}})}{s}\int_{0}^{\frac{\sqrt{s}B_{\mathrm{out}}}{2}} \sqrt{\log\mathcal{N}(\mathbf{M},\|\cdot\|_{F},\varepsilon)}d\varepsilon. \tag{45}\]
Finally, we estimate the quantity \(\mathcal{N}(\mathbf{M},\|\cdot\|_{F},\varepsilon)\).
**Lemma 3.12**.: _For \(0<t<\infty\), the covering numbers of the ball \(B_{\|\cdot\|_{2\to 2}}^{N\times n}(t)=\{X\in\mathbb{R}^{N\times n}:\,\|X\|_{2 \to 2}\leq t\}\) satisfy the following for any \(\varepsilon>0\):_
\[\mathcal{N}(B_{\|\cdot\|_{2\to 2}}^{N\times n}(t),\|\cdot\|_{2 \to 2},\varepsilon)\leq\left(1+\frac{2t}{\varepsilon}\right)^{Nn}. \tag{46}\]
Proof.: For \(|\cdot|\) denoting the volume in \(\mathbb{R}^{N\times n}\), we adapt a well-known result [54, Proposition 4.2.12], in order to connect covering numbers and \(|\cdot|\):
\[\mathcal{N}(B_{\|\cdot\|_{2\to 2}}^{N\times n}(t),\|\cdot\|_{2 \to 2},\varepsilon) \leq\frac{|B_{\|\cdot\|_{2\to 2}}^{N\times n}(t)+(\frac{ \varepsilon}{2})B_{\|\cdot\|_{2\to 2}}^{N\times n}(1)|}{|(\frac{\varepsilon}{2})B_{\| \cdot\|_{2\to 2}}^{N\times n}(1)|}=\frac{|(t+\frac{\varepsilon}{2})B_{\| \cdot\|_{2\to 2}}^{N\times n}(1)|}{|(\frac{\varepsilon}{2})B_{\|\cdot\|_{2\to 2}}^{N \times n}(1)|}\] \[\leq\left(1+\frac{2t}{\varepsilon}\right)^{Nn}.\qed\]
**Proposition 3.13**.: _For the covering numbers of \(\mathbf{M}\) it holds:_
\[\mathcal{N}(\mathbf{M},\|\cdot\|_{F},\varepsilon)\leq\left(1+\frac{2\sqrt{ \beta}\Sigma_{L}}{\varepsilon}\right)^{Nn}. \tag{47}\]
Proof.: By Definition 3.2 and Remark 3.3 we have \(\mathcal{F}_{\beta}\subset B_{\|\cdot\|_{2\to 2}}^{N\times n}(\sqrt{\beta})\). Then, the application of Lemma 3.12 implies for \(\mathcal{F}_{\beta}\) that
\[\mathcal{N}(\mathcal{F}_{\beta},\|\cdot\|_{2\to 2},\varepsilon)\leq\left(1+ \frac{2\sqrt{\beta}}{\varepsilon}\right)^{Nn}. \tag{48}\]
Therefore, the covering numbers of \(\mathbf{M}\) are bounded as follows:
\[\mathcal{N}(\mathbf{M},\|\cdot\|_{F},\varepsilon) \leq\mathcal{N}(\Sigma_{L}\mathcal{F}_{\beta},\|\cdot\|_{2\to 2}, \varepsilon)=\mathcal{N}(\mathcal{F}_{\beta},\|\cdot\|_{2\to 2}, \varepsilon/\Sigma_{L})\] \[\leq\left(1+\frac{2\sqrt{\beta}\Sigma_{L}}{\varepsilon}\right)^{N n}, \tag{49}\]
which is the desired estimate.
### Generalization error bounds
We combine the results of Section 3.4 with Theorem 3.5, to deliver generalization error bounds for ADMM-DAD.
**Theorem 3.14**.: _Let \(\mathbf{H}^{L}\) be the hypothesis class defined in (18). With probability at least \(1-\delta\), for all \(h\in\mathbf{H}^{L}\), the generalization error is bounded as_
\[\begin{split}\mathcal{L}(h)\leq\hat{\mathcal{L}}_{train}(h)& +8(B_{\mathrm{in}}+B_{\mathrm{out}})B_{\mathrm{out}}\sqrt{\frac{Nn}{s}} \sqrt{\log\left(e\left(1+\frac{2\sqrt{\beta}\Sigma_{L}}{\sqrt{s}B_{\mathrm{ out}}}\right)\right)}\\ &+4(B_{\mathrm{in}}+B_{\mathrm{out}})^{2}\sqrt{\frac{2\log(4/ \delta)}{s}},\end{split} \tag{50}\]
_with \(\Sigma_{L}\) defined as in Corollary 3.11._
Proof.: We apply Proposition 3.13 in (45) to get
\[\mathcal{R}_{\mathbf{S}}(l\circ\mathbf{H}^{L}) \leq\frac{16(\mathrm{B}_{\mathrm{in}}+\mathrm{B}_{\mathrm{out}}) }{s}\int_{0}^{\frac{\sqrt{s}B_{\mathrm{out}}}{2}}\sqrt{\log\mathcal{N}( \mathbf{M},\|\cdot\|_{F},\varepsilon)}d\varepsilon\] \[\leq\frac{16(\mathrm{B}_{\mathrm{in}}+\mathrm{B}_{\mathrm{out}}) }{s}\int_{0}^{\frac{\sqrt{s}B_{\mathrm{out}}}{2}}\sqrt{Nn\log\left(1+\frac{2 \sqrt{\beta}\Sigma_{L}}{\varepsilon}\right)}d\varepsilon\] \[\leq 8(\mathrm{B}_{\mathrm{in}}+\mathrm{B}_{\mathrm{out}})B_{ \mathrm{out}}\sqrt{\frac{Nn}{s}}\sqrt{\log\left(e\left(1+\frac{4\sqrt{\beta} \Sigma_{L}}{\sqrt{s}B_{\mathrm{out}}}\right)\right)}, \tag{51}\]
where in the last step we used the following inequality:
\[\int_{0}^{a}\sqrt{\log\left(1+\frac{b}{t}\right)}dt\leq a\sqrt{\log(e(1+b/a))},\qquad a,b>0.\]
We substitute the estimate (51) in Theorem 3.5 and the proof follows.
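The elementary integral inequality invoked in the last step can also be checked numerically; the following sketch uses scipy for the quadrature, with the lower limit shifted slightly away from zero to avoid the (integrable) singularity of the integrand.

```python
import numpy as np
from scipy.integrate import quad

def check(a, b, eps=1e-9):
    """Compares the integral of sqrt(log(1 + b/t)) over (0, a] with a*sqrt(log(e(1 + b/a)))."""
    lhs, _ = quad(lambda t: np.sqrt(np.log1p(b / t)), eps, a)
    rhs = a * np.sqrt(np.log(np.e * (1.0 + b / a)))
    return lhs, rhs

for a, b in [(0.5, 3.0), (2.0, 10.0), (1.0, 0.1)]:
    lhs, rhs = check(a, b)
    print(f"a={a}, b={b}: integral={lhs:.4f}, bound={rhs:.4f}, holds={lhs <= rhs}")
```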
**Theorem 3.15**.: _Let \(\mathbf{H}^{L}\) be the hypothesis class defined in (18). Assume the pair-samples \(\{(x_{i},y_{i})\}_{i=1}^{s}\overset{i.i.d.}{\sim}\mathcal{D}^{s}\) satisfy \(y_{i}=Ax_{i}+e\), \(\|e\|_{2}\leq\varepsilon\), for some \(\varepsilon>0\). Let us further assume that \(\|y_{i}\|_{2}\leq\mathrm{B}_{\mathrm{in}}\) holds almost surely, with \(\mathrm{B}_{\mathrm{in}}=\mathrm{B}_{\mathrm{out}}\) in (15). Then with probability at least \(1-\delta\), for all \(h\in\mathbf{H}^{L}\), the generalization error is bounded as_
\[\begin{split}\mathcal{L}(h)\leq\hat{\mathcal{L}}_{train}(h)+16B_ {\mathrm{out}}^{2}\Bigg{(}\sqrt{\frac{Nn}{s}}&\sqrt{\log\left(e \left(1+\frac{2\sqrt{\beta}\Sigma_{L}}{\sqrt{s}B_{\mathrm{out}}}\right)\right) }\\ &+\sqrt{\frac{2\log(4/\delta)}{s}}\Bigg{)},\end{split} \tag{52}\]
_with \(\Sigma_{L}\) defined as in Corollary 3.11._
Notice that \(L\) enters at most exponentially in the definition of \(K_{L}\) (37), and thus in \(\Sigma_{L}\) (39). If we treat all terms in (52) as constants except for \(L\), \(N\), \(s\), then the previous theorem tells us that the generalization error of ADMM-DAD roughly scales like \(\sqrt{NL/s}\).
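A small numerical illustration of this scaling, with purely hypothetical constants: if \(\Sigma_{L}\) grows geometrically in \(L\), the logarithmic factor in (52) grows roughly linearly in \(L\), so the sample-dependent term behaves like \(\sqrt{NnL/s}\).

```python
import numpy as np

# Hypothetical constants: G is the assumed geometric growth rate of Sigma_L in L,
# c lumps together the remaining (L-independent) factors inside the logarithm of (52).
G, c, N, n, s = 6.0, 1e-2, 1000, 100, 70000
for L in (5, 10, 20, 40):
    log_term = np.log(np.e * (1.0 + c * G**L))
    print(L, np.sqrt(N * n / s) * np.sqrt(log_term), np.sqrt(N * n * L / s))
```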
## 4 Experiments
We train and test ADMM-DAD on a synthetic dataset of random vectors drawn from the normal distribution (70000 training and 10000 test examples) and on the MNIST dataset [41], containing 60000 training and 10000 test \(28\times 28\) image examples. For the MNIST dataset, we take the vectorized form of the images. We examine ADMM-DAD for varying numbers of layers \(L\) and redundancy ratios \(N/n\). For the measurement process, we select an appropriately normalized Gaussian matrix \(A\in\mathbb{R}^{m\times n}\), with \(m/n=25\%\) CS ratio. We also add zero-mean Gaussian noise \(e\), with standard deviation \(\text{std}=10^{-4}\), to the measurements, so that \(y=Ax+e\). We perform (He) normal initialization [55] for \(W\in\mathbb{R}^{N\times n}\). We implement all models in PyTorch [56] and train them using the _Adam_ algorithm [57], with batch size 128. For all experiments, we report the _test MSE_:
\[\mathcal{L}_{test}=\frac{1}{d}\sum_{i=1}^{d}\|h(\tilde{y}_{i})-\tilde{x}_{i} \|_{2}^{2}, \tag{53}\]
where \(\mathbf{D}=\{(\tilde{y}_{i},\tilde{x}_{i})\}_{i=1}^{d}\) is a set of \(d\) test data, that are not used during training, and the _empirical generalization error_ (EGE)
\[\mathcal{L}_{gen}=|\mathcal{L}_{test}-\mathcal{L}_{train}|, \tag{54}\]
where \(\mathcal{L}_{train}\) is defined in (20). Since (53) approximates the true loss, we use (54) - which can be explicitly computed - to approximate (22). We train all models on all datasets, employing early stopping [58] with respect to (54). We repeat all the experiments at least 10 times and average the results over the runs. We also compare ADMM-DAD to a recent variant of ISTA-net [20]. Both DUNs learn corresponding decoders for CS, but ISTA-net promotes synthesis sparsity by learning an orthogonal sparsifying transform; ADMM-DAD, in contrast, promotes analysis sparsity by means of the learnable redundant analysis operator. Therefore, the structure of ISTA-net makes it a nice candidate for comparison with ADMM-DAD, in order to showcase how the reconstructive and generalization ability of DUNs are affected when employing a redundant sparsifier instead of an orthogonal one. For ISTA-net, we set the best hyper-parameters proposed by the original authors.
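Before turning to the results, the measurement model and the two reported quantities (53)-(54) can be summarized in a short PyTorch sketch; the normalization of \(A\), the toy pseudo-inverse decoder and the tensor shapes below are illustrative assumptions, not the exact pipeline used in the experiments.

```python
import torch

torch.manual_seed(0)
n, m, d = 784, 196, 1000                          # 25% CS ratio (m/n), d test examples
A = torch.randn(m, n) / float(m) ** 0.5           # assumed normalization of the Gaussian A
x = torch.randn(d, n)                             # synthetic test signals
y = x @ A.T + 1e-4 * torch.randn(d, m)            # y = Ax + e, Gaussian noise with std 1e-4

def test_mse(h, X, Y):
    """Test MSE of a decoder h, as in (53)."""
    with torch.no_grad():
        return ((h(Y) - X) ** 2).sum(dim=1).mean().item()

def empirical_generalization_error(train_mse, test_mse_value):
    """Empirical generalization error (EGE) of (54)."""
    return abs(test_mse_value - train_mse)

h = lambda Y: Y @ torch.linalg.pinv(A).T          # toy stand-in for a trained decoder
print(test_mse(h, x, y))
```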
### Experimental results & discussion
We evaluate the quality of our theoretical results with the following experimental scenarios.
#### 4.1.1 Varying \(N\), \(L\) on real-world image data
We examine the performance of ADMM-DAD on the MNIST dataset, with varying number of layers \(L\) and redundancy \(N\) of the learnable sparsifier. We gather the results in Figure 1(a), which illustrates that the test MSE achieved by each instance of ADMM-DAD drops as \(L\) and \(N\) increase. The decays seem reasonable, if examined from a model-based point of view. Specifically, when an iterative algorithm solves the generalized LASSO problem (5), it is expected that the reconstruction quality and performance of the solver will benefit from the (high) redundancy offered by the involved analysis operators [32], especially as the number of iterations/layers increases. On the other hand, the EGE of ADMM-DAD increases as both \(L\) and \(N/n\) increase. This behaviour confirms the theory we developed in Section 3.5, since the EGE seems to scale like \(\sqrt{NL}\).
Figure 1: Performance plots of ADMM-DAD on (a) MNIST and (b) synthetic datasets, for varying \(L\), \(N\) (and \(n\)).
#### 4.1.2 Varying \(n\), \(N\), \(L\) on synthetic data
We test ADMM-DAD on a synthetic dataset, with varying \(L\), \(N\) and ambient dimension \(n\). We report the results in Figure 1(b), which illustrates the reconstruction error decreasing as \(L\) increases. Regarding the generalization error, we observe in Figure 1(b) that the EGE appears to grow at the rate of \(\sqrt{nNL}\), despite the fact that the theoretical generalization error bounds depend on other terms as well. The overall performance of ADMM-DAD again conforms with our theoretical results.
#### 4.1.3 Comparison to baseline
We examine how analysis and synthesis sparsity models affect the generalization ability of unfolding networks solving the CS problem. To that end, we compare the decoders of ADMM-DAD and ISTA-net, on the MNIST and the synthetic datasets, for varying numbers of layers.
\begin{table}
\begin{tabular}{||c|c|c|c|c|c|c|} \hline \multicolumn{2}{||c|}{Test MSE} \\ \hline Dataset & \multicolumn{4}{c|}{Synthetic} & \multicolumn{4}{c|}{MNIST} \\ \hline Decoder Layers & \(L=10\) & \(L=20\) & \(L=30\) & \(L=10\) & \(L=20\) & \(L=30\) \\ \hline ADMM-DAD (Ours) & \(\mathbf{0.007725}\) & \(\mathbf{0.007600}\) & \(\mathbf{0.007586}\) & \(\mathbf{0.046391}\) & \(\mathbf{0.040282}\) & \(\mathbf{0.032001}\) \\ \hline ISTA-net [20] & \(0.007959\) & \(0.007774\) & \(0.007710\) & \(0.070645\) & \(0.068006\) & \(0.066325\) \\ \hline \hline \multicolumn{2}{||c|}{Generalization Error} \\ \hline Decoder Layers & \multicolumn{4}{c|}{Synthetic} & \multicolumn{4}{c|}{MNIST} \\ \hline ADMM-DAD (Ours) & \(\mathbf{0.22\cdot 10^{-6}}\) & \(\mathbf{1.04\cdot 10^{-6}}\) & \(\mathbf{1.65\cdot 10^{-6}}\) & \(\mathbf{0.63\cdot 10^{-4}}\) & \(\mathbf{0.40\cdot 10^{-4}}\) & \(\mathbf{1.21\cdot 10^{-4}}\) \\ \hline ISTA-net [20] & \(4.48\cdot 10^{-6}\) & \(2.64\cdot 10^{-6}\) & \(9.44\cdot 10^{-6}\) & \(22.51\cdot 10^{-4}\) & \(50.45\cdot 10^{-4}\) & \(76.16\cdot 10^{-4}\) \\ \hline \end{tabular}
\end{table}
Table 1: Test MSEs and empirical generalization errors for 10-, 20- and 30-layer decoders, with fixed 25% CS ratio and redundancy ratio \(N/n=50\). Bold letters indicate the best performance between the two decoders.
\begin{table}
\begin{tabular}{||c|c||c|} \hline \((N,\,L)\) & \(\rho\|S^{-1}\|_{2\to 2}\|A\|_{2\to 2}\) & \((n,N,L)\) & \(\rho\|S^{-1}\|_{2\to 2}\|A\|_{2\to 2}\) \\ \hline \hline \((23520,\,10)\) & \(0.0003\) & \(\left\{\begin{array}{c}(100,\,1000,\,10)\) & \(0.0031\\ (100,\,5000,\,20)\) & \(0.0004\) \\ \hline \((39200,\,15)\) & \(0.0002\) & \(\left\{\begin{array}{c}(100,\,3000,\,30)\) \\ (100,\,7000,\,40)\) \\ \end{array}\right.\) & \(0.0007\) \\ \hline \((39200,\,20)\) & \(0.0002\) & \(\left\{\begin{array}{c}(300,\,21000,\,10)\) \\ (300,\,6000,\,20)\) \\ \end{array}\right.\) & \(0.0003\) \\ \hline \((7840,\,30)\) & \(0.0021\) & \(\left\{\begin{array}{c}(300,\,15000,\,30)\) \\ (300,\,12000,\,40)\) \\ \end{array}\right.\) & \(0.0003\) \\ \hline \((54880,\,35)\) & \(0.0001\) & \(\left\{\begin{array}{c}(700,\,21000,\,20)\) \\ (700,\,42000,\,30)\) \\ \end{array}\right.\) & \(0.0001\) \\ \hline \((54880,\,40)\) & \(0.0001\) & \(\left\{\begin{array}{c}(700,\,49000,\,40)\) \\ \end{array}\right.\) & \(0.00008\) \\ \hline \end{tabular}
\end{table}
Table 2: Examination of the values of \(\rho\|S^{-1}\|_{2\to 2}\|A\|_{2\to 2}\), under different choices of \(L\), \(N\) (and \(n\)), for the MNIST (left) and the synthetic (right) datasets.
Figure 2: Visualization of \(S^{-1}S\) on (a) MNIST and (b) synthetic datasets, for varying \(L\), \(N\) (and \(n\)) of the associated learnable \(\Phi\).
For the synthetic dataset, we fix the ambient dimension to \(n=300\). For ADMM-DAD, we set \(N=39200\) for the sparsifier acting on the MNIST dataset and \(N=15000\) for the sparsifier acting on the synthetic data. Our results are collected in Table 1. As depicted in the latter, ADMM-DAD's decoder outperforms ISTA-net's decoder, consistently for both datasets, in terms of both reconstruction and generalization error. For the former, our experiments confirm the model-based results regarding the advantage of analysis sparsity over its synthesis counterpart (cf. Section 2.2). As for the generalization error, our results indicate that the redundancy of the learnable sparsifier acts beneficially for the generalization ability of ADMM-DAD, compared to the orthogonality of ISTA-net's framework.
#### 4.1.4 A note on the invertibility of \(S=\Phi^{T}\Phi\)
We revisit the setups of Sections 4.1.1, 4.1.2 and implement exemplary instances of ADMM-DAD, in order to verify Assumptions 3.1 and 3.2. To that end, we examine the values of \(S^{-1}S\) and \(\rho\|S^{-1}\|_{2\to 2}\|A\|_{2\to 2}\), for fixed \(\rho=0.1\), \(\|A\|_{2\to 2}\approx 2\) and with the \(S\)-operator associated to each learned \(\Phi\), and present the results in Figure 2 and Table 2, respectively. According to the latter, the values of \(\rho\|S^{-1}\|_{2\to 2}\|A\|_{2\to 2}\) are consistently less than \(1\), for different tuples of \(L\), \(N\) (and \(n\)), which is in accordance with Assumption 3.2. We also provide in Figure 2 a visualization of the structure of \(S^{-1}S\). As illustrated in the aforementioned figure, ADMM-DAD learns a redundant analysis operator \(\Phi\) with associated \(S\)-operator satisfying\({}^{2}\) \(S^{-1}S=I\). This observation validates our intuition for imposing Assumption 3.1 in our framework, as well as constraining \(\Phi\) to lie in \(\mathcal{F}_{\beta}\) (see Section 3.1). Furthermore, we conjecture that the fact that ADMM-DAD learns an analysis operator associated to a frame could explain its increased performance, compared to the synthesis-based baseline; this conjecture could serve as a potential line of future work. Note that we have also conducted experiments with a regularizer of the form \(\|S^{-1}S-I\|_{F}\), in order to cover the small probability of learning a \(\Phi\) such that \(S\) is not invertible. Since ADMM-DAD with and without the regularizer yielded almost identical performance, we chose to proceed with minimizing the train MSE only. Overall, this set of example experiments showcases that the appearance of the term \(\Phi^{T}\Phi\) in the iterative scheme of ADMM-DAD induces a frame property on the learnable redundant analysis operator \(\Phi\).
Footnote 2: Due to Python’s round-off errors, we consider the identity matrix \(I\) to have ones on the main diagonal and non-diagonal entries of order at most \(10^{-5}\).
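A minimal sketch of these two empirical checks for a learned analysis operator \(\Phi\) is given below; the random \(\Phi\) and \(A\) are placeholders for the learned operator and the measurement matrix, and double precision is used so that a tolerance of the order \(10^{-5}\) is meaningful.

```python
import torch

def frame_property_checks(Phi, A, rho=0.1, tol=1e-5):
    """Checks (i) S = Phi^T Phi invertible with S^{-1} S ~ I and
    (ii) rho * ||S^{-1}||_{2->2} * ||A||_{2->2} < 1 (Assumptions 3.1 and 3.2)."""
    S = Phi.T @ Phi
    S_inv = torch.linalg.inv(S)
    id_err = (S_inv @ S - torch.eye(S.shape[0], dtype=S.dtype)).abs().max().item()
    value = (rho * torch.linalg.matrix_norm(S_inv, ord=2).item()
                 * torch.linalg.matrix_norm(A, ord=2).item())
    return id_err < tol, value < 1.0, id_err, value

Phi = torch.randn(1000, 100, dtype=torch.float64)   # placeholder for the learned operator
A = torch.randn(25, 100, dtype=torch.float64)       # placeholder measurement matrix
print(frame_property_checks(Phi, A))
```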
## 5 Conclusion and Future Work
In this paper, we studied the generalization ability of a state-of-the-art ADMM-based unfolding network, namely ADMM-DAD. The latter jointly learns a decoder for Compressed Sensing (CS) and a sparsifying redundant analysis operator. To that end, we first exploited an inherent characteristic of ADMM to impose a meaningful structural constraint on ADMM-DAD's learnable sparsifier; the latter parametrized ADMM-DAD's hypothesis class. Our novelty relies on the fact that the proposed framework induces a frame property on the learnable sparsifying transform. Then, we employed chaining to estimate the Rademacher complexity of ADMM-DAD's hypothesis class. With this estimate in hand, we delivered generalization error bounds for ADMM-DAD. To our knowledge, we are the first to study the generalization ability of an ADMM-based unfolding network, that solves the analysis-based CS problem. Finally,
we conducted experiments validating our theory and compared ADMM-DAD to a state-of-the-art unfolding network for CS; the former outperformed the latter, consistently for all datasets. As a future line of work, we would like to include more experiments regarding the structure of ADMM-DAD, especially with respect to the afore-stated frame property. Additionally, it would be interesting to include numerical comparisons among ADMM-DAD and ADMM-based unfolding networks promoting synthesis sparsity in CS.
## Acknowledgements
V. Kouni acknowledges financial support for the implementation of this paper by Greece and the European Union (European Social Fund-ESF) through the Operational Program "Human Resources Development, Education and Lifelong Learning" in the context of the Act "Enhancing Human Resources Research Potential by undertaking a Doctoral Research" Sub-action 2: IKY Scholarship Program for PhD candidates in the Greek Universities.
## Conflict of Interest Statement
On behalf of all authors, the corresponding author states that there is no conflict of interest.
|
2304.10896 | GCNH: A Simple Method For Representation Learning On Heterophilous
Graphs | Graph Neural Networks (GNNs) are well-suited for learning on homophilous
graphs, i.e., graphs in which edges tend to connect nodes of the same type.
Yet, achievement of consistent GNN performance on heterophilous graphs remains
an open research problem. Recent works have proposed extensions to standard GNN
architectures to improve performance on heterophilous graphs, trading off model
simplicity for prediction accuracy. However, these models fail to capture basic
graph properties, such as neighborhood label distribution, which are
fundamental for learning. In this work, we propose GCN for Heterophily (GCNH),
a simple yet effective GNN architecture applicable to both heterophilous and
homophilous scenarios. GCNH learns and combines separate representations for a
node and its neighbors, using one learned importance coefficient per layer to
balance the contributions of center nodes and neighborhoods. We conduct
extensive experiments on eight real-world graphs and a set of synthetic graphs
with varying degrees of heterophily to demonstrate how the design choices for
GCNH lead to a sizable improvement over a vanilla GCN. Moreover, GCNH
outperforms state-of-the-art models of much higher complexity on four out of
eight benchmarks, while producing comparable results on the remaining datasets.
Finally, we discuss and analyze the lower complexity of GCNH, which results in
fewer trainable parameters and faster training times than other methods, and
show how GCNH mitigates the oversmoothing problem. | Andrea Cavallo, Claas Grohnfeldt, Michele Russo, Giulio Lovisotto, Luca Vassio | 2023-04-21T11:26:24Z | http://arxiv.org/abs/2304.10896v1 | # GCNH: A Simple Method For Representation Learning On Heterophilous Graphs
###### Abstract
Graph Neural Networks (GNNs) are well-suited for learning on homophilous graphs, i.e., graphs in which edges tend to connect nodes of the same type. Yet, achievement of consistent GNN performance on heterophilous graphs remains an open research problem. Recent works have proposed extensions to standard GNN architectures to improve performance on heterophilous graphs, trading off model simplicity for prediction accuracy. However, these models fail to capture basic graph properties, such as neighborhood label distribution, which are fundamental for learning.
In this work, we propose GCN for Heterophily (GCNH), a simple yet effective GNN architecture applicable to both heterophilous and homophilous scenarios. GCNH learns and combines _separate_ representations for a node and its neighbors, using one _learned importance coefficient_ per layer to balance the contributions of center nodes and neighborhoods. We conduct extensive experiments on eight real-world graphs and a set of synthetic graphs with varying degrees of heterophily to demonstrate how the design choices for GCNH lead to a sizable improvement over a vanilla GCN. Moreover, GCNH outperforms state-of-the-art models of much higher complexity on four out of eight benchmarks, while producing comparable results on the remaining datasets. Finally, we discuss and analyze the lower complexity of GCNH, which results in fewer trainable parameters and faster training times than other methods, and show how GCNH mitigates the oversmoothing problem.
Graph Neural Networks, heterophily, disassortativity, graph representation learning
## I Introduction
GNNs are core components of current state-of-the-art methods for learning and prediction on graph-structured data across domains and applications. Their capability to encode semantic and contextual information into node embeddings is known to be particularly effective on homophilous graphs, i.e., graphs in which nodes of the same type tend to be connected [1, 2]. On the other hand, in the case of heterophilous graphs, in which neighboring nodes tend to be dissimilar in type, the achievement of competitive, or at least consistent, GNN prediction accuracy remains an open research goal [3, 4].
Among other explanations for the inconsistent GNN performance in the heterophilous case, [5] suggests that the message aggregation strategies of standard GNNs could lead to weakly representative node embeddings, due to a disproportionately high contribution of dissimilar neighbors. That is why some works attempt to improve the generalization capability of GNNs to heterophilous networks by either modifying the graph structure or tailoring aggregation strategies and network architecture for such scenario [3, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18]. Nevertheless, [19] shows that a vanilla Graph Convolutional Network (GCN) can still _outperform_ more complex (heterophilous) models on some heterophilous graphs. [19] and [20] suggest that this contradiction originates from the fact that the edge homophily ratio should not be considered a representative indicator of GCN performance, and instead recommend taking into account neighborhood structure and distribution of node labels.
Based on these observations, we propose a simple yet effective GNN architecture, namely GCN for Heterophily (GCNH), to improve node representation capabilities for applications on heterophilous graphs. Differently from other GNNs, GCNH learns two different functions that encode a node and its neighbors _separately_. In addition, we allow the GCNH layer to flexibly assign different relevance to the information present in the neighborhood versus the information present in the center node. We discuss and show how this design mitigates noisy neighborhood representations from negatively influencing the learned embeddings while allowing informative neighbors to strongly influence the final node embedding. These extensions make GCNH more adaptive to heterophily than a vanilla GCN, while also improving prediction accuracy over more complex models designed for heterophily on common benchmarks.
We evaluate GCNH on the task of node classification using eight common real-world datasets and one set of synthetic graphs with varying degrees of heterophily. We show that GCNH is able to learn meaningful representations of nodes independently of the homophily level of the graph, achieving new state-of-the-art performance on heterophilous graph datasets while producing results comparable to the state-of-the-art on homophilous benchmarks.
Our main contributions are summarized below.
* We present GCNH, a simple yet effective GNN architecture that improves graph representation learning capabilities on heterophilous graphs while preserving the advantages of GCN.
* We present extensive experiments on real and synthetic datasets with varying degrees of heterophily for the node classification task. GCNH improves over the state-of-the-art on four (out of eight) real-world benchmarks while performing comparably to the state-of-the-art on all other datasets, including homophilous graphs.
* We showcase the lower complexity of GCNH compared to other state-of-the-art models, both in terms of the training time and the number of trainable parameters, and how it mitigates the oversmoothing problem.
## II Related work
The analysis and improvement of GNN performance on heterophilous graphs have received increasing attention in recent years [3, 4, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18]. GNNs generate node embeddings in two steps: message transformation, where node messages, i.e., features or embeddings from the previous layer, are transformed, and message aggregation, where the final node embeddings are generated by aggregating the messages of neighbors in the graph. One of the earliest and today most common GNN models is the GCN [1], which defines message transformation as a learnable linear layer and message aggregation as the average of the messages from the neighbors and from the center node.
Several works have proposed new GNN architectures or graph transformations to make the message-passing framework suitable for applications on non-homophilous graphs. We compare a number of these approaches to our method in Section V. Borrowing the categorization introduced in [5], these approaches can be subdivided as follows.
### _Non-Local Neighbor Extension_
These methods selectively extend the receptive field of GNNs to include potentially important nodes located outside of the local neighborhood. These approaches are based on the assumption that neighboring nodes may be dissimilar and that information about the target node, presumably carried by nodes with the same label, is available in nodes belonging to higher-order neighborhoods. In particular, methods such as [3, 4, 7, 15] create latent representations for nodes using multiple neighborhoods at different hop distances and combine them into one overall embedding. Other approaches, such as [6, 8, 16, 18], do not consider entire neighborhoods but single nodes as potential neighbors, regardless of their location in the graph, and aggregate messages from the nodes that are estimated to be more relevant.
### _GNN Architecture Adaptation_
Methods in this category modify the GNN architecture with the goal of improving representation learning capabilities on heterophilous graphs. A common approach is to estimate the relevance of a node to a given target node with respect to the prediction task at hand, and to assign individual weights to messages from neighbors based on their relevance. This approach is used in [9, 11, 12, 13, 17] among the others. Another approach is to learn separate embeddings for the neighborhood and the target node and merge them at a later stage. This mitigates the phenomenon of noisy embeddings, where information from potentially highly dissimilar neighbors is mixed with the features of the target node. Methods that follow this approach include [3, 18]. In addition, [3, 10, 14] adopt the strategy of treating the information captured by different GNN layers separately, instead of first aggregating the information from all layers and then using the final node embeddings for prediction.
## III Preliminaries
### _Notation And Problem Statement_
Let \(G=(V,E)\) be an unweighted and undirected graph, where \(V\) is the set of nodes and \(E\) is the set of edges. The connectivity information in the graph is represented by the adjacency matrix \(A\in\mathbb{R}^{n\times n}\), where \(n=|V|\) is the number of nodes and matrix elements \(A_{uv}\) are equal to 1 if nodes \(u\) and \(v\) are adjacent and 0 otherwise. Each node is associated with a feature vector \(x_{u}\) of size \(f\), and the complete set of features in the graph is denoted by \(X\in\mathbb{R}^{n\times f}\). The _neighbors_ of a node \(u\) are the set of nodes adjacent to it, denoted by \(\mathcal{N}_{u}\). Note that \(\mathcal{N}_{u}\) does not include node \(u\).
The task addressed in this work is _supervised node classification_. In this scenario, each node is associated with a label \(y_{u}\in C\) representing the class the node belongs to, where \(C\) is the set of labels. The task corresponds to learning a mapping \(\mathcal{F}:V\to C\) that uses the information of the graph \(G\), the features \(X\) and the labels to map nodes into their ground-truth class. As is standard practice, we add a linear layer that maps the final node representations of the last network layer \(H^{L}\in\mathbb{R}^{n\times e^{L}}\) (with \(L\) total number of layers and \(e^{L}\) size of the final node embeddings) to class probabilities with the addition of a softmax. For a node \(u\):
\[\tilde{y}_{u}=\text{softmax}(h_{u}^{L}W_{cl}), \tag{1}\]
where \(W_{cl}\in\mathbb{R}^{e^{L}\times|C|}\) is a learnable matrix. We omit the bias term from the equation for simplicity. Note that \(\tilde{y}_{u}\in\mathbb{R}^{|C|}\) is the probability distribution over classes for node \(u\). During inference, networks assign to the node the class with maximum probability:
\[\hat{y}_{u}=\text{argmax}(\tilde{y}_{u}). \tag{2}\]
To train the model, we minimize the negative log-likelihood loss on the training data \(\mathcal{L}(y_{u},\hat{y_{u}})\), where \(\hat{y_{u}}=\mathcal{F}(G,X,u)\) is the predicted label for node \(u\). The loss function \(\mathcal{L}(\cdot,\cdot)\) describes the distance between the true label \(y_{u}\) and the predicted label. We focus on the transductive case: we work on a single graph and separate its nodes into training, validation and test sets.
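A minimal PyTorch sketch of this classification head and training loss follows; using `log_softmax` together with `nll_loss` (equivalent to Equation (1) followed by the negative log-likelihood) and the boolean training mask are implementation assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def head_and_loss(H_L, W_cl, labels, train_mask):
    """Linear map to |C| logits, softmax probabilities (Eq. 1), argmax
    prediction (Eq. 2) and NLL training loss on the training nodes only."""
    logits = H_L @ W_cl                       # (n, |C|); bias omitted as in the text
    log_probs = F.log_softmax(logits, dim=1)  # log of Eq. (1)
    y_hat = log_probs.argmax(dim=1)           # Eq. (2)
    loss = F.nll_loss(log_probs[train_mask], labels[train_mask])
    return y_hat, loss

H_L = torch.randn(10, 16)                      # final embeddings of 10 toy nodes
W_cl = torch.randn(16, 3, requires_grad=True)  # 3 classes
labels = torch.randint(0, 3, (10,))
train_mask = torch.arange(10) < 6              # first 6 nodes form the training set
y_hat, loss = head_and_loss(H_L, W_cl, labels, train_mask)
print(y_hat, loss.item())
```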
### _Homophily And Heterophily_
_Graph homophily_ is a social science-inspired property of graphs [21], defined as the extent to which similar nodes in a graph are connected. Although other definitions of node similarity exist [17, 19, 20], this work focuses on label similarity, i.e., we define nodes as similar if they share the same label, and we measure graph homophily using the standard _edge homophily ratio_, denoted by \(h\):
\[h=\frac{|\{(u,v):(u,v)\in E\wedge y_{u}=y_{v}\}|}{|E|}, \tag{3}\]
which quantifies the fraction of edges in a graph that connect nodes with the same label. Graphs with low values of \(h\) are called _heterophilous_ or _disassortative_. As discussed in Section I, GNNs perform inconsistently on this category of graphs - we analyze this point in depth in Section V.
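The edge homophily ratio is straightforward to compute from an edge list; a small sketch is shown below, assuming the usual \((2,|E|)\) `edge_index` convention with undirected edges stored in both directions.

```python
import torch

def edge_homophily(edge_index, y):
    """Fraction of edges whose endpoints share the same label, Eq. (3)."""
    src, dst = edge_index
    return (y[src] == y[dst]).float().mean().item()

# toy graph: edges (0,1), (1,2), (2,3) stored in both directions
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])
y = torch.tensor([0, 0, 1, 1])
print(edge_homophily(edge_index, y))   # 4 of 6 directed edges connect same-label nodes -> ~0.667
```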
## IV Method
In this section, we describe the architecture of the model proposed in this paper, namely GCNH. Figure 1 illustrates the structure of the GCNH layer.
### _GCNH Layer Formulation_
A GCNH network is composed of one or more, \(L\), GCNH layers. The \(\ell^{\text{th}}\) layer receives in input the adjacency matrix and the node representations computed at the previous layer \(H^{\ell-1}\) and produces updated node representations \(H^{\ell}\). We set \(H^{0}=X\).
Within a layer, node representations are transformed through two separate 1-layer MLPs, \(\texttt{mlp}_{u}\) and \(\texttt{mlp}_{\mathcal{N}_{u}}\), resulting in latent representations \(z_{u}\) and \(z_{\mathcal{N}_{u}}\) of the target node and its neighborhood, respectively. For a node \(u\), at layer \(\ell\), we formally describe this step as follows. First, an intermediate representation is computed for the node \(u\):
\[z_{u}^{\ell}=\texttt{mlp}_{u}(h_{u}^{\ell-1})=\sigma(h_{u}^{\ell-1}W_{1}). \tag{4}\]
Secondly, all the representations of \(u\)'s neighbors (\(v\in\mathcal{N}_{u}\)) are updated based on their current representations \(h_{v}\) and aggregated together to obtain a neighborhood representation:
\[z_{\mathcal{N}_{u}}^{\ell}=\bigoplus_{v\in\mathcal{N}_{u}}(\texttt{mlp}_{ \mathcal{N}_{u}}(h_{v}^{\ell-1}))=\bigoplus_{v\in\mathcal{N}_{u}}(\sigma(h_{v} ^{\ell-1}W_{2})). \tag{5}\]
In Equations 4 and 5, \(W_{1},W_{2}\in\mathbb{R}^{e^{\ell-1}\times e^{\ell}}\) are learnable matrices, \(\sigma(\cdot)\) is a generic activation function, \(e^{\ell}\) is the size of the embeddings created at the \(\ell\)-th layer and \(\bigoplus\) is a permutation invariant aggregation function over the nodes \(v\in\mathcal{N}_{u}\). We omit the bias terms of the MLPs in the equations for clarity. The final output embedding for node \(u\) is obtained as a linear combination of \(z_{u}^{\ell}\) and \(z_{\mathcal{N}_{u}}^{\ell}\), parametrized by a learnable scalar value \(\beta\):
\[h_{u}^{\ell}=(1-\beta)z_{\mathcal{N}_{u}}^{\ell}+\beta z_{u}^{\ell} \tag{6}\]
where \(\beta\) is normalized between 0 and 1 with a \(\text{sigmoid}(\cdot)\) function. Note that the MLPs, \(\bigoplus\) and \(\beta\) are different for each layer \(\ell\). We omit superscripts in the equations for clarity.
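For concreteness, a minimal PyTorch sketch of one GCNH layer implementing Equations (4)-(6) is given below; the ReLU activation, the sum aggregation and the \((2,|E|)\) edge-list convention without self-loops (so that \(\mathcal{N}_{u}\) excludes \(u\)) are assumptions made for illustration rather than details fixed by the text above.

```python
import torch
import torch.nn as nn

class GCNHLayer(nn.Module):
    """One GCNH layer: separate MLPs for the center node (Eq. 4) and its
    neighbors (Eq. 5), sum aggregation, and a learned scalar beta (Eq. 6)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.mlp_u = nn.Linear(in_dim, out_dim)    # W_1 of Eq. (4)
        self.mlp_N = nn.Linear(in_dim, out_dim)    # W_2 of Eq. (5)
        self.beta = nn.Parameter(torch.zeros(1))   # sigmoid(0) = 0.5 at initialization
        self.act = nn.ReLU()

    def forward(self, H, edge_index):
        src, dst = edge_index                       # each message flows from src to dst
        z_u = self.act(self.mlp_u(H))               # Eq. (4)
        msgs = self.act(self.mlp_N(H))[src]         # encoded neighbor messages
        z_N = torch.zeros_like(z_u).index_add(0, dst, msgs)   # sum over N_u, Eq. (5)
        beta = torch.sigmoid(self.beta)
        return (1 - beta) * z_N + beta * z_u        # Eq. (6)

# toy usage on a 4-node graph (undirected edges stored in both directions, no self-loops)
layer = GCNHLayer(in_dim=5, out_dim=8)
H0 = torch.randn(4, 5)
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3], [1, 0, 2, 1, 3, 2]])
print(layer(H0, edge_index).shape)                  # torch.Size([4, 8])
```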
### _GCNH Design Choices_
GCNH introduces two main design choices which differentiate it from a standard GCN: (i) the separate encoding of the target node and its neighbors (Equations 4, 5) and (ii) the explicit parameterization of the contributions of neighborhood and center node with \(\beta\) (Equation 6).
The positive impact of separately processing the node and its neighborhood has been previously outlined in [3, 18]. Intuitively, in homophilous settings, where neighbors are similar, aggregation from the neighborhood brings useful information. On the other hand, in heterophilous graphs, dissimilar neighbors might bring detrimental information that a GNN cannot easily ignore. Compared to [3], GCNH takes the separation principle further, while minimizing the amount of complexity that this choice adds to the network architecture. Specifically, three choices distinguish GCNH from the model proposed in [3]: (i) we only use 1-hop neighborhoods, (ii) we learn separate MLPs for center node and neighborhood and (iii) we combine separate embeddings using a learned linear combination instead of concatenation. Note that (i) leads to faster computation while retaining most of the information useful for node representation, as shown in Section V, (ii) improves flexibility and (iii) leads to models with fewer parameters. (i) and (iii) lead to lower time complexity compared to [3] (see Section IV-C for a time complexity analysis).
Combined with the embedding separation, the explicit modeling of neighborhood informativeness with the coefficient \(\beta\) allows GCNH to adaptively determine the impact of the neighborhood on the final node embeddings based on how informative the neighbors are. Note that informative neighbors might also exist in heterophilous graphs [19, 20]; therefore, the contribution of neighbors is not necessarily related to the homophily level of the network. Intuitively, modeling and learning \(\beta\) explicitly provides a helpful inductive bias that allows the network to directly prefer the center node versus neighborhood information. Section V shows how models that are equipped with similar, or better, flexibility (such as GAT [2]) perform poorly in comparison [22].
Fig. 1: Architecture of the GCNH layer. To create an updated representation for the center node (red), the GCNH layer explicitly separates the node from its 1-hop neighborhood (blue). Two different MLPs encode center node and neighborhood separately (\(\texttt{mlp}_{u}\) and \(\texttt{mlp}_{\mathcal{N}_{u}}\), respectively). The encoded neighbors are then aggregated with a permutation invariant function \(\bigoplus\). Finally, the aggregated neighborhood and the encoded center node are combined together with a _learnable_ weighting factor \(\beta\) which regulates the contributions of the center node versus its neighborhood to produce the final output embedding for the node.
### _Time Complexity_
We compute the time complexity of a generic GCNH layer \(\ell\) by analyzing the individual processing steps of the model.
First, the node representations \(H^{\ell-1}\) are transformed, separately for the center node and the neighbors, as described in Equations (4) and (5). This step has a time complexity of \(\mathcal{O}\left(ne^{\ell-1}e^{\ell}\right)\). Subsequently, the neighbors' representations are aggregated and merged with the center-node embedding according to Equation (6). All the neighbor aggregation functions tested (see Section V-F) have a time complexity of \(\mathcal{O}\left(|E|e^{\ell}\right)\), whereas the weighted sum with the self-node representation has a time complexity of \(\mathcal{O}\left(ne^{\ell}\right)\). This last term is dominated by the complexity of the transformation. In total, the overall time complexity of the GCNH layer is \(\mathcal{O}\left(ne^{\ell-1}e^{\ell}+|E|e^{\ell}\right)\). The linear dependency of the complexity of the GCNH layer on the number of nodes \(n\) makes it easier to scale on large graphs compared to attention-based models (e.g., GGCN [12]), whose complexity depends quadratically on \(n\).
## V Experiments
In this section, we analyze the learning capability of GCNH on the task of supervised node classification, commonly used for models dealing with heterophilous graphs. We perform the evaluation both on synthetic datasets with different levels of homophily ratio and on the real-world graphs widely used in related works. Further information about the datasets is reported in Appendix A.
### _Baselines_
The baselines used for comparison belong to two main categories. The first category includes well-known methods:
* **MLP**: Multi-Layer Perceptron.
* **GCN**[1]: Graph Convolutional Network.
* **GAT**[2]: Graph Attention Network.
Methods in the second category, instead, are specifically designed for heterophilous graphs. In particular:
* **Geom-GCN**[6] maps nodes to a latent space and defines a new graph based on embedding similarity.
* **H\({}_{2}\)GCN**[3] introduces three specific designs to boost performance on heterophilous graphs: ego and neighbor-embedding separation, higher-order neighborhoods and combination of intermediate representations.
* **GPRGNN**[14] uses PageRank to determine relations between nodes.
* **GGCN**[12] applies two designs to extend GNNs: degree correction and signed messages.
* **O(d)-SD**[13] is based on sheaf diffusion.
### _Experimental Setting_
We evaluate model performance in terms of classification accuracy, i.e., the percentage of nodes in the test set that are assigned to the correct class. We test different values for hyperparameters and we select the best ones. We report further information about the grids for the hyperparameters in Appendix B. The aggregation function \(\bigoplus\) in Equation (5) is an element-wise sum unless differently specified (see Section V-F for a comparison of aggregation functions). We compute classification accuracies on 10 different train/validation/test splits provided by [6] for each dataset, and we report mean and standard deviation. The sizes of the splits are 48%/32%/20%. For each dataset, we train the models on the training sets and the model performing best on average across the validation sets is used on the test sets, on which we compute the accuracy values.
We run experiments on an NVIDIA Tesla V100 PCIE with 16 GB, except for the experiments in Table IV that we run on an NVIDIA Tesla T4. The code is implemented in Python and PyTorch is used for deep learning models. Our code is available at [https://github.com/SmartData-Polito/GCNH](https://github.com/SmartData-Polito/GCNH).
### _Results On Real-World Datasets_
Table I reports classification accuracies for GCNH and baselines on real-world datasets. GCNH achieves state-of-the-art performance on four out of the eight datasets used; it ranks second and third on Wisconsin and Film and performs slightly worse on the two homophilous graphs Cora and Citeseer. The results on all the heterophilous graphs prove the effectiveness of the design choices of GCNH. On Cornell, Texas, Wisconsin
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline
**Benchmark** & **Cornell** & **Texas** & **Wisconsin** & **Film** & **Chameleon** & **Squirrel** & **Cora** & **Citeseer** \\ \(h\) & 0.30 & 0.11 & 0.21 & 0.22 & 0.23 & 0.22 & 0.81 & 0.74 \\ \hline
**MLP** & 81.89\(\pm\)6.40 & 80.81\(\pm\)4.75 & 85.29\(\pm\)3.31 & 36.53\(\pm\)0.70 & 46.21\(\pm\)2.99 & 28.77\(\pm\)1.56 & 75.69\(\pm\)2.00 & 74.02\(\pm\)1.90 \\
**GCN** & 60.54\(\pm\)5.30 & 55.14\(\pm\)5.16 & 51.76\(\pm\)3.06 & 27.32\(\pm\)1.10 & 64.82\(\pm\)2.24 & 53.43\(\pm\)2.01 & 86.98\(\pm\)1.27 & 76.50\(\pm\)1.36 \\
**GAT** & 61.89\(\pm\)5.05 & 52.16\(\pm\)6.63 & 49.41\(\pm\)4.09 & 27.44\(\pm\)0.89 & 60.26\(\pm\)2.50 & 40.72\(\pm\)1.55 & 87.30\(\pm\)1.10 & 76.55\(\pm\)1.23 \\
**Geom-GCN** & 60.54\(\pm\)3.67 & 66.76\(\pm\)2.72 & 64.51\(\pm\)3.66 & 31.59\(\pm\)1.15 & 60.00\(\pm\)2.81 & 43.80\(\pm\)1.48 & 85.35\(\pm\)1.57 & **78.02\(\pm\)1.15** \\
**H\({}_{2}\)GCN** & 82.70\(\pm\)5.28 & 84.86\(\pm\)7.23 & 87.65\(\pm\)4.98 & 37.50\(\pm\)1.00 & 60.11\(\pm\)2.15 & 36.48\(\pm\)1.86 & 87.87\(\pm\)1.20 & 77.11\(\pm\)1.57 \\
**GPRGNN** & 80.27\(\pm\)8.11 & 78.38\(\pm\)4.36 & 82.94\(\pm\)4.21 & 34.63\(\pm\)1.22 & 46.58\(\pm\)1.71 & 61.61\(\pm\)1.24 & **87.95\(\pm\)1.18** & **77.13\(\pm\)1.67** \\
**GGCN** & **85.68\(\pm\)6.63** & **84.86\(\pm\)4.55** & 86.86\(\pm\)3.29 & **37.54\(\pm\)1.56** & **71.14\(\pm\)1.84** & **55.17\(\pm\)1.58** & **87.95\(\pm\)1.05** & **77.14\(\pm\)1.45** \\
**O(d)-SD** & 84.86\(\pm\)4.71 & **85.95\(\pm\)5.51** & **89.41\(\pm\)4.74** & **37.81\(\pm\)1.15** & 68.04\(\pm\)1.58 & **56.34\(\pm\)1.32** & 86.90\(\pm\)1.13 & 76.70\(\pm\)1.57 \\ \hline
**GCNH** & **86.49\(\pm\)6.98** & **87.84\(\pm\)3.87** & **87.65\(\pm\)3.59** & 36.89\(\pm\)1.50 & **71.56\(\pm\)1.86** & **61.85\(\pm\)1.54** & 86.88\(\pm\)1.04 & 75.81\(\pm\)1.14 \\ \hline \hline \end{tabular}
\end{table} TABLE I: Mean classification accuracy and standard deviation for GCNH on real-world datasets, on the 10 splits taken from [6]. Best results are in **red**, second best results in **blue** and third best in **violet**. The results for the baselines are taken from [12] and [13]. We use sum as the aggregation function in GCNH; other parameters of GCNH are selected from the best-performing configuration (see Table VI in the Appendix for details on the hyperparameters).
and Film, GCNH avoids the detrimental influence of neighbors on the final embeddings observed, for example, in GCN and GAT, with respect to which improvements are large (up to 35%). On Chameleon and Squirrel, GCNH manages to encode the useful neighborhood information present in the graph thanks to its separate processing of node and 1-hop neighbors. Indeed, on these two graphs, GCN performs quite well (it ranks fourth on both), meaning that information contained in the 1-hop neighborhood is useful for node classification, whereas more complex models (e.g. Geom-GCN, H\({}_{2}\)GCN and GPRGNN) fail to capture this property and perform worse (from 5 to 20% accuracy drop with respect to GCN). On the homophilous graphs Cora and Citeseer, GCNH performs comparably to the other models, although slightly worse. These results show that GCNH flexibly adapts to various homophily settings, achieving consistent performance.
### _Results On Synthetic Datasets_
To better understand how GCNH deals with different levels of homophily, we evaluate it on synthetic graphs covering a wide range of values of \(h\), while keeping node features and other graph properties unchanged. We provide additional details about the dataset used in Appendix A-B.
Figure 2 shows the accuracy of GCNH and three other simple baselines: MLP, whose performance is not affected by the value of \(h\) as it is a graph-unaware method, and GCN and GAT, which achieve significantly worse results on heterophilous settings. The hyperparameter configurations for these baselines are reported in Appendix B-A. GCNH's performance is significantly less dependent on the homophily level of the graph than the performances of GAT and GCN. On heterophilous graphs, GCNH improves by almost 50% over GCN and GAT and by \(\sim\)2/8% over MLP, meaning that the separate encoding of the neighbors has a positive impact. On homophilous graphs, GCNH performs comparably to GCN and GAT, achieving perfect accuracy on perfectly homophilous graphs (\(h=1\)).
### _Analysis of the GCNH Design_
To measure the impact of the design choices we made for GCNH, we perform an ablation study by removing the two main components of GCNH, namely, the separate MLPs and the learned importance coefficient \(\beta\), to see how they affect the performance. We report the results in Table II, where we use a standard GCN as a baseline. The table shows how both components bring an improvement in classification accuracy on heterophilous graphs, with a small tradeoff on the performance on homophilous graphs. The MLPs separation leads to the largest gain, and its combination with the \(\beta\) coefficient consistently brings further improvements.
We analyze further the behavior of \(\beta\). Since \(\beta\) balances the contribution of messages of self-node and its neighbors, we expect heterophilous graphs to lead to larger \(\beta\)s given that neighborhoods are generally less informative in those cases. Figure 3 shows the values of the parameter \(\beta\) on different graphs. On synthetic graphs, \(\beta\) is evidently correlated with the edge homophily ratio measured by \(h\). On most real graphs, the values follow a trend similar to that observed in synthetic graphs. Chameleon and Squirrel are exceptions: we find that, while these datasets are heterophilous according to the \(h\) metric, GCNH tends to learn low values of \(\beta\), corresponding to highly informative neighborhoods. This corroborates the findings outlined in [19], which pointed out a similar contradiction: how a standard GCN performs unexpectedly well on the heterophilous Chameleon and Squirrel, even outperforming heterophily-specific methods. In fact, this result further suggests that edge homophily ratio is not suited to describe neighborhood informativeness in general.
### _Aggregation Functions_
We test three different aggregation functions for \(\bigoplus_{v\in\mathcal{N}_{u}}\) in Equation (5): element-wise sum, element-wise mean across the neighbors and element-wise max. Table III reports a comparison of the node classification results for these aggregation functions.
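The three aggregators can be written with native PyTorch index operations; in the sketch below, `index_reduce_` (used for mean and max) assumes a reasonably recent PyTorch version, while the paper's own max implementation relies on the scatter library of [27].

```python
import torch

def aggregate(msgs, dst, num_nodes, mode="sum"):
    """Permutation-invariant aggregation of the encoded neighbor messages in
    Eq. (5): msgs is (|E|, d), dst gives the receiving node of each message."""
    out = torch.zeros(num_nodes, msgs.shape[1])
    if mode == "sum":
        return out.index_add_(0, dst, msgs)
    reduce = {"mean": "mean", "max": "amax"}[mode]
    return out.index_reduce_(0, dst, msgs, reduce=reduce, include_self=False)

msgs = torch.randn(6, 4)                       # one message per directed edge
dst = torch.tensor([1, 0, 2, 1, 3, 2])
for mode in ("sum", "mean", "max"):
    print(mode, aggregate(msgs, dst, num_nodes=4, mode=mode)[1])
```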
Fig. 3: Values of learned parameter \(\beta\) for GCNH on real and synthetic datasets. Results are obtained using the best-performing hyperparameter configuration for GCNH models with one layer.
Fig. 2: Comparison of the classification accuracy achieved by a set of models on the syn-cora synthetic datasets. Each point shows the average accuracy on three datasets generated with a specific homophily ratio.
### _GCNH And Oversmoothing_
The performance of GNNs is known to gradually decrease when increasing the number of layers. This decay is partly attributed to oversmoothing, i.e., repeated graph convolutions eventually making node embeddings indistinguishable from each other [23, 24]. We show experimentally that the design choices of GCNH alleviate the oversmoothing problem. As shown in Figure 4, GCNH's accuracy decreases just slightly when increasing the number of layers, whereas increasing the layers of GCN leads to a larger drop in performance; see Section B-C in Appendix for the experimental details.
### _Training Times And Trainable Parameters_
We report in Table IV the training times required by GCNH and two of its main competitors GGCN [12] and O(d)-SD [13], as well as their number of trainable parameters. We take the implementations of these methods from the repositories of the authors [25, 26]. We report the result for GCNH using sum and max aggregation. We omit results for mean since sum and mean require the same amount of computation as they can be implemented efficiently with a matrix multiplication, i.e., their training times are the same. Max, instead, requires a scattered max pooling operation for which we use the implementation in [27]. GCNH is noticeably faster and has fewer trainable parameters than both GGCN and O(d)-SD.
## VI Conclusions
We introduced GCNH, a simple yet effective GNN architecture that improves representation capabilities on heterophilous graphs. GCNH leverages two design choices: (i) two distinct learnable mapping functions that separately encode the center node and its neighbors into intermediate representations and (ii) the importance coefficients \(\beta\), one per layer, to balance the contributions of these two representations on the final node embeddings. We analyze and demonstrate how these two components enhance the representation capabilities of GCNH compared to GCN, as, together, they reduce the detrimental contributions of noisy neighborhoods to the updated node representations, while making effective use of basic structural information beneficial for classification. Through extensive experiments on real and synthetic graphs, we show that GCNH performs competitively on the node classification task, outperforming state-of-the-art methods on four out of eight common real-world datasets. In addition to its effectiveness, GCNH's simple design results in significantly faster training times and fewer trainable parameters compared to competing methods.
\begin{table}
\begin{tabular}{c c c|c c c c c c} \hline \hline & Separate MLPs & Learned \(\beta\) & **Cornell** & **Texas** & **Wisconsin** & **Film** & **Chameleon** & **Squirrel** & **Cora** & **Citeseer** \\ \hline
**GCN** & ✗ & ✗ & 60.54 & 55.14 & 51.76 & 27.32 & 64.82 & 53.43 & **86.98** & **76.50** \\ \hline
**GCNH** & ✓ & ✗ & 83.78(+23.2) & 86.49(+31.3) & 85.49(+33.7) & 36.01(+8.7) & 70.22(+5.4) & 59.74(+6.3) & 86.90(\(-0.1\)) & 75.65(\(-0.8\)) \\
**GCNH** & ✓ & ✓ & **86.49**(+25.9) & **87.84**(+32.7) & **87.65**(+35.9) & **36.89**(+9.6) & **71.56**(+6.7) & **61.85**(+8.4) & 86.88(\(-0.1\)) & 75.81(\(-0.7\)) \\ \hline \hline \end{tabular}
\end{table} TABLE II: Mean classification accuracy of GCN and ablated GCNH. In parentheses, we report the performance improvement or degradation from the GCN baseline. “Separate MLPs” refers to whether we learn separate linear layers for node and neighborhoods (\(W_{1},W_{2}\) in Equation 4 and 5, respectively). “Learned \(\beta\)” refers to whether we learn \(\beta\) as a parameter or not; if not, it is fixed at \(\beta\)=0.5. Best results are in **bold**.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & **N. Params** & **Cora** & **Citeseer** & **Chameleon** & **Squirrel** & **Film** \\ \hline
**GGCN** & 118 k & 96.05 & 99.94 & 74.83 & 331.64 & 628.36 \\
**O(d)-SD** & 46 k & 19.64 & 20.15 & 48.87 & 275.90 & 44.22 \\ \hline
**GCNH (sum)** & 30 k & **8.79** & **10.28** & **8.56** & **12.61** & 15.59 \\
**GCNH (max)** & 30 k & 11.26 & 13.19 & 10.71 & 17.20 & **12.82** \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Number of trainable parameters and training times (sec) for two state-of-the-art methods and GCNH on several graphs. Training is performed for 200 epochs on 10 splits for each dataset. Models have one layer and hidden size 16. Shortest times for each dataset are in **bold**.
Fig. 4: Accuracy for GCNH and GCN with different numbers of layers. With a large number of layers, GCNH prevents oversmoothing.
TABLE III: Mean classification accuracy and standard deviation for GCNH with different aggregation functions in Equation 5. Note that results for “max” are obtained with full-batch training only (see Table VI and discussion in Section V-H), as it allows for a more efficient implementation. Best results are in **bold**.
2305.16116 | Formation of complex organic molecules on interstellar CO ices? Insights
from computational chemistry simulations | Carbon ($^3$P) atom is a reactive species that, according to laboratory
experiments and theoretical calculations, condensates with interstellar ice
components. This fact is of uttermost importance for the chemistry in the
interstellar medium (ISM) because the condensation reaction is barrierless and
the subsequent species formed are still reactive given their open-shell
character. Carbon condensation on CO-rich ices forms the \ch{C=C=O}
($^3$$\Sigma$$^-$) species, which can be easily hydrogenated twice to form
ketene (H$_2$CCO). Ketene is very reactive in terrestrial conditions, usually
found as an intermediate hard to be isolated in chemical synthesis
laboratories. These characteristics suggest that ketene can be a good candidate
to form interstellar complex organic molecules (iCOMs) via a two-step process,
i.e., its activation followed by a radical-radical coupling. In this work,
reactions between ketene and atomic H, and the OH and NH$_2$ radicals on a
CO-rich ice model have been explored by means of quantum chemical calculations
complemented by kinetic calculations to evaluate if they are favourable in the
ISM. Results indicate that H addition to ketene (helped by tunneling) to form
the acetyl radical (CH$_3$CO) is the most preferred path, as the reactions with
OH and NH$_2$ possess activation energies ($\geq$ 9kJ/mol) hard to surmount in
the ISM conditions, unless external processes provide energy to the system.
Thus, acetaldehyde (CH$_3$CHO) and, probably, ethanol (CH$_3$CH$_2$OH)
formation via further hydrogenations are the possible unique operating
synthetic routes. Moreover, from the computed relatively large binding energies
of OH and NH$_2$ on CO ice, slow diffusion is expected, hampering possible
radical-radical couplings with CH$_3$CO. The astrophysical implications of
these findings are discussed considering the incoming James Webb Space
Telescope observations. | Stefano Ferrero, Cecilia Ceccarelli, Piero Ugliengo, Mariona Sodupe, Albert Rimola | 2023-05-25T14:48:24Z | http://arxiv.org/abs/2305.16116v1 | Formation of complex organic molecules on interstellar CO ices? Insights from computational chemistry simulations
###### Abstract
Carbon (\({}^{3}\)P) atom is a reactive species that, according to laboratory experiments and theoretical calculations, condensates with interstellar ice components. This fact is of uttermost importance for the chemistry in the interstellar medium (ISM) because the condensation reaction is barrierless and the subsequent species formed are still reactive given their open-shell character. Carbon condensation on CO-rich ices forms the C=C=O (\({}^{3}\Sigma^{-}\)) species, which can be easily hydrogenated twice to form ketene (H\({}_{2}\)CCO). Ketene is very reactive in terrestrial conditions, usually found as an intermediate hard to be isolated in chemical synthesis laboratories. These characteristics suggest that ketene can be a good candidate to form interstellar complex organic molecules (iCOMs) via a two-step process, i.e., its activation followed by a radical-radical coupling. In this work, reactions between ketene and atomic H, and the OH and NH\({}_{2}\) radicals on a CO-rich ice model have been explored by means of quantum chemical calculations complemented by kinetic calculations to evaluate if they are favourable in the ISM. Results indicate that H addition to ketene (helped by tunneling) to form the acetyl radical (CH\({}_{3}\)CO) is the most preferred path, as the reactions with OH and NH\({}_{2}\) possess activation energies (\(\geq\) 9kJ/mol) hard to surmount in the ISM conditions, unless external processes provide energy to the system. Thus, acetaldehyde (CH\({}_{3}\)CHO) and, probably, ethanol (CH\({}_{3}\)CH\({}_{2}\)OH) formation via further hydrogenations are the possible unique operating synthetic routes. Moreover, from the computed relatively large binding energies of OH and NH\({}_{2}\) on CO ice, slow diffusion is expected, hampering possible radical-radical couplings with CH\({}_{3}\)CO. The astrophysical implications of these findings are discussed considering the incoming James Webb Space Telescope observations.
Astrochemistry -- Interstellar medium -- Interstellar molecules -- Interstellar dust -- Surface ices -- Complex organic molecules -- Reactions rates -- Computational methods
Stefano Ferrero, Cecilia Ceccarelli, Piero Ugliengo, Mariona Sodupe, Albert Rimola
## 1 Introduction
Interstellar grains are submicron-sized solid particles made either of carbonaceous materials or silicates. In cold (\(\sim\)10 K) and dense (\(\sim\)10\({}^{4}\) cm\({}^{-3}\)) molecular clouds, these grains are covered predominantly by water icy mantles, with several other, less abundant, species detected by infra-red (IR) observations of the interstellar (e.g., Boogert et al., 2015; Yang et al., 2022; McClure et al., 2023). Interstellar grains are important in astrochemistry because they can provide the surfaces on which chemical reactions can occur forming stable products (e.g., Tielens & Hagen, 1982; Cuppen et al., 2017; Potapov & McCoustra, 2021; Ceccarelli et al., 2022).
In addition to H\({}_{2}\)O, one of the most abundant icy species is carbon monoxide (CO), which is thought to form in the gas phase and then to freeze out onto the surface of the grains (Caselli et al., 1999; Bacmann et al.,
2002; Favre et al., 2013). The CO freeze-out is supposed to happen after the formation of the bulk of water ice and, therefore, in the extreme cases of large CO freeze-out, interstellar ices are thought to present an (almost) onion-like structure, the innermost layers being formed by a polar phase dominated by water, whereas the outer layers by a non-polar phase, possibly dominated by CO (Boogert et al., 2015; Pontoppidan et al., 2008). These non-polar outermost layers are thought to be crucial for the formation of interstellar complex organic molecules (iCOMs) through hydrogenation of CO followed by the formation of radicals via photodissociation on the ice-surfaces, and their recombination (Garrod and Herbst, 2006; Chuang et al., 2017; Chuang et al., 2021; Simons et al., 2020).
However, another promising route towards chemical complexity in the interstellar medium (ISM), which has emerged in the past few years, is the reactivity of atomic carbon towards different components of the icy mantles. The condensation of atomic carbon on water ice, both in its neutral (C) and in its cationic (C\({}^{+}\)) forms, has been studied, and possible chemical reactions with different icy components have been elucidated (Krasnokutski et al., 2017; Shimonishi et al., 2018; Henning and Krasnokutski, 2019; Woon, 2021; Molpecreres et al., 2021; Potapov et al., 2021). Moreover, some of these processes have been linked to chemical pathways to form amino acids (Krasnokutski et al., 2020, 2022).
On the other hand, the condensation of atomic carbon on pure CO ice is far less studied. The C(\({}^{3}\)P) + CO reaction has been reported and studied in the gas phase by a recent computational work, which found the formation of the C=C=O (\({}^{3}\Sigma^{-}\)) species as the product in a barrierless way (Papakondylis and Mavridis, 2019). In the ISM, this species can be easily hydrogenated twice to form ketene (H\({}_{2}\)CCO), which can be hydrogenated even further to form acetaldehyde, as found recently experimentally (Fedoseev et al., 2022) and, eventually, ethanol. The formation of the latter species is particularly interesting as, in the gas phase, it is the starting point of a chain of reactions leading to glycolaldehyde (Skouteris et al., 2018). Recent observations by the James Webb Space Telescope (JWST) towards background stars have shown the possible presence of acetaldehyde and ethanol in the icy grain mantles in molecular clouds (Yang et al., 2022; McClure et al., 2023). In these regions, the ices are composed of about 25% of frozen CO (i.e., the catastrophic freeze-out mentioned above has not occurred yet) (e.g. Boogert et al., 2015) while some atomic carbon is still in the gas phase (Zmuidzinas et al., 1988; Kamegai et al., 2003). Although the identification of iced acetaldehyde and ethanol needs to be confirmed, the possibility that (either of) these species are already present in the molecular cloud phase warrants a dedicated study on the chemistry triggered by the C condensation on CO ice.
In this work, the formation and reactivity of ketene on a model of CO ice are explored through quantum mechanical simulations. Its reactivity with abundant interstellar radicals, like H, O, N, NH, OH and NH\({}_{2}\), is studied in order to identify whether and which of these species can react with ketene, thereby opening up chemical pathways that form even more complex species.
## 2 Computational details
### Gas phase calculations
A preliminary gas-phase screening of reaction barriers was made to determine which reactions are more probable under ISM conditions. All the electronic structure calculations have been carried out with the Orca 5.0 software (Neese et al., 2020). Density functional theory (DFT) was used for geometry optimizations, adopting the \(\omega\)B97X-D4 functional and the def2-TZVP basis set (Najibi and Goerigk, 2020; Weigend and Ahlrichs, 2005), hereafter referred to as \(\omega\)B97X-D4/TZVP. Geometry optimizations were carried out with the geometrical counterpoise correction (gCP) method to remove the basis set superposition error (BSSE) (Kruse and Grimme, 2012; Liu and McLean, 1973). Electronic energies were refined with single point calculations at the DFT optimized geometries with the CCSD(T)-F12 and DLPNO-CCSD(T)-F12 methods (Hattig et al., 2012; Adler et al., 2007; Kong et al., 2012; Pavosevic et al., 2017), which employ the cc-pVTZ-F12 basis set, the cc-pVTZ-F12-CABS near-complete basis set and the aug-cc-pVTZ/C fitting basis set for the resolution of identity (RI) approximation (Weigend et al., 2002; Peterson et al., 2008). For simplicity, the CCSD(T)-F12/cc-pVTZ-F12//\(\omega\)B97X-D4/TZVP and DLPNO-CCSD(T)-F12/cc-pVTZ-F12//\(\omega\)B97X-D4/TZVP levels of theory will be referred to as CCSD(T)-F12 and DLPNO-F12, respectively. For DLPNO calculations, a tight PNO setting was used. Transition state (TS) structure searches have been conducted using the NEB-TS algorithm implemented in Orca (Asgeirsson et al., 2021). Harmonic vibrational frequencies were calculated to characterize the nature (e.g. minimum or TS) of the optimized structures and to correct the electronic energies for the zero point energy (ZPE).
### Solid phase calculations
In order to mimic a CO ice, a cluster approach was used in a similar way to the inspiring paper by Lamberts et al. (2019). An initial cluster was generated with the
Packmol software by randomly placing 20 CO molecules inside a sphere of 12 Å radius, which was then optimized with the \(\omega\)B97X-D4 functional and a def2-SVP basis set (see Figure 1A). This cluster model is sufficient because, given the nature of the interactions between CO molecules (namely, quadrupole-quadrupole and dispersion components, Zamirri et al. (2018)), it exhibits all the possible CO orientations on the surface, that is, C/C, O/O and C/O, hence covering all the likely CO configurational variability.
As CO molecules in the cluster are not strongly bound together, and to avoid large deformation of the cluster structure when studying adsorption and reactivity, constrained geometry optimizations and transition state searches were performed, as explained as follows.
Interaction energies for OH and NH\({}_{2}\) on the CO cluster were calculated by performing geometry optimizations, in which only the adsorbate (NH\({}_{2}\) or OH) and the closest CO molecules (3 to 5, depending on the site) were allowed to relax. Harmonic frequencies were calculated only for the adsorbates employing a partial hessian vibrational analysis (PHVA) scheme (Li & Jensen, 2002). The ZPE-corrected interaction energies on the CO ice model were calculated as:
\[\Delta\mathrm{H}(0)=E_{complex}-E_{CO}-E_{adsorbate}+\Delta ZPE \tag{1}\]
where \(E_{complex}\) is the absolute potential energy of the adsorption complex, \(E_{CO}\) is the absolute potential energy of the isolated optimized CO cluster and \(E_{adsorbate}\) is the absolute potential energy of the isolated adsorbate, and bearing in mind that, at 0 K, the absolute ZPE energy is equal to the absolute enthalpy, i.e., E\({}_{0}\) = H(0). \(\Delta ZPE\) values have been calculated by subtracting the ZPE corrections of the adsorbate optimized on the CO cluster and the ZPE corrections of the isolated adsorbate.
For the reactivity on the CO cluster, the structures of reactants and products were first optimized, and then TS structures were localized with the NEB-TS algorithm. Seven CO molecules were included in the optimizations and TS searches. A PHVA scheme was used to characterize the optimized structures as minimum or TS and to correct for ZPE. Energy barriers were refined by single point calculations at DLPNO-CCSD(T)-F12 level of theory and calculated as:
\[\Delta\mathrm{H}^{\ddagger}(0)=E_{TS}-E_{min}+\Delta ZPE \tag{2}\]
where \(E_{TS}\) and \(E_{min}\) are the DLPNO-CCSD(T)-F12 absolute potential energies for the transition state and the minimum structure of the reaction, respectively. \(\Delta ZPE\) has been calculated by subtracting the ZPE of the fragment made by the adduct (ketene plus H/OH/NH\({}_{2}\)) optimized on the CO cluster and the ZPE of the adduct calculated in vacuum at \(\omega\)B97X-D4/TZVP level. At the ISM temperatures (e.g., 10 K), thermal corrections are in practice negligible (Zamirri et al. (2017, 2019); Enrique-Romero et al. (2019, 2021)) so we assume that the calculated \(\Delta H(0)\) and \(\Delta H^{\ddagger}(0)\) do not vary at the cryogenic temperatures. The VMD software was used for rendering images (Humphrey et al., 1996).
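The two expressions above amount to simple bookkeeping once the absolute energies and ZPE corrections are extracted from the electronic-structure outputs. The following is a minimal sketch (an illustration, not part of the paper's workflow), assuming all input energies are given in hartree.

```python
# Minimal sketch, not from the paper: ZPE-corrected adsorption energies, Eq. (1),
# and reaction barriers, Eq. (2). Inputs are assumed to be in hartree; results
# are returned in kJ/mol.
HARTREE_TO_KJ_MOL = 2625.4996

def adsorption_enthalpy(e_complex, e_co_cluster, e_adsorbate, zpe_adsorbed, zpe_isolated):
    """Delta H(0) = E_complex - E_CO - E_adsorbate + Delta ZPE (negative = bound)."""
    delta_zpe = zpe_adsorbed - zpe_isolated
    return (e_complex - e_co_cluster - e_adsorbate + delta_zpe) * HARTREE_TO_KJ_MOL

def barrier(e_ts, e_min, zpe_ts, zpe_min):
    """Delta H(0) barrier = E_TS - E_min + Delta ZPE."""
    return (e_ts - e_min + (zpe_ts - zpe_min)) * HARTREE_TO_KJ_MOL
```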
#### 2.2.1 Instanton rate theory calculations
To assess the effect of tunneling on the reaction rates, the semiclassical instanton theory (Miller, 1975; Chapman et al., 1975) was employed for the H additions to ketene. A simple estimation of the crossover temperature (T\({}_{c}\)), at which tunneling becomes important, was obtained as:
\[T_{c}=\frac{\hbar\omega}{2\pi k_{B}} \tag{3}\]
where \(\hbar\) is the reduced Planck constant, \(k_{B}\) is the Boltzmann constant and \(\omega\) is the imaginary frequency of the TS. As instanton theory tends to overestimate the reaction rates around T\({}_{c}\) (Andersson et al., 2009), and since our interest is in rates at interstellar temperatures, instanton rate theory has been applied only in the deep tunneling regime, i.e. below T\({}_{c}\). The instanton describes the most probable tunneling path from reactants to products at a given temperature and can be regarded as a saddle point on a ring polymer potential energy surface constructed as a discretized Feynman path of N segments, called beads (Kastner, 2014; Beyer et al., 2016; Richardson, 2018, 2018). Instantons for the hydrogenation reactions on the two ketene carbon atoms have been optimized employing a progressive cooling approach, starting from a temperature just below T\({}_{c}\) down to 50 K. As a first discretization of the path, 16 beads were employed, which were then incremented up to 128 beads to obtain convergence on the rates at 50 K. This was the ending temperature because convergence at lower temperatures requires even more beads, making the calculation computationally impractical. However, as will be seen, the rate constants at 50 K do not depend on temperature, so they can be extrapolated to 10 K. A dual-level instanton approach (CCSD(T)-F12/cc-pVTZ-F12//\(\omega\)B97X-D4/TZVP) (Meisner & Kastner, 2018) was then used to refine the energy of the beads. Finally, as these reactions are supposed to happen on a CO-rich ice, the implicit surface model approach was applied to include surface effects (Meisner et al., 2017), which holds only if the catalytic role of the surface is negligible. In this approximation, the rotational partition function is assumed to be constant during the reaction
as the surface hampers rotations in both the reactant and TS structures. Instantons have been optimized on the fly by interfacing the Orca software with a Python code developed in Jeremy Richardson's group at ETH Zurich.
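As a quick illustration of Eq. (3), the sketch below (not part of the paper's workflow) converts a TS imaginary mode, assumed to be reported as a wavenumber in cm\({}^{-1}\), into a crossover temperature; a \(\sim\)1000 cm\({}^{-1}\) mode gives \(T_{c}\approx 229\) K, of the same order as the 230 K reported below for the C1 attack.

```python
# Minimal sketch, assuming the TS imaginary frequency is given as a wavenumber in cm^-1:
# crossover temperature T_c = hbar * omega / (2 * pi * k_B), Eq. (3), with omega = 2*pi*c*nu.
import scipy.constants as const

def crossover_temperature(nu_tilde_cm1):
    omega = 2.0 * const.pi * const.c * 100.0 * nu_tilde_cm1  # angular frequency in rad/s
    return const.hbar * omega / (2.0 * const.pi * const.k)

print(crossover_temperature(1000.0))  # ~229 K
```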
## 3 Results and Discussion
### Gas phase calculations
The C(\({}^{3}\)P) + CO reaction, already studied in the work of Papakondylis & Mavridis (2019), was here reproduced at the \(\omega\)B97x-D4/TZVP level of theory. Results are in agreement with the previous findings on the barrierless formation of C=C=O (\({}^{3}\Sigma^{-}\)) and, thus, the formation of ketene via a double hydrogenation reaction is viable. In order to study the reactivity of ketene and to assess the height of the reaction barriers, gas-phase calculations were carried out for the reactions with abundant interstellar radicals, involving attacks on the two carbon ketene atoms, that is:
\[\mathrm{H_{2}CCO+X^{\bullet}\longrightarrow\,^{\bullet}H_{2}CC(O)X\qquad(attack\ on\ C1)}\]
\[\mathrm{H_{2}CCO+X^{\bullet}\longrightarrow\,H_{2}XCC^{\bullet}O\qquad\ (attack\ on\ C2)}\]
with X\({}^{\bullet}\) = H, O, N, NH, OH and NH\({}_{2}\) (see panel B of Fig. 1). An important point is that, after the attack of X\({}^{\bullet}\), the newly formed molecule is still a radical due to its unpaired electron and, therefore, reactive to couple with other open-shell species to possibly form iCOMs. In this work, the carbon bonded to the oxygen atom is labelled as C1 and the other as C2 (see Fig. 1B).
All the studied reactions are exothermic, but present energy barriers (see Table 1).
In all cases, except for H, the products arising from the C1 attack are thermodynamically more stable than those from the C2 attack, due to the overstabilization gained by the \(\pi\) delocalization when forming, for instance, an amide bond or a carboxylic group. However, C2 is the site most prone to attack, owing to the lower energy barriers in the cases of H, N, O and OH, whereas for NH and NH\({}_{2}\) the attack on C1 is preferred. Moreover, the OH radical attack presents the lowest energy barrier among the species studied, whereas the N-bearing species are the most inert (see Table 1).
Note that DFT energy barriers are, in most cases, in good agreement with the results obtained with the CCSD(T)-F12 method. The barriers found for the hydrogenation reactions are also in good agreement with Ibrahim et al. (2022). Furthermore, it can be noticed that the DLPNO-F12 results agree fairly well with the CCSD(T)-F12 ones, which supports the use of DLPNO-F12 for the energetic refinement on CO ices, for which the CCSD(T)-F12 method cannot be employed due to the large size of the cluster.
### Solid phase calculations
To confirm that ketene formation is also feasible in the solid state, the C(\({}^{3}\)P) + CO(ice) condensation was also investigated on the CO ice, resulting indeed in the formation of the C=C=O (\({}^{3}\Sigma^{-}\)) species.
Based on the gas-phase results, the cases of H (the most abundant species) and OH and NH\({}_{2}\) (the species presenting the lowest barriers for the O- and N-containing radical family species) have been selected to study their reactivity with ketene on the CO ice.
Figure 1: _Panel A:_ Optimized structure of the CO cluster model in Van der Waals representation. _Panel B, left:_ Ketene molecule structure with the C1 and C2 labels. _Panel B, right (dashed box)_: Molecular structure of the species to react with ketene (H, N, O, NH, OH and NH\({}_{2}\)) considering the two possible attacks (arrows towards C1 and C2). Atom colour legend: H, white; C, cyan; N, blue; O, red.
The reactions have been modeled by adopting a Langmuir-Hinshelwood mechanism. Thus, to calculate the reaction barriers, the reactant, TS and product structures have been optimized on two adjacent adsorption sites, including 7 unconstrained CO molecules (the rest remaining fixed, see Fig. 2). The calculated barriers are reported in Table 2. Considering the ISM conditions, they are all too high (the lowest one being 9.3 kJ mol\({}^{-1}\)) for the reactions to proceed. In the following, we discuss the differences with respect to the barriers obtained in the gas phase.
_Ketene + NH\({}_{2}\):_ Comparison of the solid-state barriers with the gas-phase ones gives differences that are less than 4 kJ mol\({}^{-1}\). Since the energy barriers for both cases are high (insurmountable in the ISM conditions), these variations have no practical effects so that CO ice behaves as an inert surface.
_Ketene + OH:_ For OH, there is an increase of 6 kJ mol\({}^{-1}\) on CO, a non-negligible variation since the gas-phase barriers are very low. The increase can be attributed to the interactions between OH and the CO ice (absent in the gas phase), which need to be partly overcome to proceed with the reaction on the surface. Interestingly, for this case, the energy barrier is relatively small, which could be overcome classically by means of non-thermal mechanisms (as reported in works involving reactive OH radicals (see Garrod & Pauly, 2011; Ishibashi et al., 2021)), or could even proceed with the help of heavy atom quantum tunneling below the crossover temperature (see Castro & Karney, 2020; Meisner & Kastner, 2016). Moreover, it is worth mentioning that the calculated barrier arises from just one starting configuration of the reactants, which could well be lower in other surface reactive sites.
_Ketene + H:_ Here, the C2 attack appears to be the most feasible as it presents a barrier that is less than half of that of the C1 reaction, but it is still very high for the ISM conditions. However, it is worth noticing that, for reactions happening at low temperatures and involving light species such as H, tunneling effects can dominate and, therefore, considering only classical barrier heights can lead to wrong conclusions. This kinetic aspect is discussed in more detail in the next section.
Another crucial point for the on-grain reactivity is the species/ice interactions that can be established at the surfaces. On icy water grains, species like NH\({}_{2}\) and OH can form strong hydrogen bond (H-bond) interactions, inhibiting their diffusivity and, hence, their re
\begin{table}
\begin{tabular}{l c c} \hline \hline Species & Attack on C1 \(\Delta\)H\({}^{\ddagger}\)(0) & Attack on C2 \(\Delta\)H\({}^{\ddagger}\)(0) \\ & DLPNO-F12 & DLPNO-F12 \\ \hline H & 29.1 & 13.5 \\ NH\({}_{2}\) & 41.0 & 42.9 \\ OH & 10.2 & 9.3 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Computed ZPE-corrected energy barriers (\(\Delta\)H\({}^{\ddagger}\)(0)) for the reactions of ketene with H, NH\({}_{2}\) and OH on the CO cluster considering the two C atoms of ketene (C1 and C2). Values are reported in kJ mol\({}^{-1}\)
\begin{table}
\begin{tabular}{l c c c|c c c} \hline \hline \multicolumn{1}{l}{Reactions} & \multicolumn{3}{c}{\(\Delta\)H\({}_{react}\)} & \multicolumn{3}{c}{\(\Delta\)H\({}^{\ddagger}\)(0)} \\ \hline Attack on C1 & \(\omega\)B97x & CCSD(T) & DLPNO & \(\omega\)B97x & CCSD(T) & DLPNO \\ \hline Ketene + H & -147.0 & -147.6 & -150.1 & 36.3 & 30.6 & 31.1 \\ Ketene + N & -75.4 & -46.4 & -49.5 & 68.7 & 78.8 & 78.5 \\ Ketene + NH & -116.5 & -99.9 & -101.3 & 51.8 & 54.4 & 54.8 \\ Ketene + NH\({}_{2}\) & -169.1 & -163.8 & -163.1 & 37.4 & 37.2 & 38.5 \\ Ketene + O & -183.9 & -186.7 & -185.4 & 14.9 & 15.7 & 17.1 \\ Ketene + OH & -209.9 & -213.1 & -213.6 & 5.2 & 3.3 & 4.6 \\ \hline \hline Attack on C2 & & & & & & \\ \hline Ketene + H & -173.9 & -175.6 & -175.7 & 19.3 & 16.8 & 17.1 \\ Ketene + N & -50.1 & -27.4 & -28.2 & 62.7 & 74.8 & 74.1 \\ Ketene + NH & -71.9 & -60.7 & -60.4 & 59.0 & 63.9 & 63.9 \\ Ketene + NH\({}_{2}\) & -93.7 & -91.5 & -90.5 & 43.9 & 43.5 & 43.4 \\ Ketene + O & -121.3 & -115.3 & -115.7 & 5.8 & 9.8 & 9.2 \\ Ketene + OH & -117.5 & -123.1 & -122.9 & 4.1 & 3.1 & 3.4 \\ \hline \hline \end{tabular}
\end{table}
Table 1: ZPE corrected reaction energies (\(\Delta\)H\({}_{react}\)) and energy barriers (\(\Delta\)H\({}^{\ddagger}\)(0)) for every studied reaction, listed in kJ mol\({}^{-1}\) and computed with different levels of theory. \(\omega\)B97x, CCSD(T) and DLPNO stand for \(\omega\)B97x/TZVP, CCSD(T)-F12//\(\omega\)B97x/TZVP and DLPNO-F12//\(\omega\)B97x/TZVP level of theories, respectively.
activity via Langmuir-Hinshelwood. These interactions have been studied on water ice surfaces both experimentally and theoretically (Sameera et al., 2022, 2017; Tsuge and Watanabe, 2021; Enrique-Romero et al., 2019, 2022, 2021; Ferrero et al., 2020; Wakelam et al., 2017; Duflot et al., 2021). However, very little data are available in the literature relative to CO ices. To have an idea of the interactions of OH and NH\({}_{2}\) with CO-rich ices (different in nature to water ice), we used a similar strategy as in Lamberts et al. (2019) to calculate interaction energies: we sampled ten different binding sites around the CO cluster to calculate the corresponding \(\Delta\)H(0) adsorption energies (results shown in Table 3).
Although the binding site sampling is not exhaustive, we can compare our values with other computational works that calculated the interaction of these two species on water ice models. The difference in the adsorption energies is quite large, clearly indicating that the interaction on CO-rich ices is significantly weaker than on H\({}_{2}\)O-rich ones (particularly for NH\({}_{2}\)), as already found in previous works for different radicals (Lamberts et al., 2019). This is due to the fewer and weaker H-bonds that OH and NH\({}_{2}\) form on CO ice with respect to water ice. This fact is also connected with the mobility of these species on different icy surfaces. Even without calculating explicitly the diffusion energy barriers, it is noticeable that the diffusivity of these two radicals on CO-rich ices will be larger than that on water ices due to the weaker radical/surface interactions in the former. However, the binding energies of these species on CO ice are still significantly large so diffusion is expected to be
\begin{table}
\begin{tabular}{c c c} \hline \hline Species & CO ice & Water ice \\ \hline OH & -22.1\(\pm\) 3.9 & -43.2\(\pm\)16.1\({}^{a}\); \\ & & -44.4\({}^{b}\) \\ & & -32 – \(-\)41\({}^{c}\) \\ & & -26.9\(\pm\)11.5\({}^{d}\) \\ NH\({}_{2}\) & -6.5\(\pm\) 3.2 & -31 – \(-\)44\({}^{e}\) \\ & & -27.9\(\pm\)3.1\({}^{d}\) \\ \hline \hline \end{tabular} Note. – References: \({}^{a}\), Duflot et al. (2021); \({}^{b}\), Sameera et al. (2017); \({}^{c}\), Meisner et al. (2017); \({}^{d}\), Ferrero et al. (2020); \({}^{e}\), Enrique-Romero et al. (2019).
\end{table}
Table 3: Adsorption energies on CO- and H\({}_{2}\)O ices of NH\({}_{2}\) and OH: mean value and standard deviation of the adsorption energies \(\Delta\)H(0) on the CO cluster calculated in this work are reported (second column) and calculated data on different water ice models from other works (third column). When more than two values were present in the literature, mean and standard deviation have been calculated. All values are in kJ mol\({}^{-1}\).
Figure 2: Optimized structures of the reactant and product for the ketene + NH\({}_{2}\) reaction taken as an example. Geometry relaxed species are represented in a ball and stick mode, whereas transparent CO molecules are held fixed during geometry optimization
very slow at the ISM conditions, which in turn hamper diffusion-limited reactions like radical-radical couplings.
### Kinetic calculations and tunneling effects
At the cryogenic temperatures of the ISM and for reactions involving light atoms, like H, quantum tunneling cannot be overlooked. Thus, ketene hydrogenation considering both C1 and C2 attacks was also studied adopting the instanton theory (outlined in section 2.2.1), which calculates the most probable tunneling path connecting reactants and products. Regarding the attack on C1, the calculated T\({}_{c}\) is 230 K, whereas for C2 it is 157 K. Figure 3 reports the Arrhenius plots for these two H additions. As done in previous works (Meisner et al., 2017; Lamberts and Kastner, 2017; Lamberts et al., 2016), the reactions are considered as unimolecular because the diffusion of the two reactants is not considered and the rate constants measure the rate starting from a reactive complex. The calculated rate constants refined at CCSD(T)-F12/cc-pVTZ-F12//\(\omega\)B97X-D4/TZVP at 50 K are \(4.47\times 10^{4}\) s\({}^{-1}\) and \(1.74\times 10^{7}\) s\({}^{-1}\) for the C1 and C2 hydrogenations, respectively. Remarkably, the rate constants remain almost invariant at these temperatures, so we can assume similar values at 10 K. Therefore, both attacks are viable at ISM temperatures, with the attack on C2 being kinetically more favorable. As stated in section 2.2.1, these instanton calculations have been carried out in the gas phase, but the surface effects have been considered through the implicit surface approximation, which in this case holds because the CO surface acts as an inert substrate without significantly affecting the classical barriers.
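To put these rate constants in perspective, the short sketch below (an illustration, not a calculation from the paper) gives the kinetic branching between the two channels under the assumption that both start from the same H\(\cdots\)ketene reactive complex.

```python
# Minimal sketch, not from the paper: branching between the two H-addition channels,
# using the 50 K dual-level instanton rate constants quoted above and assuming both
# channels compete from the same reactive complex.
k_c1 = 4.47e4  # s^-1, H addition to C1
k_c2 = 1.74e7  # s^-1, H addition to C2 (acetyl radical channel)
print(f"fraction through C2: {k_c2 / (k_c1 + k_c2):.4f}")  # ~0.9974
```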
## 4 Astrophysical implications and concluding remarks
The calculations presented here show that, when atomic C lands on an icy CO surface, C=C=O forms, as in the gas phase (Papakondylis and Mavridis, 2019). However, once formed, it could very likely be hydrogenated into ketene by the addition of H, also landing on the grain surfaces. We then explored if ketene (CH\({}_{2}\)CO) could grow into larger molecules by reacting with simple and abundant radicals on the grain surfaces, namely H, OH and NH\({}_{2}\).
These reactions on the (non-polar) CO ice present energy barriers (\(\geq 9\) kJ/mol) that are relatively high for the ISM conditions. It seems clear that ketene hydrogenation can occur through H tunneling. Thus, acetyl radical (CH\({}_{3}\)CO) formation (through H addition to ketene) is the most kinetically favourable path. In contrast, the radical precursors of acetamide (CH\({}_{3}\)CONH\({}_{2}\)) and acetic acid (CH\({}_{3}\)COOH) (formed through NH\({}_{2}\) and OH addition, respectively) are, according to our results, not expected to form. However, local surface heating and other non-thermal mechanisms could be operative, thereby helping these reactions to occur, so that these synthetic paths cannot be discarded.
CH\({}_{3}\)CO can be in turn hydrogenated and produce acetaldehyde (CH\({}_{3}\)CHO) and/or CO + CH\({}_{4}\). Recent experiments indicate that the latter channel competes with the first one, with a branching ratio up to 80%, against naive expectations (Ibrahim et al., 2022). One can speculate that acetaldehyde can successively be hydrogenated into ethanol by other H additions. One could also suppose that the CH\({}_{3}\)CO radical could react with the OH and NH\({}_{2}\) radicals. However, our computed relatively large binding energies of OH and NH\({}_{2}\) on CO ice point towards a very low diffusivity, inhibiting, once again, the formation of CH\({}_{3}\)COOH and CH\({}_{3}\)CONH\({}_{2}\) via radical-radical coupling with CH\({}_{3}\)CO.
As mentioned in the Introduction, gaseous atomic C and frozen CO are simultaneously found in molecular clouds, probably in the photodissociation region (PDR) skin. The bottleneck to form ketene is probably the quantity of gaseous atomic C, since a substantial fraction of frozen CO, about 25% of the ice, is observed in diffuse clouds (with visual extinction \(\sim 3\) mag: Boogert et al., 2015). Estimating the amount of gaseous atomic C in regions where frozen CO also exists is not easy, but observations show that its abundance can be comparable to that of gaseous CO in giant molecular clouds (e.g. Plume et al., 1999), making the C=C=O hydrogenation a possible important source of frozen acetaldehyde and, perhaps, ethanol. It is worth emphasizing that in the interiors of the molecular clouds, where carbon is almost entirely trapped in CO, neutral carbon is not expected to be as abundant as in their skins. Therefore, the mechanism described in this work is mostly viable in the latter, which represents only a small fraction of the molecular clouds. That said, little is known of the composition of ices in the PDRs and whether surface chemistry, other than the formation of H\({}_{2}\), is efficient. Indirect proof that frozen CO is present and that it is hydrogenated is provided by the observed relatively abundant gaseous methanol in PDRs (e.g. Bouvier et al., 2020). In this respect, therefore, the C=C=O hydrogenation probably occurs in these regions and can lead to frozen acetaldehyde and, perhaps, ethanol.
Another possibility is that ketene formed in the gas phase is also frozen into the grain icy mantles. Astrochemical models predict a gaseous ketene abundance of about \(10^{-10}\)-\(10^{-8}\) while observations indicate \(10^{-10}\)-\(10^{-9}\) abundances (e.g., Bacmann et al., 2012; Jaber et al., 2014; Vastel et al., 2014). Assuming that the frozen ketene is all converted into acetaldehyde and/or ethanol,
it provides upper limits to their abundance of \(10^{-10}\)-\(10^{-9}\) with respect to H, namely about \(10^{-6}\)-\(10^{-5}\) with respect to frozen water.
In summary, it is possible that (frozen) ketene is formed in the PDR skins of the molecular clouds, where C and frozen CO may coexist, which then would trigger the formation of iced acetaldehyde and ethanol, possibly explaining the new observations of JWST (which need confirmation).
## 5 Acknowledgments
S.F. is deeply indebted to Gabriel Laude and Jeremy Richardson for their help and advice on instanton calculations. This project has received funding from the Marie Sklodowska-Curie for the project "Astro-Chemical Origins" (ACO), grant agreement No 811312 and within the European Union's Horizon 2020 research and innovation program from the European Research Council (ERC) for the projects "The Dawn of Organic Chemistry" (DOC), grant agreement No 741002 and "Quantum Chemistry on Interstellar Grains" (QUANTUMGRAIN), grant agreement No 865657.
|
2303.15949 | Derivations and KMS-Symmetric Quantum Markov Semigroups | We prove that the generator of the $L^2$ implementation of a KMS-symmetric
quantum Markov semigroup can be expressed as the square of a derivation with
values in a Hilbert bimodule, extending earlier results by Cipriani and
Sauvageot for tracially symmetric semigroups and the second-named author for
GNS-symmetric semigroups. This result hinges on the introduction of a new
completely positive map on the algebra of bounded operators on the GNS Hilbert
space. This transformation maps symmetric Markov operators to symmetric Markov
operators and is essential to obtain the required inner product on the Hilbert
bimodule. | Matthijs Vernooij, Melchior Wirth | 2023-03-28T13:02:58Z | http://arxiv.org/abs/2303.15949v2 | # Derivations and KMS-symmetric quantum Markov semigroups
###### Abstract.
We prove that the generator of the \(L^{2}\) implementation of a KMS-symmetric quantum Markov semigroup can be expressed as the square of a derivation with values in a Hilbert bimodule, extending earlier results by Cipriani and Sauvageot for tracially symmetric semigroups and the second-named author for GNS-symmetric semigroups. This result hinges on the introduction of a new completely positive map on the algebra of bounded operators on the GNS Hilbert space. This transformation maps symmetric Markov operators to symmetric Markov operators and is essential to obtain the required inner product on the Hilbert bimodule.
## 1. Introduction
Quantum Markov semigroups are a versatile tool that has found applications not only in quantum statistical mechanics, where they were originally introduced in the description of certain open quantum systems [1, 2, 3, 10], but also in various purely mathematical fields such as noncommutative harmonic analysis [1, 2], noncommutative probability [1, 13], noncommutative geometry [1, 2, 3] and the structure theory of von Neumann algebras [1, 2, 14].
One central question from the beginning was to describe the generators of quantum Markov semigroups. For quantum Markov semigroups acting on matrix algebras, a characterization of their generators was given by Lindblad [10] and Gorini-Kossakowski-Sudarshan [1] and later extended to generators of uniformly continuous quantum Markov semigroups on arbitrary von Neumann algebras by Christensen-Evans [1]. While partial results are known in particular for type I factors [1, 2, 3], a similarly explicit description of unbounded generators of quantum Markov semigroups on arbitrary von Neumann algebras seems out of reach.
Both for the modelling of open quantum systems and purely mathematical questions in noncommutative probability, operator algebra theory, etc., one is often not interested in arbitrary quantum Markov semigroups, but quantum Markov semigroups that are symmetric with respect to a reference state or weight. In quantum statistical mechanics, these describe open systems coupled to a heat bath in thermal equilibrium. From the mathematical standpoint, symmetry with respect to a reference state allows to extend the semigroups to symmetric semigroups on the GNS Hilbert space, which makes the powerful tools for self-adjoint Hilbert space operators available.
If the reference state or weight is a trace, there is an unambiguous notion of symmetry, called tracial symmetry. The study of tracially symmetric quantum Markov semigroups through their associated quadratic forms, so-called Dirichlet forms, was initiated by Albeverio and Hoegh-Krohn [1] and further developed by Lindsay and Davies [1, 2]. The (Hilbert space) generators of tracially symmetric quantum Markov semigroups have been characterized by Cipriani and Sauvageot [1] to be of the
form \(\delta^{*}\delta\) for a closable derivation \(\delta\) with values in a Hilbert bimodule. This was later extended by the second-named author to GNS-symmetric quantum Markov semigroups \((T_{t})\): for the associated Dirichlet form \(\mathcal{E}\) there exist a Hilbert bimodule \(\mathcal{H}\) and a closable derivation \(\delta\) defined on the Dirichlet algebra \(\mathfrak{A}_{\mathcal{E}}\) satisfying the product rule
\(\delta(ab)=a\delta(b)+\delta(a)b\) such that
\[\mathcal{E}(a,b)=\langle\delta(a),\,\delta(b)\rangle_{\mathcal{H}}\]
for \(a,b\in\mathfrak{A}_{\mathcal{E}}\). Moreover, \(\mathcal{H}\) carries an anti-unitary involution and a strongly continuous unitary group with certain compatibility conditions that reflect the commutation of \((T_{t})\) with the modular operator and modular conjugation.
It is then natural to ask whether this relation between \(L^{2}\) generators of quantum Markov semigroups and derivations can be extended to KMS-symmetric semigroups as KMS symmetry can be seen as the more natural assumption in some contexts. For one, every completely positive map can be decomposed as a linear combination of KMS-symmetric ones using the Accardi-Cecchini adjoint [1], while for GNS symmetry the commutation with the modular group poses an algebraic constraint. This makes KMS-symmetric quantum Markov semigroups more suitable for various applications, such as the characterization of the Haagerup property in terms of KMS-symmetric quantum Markov semigroups [13], while the same property for GNS-symmetric semigroups is more restrictive. But also in quantum statistical mechanics, irreversible open quantum systems are often modeled by quantum Markov semigroups that are only KMS-symmetric rather than GNS-symmetric, such as the heat-bath dynamics introduced in [12] for example.
The lack of commutation with the modular group poses a serious challenge. For example, many questions regarding noncommutative \(L^{p}\) spaces can be reduced to \(L^{p}\) spaces with respect to a trace by Haagerup's reduction method [11], but commutation with the modular group is necessary for maps to be compatible with this reduction procedure.
Even for KMS-symmetric quantum Markov semigroups on type I factors, when explicit representations of the generator are known [10, 1], it is not obvious if the generator can be expressed in the form \(\delta^{*}\delta\) for a derivation.
For these reasons it has not been clear if one should even expect a Cipriani-Sauvageot-type result for KMS-symmetric quantum Markov semigroups. In this article we show that this is indeed the case, not only on matrix algebras, but arbitrary von Neumann algebras. For generators of uniformly continuous quantum Markov semigroups, our main result is the following (Theorems 4.2, 4.7 in the main part). Here \(\mathcal{L}_{2}\) denotes the KMS implementation of \(\mathcal{L}\) on \(L^{2}(M)\).
**Theorem**.: _Let \((\Phi_{t})\) be a uniformly continuous KMS-symmetric quantum Markov semigroup on \(M\) and let \(\mathcal{L}\) denote its generator. There exists a correspondence \(\mathcal{H}\), an anti-linear involution \(\mathcal{J}:\mathcal{H}\to\mathcal{H}\) and a bounded operator \(\delta\colon L^{2}(M)\to\mathcal{H}\) satisfying_
1. \(\mathcal{J}(x\xi y)=y^{*}(\mathcal{J}\,\xi)x^{*}\) _for all_ \(x,y\in M\) _and_ \(\xi\in\mathcal{H}\)_,_
2. \(\delta(Ja)=\mathcal{J}\,\delta(a)\) _for all_ \(a\in L^{2}(M)\)_,_
3. \(\delta(ab)=\pi_{l}(a)\cdot\delta(b)+\delta(a)\cdot J\pi_{r}(b)^{*}J\) _for all_ \(a\in M\varphi^{1/2}\)_,_ \(b\in\varphi^{1/2}M\)_,_
4. \(\overline{\operatorname{lin}}\{\delta(a)x\mid a\in L^{2}(M),\,x\in M\}=\mathcal{H}\)__
_such that_
\[\mathcal{L}_{2}=\delta^{*}\delta.\]
_Moreover, there exists \(\xi\in\mathcal{H}\) such that_
\[\delta(a)=\pi_{l}(a)\xi-\xi(J\pi_{r}(a)^{*}J)\]
_for \(a\in M\varphi^{1/2}\cap\varphi^{1/2}M\)._
_Furthermore, a triple \((\mathcal{H},\mathcal{J},\delta)\) satisfying (a)-(d) is uniquely determined by \(\delta^{*}\delta\) up to isomorphism._
For KMS-symmetric quantum Markov semigroups that are not uniformly continuous we do not have a uniqueness result, but we can still prove existence in the following form (Theorems 5.2, 5.4 in the main part).
**Theorem**.: _Let \((\Phi_{t})\) be a KMS-symmetric quantum Markov semigroup on \(M\) with generator \(\mathcal{L}\). If \(a\in\operatorname{dom}(\mathcal{L}_{2}^{1/2})\cap M\varphi^{1/2}\) and \(b\in\operatorname{dom}(\mathcal{L}_{2}^{1/2})\cap\varphi^{1/2}M\), then \(ab\in\operatorname{dom}(\mathcal{L}_{2}^{1/2})\)._
_Moreover, there exists a Hilbert space \(\mathcal{H}\) with commuting left and right actions of \(M\), an anti-unitary involution \(\mathcal{J}:\mathcal{H}\to\mathcal{H}\) such that_
\[\mathcal{J}(x\xi y)=y^{*}(\mathcal{J}\xi)x^{*}\]
_for \(x,y\in M\) and \(\xi\in\mathcal{H}\), a closed operator \(\delta\colon\operatorname{dom}(\mathcal{L}_{2}^{1/2})\to\mathcal{H}\) such that \(\mathcal{J}\delta=\delta J\) and_
\[\delta(ab)=\pi_{l}(a)\cdot\delta(b)+\delta(a)\cdot J\pi_{r}(b)^{*}J\]
_for \(a\in\operatorname{dom}(\mathcal{L}_{2}^{1/2})\cap M\varphi^{1/2}\), \(b\in\operatorname{dom}(\mathcal{L}_{2}^{1/2})\cap\varphi^{1/2}M\), and_
\[\mathcal{L}_{2}=\delta^{*}\delta.\]
Establishing these results requires a fundamentally new tool in the form of a quantum channel on \(B(L^{2}(M))\), which we call the \(\mathcal{V}\)_-transform_. Formally, the \(\mathcal{V}\)-transform of \(T\in B(L^{2}(M))\) is the solution \(S\) of the equation
\[\frac{1}{2}(\Delta^{1/4}S\Delta^{-1/4}+\Delta^{-1/4}S\Delta^{1/4})=T\,,\]
where \(\Delta\) is the modular operator.
A remarkable fact about this map is that it maps positivity-preserving maps (with respect to the self-dual cone induced by \(\varphi\)) to positivity-preserving maps. This property is key for the existence of the Hilbert space \(\mathcal{H}\) in our main results.
Let us briefly summarize the outline of this article. To make the proof strategy transparent without the technical difficulties occurring for general von Neumann algebras, but also to make the main results more accessible for researchers in the quantum information theory community, we first develop the \(\mathcal{V}\)-transform and prove our main result for matrix algebras in Section 2. In Section 3 we define the \(\mathcal{V}\)-transform on general von Neumann algebras and establish some of its properties, in particular regarding positivity preservation. In Section 4 we prove our main results on existence and uniqueness of derivations associated with uniformly continuous KMS-symmetric quantum Markov semigroups. Finally, in Section 5 we show the existence of derivations associated with not necessarily uniformly continuous semigroups.
### Acknowledgments
The authors are grateful to Martijn Caspers for helpful comments on a preliminary version of this manuscript. M. V. was supported by the NWO Vidi grant VI.Vidi.192.018 'Non-commutative harmonic analysis and rigidity of operator algebras'. M. W. was funded by the Austrian Science Fund (FWF) under the Esprit Programme [ESP 156]. For the purpose of Open Access, the authors have applied a CC BY public copyright licence to any Author Accepted Manuscript (AAM) version arising from this submission.
## 2. Quantum Markov Semigroups and Derivations on Matrix Algebras
In this section we demonstrate the connection between KMS-symmetric quantum Markov semigroups and derivations in the case of matrix algebras. We first prove a finite-dimensional version of the main result of this article, which allows us to express generators of KMS-symmetric quantum Markov semigroups as squares of derivations in a suitable sense (Theorem 2.4). We then use the simple structure of bimodules over matrix algebras to give a more explicit expression for the quadratic form associated with the generator of a KMS-symmetric quantum Markov semigroup (Theorem 2.5). As in the general case treated in the next sections, the crucial technical tool is the \(\mathscr{V}\)-transform, which will be introduced in Subsection 2.1.
Let us start with some notation. We write \(M_{n}(\mathbb{C})\) for the algebra of \(n\times n\) matrices over the complex numbers, \(I_{n}\) for the identity matrix in \(M_{n}(\mathbb{C})\), and \(\mathrm{id}_{n}\) for the identity map from \(M_{n}(\mathbb{C})\) to itself. The norm \(\|\cdot\|\) always denotes the operator norm, either for elements of \(M_{n}(\mathbb{C})\) or for linear maps from \(M_{n}(\mathbb{C})\) to itself.
A linear map \(\Phi\colon M_{n}(\mathbb{C})\to M_{n}(\mathbb{C})\) is called
* _completely positive_ if, for all \(A_{1},\ldots,A_{n},B_{1},\ldots,B_{n}\in M_{n}(\mathbb{C})\), \[\sum_{j,k=1}^{n}B_{j}^{*}\Phi(A_{j}^{*}A_{k})B_{k}\geq 0;\]
* _conditionally completely negative_ if, for all \(A_{1},\ldots,A_{n},B_{1},\ldots,B_{n}\in M_{n}(\mathbb{C})\) with \(\sum_{j=1}^{n}A_{j}B_{j}=0\), \[\sum_{j,k=1}^{n}B_{j}^{*}\Phi(A_{j}^{*}A_{k})B_{k}\leq 0;\]
* _unital_ if \(\Phi(I_{n})=I_{n}\).
A _quantum Markov semigroup_ is a family \((\Phi_{t})_{t\geq 0}\) of unital completely positive maps on \(M_{n}(\mathbb{C})\) such that
* \(\Phi_{0}=\mathrm{id}_{n}\),
* \(\Phi_{s}\Phi_{t}=\Phi_{s+t}\) for all \(s,t\geq 0\),
* \(\lim_{t\to 0}\|\Phi_{t}-\mathrm{id}_{n}\|=0\).
If \((\Phi_{t})\) is a quantum Markov semigroup on \(M_{n}(\mathbb{C})\), then the limit
\[\mathscr{L}=\lim_{t\to 0}\frac{1}{t}(\mathrm{id}_{n}-\Phi_{t})\]
exists and is called the _generator_ of \((\Phi_{t})\). It is the unique linear operator on \(M_{n}(\mathbb{C})\) such that \(e^{-t\mathscr{L}}=\Phi_{t}\) for all \(t\geq 0\).
Now fix a density matrix \(\rho\in M_{n}(\mathbb{C})\), that is, a positive matrix with trace \(1\), and assume that \(\rho\) is invertible. The KMS inner product induced by \(\rho\) is defined as
\[\langle\cdot,\cdot\rangle_{\rho}\colon M_{n}(\mathbb{C})\times M_{n}(\mathbb{ C})\to\mathbb{C},\,(A,B)\mapsto\mathrm{tr}(A^{*}\rho^{1/2}B\rho^{1/2}).\]
If \(\Phi\colon M_{n}(\mathbb{C})\to M_{n}(\mathbb{C})\) is a linear map, we write \(\Phi^{\dagger}\) for its adjoint with respect to \(\langle\cdot,\cdot\rangle_{\rho}\) and we say that \(\Phi\) is _KMS-symmetric_ if \(\Phi^{\dagger}=\Phi\).
A quantum Markov semigroup \((\Phi_{t})\) is called _KMS-symmetric_ if \(\Phi_{t}\) is KMS-symmetric for all \(t\geq 0\). Equivalently, \((\Phi_{t})\) is KMS-symmetric if and only if its generator is KMS-symmetric.
The _modular group_ (or rather its analytic continuation) of \(\rho\) is the family \((\sigma_{z})_{z\in\mathbb{C}}\) of algebra homomorphisms on \(M_{n}(\mathbb{C})\) defined by
\[\sigma_{z}(A)=\rho^{iz}A\rho^{-iz}\]
for \(A\in M_{n}(\mathbb{C})\) and \(z\in\mathbb{C}\). Note that \(\sigma_{z}^{\dagger}=\sigma_{-\bar{z}}\); in particular, \(\sigma_{t}^{\dagger}=\sigma_{-t}\) for \(t\in\mathbb{R}\), while \(\sigma_{it}^{\dagger}=\sigma_{it}\).
If a KMS-symmetric operator on \(M_{n}(\mathbb{C})\) commutes with the modular group, then it is called GNS-symmetric (this is equivalent to the usual definition of GNS symmetry by [10, Lemma 2.5]).
According to [1, Theorem 4.4], the generator \(\mathscr{L}\) of a KMS-symmetric quantum Markov semigroup is of the form
\[\mathscr{L}(A)=(1+\sigma_{-i/2})^{-1}(\Psi(I_{n}))A+A(1+\sigma_{i/2})^{-1}( \Psi(I_{n}))-\Psi(A)\]
for some KMS-symmetric completely positive map \(\Psi\colon M_{n}(\mathbb{C})\to M_{n}(\mathbb{C})\).
The main goal of this section is to show that the sesquilinear form associated with \(\mathscr{L}\) can be written as
\[\langle\mathscr{L}(A),B\rangle_{\rho}=\sum_{j=1}^{N}\langle[V_{j},A],[V_{j},B ]\rangle_{\rho}\]
with matrices \(V_{1},\ldots,V_{N}\in M_{n}(\mathbb{C})\).
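For orientation, the right-hand side of this identity is straightforward to evaluate numerically. The sketch below (an illustration, not taken from the paper) computes the commutator form for given matrices \(V_{j}\) and an invertible density matrix \(\rho\); by the theorem proved at the end of this section, a suitable choice of the \(V_{j}\) makes this form coincide with \(\langle\mathscr{L}(A),B\rangle_{\rho}\).

```python
# Minimal sketch, not from the paper: the KMS inner product and the commutator
# (Dirichlet) form sum_j <[V_j, A], [V_j, B]>_rho for given matrices V_j.
import numpy as np
from scipy.linalg import sqrtm

def kms_inner(A, B, rho):
    """<A, B>_rho = tr(A^* rho^{1/2} B rho^{1/2})."""
    r = sqrtm(rho)
    return np.trace(A.conj().T @ r @ B @ r)

def commutator_form(A, B, Vs, rho):
    comm = lambda V, X: V @ X - X @ V
    return sum(kms_inner(comm(V, A), comm(V, B), rho) for V in Vs)
```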
### The \(\mathscr{V}\)-Transform
We write \(B(M_{n}(\mathbb{C}))\) for the space of all linear maps from \(M_{n}(\mathbb{C})\) to itself. This space is generated by left and right multiplication operators in the following sense. For \(A\in M_{n}(\mathbb{C})\) let
\[\mathbb{L}_{A},\mathbb{R}_{A}\colon M_{n}(\mathbb{C})\to M_{n}(\mathbb{C}),\, \mathbb{L}_{A}(X)=AX,\,\mathbb{R}_{A}(X)=XA.\]
By [10, Lemma A.1], the linear span of \(\{\mathbb{L}_{A}\mathbb{R}_{B}\mid A,B\in M_{n}(\mathbb{C})\}\) is \(B(M_{n}(\mathbb{C}))\).
The \(\mathscr{V}\)-transform is a linear map on \(B(M_{n}(\mathbb{C}))\), which is most conveniently defined through its inverse. Let
\[\mathscr{W}\colon B(M_{n}(\mathbb{C}))\to B(M_{n}(\mathbb{C})),\,\Phi \mapsto\frac{1}{2}(\sigma_{i/4}\Phi\sigma_{-i/4}+\sigma_{-i/4}\Phi\sigma_{i/4 }).\]
In particular, if \(\Phi=\mathbb{L}_{A}\mathbb{R}_{B}\), then
\[\mathscr{W}(\Phi)=\frac{1}{2}(\mathbb{L}_{\sigma_{i/4}(A)}\mathbb{R}_{\sigma_ {i/4}(B)}+\mathbb{L}_{\sigma_{-i/4}(A)}\mathbb{R}_{\sigma_{-i/4}(B)}).\]
**Proposition 2.1**.: _The map \(\mathscr{W}\) is invertible with inverse given by_
\[\mathscr{W}^{-1}(\Phi)=2\int_{0}^{\infty}\sigma_{-i/4}e^{-r\sigma_{-i/2}}\Phi \sigma_{-i/4}e^{-r\sigma_{-i/2}}\,dr\]
_for \(\Phi\in B(M_{n}(\mathbb{C}))\)._
_In particular, if \(A,B\in M_{n}(\mathbb{C})\), then_
\[\mathscr{W}^{-1}(\mathbb{L}_{A}\mathbb{R}_{B})=2\int_{0}^{\infty}\mathbb{L}_{\sigma_{i/4}(e^{-r\sigma_{i/2}}(A))}\mathbb{R}_{\sigma_{-i/4}(e^{-r\sigma_{-i/2}}(B))}\,dr.\]
Proof.: Since \(\sigma_{-i/2}\) is an invertible operator on \(M_{n}(\mathbb{C})\) and \(\operatorname{tr}(\sigma_{-i/2}(A)^{\ast}A)\geq 0\) for all \(A\in M_{n}(\mathbb{C})\), the spectrum of \(\sigma_{-i/2}\) consists of strictly positive numbers. Let \(\lambda\) denote the smallest eigenvalue of \(\sigma_{-i/2}\). By the spectral theorem,
\[\|e^{-r\sigma_{-i/2}}\|\leq e^{-\lambda r}\]
for all \(r\geq 0\). It follows that for \(\Phi\in B(M_{n}(\mathbb{C}))\) we have
\[\|\sigma_{-i/4}e^{-r\sigma_{-i/2}}\Phi\sigma_{-i/4}e^{-r\sigma_{-i/2}}\|\leq e^{ -2\lambda r}\|\sigma_{-i/4}\|^{2}\|\Phi\|.\]
Therefore the integral
\[\int_{0}^{\infty}\sigma_{-i/4}e^{-r\sigma_{-i/2}}\Phi\sigma_{-i/4}e^{-r\sigma_ {-i/2}}\,dr\]
converges absolutely.
Moreover,
\[\mathcal{W}\left(2\int_{0}^{\infty}\sigma_{-i/4}e^{-r\sigma_{-i/2 }}\Phi\sigma_{-i/4}e^{-r\sigma_{-i/2}}\,dr\right)\] \[\qquad=\int_{0}^{\infty}e^{-r\sigma_{-i/2}}(\sigma_{-i/2}\Phi+ \Phi\sigma_{-i/2})e^{-r\sigma_{-i/2}}\,dr\] \[\qquad=-\int_{0}^{\infty}\frac{d}{dr}(e^{-r\sigma_{-i/2}}\Phi e^{ -r\sigma_{-i/2}})\,dr\] \[\qquad=\Phi.\]
Thus \(\mathcal{W}\) is invertible and the claimed integral expression for the inverse holds. Similar arguments yield the integral formula for \(\mathcal{W}^{-1}(\mathbb{L}_{A}\mathbb{R}_{B})\).
**Definition 2.2**.: We call the inverse of \(\mathcal{W}\) the \(\mathcal{V}\)-transform and denote it by \(\mathcal{V}\). For \(\Phi\in B(M_{n}(\mathbb{C}))\) we also write \(\tilde{\Phi}\) for \(\mathcal{V}(\Phi)\).
**Lemma 2.3**.: _The \(\mathcal{V}\)-transform is a bijective linear map on \(B(M_{n}(\mathbb{C}))\) with the following properties._
1. _If_ \(\Phi\in B(M_{n}(\mathbb{C}))\) _is KMS-symmetric, then_ \(\mathcal{V}(\Phi)\) _is KMS-symmetric._
2. _If_ \(\Phi\in B(M_{n}(\mathbb{C}))\) _is completely positive, then_ \(\mathcal{V}(\Phi)\) _is completely positive._
3. \(\mathcal{V}(\mathrm{id}_{n})=\mathrm{id}_{n}\)_._
Proof.:
1. Let \(\Phi\in B(M_{n}(\mathbb{C}))\). By Proposition 2.1 we have \[\mathcal{V}(\Phi)=2\int_{0}^{\infty}\sigma_{-i/4}e^{-r\sigma_{-i/2}}\Phi\sigma_ {-i/4}e^{-r\sigma_{-i/2}}\,dr.\] Since \(\sigma_{-i/4}\) is KMS-symmetric, so is \(e^{-r\sigma_{-i/2}}\). Thus the KMS adjoint of \(\mathcal{V}(\Phi)\) satisfies \[\mathcal{V}(\Phi)^{\dagger}=2\int_{0}^{\infty}\sigma_{-i/4}e^{-r\sigma_{-i/2} }\Phi^{\dagger}\sigma_{-i/4}e^{-r\sigma_{-i/2}}\,dr.\] In particular, if \(\Phi\) is KMS-symmetric, so is \(\mathcal{V}(\Phi)\).
2. If \(\Phi\) is completely positive, by Kraus' theorem there exist \(V_{1},\dots,V_{N}\in M_{n}(\mathbb{C})\) such that \[\Phi=\sum_{j=1}^{N}\mathbb{L}_{V_{j}^{*}}\mathbb{R}_{V_{j}}.\] From Proposition 2.1 and the identity \(\sigma_{z}(a)^{*}=\sigma_{\overline{z}}(a^{*})\) we deduce \[\mathcal{V}(\Phi)=2\sum_{j=1}^{N}\int_{0}^{\infty}\mathbb{L}_{\sigma_{-i/4}(e^{-r\sigma_{-i/2}}(V_{j}))^{*}}\mathbb{R}_{\sigma_{-i/4}(e^{-r\sigma_{-i/2}}(V_{j}))}\,dr.\] Since maps of the form \(\mathbb{L}_{A^{*}}\mathbb{R}_{A}\) are completely positive and positive linear combinations and limits of completely positive maps are again completely positive, it follows that \(\mathcal{V}(\Phi)\) is completely positive.
3. The identity \(\mathcal{W}(\mathrm{id}_{n})=\mathrm{id}_{n}\) is immediate from the definition, from which \(\mathcal{V}(\mathrm{id}_{n})=\mathrm{id}_{n}\) follows directly.
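To make the \(\mathscr{V}\)-transform concrete, here is a minimal numerical sketch (an illustration, not from the paper): it represents \(\sigma_{\pm i/4}\) and \(\mathscr{W}\) as matrices acting on vectorized \(n\times n\) matrices and obtains \(\mathscr{V}(\Phi)\) by solving the linear equation \(\mathscr{W}(S)=\Phi\) directly, instead of using the integral formula of Proposition 2.1; the final assertion checks Lemma 2.3(iii) numerically.

```python
# Minimal sketch, not from the paper: the V-transform on B(M_n(C)) via vectorization.
import numpy as np
from scipy.linalg import fractional_matrix_power

def superop(A, B):
    """Matrix of X -> A @ X @ B acting on row-major vectorized n x n matrices."""
    return np.kron(A, B.T)

def v_transform(Phi, rho):
    """Solve (1/2)(sigma_{i/4} S sigma_{-i/4} + sigma_{-i/4} S sigma_{i/4}) = Phi for S,
    where Phi and S are superoperators given as n^2 x n^2 matrices (row-major convention)."""
    n = rho.shape[0]
    r4 = fractional_matrix_power(rho, 0.25)
    r4i = fractional_matrix_power(rho, -0.25)
    S_p = superop(r4i, r4)   # sigma_{i/4}:  A -> rho^{-1/4} A rho^{1/4}
    S_m = superop(r4, r4i)   # sigma_{-i/4}: A -> rho^{1/4} A rho^{-1/4}
    # W sandwiches the superoperator S between sigma_{+-i/4}; vectorize once more and solve.
    W = 0.5 * (np.kron(S_p, S_m.T) + np.kron(S_m, S_p.T))
    return np.linalg.solve(W, Phi.reshape(-1)).reshape(n * n, n * n)

rho = np.diag([0.7, 0.3])                      # an invertible density matrix on C^2
Id = np.eye(4)                                 # the identity channel on M_2(C)
assert np.allclose(v_transform(Id, rho), Id)   # Lemma 2.3(iii): V(id) = id
```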
### Derivations for KMS-Symmetric Markov Generators on Matrix Algebras
We are now in the position to prove the existence of a (twisted) derivation that implements the Dirichlet form associated with a KMS-symmetric quantum Markov semigroup on \(M_{n}(\mathbb{C})\). We first present an abstract version that will later be generalized to quantum Markov semigroups on arbitrary von Neumann algebras. A more explicit version tailored for matrix algebras will be discussed below.
**Theorem 2.4**.: _Let \((\Phi_{t})\) be a KMS-symmetric quantum Markov semigroup on \(M_{n}(\mathbb{C})\) and let \(\mathcal{L}\) denote its generator. There exists a Hilbert space \(\mathcal{H}\), a unital \(*\)-homomorphism \(\pi_{l}\colon M_{n}(\mathbb{C})\to B(\mathcal{H})\), a unital \(*\)-antihomomorphism \(\pi_{r}\colon M_{n}(\mathbb{C})\to B(\mathcal{H})\), an anti-linear isometric involution \(\mathcal{J}:\mathcal{H}\to\mathcal{H}\) and a linear map \(\delta\colon M_{n}(\mathbb{C})\to\mathcal{H}\) satisfying_
1. \(\pi_{l}(A)\pi_{r}(B)=\pi_{r}(B)\pi_{l}(A)\) _for all_ \(A\)_,_ \(B\in M_{n}(\mathbb{C})\)_,_
2. \(\mathcal{J}(\pi_{l}(A)\pi_{r}(B)\xi)=\pi_{l}(B)^{*}\pi_{r}(A)^{*}\mathcal{J}\xi\) _for all_ \(A\)_,_ \(B\in M_{n}(\mathbb{C})\) _and_ \(\xi\in\mathcal{H}\)_,_
3. \(\delta(A^{*})=\mathcal{J}\delta(A)\) _for all_ \(A\in M_{n}(\mathbb{C})\)_,_
4. \(\delta(AB)=\pi_{l}(\sigma_{-i/4}(A))\delta(B)+\pi_{r}(\sigma_{i/4}(B))\delta(A)\) _and_
5. \(\mathcal{H}=\lim\{\pi_{l}(A)\delta(B)\}|A,B\in M_{n}\mathbb{C}\}\)__
_such that_
\[\langle A,\mathcal{L}(B)\rangle_{\rho}=\langle\delta(A),\delta(B)\rangle_{ \mathcal{H}} \tag{1}\]
_for all \(A,B\in M_{n}(\mathbb{C})\)._
Proof.: First, we claim that \(\tilde{\mathcal{L}}\) is a KMS-symmetric conditionally completely negative map. The KMS-symmetry follows from Lemma 2.3(i). By Lemma 2.3(ii) \(\tilde{\Phi}_{t}\) is completely positive for all \(t\geq 0\). Since
\[\tilde{\mathcal{L}}=\lim_{t\to 0}\frac{1}{t}(\mathrm{id}_{n}-\tilde{\Phi}_{t})\]
by Lemma 2.3(iii), it follows from the definitions that \(\tilde{\mathcal{L}}\) is conditionally completely negative, proving the claim. We also observe that
\[\tilde{\mathcal{L}}(I_{n})=2\int_{0}^{\infty}\sigma_{-i/4}e^{-r\sigma_{-i/2} }(\mathcal{L}(\sigma_{-i/4}e^{-r\sigma_{-i/2}}(I_{n})))\,dr=0\]
by Proposition 2.1.
Next, we define a sesquilinear form \(\langle\cdot,\cdot\rangle_{\mathcal{H}}\) on the algebraic tensor product \(M_{n}(\mathbb{C})\odot M_{n}(\mathbb{C})\) by
\[\langle A_{1}\otimes B_{1},A_{2}\otimes B_{2}\rangle_{\mathcal{H}}=-\frac{1}{2 }\mathrm{tr}(B_{1}^{*}\rho^{1/2}\tilde{\mathcal{L}}(A_{1}^{*}A_{2})\rho^{1/2}B _{2}).\]
Now consider the subspace
\[N=\left\{\sum_{j}A_{j}\otimes B_{j}:\sum_{j}A_{j}\sigma_{-i/2}(B_{j})=0\right\}.\]
Because \(\tilde{\mathcal{L}}\) is completely conditionally negative, we see that for any \(\sum_{j}A_{j}\otimes B_{j}\in N\) we have
\[\sum_{jk}\langle A_{j}\otimes B_{j},A_{k}\otimes B_{k}\rangle_{ \mathcal{H}} =-\frac{1}{2}\sum_{jk}\operatorname{tr}(B_{j}^{*}\rho^{1/2}\tilde{ \mathcal{L}}(A_{j}^{*}A_{k})\rho^{1/2}B_{k})\] \[=-\frac{1}{2}\sum_{jk}\operatorname{tr}(\rho^{1/2}\sigma_{-i/2}( B_{j})^{*}\tilde{\mathcal{L}}(A_{j}^{*}A_{k})\sigma_{-i/2}(B_{k})\rho^{1/2})\] \[\geq 0.\]
Therefore, this sesquilinear form is positive semidefinite on \(N\). Now let
\[\mathcal{H}=N/\{u\in N|\langle u,u\rangle_{\mathcal{H}}=0\}.\]
Then \(\langle\cdot,\cdot\rangle\) induces an inner product on \(\mathcal{H}\), which turns \(\mathcal{H}\) into a Hilbert space (as \(\mathcal{H}\) is finite-dimensional). We write \(\sum_{j}A_{j}\otimes_{\mathcal{H}}B_{j}\) for the image of \(\sum_{j}A_{j}\otimes B_{j}\) in \(\mathcal{H}\) under the quotient map.
Define \(\pi_{l}\) and \(\pi_{r}\) by
\[\pi_{l}(X)\sum_{j}A_{j}\otimes_{\mathcal{H}}B_{j}=\sum_{j}XA_{j}\otimes_{ \mathcal{H}}B_{j},\,\pi_{r}(X)\sum_{j}A_{j}\otimes_{\mathcal{H}}B_{j}=\sum_{j }A_{j}\otimes_{\mathcal{H}}B_{j}X\]
for \(X\in M_{n}(\mathbb{C})\) and \(\sum_{j}A_{j}\otimes B_{j}\in N\). These maps are well defined. Indeed, they preserve \(N\), and for all \(u\in N\) with \(\langle u,u\rangle_{\mathcal{H}}=0\) we have
\[\langle\pi_{l}(X)\pi_{r}(Y)u,\pi_{l}(X)\pi_{r}(Y)u\rangle=\langle\pi_{l}(X^{*} X)\pi_{r}(YY^{*})u,u\rangle\leq 0\]
by Cauchy-Schwarz. From the definitions of \(\pi_{l}\), \(\pi_{r}\) and \(\langle\cdot,\cdot\rangle_{\mathcal{H}}\) we now conclude that \(\pi_{l}\) is a unital \(*\)-homomorphism and \(\pi_{r}\) is a unital \(*\)-antihomomorphism.
Subsequently, we will define the anti-linear isometric involution \(\mathcal{J}\) on \(\mathcal{H}\). Consider the map
\[A\otimes B\mapsto-B^{*}\otimes A^{*}\]
from \(M_{n}(\mathbb{C})\odot M_{n}(\mathbb{C})\) to itself. This is an isometry since \(\tilde{\mathcal{L}}\) is KMS-symmetric. Moreover, it preserves \(N\). Consequently, it acts in a well-defined manner on the equivalence classes of \(\mathcal{H}\), and we call this map \(\mathcal{J}\). It is clear that \(\mathcal{J}\) is an anti-linear involution.
Lastly, we define the map \(\delta:M_{n}(\mathbb{C})\to\mathcal{H}\) by
\[\delta(A)=\sigma_{-i/4}(A)\otimes_{\mathcal{H}}I_{n}-I_{n}\otimes_{\mathcal{H}}\sigma_{i/4}(A).\]
With this definition properties (i)-(iv) are immediate from the definitions. For property (v) note that
\[\sum_{j}A_{j}\otimes_{\mathcal{H}}B_{j}=\sum_{j}A_{j}\sigma_{-i/2}(B_{j})\otimes_{\mathcal{H}}I_{n}-\sum_{j}\pi_{l}(A_{j})\delta(\sigma_{-i/4}(B_{j}))=-\sum_{j}\pi_{l}(A_{j})\delta(\sigma_{-i/4}(B_{j}))\]
for any \(\sum_{j}A_{j}\otimes_{\mathcal{H}}B_{j}\in\mathcal{H}\).
To complete the proof of the theorem, we need to show that (1) holds. Let \(A,B\in M_{n}(\mathbb{C})\). Then we have
\[\langle\delta(A),\delta(B)\rangle_{\mathcal{H}} =\langle\sigma_{-i/4}(A)\otimes_{\mathcal{H}}I_{n}-I_{n}\otimes_{\mathcal{H}}\sigma_{i/4}(A),\,\sigma_{-i/4}(B)\otimes_{\mathcal{H}}I_{n}-I_{n}\otimes_{\mathcal{H}}\sigma_{i/4}(B)\rangle\] \[= \frac{1}{2}\Big{(}\operatorname{tr}(\sigma_{-i/4}(A^{*})\rho^{1/2}\tilde{\mathcal{L}}(\sigma_{-i/4}(B))\rho^{1/2})+\operatorname{tr}(\rho^{1/2}\tilde{\mathcal{L}}(\sigma_{i/4}(A^{*}))\rho^{1/2}\sigma_{i/4}(B)) \tag{2}\] \[\qquad-\operatorname{tr}(\rho^{1/2}\tilde{\mathcal{L}}(\sigma_{i/4}(A^{*})\sigma_{-i/4}(B))\rho^{1/2})-\operatorname{tr}(\sigma_{-i/4}(A^{*})\rho^{1/2}\tilde{\mathcal{L}}(I_{n})\rho^{1/2}\sigma_{i/4}(B))\Big{)}. \tag{3}\]
The two terms in line (3) are zero because \(\tilde{\mathcal{L}}\) is KMS-symmetric and \(\tilde{\mathcal{L}}(I_{n})=0\). For the terms in line (2) we use the KMS-symmetry of \(\tilde{\mathcal{L}}\) and the fact that \(\operatorname{tr}(\sigma_{it}(C)D)=\operatorname{tr}(C\sigma_{-it}(D))\) for all \(t\in\mathbb{R}\) and \(C,D\in M_{n}(\mathbb{C})\) to conclude that
\[\langle\delta(A),\delta(B)\rangle_{\mathcal{H}} =\frac{1}{2}\operatorname{tr}(A^{*}\rho^{1/2}\sigma_{i/4}(\tilde{ \mathcal{L}}(\sigma_{-i/4}(B)))\rho^{1/2})+\frac{1}{2}\operatorname{tr}(\rho^{ 1/2}A^{*}\rho^{1/2}\sigma_{-i/4}(\tilde{\mathcal{L}}(\sigma_{i/4}(B))))\] \[=\operatorname{tr}(A^{*}\rho^{1/2}\mathcal{W}(\tilde{\mathcal{L} })(B)\rho^{1/2})\] \[=\langle A,\mathcal{L}(B)\rangle_{\rho},\]
as desired.
As a consequence of the previous result, we get a more explicit expression for the quadratic form associated with the generator of a KMS-symmetric quantum Markov semigroup on \(M_{n}(\mathbb{C})\). An analogous expression can be found in [13, Eq. (5.3)], [13, Prop. 2.5] for the special case of GNS-symmetric quantum Markov semigroups.
**Theorem 2.5**.: _If \((\Phi_{t})\) is a KMS-symmetric quantum Markov semigroup on \(M_{n}(\mathbb{C})\) with generator \(\mathcal{L}\), then there exist matrices \(V_{1},\ldots,V_{N}\in M_{n}(\mathbb{C})\) such that \(\{V_{j}\}_{j=1}^{N}=\{V_{j}^{*}\}_{j=1}^{N}\) and_
\[\langle A,\mathcal{L}(B)\rangle_{\rho}=\sum_{j=1}^{N}\langle[V_{j},A],[V_{j},B ]\rangle_{\rho}\]
_for all \(A,B\in M_{n}(\mathbb{C})\)._
Proof.: Let \(\mathcal{H}\), \(\pi_{l}\), \(\pi_{r}\), \(\mathcal{J}\) and \(\delta\) be as in the last Theorem. The map \(\pi_{l}\otimes\pi_{r}\) is a unital representation of \(M_{n}(\mathbb{C})\otimes M_{n}(\mathbb{C})^{\mathrm{op}}\). It follows from the representation theory of matrix algebras [1, Proposition IV.1.2.2.] that there exists an auxiliary Hilbert space \(H\) such that \(\mathcal{H}\cong M_{n}(\mathbb{C})\otimes H\), where \(M_{n}(\mathbb{C})\) is endowed with the Hilbert-Schmidt inner product, and
\[\pi_{l}(A)(B\otimes\xi) =AB\otimes\xi,\] \[\pi_{r}(A)(B\otimes\xi) =BA\otimes\xi\]
for \(A,B\in M_{n}(\mathbb{C})\) and \(\xi\in H\) under this identification.
Moreover, since \(\mathcal{H}\) is finite-dimensional by property (v), the space \(H\) is finite-dimensional, say \(H=\mathbb{C}^{N}\).
Thus \(\delta(A)=\sum_{j=1}^{N}\delta_{j}(A)\otimes e_{j}\) with the canonical orthonormal basis \((e_{j})\) on \(\mathbb{C}^{N}\). It follows from property (iv) that
\[\rho^{-1/4}\delta_{j}(AB)\rho^{-1/4}=A\rho^{-1/4}\delta_{j}(B)\rho^{-1/4}+\rho^ {-1/4}\delta_{j}(A)\rho^{-1/4}B\]
for \(A,B\in M_{n}(\mathbb{C})\). In other words, \(A\mapsto\rho^{-1/4}\delta_{j}(A)\rho^{-1/4}\) is a derivation. By the derivation theorem [1, Theorem 9] there exists \(V_{j}\in M_{n}(\mathbb{C})\) such that \(\delta_{j}(A)=\rho^{1/4}[V_{j},A]\rho^{1/4}\).
We conclude that
\[\langle A,\mathcal{L}(B)\rangle_{\rho}=\langle\delta(A),\delta(B)\rangle_{ \mathcal{H}}=\sum_{j=1}^{N}\operatorname{tr}(\delta_{j}(A)^{*}\delta_{j}(B))= \sum_{j=1}^{N}\langle[V_{j},A],[V_{j},B]\rangle_{\rho}.\]
Since \(\langle[V_{j},A],[V_{j},B]\rangle_{\rho}=\langle[V_{j}^{*},B^{*}],[V_{j}^{*},A^{*} ]\rangle_{\rho}\), we have
\[\langle\delta(A),\delta(B)\rangle_{\mathcal{H}}=\langle\mathcal{J}\delta(B), \mathcal{J}\delta(A)\rangle_{\mathcal{H}}=\sum_{j=1}^{N}\langle[V_{j},B^{*}],[ V_{j},A^{*}]\rangle_{\rho}=\sum_{j=1}^{N}\langle[V_{j}^{*},A],[V_{j}^{*},B]\rangle_{\rho}.\]
This shows that \(V_{1},\ldots,V_{N}\) can be chosen such that \(\{V_{j}\}_{j=1}^{N}=\{V_{j}^{*}\}_{j=1}^{N}\).
_Remark 2.6_.: This theorem can also be proven without the use of Theorem 2.4. In the appendix we include a proof of Theorem 2.5 using the \(\mathcal{V}\)-transform and the structure of generators of KMS-symmetric quantum Markov semigroups described in [1, Theorem 4.4].
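_Example_.: For orientation, here is a minimal instance of Theorem 2.5 (a special case chosen purely for illustration): take the tracial state, \(\rho=\frac{1}{n}I_{n}\), and a single self-adjoint matrix \(V=V^{*}\in M_{n}(\mathbb{C})\). Then \(\langle A,B\rangle_{\rho}=\frac{1}{n}\operatorname{tr}(A^{*}B)\) and

\[\langle[V,A],[V,B]\rangle_{\rho}=\frac{1}{n}\operatorname{tr}\big((A^{*}V-VA^{*})(VB-BV)\big)=\frac{1}{n}\operatorname{tr}\big(A^{*}[V,[V,B]]\big),\]

so the double commutator \(\mathcal{L}(B)=[V,[V,B]]\) is of the form appearing in Theorem 2.5 with \(N=1\). In an eigenbasis of \(V\) with eigenvalues \(v_{1},\ldots,v_{n}\), the generated semigroup multiplies the matrix entries by \(e^{-t(v_{j}-v_{k})^{2}}\); it fixes the diagonal entries and damps the off-diagonal ones.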
## 3. The \(\mathcal{V}\)-Transform
As we have seen in Section 2 in the case of matrix algebras, the key ingredient to construct the derivation associated with a KMS-symmetric quantum Markov semigroup is the \(\mathcal{V}\)-transform. This remains the case for semigroups on general von Neumann algebras, but the definition and properties of the \(\mathcal{V}\)-transform become much more delicate as it involves in general unbounded operators like the analytic generator of the modular group. In particular the fact that the \(\mathcal{V}\)-transform preserves completely positive maps, which is crucial for defining an inner product, requires new arguments in this general setting.
For technical convenience, we first define the \(\mathcal{V}\)-transform for bounded operators on the Hilbert space \(L^{2}(M)\), before we transfer it to KMS-symmetric unital completely positive maps on \(M\).
Throughout this section we fix a \(\sigma\)-finite von Neumann algebra \(M\) and a faithful normal state \(\varphi\) on \(M\). We also write \(\varphi\) for the element in \(L^{1}(M)\) representing \(\varphi\) so that \(\varphi^{1/2}\) is the unique positive vector in \(L^{2}(M)\) representing \(\varphi\), \(\varphi^{it}\cdot\varphi^{-it}\) is the modular group, etc. We write \(\Delta\) and \(J\) for the modular operator and modular conjugation associated with \(\varphi\), and \(L^{2}_{+}(M)\) for the standard self-dual positive cone in \(L^{2}(M)\).
### \(\mathcal{V}\)-Transform of Bounded Operators on \(L^{2}(M)\)
In this subsection we define the \(\mathcal{V}\)-transform on \(B(L^{2}(M))\) and discuss some of its properties. The \(\mathcal{V}\)-transform in this setting is the map
\[\mathcal{V}\colon B(L^{2}(M))\to B(L^{2}(M)),\,T\mapsto\widecheck{T}=2\int_{ 0}^{\infty}\Delta^{1/4}e^{-r\Delta^{1/2}}T\Delta^{1/4}e^{-r\Delta^{1/2}}\,dr.\]
We first prove that it is well-defined. Note that \(\Delta^{1/4}e^{-r\Delta^{1/2}}\) is bounded (and self-adjoint) for every \(r>0\), so that the integrand is a bounded operator and the only difficulty is integrability.
**Lemma 3.1**.: _If \(T\in B(L^{2}(M))\) and \(\xi,\eta\in L^{2}(M)\), then_
\[r\mapsto\langle\xi,\Delta^{1/4}e^{-r\Delta^{1/2}}T\Delta^{1/4}e^{-r\Delta^{1/2 }}\eta\rangle\]
_is integrable on \((0,\infty)\), and_
\[2\left|\int_{0}^{\infty}\langle\xi,\Delta^{1/4}e^{-r\Delta^{1/2}}T\Delta^{1/4 }e^{-r\Delta^{1/2}}\eta\rangle\,dr\right|\leq\|T\|\|\xi\|\|\eta\|.\]
Proof.: Let \(E\) denote the spectral measure of \(\Delta^{1/2}\). By the spectral theorem we have
\[2\int_{0}^{\infty} \left|\langle\xi,\Delta^{1/4}e^{-r\Delta^{1/2}}T\Delta^{1/4}e^{-r \Delta^{1/2}}\eta\rangle\right|\,dr\] \[\leq\|T\|\int_{0}^{\infty}(\|\Delta^{1/4}e^{-r\Delta^{1/2}}\xi\| ^{2}+\|\Delta^{1/4}e^{-r\Delta^{1/2}}\eta\|^{2})\,dr\] \[=\|T\|\int_{0}^{\infty}\int_{0}^{\infty}\lambda e^{-2\lambda r}\, d(\langle\xi,E(\lambda)\xi\rangle+\langle\eta,E(\lambda)\eta\rangle)\,dr\] \[=\|T\|\int_{0}^{\infty}\lambda\int_{0}^{\infty}e^{-2\lambda r}\, dr\,d(\langle\xi,E(\lambda)\xi\rangle+\langle\eta,E(\lambda)\eta\rangle)\] \[=\frac{1}{2}\|T\|(\|\xi\|^{2}+\|\eta\|^{2}).\]
Thus \(r\mapsto\langle\xi,\Delta^{1/4}e^{-r\Delta^{1/2}}T\Delta^{1/4}e^{-r\Delta^{1/2}}\eta\rangle\) is integrable on \((0,\infty)\), and the desired inequality follows from the usual rescaling trick \(\xi\mapsto\alpha\xi\), \(\eta\mapsto\eta/\alpha\): optimizing over \(\alpha>0\) replaces the bound \(\frac{1}{2}\|T\|(\|\xi\|^{2}+\|\eta\|^{2})\) by \(\|T\|\|\xi\|\|\eta\|\).
**Definition 3.2**.: For \(T\in B(L^{2}(M))\) let \(\breve{T}\) denote the unique bounded linear operator on \(L^{2}(M)\) such that
\[\langle\xi,\breve{T}\eta\rangle=2\int_{0}^{\infty}\langle\xi,\Delta^{1/4}e^{- r\Delta^{1/2}}T\Delta^{1/4}e^{-r\Delta^{1/2}}\eta\rangle\,dr\]
for all \(\xi,\eta\in L^{2}(M)\).
We call the map
\[\mathcal{V}\colon B(L^{2}(M))\to B(L^{2}(M)),\,T\mapsto\breve{T}\]
the \(\mathcal{V}\)-transform.
Note that if \(T\) commutes with the modular operator, then \(\breve{T}=T\).
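To get a feeling for this map, suppose for the moment that \(\Delta^{1/2}\) has an orthonormal basis of eigenvectors, say \(\Delta^{1/2}\xi_{j}=\mu_{j}\xi_{j}\) with \(\mu_{j}>0\) (as is the case for matrix algebras). Evaluating the defining integral entrywise gives

\[\langle\xi_{j},\breve{T}\xi_{k}\rangle=2\int_{0}^{\infty}\mu_{j}^{1/2}\mu_{k}^{1/2}e^{-r(\mu_{j}+\mu_{k})}\,dr\,\langle\xi_{j},T\xi_{k}\rangle=\frac{2\sqrt{\mu_{j}\mu_{k}}}{\mu_{j}+\mu_{k}}\langle\xi_{j},T\xi_{k}\rangle,\]

so in this case the \(\mathcal{V}\)-transform is a Schur multiplier with coefficients in \((0,1]\), and

\[\frac{1}{2}\Big(\frac{\mu_{j}^{1/2}}{\mu_{k}^{1/2}}+\frac{\mu_{k}^{1/2}}{\mu_{j}^{1/2}}\Big)\langle\xi_{j},\breve{T}\xi_{k}\rangle=\langle\xi_{j},T\xi_{k}\rangle,\]

which is the key property established in full generality in Lemma 3.4 below.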
**Proposition 3.3**.: _The \(\mathcal{V}\)-transform is a normal unital completely positive trace-preserving map on \(B(L^{2}(M))\)._
Proof.: Let \(E\) denote the spectral measure of \(\Delta^{1/2}\). By the spectral theorem,
\[2\int_{0}^{\infty}\langle\Delta^{1/4}e^{-r\Delta^{1/2}}\xi, \Delta^{1/4}e^{-r\Delta^{1/2}}\eta\rangle\,dr =2\int_{0}^{\infty}\int_{0}^{\infty}\lambda e^{-2\lambda r}\,d \langle\xi,E(\lambda)\eta\rangle\,dr\] \[=2\int_{0}^{\infty}\lambda\int_{0}^{\infty}e^{-2\lambda r}\,dr \,d\langle\xi,E(\lambda)\eta\rangle\] \[=\langle\xi,\eta\rangle\]
for all \(\xi,\eta\in L^{2}(M)\). Thus \(\breve{1}=1\).
Since \(\Delta^{1/4}e^{-r\Delta^{1/2}}\) is self-adjoint, the map \(T\mapsto\Delta^{1/4}e^{-r\Delta^{1/2}}T\Delta^{1/4}e^{-r\Delta^{1/2}}\) is completely positive. Hence \(\mathcal{V}\) is completely positive.
To prove that \(\mathcal{V}\) is trace-preserving, let \(T\in B(L^{2}(M))\) be positive. By the previous part, \(\mathcal{V}(T)\) is positive and we have
\[\operatorname{tr}(\mathcal{V}(T)) =2\int_{0}^{\infty}\operatorname{tr}(\Delta^{1/4}e^{-r\Delta^{1/2 }}T\Delta^{1/4}e^{-r\Delta^{1/2}})\,dr\] \[=2\int_{0}^{\infty}\operatorname{tr}(T^{1/2}\Delta^{1/2}e^{-2r \Delta^{1/2}}T^{1/2})\,dr\] \[=\operatorname{tr}(T^{1/2}\mathcal{V}(1)T^{1/2})\] \[=\operatorname{tr}(T).\]
Interchanging the integral and the sum defining the trace is justified by Fubini's theorem since the integrand is positive in each case.
To prove that the \(\mathcal{V}\)-transform is normal, note first that it restricts to a bounded linear map \(\mathcal{V}_{*}\) on the space of trace-class operators on \(L^{2}(M)\) by the previous part. Moreover, if \(S,T\in B(L^{2}(M))\) and \(S\) is trace-class, then
\[\operatorname{tr}(\mathcal{V}(S)T)=2\int_{0}^{\infty}\operatorname{tr}(\Delta^{1/4}e^{-r\Delta^{1/2}}S\Delta^{1/4}e^{-r\Delta^{1/2}}T)\,dr=\operatorname{tr}(S\mathcal{V}(T)).\]
Thus \(\mathcal{V}=(\mathcal{V}_{*})^{*}\), which implies that \(\mathcal{V}\) is normal.
**Lemma 3.4** (Key property).:
(a) _If_ \(T\in B(L^{2}(M))\)_, then_ \[\frac{1}{2}\langle\Delta^{1/4}\xi,\breve{T}\Delta^{-1/4}\eta\rangle+\frac{1}{2}\langle\Delta^{-1/4}\xi,\breve{T}\Delta^{1/4}\eta\rangle=\langle\xi,T\eta\rangle\] _for all_ \(\xi,\eta\in\operatorname{dom}(\Delta^{1/4})\cap\operatorname{dom}(\Delta^{-1/4})\)_._
(b) _If_ \(R\in B(L^{2}(M))\) _such that_ \[\frac{1}{2}\langle\Delta^{1/4}\xi,R\Delta^{-1/4}\eta\rangle+\frac{1}{2}\langle\Delta^{-1/4}\xi,R\Delta^{1/4}\eta\rangle=\langle\xi,T\eta\rangle\] _for all_ \(\xi,\eta\in\bigcap_{n\in\mathbb{Z}}\operatorname{dom}(\Delta^{n})\)_, then_ \(R=\breve{T}\)_._
Proof.: (a) If \(\xi,\eta\in\operatorname{dom}(\Delta^{1/4})\cap\operatorname{dom}(\Delta^{-1 /4})\), then
\[\langle\Delta^{1/4}\xi,\breve{T}\Delta^{-1/4}\eta\rangle+\langle\Delta^{-1/4}\xi,\breve{T}\Delta^{1/4}\eta\rangle\] \[\qquad=2\int_{0}^{\infty}(\langle\Delta^{1/2}e^{-r\Delta^{1/2}}\xi,Te^{-r\Delta^{1/2}}\eta\rangle+\langle e^{-r\Delta^{1/2}}\xi,T\Delta^{1/2}e^{-r\Delta^{1/2}}\eta\rangle)\,dr\] \[\qquad=-2\int_{0}^{\infty}\frac{d}{dr}\langle e^{-r\Delta^{1/2}}\xi,Te^{-r\Delta^{1/2}}\eta\rangle\,dr\] \[\qquad=2\langle\xi,T\eta\rangle.\]
Here we used that since \(\Delta^{1/2}\) is non-singular, \(e^{-r\Delta^{1/2}}\zeta\to 0\) as \(r\to\infty\) for every \(\zeta\in L^{2}(M)\).
(b) If \(\xi,\eta\in\bigcap_{n\in\mathbb{Z}}\operatorname{dom}(\Delta^{n})\), then \(\Delta^{1/4}e^{-r\Delta^{1/2}}\xi,\Delta^{1/4}e^{-r\Delta^{1/2}}\eta\in \operatorname{dom}(\Delta^{1/4})\cap\operatorname{dom}(\Delta^{-1/4})\) and hence
\[\langle\xi,\breve{T}\eta\rangle =2\int_{0}^{\infty}\langle\Delta^{1/4}e^{-r\Delta^{1/2}}\xi,T\Delta^{1/4}e^{-r\Delta^{1/2}}\eta\rangle\,dr\] \[=\int_{0}^{\infty}(\langle\Delta^{1/2}e^{-r\Delta^{1/2}}\xi,Re^{-r\Delta^{1/2}}\eta\rangle+\langle e^{-r\Delta^{1/2}}\xi,R\Delta^{1/2}e^{-r\Delta^{1/2}}\eta\rangle)\,dr.\]
From here we conclude \(\langle\xi,\breve{T}\eta\rangle=\langle\xi,R\eta\rangle\) as above. Since \(\bigcap_{n\in\mathbb{Z}}\operatorname{dom}(\Delta^{n})\) is dense in \(L^{2}(M)\), the equality \(R=\breve{T}\) follows.
Informally, the identity from the previous lemma reads
\[\frac{1}{2}\Delta^{1/4}\breve{T}\Delta^{-1/4}+\frac{1}{2}\Delta^{-1/4}\breve{T}\Delta^{1/4}=T.\]
**Proposition 3.5**.: _Let \(T\in B(L^{2}(M))\)._
(a) \(\breve{T}=2\int_{0}^{\infty}\Delta^{-1/4}e^{-r\Delta^{-1/2}}T\Delta^{-1/4}e^{-r\Delta^{-1/2}}\,dr\) _in the weak operator topology._
(b) _If_ \(JT=TJ\)_, then_ \(J\breve{T}=\breve{T}J\)_._
(c) _If_ \(T\varphi^{1/2}=\varphi^{1/2}\)_, then_ \(\breve{T}\varphi^{1/2}=\varphi^{1/2}\)_._
Proof.: (a) Let \(R=2\int_{0}^{\infty}\Delta^{-1/4}e^{-r\Delta^{-1/2}}T\Delta^{-1/4}e^{-r\Delta^{-1/2}}\,dr\). The existence is justified by the same arguments as for \(\breve{T}\). Replacing \(\Delta\) by \(\Delta^{-1}\) in Lemma 3.4 (a) we obtain \[\frac{1}{2}\langle\Delta^{1/4}\xi,R\Delta^{-1/4}\eta\rangle+\frac{1}{2}\langle\Delta^{-1/4}\xi,R\Delta^{1/4}\eta\rangle=\langle\xi,T\eta\rangle\] for \(\xi,\eta\in\operatorname{dom}(\Delta^{1/4})\cap\operatorname{dom}(\Delta^{-1/4})\). Now \(R=\breve{T}\) follows from Lemma 3.4 (b).
(b) One has \(J\Delta^{1/4}=\Delta^{-1/4}J\), and by the spectral theorem also \(Je^{-r\Delta^{1/2}}=e^{-r\Delta^{-1/2}}J\). Thus \[J\breve{T} =2\int_{0}^{\infty}\Delta^{-1/4}e^{-r\Delta^{-1/2}}JT\Delta^{1/4}e^{-r\Delta^{1/2}}\,dr\] \[=2\int_{0}^{\infty}\Delta^{-1/4}e^{-r\Delta^{-1/2}}TJ\Delta^{1/4}e^{-r\Delta^{1/2}}\,dr\] \[=2\int_{0}^{\infty}\Delta^{-1/4}e^{-r\Delta^{-1/2}}T\Delta^{-1/4}e^{-r\Delta^{-1/2}}\,dr\,J.\] Now \(J\breve{T}=\breve{T}J\) follows from (a).
(c) This is immediate from the definition of \(\breve{T}\).
We are particularly interested in the action of the \(\mathcal{V}\)-transform on Markov operators. Let us first recall the definition.
We call an operator \(T\in B(L^{2}(M))\)_positivity-preserving_ if \(T(L^{2}_{+}(M))\subset L^{2}_{+}(M)\), and _completely positivity-preserving_ if the amplification \(T\otimes\operatorname{id}_{M_{n}(\mathbb{C})}\) is positivity-preserving on \(L^{2}(M)\otimes M_{n}(\mathbb{C})\cong L^{2}(M_{n}(M))\) for every \(n\in\mathbb{N}\). We use the terms "positivity-preserving" and "completely positivity-preserving" instead of positive and completely positive to avoid confusion with the concept of positive operators on a Hilbert space.
**Definition 3.6**.: An operator \(T\in B(L^{2}(M))\) is a _Markov operator_ if it is completely positivity-preserving and \(T\varphi^{1/2}=\varphi^{1/2}\).
To prove that the \(\mathcal{V}\)-transform also preserves Markov operators, we need a series of lemmas. Recall that the cones \(P^{\sharp}\) and \(P^{\flat}\) are defined as
\[P^{\sharp}=\overline{\{x\varphi^{1/2}\mid x\in M_{+}\}},\ P^{\flat}=\overline{ \{\varphi^{1/2}x\mid x\in M_{+}\}},\]
and further recall that \(P^{\sharp}\) and \(P^{\flat}\) are dual cones, that is, \(\xi\in P^{\sharp}\) if and only if \(\langle\xi,\eta\rangle\geq 0\) for all \(\eta\in P^{\flat}\), and vice versa.
**Lemma 3.7**.: _Let \(T\in B(L^{2}(M))\) be positivity-preserving. If \(\xi\in P^{\sharp}\) (resp. \(\xi\in P^{\flat}\)) and \(\eta\in P^{\flat}\) (resp. \(\eta\in P^{\sharp}\)), then_
\[\operatorname{Re}\langle\xi,\breve{T}\eta\rangle\geq 0.\]
Proof.: Let \(x,y\in M_{+}\). By Lemma 3.4 (a) we have
\[\frac{1}{2}\langle x\varphi^{1/2},\breve{T}(\varphi^{1/2}y)\rangle+\frac{1}{2 }\langle\varphi^{1/2}x,\breve{T}(y\varphi^{1/2})\rangle=\langle\varphi^{1/4}x \varphi^{1/4},T(\varphi^{1/4}y\varphi^{1/4})\rangle\geq 0.\]
Since \(T\) is positivity-preserving, it commutes with \(J\). Hence so does \(\breve{T}\) by Proposition 3.5 (b). If we apply this to the first summand from the previous equation, we obtain
\[\langle x\varphi^{1/2},\breve{T}(\varphi^{1/2}y)\rangle=\langle J\breve{T}( \varphi^{1/2}y),J(x\varphi^{1/2})\rangle=\langle\breve{T}(y\varphi^{1/2}), \varphi^{1/2}x\rangle.\]
Therefore
\[\operatorname{Re}\langle x\varphi^{1/2},\breve{T}(\varphi^{1/2}y)\rangle\geq 0.\]
This settles the claim for \(\xi=x\varphi^{1/2}\) and \(\eta=\varphi^{1/2}y\). For arbitrary \(\xi\in P^{\sharp}\) and \(\eta\in P^{\flat}\) the inequality follows by approximation. The proof for \(\xi\in P^{\flat}\) and \(\eta\in P^{\sharp}\) is analogous.
**Lemma 3.8**.: _If \(R\in B(L^{2}(M))\) such that_
\[\operatorname{Re}\langle\xi,R\eta\rangle\geq 0, \xi\in P^{\sharp},\eta\in P^{\flat},\] \[\operatorname{Re}\langle\xi,R\eta\rangle\geq 0, \xi\in P^{\flat},\eta\in P^{\sharp},\]
_then \(\operatorname{Re}\langle\xi,R\eta\rangle\geq 0\) for all \(\xi,\eta\in L^{2}_{+}(M)\)._
Proof.: Let \(S=\{z\in\mathbb{C}\mid 0<\operatorname{Re}z<1/2\}\) and
\[f\colon\overline{S}\to\mathbb{C},\;f(z)=e^{-\langle J\Delta^{z}(x\varphi^{1/2 }),R\Delta^{z}(y\varphi^{1/2})\rangle}\]
for \(x,y\in M_{+}\). The function \(f\) is continuous on \(\overline{S}\), holomorphic on \(S\) and satisfies
\[|f(z)| =e^{-\operatorname{Re}\langle J\Delta^{z}(x\varphi^{1/2}),R\Delta ^{z}(y\varphi^{1/2})\rangle}\] \[\leq e^{\|J\Delta^{z}(x\varphi^{1/2})\|\|R\Delta^{z}(y\varphi^{1/ 2})\|}\] \[\leq e^{\|R\|\|\Delta^{\operatorname{Re}z}(x\varphi^{1/2})\|\| \Delta^{\operatorname{Re}z}(y\varphi^{1/2})\|}.\]
By the spectral theorem,
\[\|\Delta^{\operatorname{Re}z}(x\varphi^{1/2})\|^{2}\leq\|x\varphi^{1/2}\|^{2} +\|\Delta^{1/2}(x\varphi^{1/2})\|^{2}\]
and likewise for \(y\varphi^{1/2}\). Thus \(f\) is bounded on \(\overline{S}\).
If \(\operatorname{Re}z=0\), then \(\Delta^{z}(y\varphi^{1/2})=\sigma^{\varphi}_{\operatorname{Im}z}(y)\varphi^ {1/2}\in P^{\sharp}\) and \(J\Delta^{z}(x\varphi^{1/2})=\varphi^{1/2}\sigma^{\varphi}_{\operatorname{Im}z }(x)\in P^{\flat}\). Thus \(|f(z)|\leq 1\) by assumption. Similarly, \(|f(z)|\leq 1\) if \(\operatorname{Re}z=1/2\).
It follows from the Phragmen-Lindelof principle that \(|f(1/4)|\leq 1\), which means
\[\operatorname{Re}\langle\Delta^{1/4}(x\varphi^{1/2}),R\Delta^{1/4}(y\varphi^{1/2})\rangle=\operatorname{Re}\langle J\Delta^{1/4}(x\varphi^{1/2}),R\Delta^{1/4}(y\varphi^{1/2})\rangle\geq 0.\]
This settles the claim for \(\xi=\Delta^{1/4}(x\varphi^{1/2})\), \(\eta=\Delta^{1/4}(y\varphi^{1/2})\) with \(x,y\in M_{+}\). For arbitrary \(\xi,\eta\in L^{2}_{+}(M)\) the claim follows by approximation.
**Lemma 3.9**.: _If \(R\in B(L^{2}(M))\) such that \(\operatorname{Re}\langle\xi,R\eta\rangle\geq 0\) for all \(\xi,\eta\in L^{2}_{+}(M)\) and \(JR=RJ\), then \(R\) is positivity-preserving._
Proof.: For \(\xi,\eta\in L^{2}_{+}(M)\) we have \(J\xi=\xi,J\eta=\eta\), and hence
\[\langle R\eta,\xi\rangle=\langle J\xi,JR\eta\rangle=\langle\xi,R\eta\rangle.\]
Therefore \(\langle\xi,R\eta\rangle\) is real and thus positive by assumption. As \(L^{2}_{+}(M)\) is self-dual, the claim follows.
**Proposition 3.10**.: _The \(\mathcal{V}\)-transform maps symmetric Markov operators to symmetric Markov operators._
Proof.: If \(T\in B(L^{2}(M))\) is positivity-preserving, it follows from the previous three lemmas that \(\breve{T}\) is positivity-preserving as well. If \(T\) is completely positivity-preserving, the same argument applied to the amplifications \(T\otimes\operatorname{id}_{M_{n}(\mathbb{C})}\) shows that \(\breve{T}\) is completely positivity-preserving. That \(T\varphi^{1/2}=\varphi^{1/2}\) implies \(\breve{T}\varphi^{1/2}=\varphi^{1/2}\) was established in Proposition 3.5 (c).
### \(\mathcal{V}\)-Transform of KMS-Symmetric Operators on \(M\)
Formally, the \(\mathcal{V}\)-transform of a bounded linear operator \(\Phi\) on \(M\) should be given by
\[2\int_{0}^{\infty}\sigma_{-i/4}e^{-r\sigma_{-i/2}}\Phi\sigma_{-i/4}e^{-r\sigma_{-i/2}}\,dr,\]
where we simply replaced the modular operator in the definition of \(\mathcal{V}\) by the analytic generator of the modular group on \(M\). However, it seems hard to make this formula rigorous. It is not even clear if \(\sigma_{-i/2}\) generates a semigroup on \(M\) in a suitable sense.
Instead, we take a different approach that relies on the correspondence between certain operators in \(B(M)\) and operators in \(B(L^{2}(M))\). This way we can only define the \(\mathcal{V}\)-transform for KMS-symmetric unital completely positive maps and generators of KMS-symmetric quantum Markov semigroups, but this suffices for our purposes.
Let us first recall the definition of a KMS-symmetric map in this setting. A linear map \(\Phi\colon M\to M\) is called _KMS-symmetric_ (with respect to \(\varphi\)) if
\[\langle J\Phi(x)^{*}J\varphi^{1/2},y\varphi^{1/2}\rangle=\langle Jx^{*}J \varphi^{1/2},\Phi(y)\varphi^{1/2}\rangle\]
for all \(x,y\in M\). Using the trace-like functional \(\operatorname{tr}\) on the Haagerup \(L^{1}\) space, this can compactly be rewritten as
\[\operatorname{tr}(\Phi(x)^{*}\varphi^{1/2}y\varphi^{1/2})=\operatorname{tr}( x^{*}\varphi^{1/2}\Phi(y)\varphi^{1/2})\]
in analogy with the definition for matrix algebras.
There is the following correspondence between symmetric completely positivity-preserving maps on \(L^{2}(M)\) and KMS-symmetric completely positive maps on \(M\) [2, Section 2]: If \(\Phi\) is a KMS-symmetric unital completely positive map on \(M\), then there exists a unique bounded linear operator \(T\) on \(L^{2}(M)\) such that
\[T(\varphi^{1/4}x\varphi^{1/4})=\varphi^{1/4}\Phi(x)\varphi^{1/4}\]
for all \(x\in M\). This operator is a symmetric Markov operator, which we denote by \(\Phi^{(2)}\).
Conversely, if \(T\) is a symmetric Markov operator on \(L^{2}(M)\), then there exists a KMS-symmetric unital completely positive map \(\Phi\) on \(M\) such that \(\Phi^{(2)}=T\).
If \(\Phi\) is a KMS-symmetric unital completely positive map on \(M\), then \(\mathcal{V}(\Phi^{(2)})\) is a symmetric Markov operator by Proposition 3.10. This justifies the following definition.
**Definition 3.11**.: Let \(\Phi\) be a KMS-symmetric unital completely positive map on \(M\). Its \(\mathcal{V}\)-transform \(\tilde{\Phi}\) is the unique KMS-symmetric unital completely positive map \(\Psi\) on \(M\) such that
\[\mathcal{V}(\Phi^{(2)})=\Psi^{(2)}.\]
We also write \(\mathcal{V}(\Phi)\) for \(\tilde{\Phi}\).
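Note that if \(\Phi\) is in addition GNS-symmetric, that is, if it commutes with the modular group \(\sigma^{\varphi}\), then \(\Phi^{(2)}\) commutes with \(\Delta\), and the observation after Definition 3.2 yields \(\tilde{\Phi}=\Phi\). The \(\mathcal{V}\)-transform is thus only a genuine modification of \(\Phi\) when \(\Phi\) is KMS- but not GNS-symmetric.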
The main objects of interest in this article are semigroups of unital completely positive maps. While the \(\mathcal{V}\)-transform preserves unitality and complete positivity, it does not, in general, preserve the semigroup property. We will discuss in the following how one can still \(\mathcal{V}\)-transform generators of a class of such semigroups in a way that preserves complete positivity of the generated semigroup. We start by recalling some definitions.
A _quantum Markov semigroup_ is a family \((\Phi_{t})_{t\geq 0}\) of normal unital completely positive maps on \(M\) such that
* \(\Phi_{0}=\mathrm{id}_{M}\),
* \(\Phi_{s+t}=\Phi_{s}\Phi_{t}\) for all \(s,t\geq 0\),
* \(\Phi_{t}\to\mathrm{id}_{M}\) in the pointwise weak\({}^{*}\) topology as \(t\to 0\).
If \(\Phi_{t}\to\mathrm{id}_{M}\) in operator norm as \(t\to 0\), then \((\Phi_{t})\) is called _uniformly continuous_.
The generator of \((\Phi_{t})\) is the weak\({}^{*}\) closed and densely defined operator \(\mathcal{L}\) given by
\[\mathrm{dom}(\mathcal{L}) =\left\{x\in M:\lim_{t\to 0}\frac{1}{t}(x-\Phi_{t}(x))\text{ exists in the weak${}^{*}$ topology}\right\}\] \[\mathcal{L}(x) =\lim_{t\to 0}\frac{1}{t}(x-\Phi_{t}(x))\text{ in the weak${}^{*}$ topology}.\]
The semigroup \((\Phi_{t})\) is uniformly continuous if and only if the generator is bounded. The generators of uniformly continuous quantum Markov semigroups can be characterized as follows [1, Theorem 14.7]: A bounded operator \(\mathcal{L}\) on \(M\) is the generator of a uniformly continuous quantum Markov semigroup if and only it is normal, \(\mathcal{L}(1)=0\) and \(\mathcal{L}\) is _conditionally negative definite_, that is,
\[\sum_{j,k=1}^{n}y_{j}^{*}\mathcal{L}(x_{j}^{*}x_{k})y_{k}\leq 0\]
whenever \(x_{1},\ldots,x_{n},y_{1},\ldots,y_{n}\in M\) with \(\sum_{j=1}^{n}x_{j}y_{j}=0\).
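A simple family of examples (recorded here only for illustration): if \(\Phi\) is a normal unital completely positive map on \(M\), then \(\mathcal{L}=\mathrm{id}_{M}-\Phi\) satisfies these conditions. Indeed, \(\mathcal{L}(1)=0\), and if \(\sum_{j}x_{j}y_{j}=0\), then

\[\sum_{j,k=1}^{n}y_{j}^{*}\mathcal{L}(x_{j}^{*}x_{k})y_{k}=\Big(\sum_{j}x_{j}y_{j}\Big)^{*}\Big(\sum_{k}x_{k}y_{k}\Big)-\sum_{j,k=1}^{n}y_{j}^{*}\Phi(x_{j}^{*}x_{k})y_{k}=-\sum_{j,k=1}^{n}y_{j}^{*}\Phi(x_{j}^{*}x_{k})y_{k}\leq 0\]

by the complete positivity of \(\Phi\). The generated semigroup is \(\Phi_{t}=e^{-t}\sum_{m\geq 0}\frac{t^{m}}{m!}\Phi^{m}\), which is manifestly unital and completely positive.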
Let \((\Phi_{t})\) be a uniformly continuous KMS-symmetric Markov semigroup on \(M\) and \((T_{t})\) the associated symmetric Markov semigroup on \(L^{2}(M,\varphi)\). Let \(\mathcal{L}\) and \(\mathcal{L}_{2}\) denote the generator of \((\Phi_{t})\) on \(M\) and \((T_{t})\) on \(L^{2}(M,\varphi)\), respectively. By the uniform continuity assumption, both are bounded linear operators. Thus we can form the \(\mathcal{V}\)-transform of \(\mathcal{L}_{2}\), and the continuity of the \(\mathcal{V}\)-transform implies
\[\tilde{\mathcal{L}_{2}}=\lim_{t\to 0}\frac{1}{t}(1-\tilde{T}_{t})\]
in operator norm. However, since \((\tilde{T}_{t})\) is not a semigroup in general, this does not imply directly that \(\tilde{\mathcal{L}_{2}}\) generates a symmetric Markov semigroup. The lack of semigroup property can be taken care of by a suitable rescaling of the time parameter. More precisely, we have the following result.
**Proposition 3.12**.: _If \((T_{t})\) is a symmetric Markov semigroup on \(L^{2}(M,\,\varphi)\), then the semigroup generated by \(\tilde{\mathscr{L}}_{2}\) satisfies_
\[e^{-t\tilde{\mathscr{L}}_{2}}\xi=\lim_{n\to\infty}\tilde{T}_{t/n}^{n}\xi\]
_for all \(t\geq 0\) and \(\xi\in L^{2}(M,\,\varphi)\). In particular, \((e^{-t\tilde{\mathscr{L}}_{2}})\) is a symmetric Markov semigroup._
Proof.: Since the \(\mathscr{V}\)-transform is a contraction, we have \(\|\tilde{T}_{t}\|\leq\|T_{t}\|\leq 1\) for all \(t\geq 0\). It follows from the Chernoff product formula [21, 22] that \(e^{-t\tilde{\mathscr{L}}_{2}}\xi=\lim_{n\to\infty}\tilde{T}_{t/n}^{n}\xi\) for all \(t\geq 0\) and \(\xi\in L^{2}(M,\,\varphi)\). Finally, as \(\tilde{T}_{s}\) is a symmetric Markov operator for all \(s\geq 0\), so is \(e^{-t\tilde{\mathscr{L}}_{2}}\) for all \(t\geq 0\).
As a consequence of the previous proposition, there exists a unique KMS-symmetric quantum Markov semigroup \((\Psi_{t})\) on \(M\) such that
\[\varphi^{1/4}\Psi_{t}(x)\varphi^{1/4}=e^{-t\tilde{\mathscr{L}}_{2}}(\varphi^{ 1/4}x\varphi^{1/4})\]
for all \(x\in M\) and \(t\geq 0\). Moreover, \((\Psi_{t})\) is uniformly continuous. This justifies the following definition.
**Definition 3.13**.: If \(\mathscr{L}\) is the generator of a uniformly continuous KMS-symmetric Markov semigroup on \(M\), its \(\mathscr{V}\)-transform \(\tilde{\mathscr{L}}\) is the unique normal linear operator on \(M\) such that
\[\varphi^{1/4}\tilde{\mathscr{L}}(x)\varphi^{1/4}=\tilde{\mathscr{L}}_{2}( \varphi^{1/4}x\varphi^{1/4})\]
for all \(x\in M\).
By the discussion above, \(\tilde{\mathscr{L}}\) is again the generator of a KMS-symmetric quantum Markov semigroup on \(M\). Note moreover that since \(\mathscr{L}(1)=0\), the only completely positive generator of a quantum Markov semigroup is \(\mathscr{L}=0\). Therefore there is no conflict between our definitions of the \(\mathscr{V}\)-transform of completely positive maps and Markov generators.
## 4. Derivations for Uniformly Continuous Quantum Markov Semigroups
In this section we study the correspondence between uniformly continuous KMS-symmetric quantum Markov semigroups and certain twisted derivations with values in bimodules. We show that - like in the case of matrix algebras - every uniformly continuous KMS-symmetric quantum Markov semigroup gives rise to a derivation (Theorem 4.2) and that this derivation is unique (Theorem 4.7).
Throughout this section let \(M\) be a von Neumann algebra and \(\varphi\) a faithful normal state on \(M\). KMS symmetry is always understood with respect to this state \(\varphi\) and multiplication of elements in \(L^{2}(M)\) is understood as the multiplication induced by the full left Hilbert algebra associated with \(\varphi\). For a left-bounded vector \(a\in L^{2}(M)\) we write \(\pi_{l}(a)\) for the bounded operator of left multiplication by \(a\). Likewise, if \(a\in L^{2}(M)\) is right-bounded, we write \(\pi_{r}(a)\) for the right multiplication operator.
### Existence and Innerness
After we established the existence and properties of the \(\mathscr{V}\)-transform in the previous section, the proof for the existence of a derivation associated with a uniformly continuous KMS-symmetric quantum Markov semigroup follows the same strategy as in the finite-dimensional case.
To establish that the derivation is inner, we need the following result, which is an easy consequence of the Christensen-Evans theorem [11]. Recall that a bounded
linear map between \(C^{*}\)-algebras is called _decomposable_ if it is a linear combination of completely positive maps [10].
**Proposition 4.1**.: _Every generator of a uniformly continuous quantum Markov semigroup on \(M\) is the difference of two normal completely positive maps. In particular, it is decomposable._
Proof.: Let \(\mathscr{L}\) be a normal conditionally negative definite map on \(M\). By [1, Theorem 3.1] there exists \(k\in M\) and a completely positive map \(\Phi\colon M\to M\) such that
\[\mathscr{L}(x)=k^{*}x+xk-\Phi(x)\]
for all \(x\in M\). Since \(\mathscr{L}\) is normal, so is \(\Phi\). It follows from [12, Proposition 6.10] that \(\mathscr{L}\) is the difference of two normal completely positive maps.
For the following result, let us recall the notion of correspondences (see [11, Chapter 5, Appendix B] for example). An \(M\)-\(M\)-_correspondence_ or simply _correspondence_ in our case is a Hilbert space \(\mathscr{H}\) endowed with normal unital \(*\)-homomorphisms \(\pi_{l}^{\mathscr{H}}\colon M\to B(\mathscr{H})\), \(\pi_{r}^{\mathscr{H}}\colon M^{\mathrm{op}}\to B(\mathscr{H})\) such that \(\pi_{l}^{\mathscr{H}}(M)\) and \(\pi_{r}^{\mathscr{H}}(M^{\mathrm{op}})\) commute. We write \(x\cdot\xi\cdot y\) or simply \(x\xi y\) for \(\pi_{l}^{\mathscr{H}}(x)\pi_{r}^{\mathscr{H}}(y^{\mathrm{op}})\xi\).
Every correspondence gives rise to a representation of the _binormal tensor product_\(M\otimes_{\mathrm{bin}}M^{\mathrm{op}}\), which is defined as follows (see [1]). A linear functional \(\omega\) from the algebraic tensor product \(M\odot M^{\mathrm{op}}\) to \(\mathbb{C}\) is called a _binormal state_ if \(\omega(u^{*}u)\geq 0\) for all \(u\in M\odot M^{\mathrm{op}}\), \(\omega(1)=1\) and the maps \(x\mapsto\omega(x\otimes y_{0}^{\mathrm{op}})\), \(y^{\mathrm{op}}\mapsto\omega(x_{0}\otimes y^{\mathrm{op}})\) are weak\({}^{*}\) continuous for every \(x_{0}\), \(y_{0}\in M\). For \(u\) in the algebraic tensor product \(M\odot M^{\mathrm{op}}\) let
\[\|u\|_{\mathrm{bin}}=\sup\{\omega(u^{*}u)^{1/2}\mid\omega\text{ binormal state on }M\odot M^{\mathrm{op}}\}.\]
The binormal tensor product \(M\otimes_{\mathrm{bin}}M^{\mathrm{op}}\) is the completion of \(M\odot M^{\mathrm{op}}\) with respect to the norm \(\|\cdot\|_{\mathrm{bin}}\).
**Theorem 4.2**.: _Let \((\Phi_{t})\) be a uniformly continuous KMS-symmetric quantum Markov semigroup on \(M\) and let \(\mathscr{L}\) denote its generator. There exists a correspondence \(\mathscr{H}\), an anti-unitary involution \(\mathscr{J}:\mathscr{H}\to\mathscr{H}\) and a bounded operator \(\delta\colon L^{2}(M)\to\mathscr{H}\) satisfying_
(a) \(\mathscr{J}(x\xi y)=y^{*}(\mathscr{J}\xi)x^{*}\) _for all_ \(x\)_,_ \(y\in M\) _and_ \(\xi\in\mathscr{H}\)_,_
(b) \(\delta(Ja)=\mathscr{J}\delta(a)\) _for all_ \(a\in L^{2}(M)\)_,_
(c) \(\delta(ab)=\pi_{l}(a)\cdot\delta(b)+\delta(a)\cdot J\pi_{r}(b)^{*}J\) _for all_ \(a\in M\varphi^{1/2}\)_,_ \(b\in\varphi^{1/2}M\)_,_
(d) \(\overline{\mathrm{lin}}\{\delta(a)x\mid a\in L^{2}(M),\;x\in M\}=\mathscr{H}\)
_such that_
\[\mathscr{L}_{2}=\delta^{*}\delta.\]
_Moreover, there exists \(\xi_{0}\in\mathscr{H}\) such that_
\[\delta(a)=\pi_{l}(a)\cdot\xi_{0}-\xi_{0}\cdot J\pi_{r}(a)^{*}J\]
_for \(a\in M\varphi^{1/2}\cap\varphi^{1/2}M\)._
Proof.: Let
\[\omega\colon M\odot M^{\mathrm{op}}\to\mathbb{C},\;x\otimes y^{\mathrm{op}} \mapsto-\frac{1}{2}\langle\varphi^{1/2},\tilde{\mathscr{L}}(x)\varphi^{1/2}y\rangle.\]
We first show that \(\omega\) extends continuously to \(M\otimes_{\mathrm{bin}}M^{\mathrm{op}}\).
Since \(\tilde{\mathscr{L}}\) is the generator of a uniformly continuous quantum Markov semigroup, by Proposition 4.1 there exist normal completely positive maps \(\Psi_{1},\Psi_{2}\) on \(M\) such that
\(\tilde{\mathscr{L}}=\Psi_{2}-\Psi_{1}\). For \(j\in\{1,2\}\), the maps
\[\omega_{j}\colon M\odot M^{\mathrm{op}}\to\mathbb{C},\;x\otimes y^{\mathrm{op}} \mapsto\frac{1}{2}\langle\varphi^{1/2},\Psi_{j}(x)\varphi^{1/2}y\rangle\]
are positive and separately weak\({}^{*}\) continuous. Hence they extend to positive functionals on \(M\otimes_{\mathrm{bin}}M^{\mathrm{op}}\) by definition of \(\|\cdot\|_{\mathrm{bin}}\). Thus \(\omega\) extends to a bounded linear map on \(M\otimes_{\mathrm{bin}}M^{\mathrm{op}}\). We continue to denote the extension of \(\omega\) to \(M\otimes_{\mathrm{bin}}M^{\mathrm{op}}\) by \(\omega\).
Let
\[q\colon M\odot M^{\mathrm{op}}\to L^{2}(M),\;x\otimes y^{\mathrm{op}} \mapsto x\varphi^{1/2}y.\]
Clearly, the kernel of \(q\) is a left ideal of \(M\odot M^{\mathrm{op}}\). Let \(I\) denote its closure, which is a closed left ideal of \(M\otimes_{\mathrm{bin}}M^{\mathrm{op}}\).
If \(u=\sum_{j=1}^{n}x_{j}\otimes y_{j}^{\mathrm{op}}\in\ker q\), then
\[\omega(u^{*}u) =-\frac{1}{2}\sum_{j,k=1}^{n}\langle\varphi^{1/2}y_{j},\tilde{ \mathscr{L}}(x_{j}^{*}x_{k})\varphi^{1/2}y_{k}\rangle\] \[=\lim_{t\to 0}\frac{1}{2t}\left(\sum_{j,k=1}^{n}\langle\varphi^{1/ 2}y_{j},\tilde{\Phi}_{t}(x_{j}^{*}x_{k})\varphi^{1/2}y_{k}\rangle-\|q(u)\|^{2}\right)\] \[=\lim_{t\to 0}\frac{1}{2t}\sum_{j,k=1}^{n}\langle\varphi^{1/2}y_{j },\tilde{\Phi}_{t}(x_{j}^{*}x_{k})\varphi^{1/2}y_{k}\rangle.\]
The last expression is positive since \(\tilde{\Phi}_{t}\) is completely positive for all \(t\geq 0\). By continuity, it follows that \(\omega(u^{*}u)\geq 0\) for all \(u\in I\).
Let \(\mathscr{H}\) be the GNS Hilbert space associated with \(\omega|_{I}\) and \(\pi_{\omega}\), the GNS representation of \(M\otimes_{\mathrm{bin}}M^{\mathrm{op}}\) on \(\mathscr{H}\), that is, \(\mathscr{H}\) is the completion of \(I\) with respect to the inner product \((x,y)\mapsto\omega(x^{*}y)\) and \(\pi_{\omega}(x)[y]_{\mathscr{H}}=[xy]_{\mathscr{H}}\), where \([\cdot]_{\mathscr{H}}\) denotes the canonical map \(I\to\mathscr{H}\). As \(\omega\) is separately weak\({}^{*}\) continuous, it follows easily that the actions \(x\mapsto\pi_{\omega}(x\otimes 1)\) and \(y^{\mathrm{op}}\mapsto\pi_{\omega}(1\otimes y^{\mathrm{op}})\) are normal. These actions make \(\mathscr{H}\) into an \(M\)-\(M\)-correspondence.
Moreover, let \((e_{\lambda})\) be a right approximate identity for \(I\) consisting of positive contractions. Since \(\|[e_{\lambda}]_{\mathscr{H}}\|=\omega(e_{\lambda}^{2})^{1/2}\leq\|\omega\|^{ 1/2}\), we may assume additionally that \([e_{\lambda}]_{\mathscr{H}}\) converges weakly to some vector \(\xi_{\omega}\in\mathscr{H}\). If \(u\in I\), then
\[[u]_{\mathscr{H}}=[\lim_{\lambda}ue_{\lambda}]=\lim_{\lambda}\pi_{\omega}(u)[e _{\lambda}]_{\mathscr{H}}=\pi_{\omega}(u)\xi_{\omega}.\]
In particular, \(\xi_{\omega}\) is a cyclic vector for \(\pi_{\omega}\).
To define \(\mathscr{J}\), first define an anti-linear map \(\mathscr{J}_{0}\) on \(M\odot M^{\mathrm{op}}\) by \(\mathscr{J}_{0}(x\otimes y^{\mathrm{op}})=y^{*}\otimes(x^{*})^{\mathrm{op}}\). A direct computation shows \(q\mathscr{J}_{0}=Jq\). In particular, \(\mathscr{J}_{0}\) leaves \(\ker q\) invariant.
Furthermore, if \(x_{1}\), \(x_{2},y_{1},y_{2}\in M\), then
\[\omega(\mathscr{J}_{0}(x_{1}\otimes y_{1}^{\mathrm{op}})^{*} \mathscr{J}_{0}(x_{2}\otimes y_{2}^{\mathrm{op}})) =-\frac{1}{2}\langle\varphi^{1/2},\tilde{\mathscr{L}}(y_{1}y_{2} ^{*})\varphi^{1/2}x_{2}^{*}x_{1}\rangle\] \[=-\frac{1}{2}\langle\varphi^{1/2}x_{1}^{*}x_{2},\tilde{\mathscr{L }}(y_{1}y_{2}^{*})\varphi^{1/2}\rangle\] \[=-\frac{1}{2}\langle\varphi^{1/2}\tilde{\mathscr{L}}(x_{1}^{*}x_ {2}),y_{1}y_{2}^{*}\varphi^{1/2}\rangle\] \[=-\frac{1}{2}\langle\varphi^{1/2},y_{1}y_{2}^{*}\varphi^{1/2} \tilde{\mathscr{L}}(x_{2}^{*}x_{1})\rangle\] \[=-\frac{1}{2}\langle\tilde{\mathscr{L}}(x_{1}^{*}x_{2})\varphi^{ 1/2}y_{1}^{*},\varphi^{1/2}\rangle\] \[=-\frac{1}{2}\langle\varphi^{1/2},\tilde{\mathscr{L}}(x_{2}^{*}x _{1})\varphi^{1/2}y_{1}y_{2}^{*}\rangle\] \[=\omega((x_{2}\otimes y_{2}^{\mathrm{op}})^{*}(x_{1}\otimes y_{1 }^{\mathrm{op}})).\]
Therefore the map
\[[I]_{\mathscr{H}}\to[I]_{\mathscr{H}},\;[x\otimes y^{\mathrm{op}}]_{\mathscr{ H}}\mapsto[y^{*}\otimes(x^{*})^{\mathrm{op}}]_{\mathscr{H}}\]
extends to an isometric anti-linear operator \(\mathscr{J}\) on \(\mathscr{H}\). Obviously, \(\mathscr{J}\) is an involution and property (a) follows directly from the definition.
Finally let us construct \(\delta\). For \(a\in M\varphi^{1/2}\cap\varphi^{1/2}M\) let
\[\delta(a)=\pi_{\omega}(\pi_{l}(a)\otimes 1-1\otimes(J\pi_{r}(a)^{*}J)^{\mathrm{ op}})\xi_{\omega}.\]
Note that
\[q(\pi_{l}(a)\otimes 1-1\otimes(J\pi_{r}(a)^{*}J)^{\mathrm{op}})=\pi_{l}(a) \varphi^{1/2}-\pi_{r}(a)\varphi^{1/2}=0\]
so that \(\pi_{l}(a)\otimes 1-1\otimes(J\pi_{r}(a)^{*}J)^{\mathrm{op}}\in I\) and \(\delta(a)=[\pi_{l}(a)\otimes 1-1\otimes(J\pi_{r}(a)^{*}J)^{\mathrm{op}}]_{\mathscr{H}}\).
We have
\[\|\delta(a)\|_{\mathscr{H}}^{2} =\|[\pi_{l}(a)\otimes 1-1\otimes(J\pi_{r}(a)^{*}J)^{\mathrm{op}}]_{ \mathscr{H}}\|^{2}\] \[=\omega(|\pi_{l}(a)\otimes 1-1\otimes(J\pi_{r}(a)^{*}J)^{\mathrm{ op}}|^{2})\] \[=\omega(\pi_{l}(a^{\sharp}a)\otimes 1)+\omega(1\otimes(J\pi_{r}( aa^{\flat})J)^{\mathrm{op}})\] \[\quad-\omega(\pi_{l}(a^{\sharp})\otimes(J\pi_{r}(a)^{*}J)^{ \mathrm{op}})-\omega(\pi_{l}(a)\otimes(J\pi_{r}(a^{\flat})^{*}J)^{\mathrm{op}})\] \[=-\frac{1}{2}\langle\varphi^{1/2},\tilde{\mathscr{L}}_{2}(\Delta^ {1/4}(a^{\sharp}a))\rangle-\frac{1}{2}\langle\Delta^{1/4}J(aa^{\flat}),\tilde {\mathscr{L}}_{2}\varphi^{1/2}\rangle\] \[\quad+\frac{1}{2}\langle\Delta^{1/4}Ja,\tilde{\mathscr{L}}_{2} \Delta^{1/4}(a^{\sharp})\rangle+\frac{1}{2}\langle\Delta^{1/4}Ja^{\flat}, \tilde{\mathscr{L}}_{2}\Delta^{1/4}a\rangle\] \[\overset{(1)}{=}\frac{1}{2}\langle J\Delta^{-1/4}a,\tilde{ \mathscr{L}}_{2}J\Delta^{1/4}a\rangle+\frac{1}{2}\langle\Delta^{-1/4}a, \tilde{\mathscr{L}}_{2}\Delta^{1/4}a\rangle\] \[\overset{(2)}{=}\frac{1}{2}\langle\Delta^{1/4}a,\tilde{ \mathscr{L}}_{2}\Delta^{-1/4}a\rangle+\frac{1}{2}\langle\Delta^{-1/4}a,\tilde {\mathscr{L}}_{2}\Delta^{1/4}a\rangle\] \[\overset{(3)}{=}\langle a,\mathscr{L}_{2}(a)\rangle.\]
Here we used the symmetry of \(\tilde{\mathscr{L}}_{2}\) and \(\tilde{\mathscr{L}}_{2}\varphi^{1/2}=0\) for (1), the symmetry of \(\tilde{\mathscr{L}}_{2}\) and \(\tilde{\mathscr{L}}_{2}J=J\tilde{\mathscr{L}}_{2}\) for (2) and the key property from Lemma 3.4 for (3).
Therefore the map \(\delta\) extends to a bounded linear operator from \(L^{2}(M)\) to \(\mathscr{H}\), and this extension, still denoted by \(\delta\), satisfies \(\delta^{*}\delta=\mathscr{L}_{2}\). Clearly,
\[\delta(ab)=\pi_{l}(a)\cdot\delta(b)+\delta(a)\cdot J\pi_{r}(b)^{*}J\]
for \(a,b\in M\varphi^{1/2}\cap\varphi^{1/2}M\). If we only have \(a\in M\varphi^{1/2}\) and \(b\in\varphi^{1/2}M\), a standard approximation argument [10, Lemma 1.3] shows that this identity continues to hold, which settles property (c).
Property (b) is clear from the definition if \(a\in M\varphi^{1/2}\cap\varphi^{1/2}M\), and can be extended to \(a\in L^{2}(M)\) again by approximation.
Finally, if \(u=\sum_{j=1}^{n}x_{j}\otimes y_{j}^{\mathrm{op}}\in\ker q\) and \(x_{j}\) is analytic for \(\sigma^{\varphi}\) for all \(j\in\{1,\ldots,n\}\), then
\[u =\sum_{j=1}^{n}x_{j}\otimes y_{j}^{\mathrm{op}}\] \[=\sum_{j=1}^{n}(x_{j}\otimes y_{j}^{\mathrm{op}}-1\otimes(\sigma _{i/2}^{\varphi}(x_{j})y_{j})^{\mathrm{op}}),\]
where we used \(0=q(u)=\varphi^{1/2}\sum_{j=1}^{n}\sigma_{i/2}^{\varphi}(x_{j})y_{j}\). Thus
\[[u]_{\mathscr{H}}=\sum_{j=1}^{n}\delta(x_{j}\varphi^{1/2})y_{j}.\]
Since \(\pi_{\omega}\) is normal and \([\ker q]_{\mathscr{H}}\) is dense in \(\mathscr{H}\) by definition, property (d) follows.
_Remark 4.3_.: If one only wants to show the existence of the derivation \(\delta\), one could work directly with the GNS representation of \(\omega\) on \(\ker q\) without passing to the binormal tensor product. Hence Proposition 4.1 and thus the Christensen-Evans theorem is only needed to show that the derivation is inner.
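To see how this construction relates to Section 2 (a sketch for orientation only): if \(M=M_{n}(\mathbb{C})\) and \(\varphi=\operatorname{tr}(\rho\,\cdot)\) with invertible density matrix \(\rho\), then \(L^{2}(M)\) is \(M_{n}(\mathbb{C})\) with the Hilbert-Schmidt inner product, \(\varphi^{1/2}=\rho^{1/2}\), and with the matrices \(V_{1},\ldots,V_{N}\) from Theorem 2.5 one may take \(\mathscr{H}=M_{n}(\mathbb{C})\otimes\mathbb{C}^{N}\) and

\[\delta(a)=\sum_{j=1}^{N}\rho^{1/4}\big[V_{j},\rho^{-1/4}a\rho^{-1/4}\big]\rho^{1/4}\otimes e_{j},\qquad a\in M_{n}(\mathbb{C}),\]

which satisfies \(\delta^{*}\delta=\mathscr{L}_{2}\) by Theorem 2.5. The innerness statement of Theorem 4.2 then corresponds to the fact that each \(A\mapsto[V_{j},A]\) is an inner derivation on \(M_{n}(\mathbb{C})\).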
### Uniqueness
We show next that the triple \((\mathscr{H},\mathscr{J},\delta)\) constructed in Theorem 4.2 is uniquely determined by the semigroup \((\Phi_{t})\) up to isomorphism. Let us first introduce some terminology for triples of this kind.
**Definition 4.4**.: We call a pair \((\mathscr{H},\mathscr{J})\) consisting of an \(M\)-\(M\)-correspondence and an anti-unitary involution \(\mathscr{J}\colon\mathscr{H}\to\mathscr{H}\) a _self-dual \(M\)-\(M\)-correspondence_ if
\[\mathscr{J}(x\xi y)=y^{*}(\mathscr{J}\xi)x^{*}\]
for all \(x,y\in M\) and \(\xi\in\mathscr{H}\).
We call a triple \((\mathscr{H},\mathscr{J},\delta)\) consisting of a self-dual \(M\)-\(M\)-correspondence \((\mathscr{H},\mathscr{J})\) and a closed operator \(\delta\colon\operatorname{dom}(\delta)\subset L^{2}(M)\to\mathscr{H}\) a _first-order differential calculus_ if
(a) \(J\operatorname{dom}(\delta)=\operatorname{dom}(\delta)\) and \(\delta(Ja)=\mathscr{J}\delta(a)\) for all \(a\in\operatorname{dom}(\delta)\),
(b) Whenever \(a\in\operatorname{dom}(\delta)\cap M\varphi^{1/2}\), \(b\in\operatorname{dom}(\delta)\cap\varphi^{1/2}M\), then \(ab\in\operatorname{dom}(\delta)\) and \(\delta(ab)=\pi_{l}(a)\delta(b)+\delta(a)J\pi_{r}(b)^{*}J\),
(c) \(\overline{\operatorname{lin}}\{\delta(a)x\mid a\in\operatorname{dom}(\delta),\ x\in M\}=\mathscr{H}\).
With this definition, Theorem 4.2 says that for every bounded Markov generator \(\mathscr{L}_{2}\) on \(L^{2}(M)\) there exists a first-order differential calculus \((\mathscr{H},\mathscr{J},\delta)\) such that \(\mathscr{L}_{2}=\delta^{*}\delta\). In this subsection we will show that \((\mathscr{H},\mathscr{J},\delta)\) is uniquely determined by \(\mathscr{L}_{2}\).
To lighten the notation, we write \(a\xi\) for \(\pi_{l}(a)\xi\) if \(a\in M\varphi^{1/2}\) and \(\xi b\) for \(\xi\cdot J\pi_{r}(b)^{*}J\) if \(b\in\varphi^{1/2}M\) for the remainder of this subsection.
The first step towards uniqueness is a purely algebraic consequence of the properties (a) and (b) of a first-order differential calculus.
**Lemma 4.5**.: _If \((\mathcal{H},\,\mathcal{J},\delta)\) is a first-order differential calculus and \(\delta\) is bounded, then_
\[\langle\delta(\Delta^{1/4}a)\Delta^{1/4}b,\delta(\Delta^{-1/4}c) \rangle_{\mathcal{H}}+\langle\delta(\Delta^{-1/4}a)\Delta^{-1/4}b,\delta( \Delta^{1/4}c)\rangle_{\mathcal{H}}\] \[\qquad= \langle\delta(\Delta^{-1/4}(ab)),\,\delta(\Delta^{1/4}c) \rangle_{\mathcal{H}}+\langle\delta(\Delta^{1/4}a),\delta(\Delta^{-1/4}cJ( \Delta^{-1/4}b))\rangle_{\mathcal{H}}\] \[\qquad-\langle\delta(J(\Delta^{1/4}c)\Delta^{1/4}a),\,\delta(J( \Delta^{-1/4}b))\rangle_{\mathcal{H}}\]
_for all \(a,b,c\in M^{a}_{\varphi}\varphi^{1/2}\), where \(M^{a}_{\varphi}\) denotes the set of all entire analytic elements for \(\sigma^{\varphi}\)._
Proof.: As \(a,b,c\in M^{a}_{\varphi}\varphi^{1/2}\), these elements lie in \(\bigcap_{n\in\mathbb{Z}}\operatorname{dom}(\Delta^{n})\), and arbitrary powers of the modular operator map them to left- and right-bounded vectors. In particular, all expressions in the claimed equation are well-defined.
Using properties (a) and (b) of a first-order differential calculus, we can do the following computations. Let \(x,y,z\in M^{a}_{\varphi}\varphi^{1/2}\). Then we have
\[\langle\delta(x)y,\delta(z)\rangle_{\mathcal{H}} =\langle\delta(x)y,\,\delta(z)\rangle_{\mathcal{H}}+\langle x\delta(y),\,\delta(z)\rangle_{\mathcal{H}}-\langle\mathcal{J}(\delta(z)),\,\mathcal{J}(x\delta(y))\rangle_{\mathcal{H}}\] \[=\langle\delta(xy),\,\delta(z)\rangle_{\mathcal{H}}-\langle\delta(Jz),\,\delta(Jy)Jx\rangle_{\mathcal{H}}\] \[=\langle\delta(xy),\,\delta(z)\rangle_{\mathcal{H}}-\langle\delta(Jz)\Delta^{1/2}x,\delta(Jy)\rangle_{\mathcal{H}}\]
and
\[\langle\delta(x)y,\delta(z)\rangle_{\mathcal{H}} =\langle\delta(x),\,\delta(z)J\Delta^{-1/2}y\rangle_{\mathcal{H}}+\langle\delta(x),z\delta(J\Delta^{-1/2}y)\rangle_{\mathcal{H}}-\langle\delta(x),z\delta(J\Delta^{-1/2}y)\rangle_{\mathcal{H}}\] \[=\langle\delta(x),\,\delta(z(J\Delta^{-1/2}y))\rangle_{\mathcal{H}}-\langle\delta(x),z\delta(J\Delta^{-1/2}y)\rangle_{\mathcal{H}}\] \[=\langle\delta(x),\,\delta(z(J\Delta^{-1/2}y))\rangle_{\mathcal{H}}-\langle(J\Delta^{1/2}z)\delta(x),\,\delta(J\Delta^{-1/2}y)\rangle_{\mathcal{H}}.\]
If we take \(x=\Delta^{-1/4}a\), \(y=\Delta^{-1/4}b\), \(z=\Delta^{1/4}c\) in the first identity and \(x=\Delta^{1/4}a\), \(y=\Delta^{1/4}b\), \(z=\Delta^{-1/4}c\) in the second identity and add them up, we obtain
\[\langle\delta(\Delta^{-1/4}a)\Delta^{-1/4}b,\,\delta(\Delta^{1/4 }c)\rangle_{\mathcal{H}}+\langle\delta(\Delta^{1/4}a)\Delta^{1/4}b,\,\delta( \Delta^{-1/4}c)\rangle_{\mathcal{H}}\] \[=\langle\delta(\Delta^{-1/4}(ab)),\,\delta(\Delta^{1/4}c)\rangle _{\mathcal{H}}-\langle\delta(J\Delta^{1/4}c)\Delta^{1/4}a,\delta(J\Delta^{-1/4 }b)\rangle_{\mathcal{H}}\] \[\qquad+\langle\delta(\Delta^{1/4}a),\,\delta((\Delta^{-1/4}c)J \Delta^{-1/4}b)\rangle_{\mathcal{H}}-\langle(J\Delta^{1/4}c)\delta(\Delta^{1/ 4}a),\,\delta(J\Delta^{-1/4}b)\rangle_{\mathcal{H}}\] \[=\langle\delta(\Delta^{-1/4}(ab)),\,\delta(\Delta^{1/4}c)\rangle _{\mathcal{H}}+\langle\delta(\Delta^{1/4}a),\,\delta((\Delta^{-1/4}c)J\Delta^{- 1/4}b)\rangle_{\mathcal{H}}\] \[\qquad-\langle\delta((J\Delta^{1/4}c)\Delta^{1/4}a),\,\delta(J \Delta^{-1/4}b)\rangle_{\mathcal{H}},\]
where we used again property (b) of a first-order differential calculus in the last step.
The significance of this result is that the right side depends only on the inner product of elements from the range of \(\delta\) and not the bimodule generated by the range. If the modular operator \(\Delta\) is trivial, that is, \(\varphi\) is a trace, one can conclude uniqueness of the derivation directly from this lemma. In the general case, substantially more work is needed. In particular, there are analytical difficulties that are absent in the case of tracially symmetric (or more generally GNS-symmetric) quantum Markov semigroups.
One tool we use are spectral subspaces of the analytic generator of the modular group. We recall the definition and some of their properties here. See [11] for more details.
For \(0<\lambda_{1}<\lambda_{2}\) let
\[M[\lambda_{1},\lambda_{2}]=\left\{x\in\bigcap_{t\in\mathbb{R}}\mathrm{dom}( \sigma_{it}^{\varphi}):\overline{\lim}_{t\to\infty}\|\sigma_{it}^{\varphi}(x)\| ^{1/t}\leq\frac{1}{\lambda_{1}},\ \overline{\lim}_{t\to\infty}\|\sigma_{-it}^{\varphi}(x)\|^{1/t}\leq \lambda_{2}\right\}.\]
This is a norm closed subspace of \(M\), invariant under \(\sigma^{\varphi}\), and the spectrum of the restriction of \(\sigma_{-i}^{\varphi}\) to \(M[\lambda_{1},\lambda_{2}]\) is contained in \([\lambda_{1},\lambda_{2}]\) (see [10, (iii)-(v), p. 351]). Moreover, the union \(\bigcup_{0<\lambda_{1}\leq\lambda_{2}<\infty}M[\lambda_{1},\lambda_{2}]\) is weak\({}^{*}\) dense in \(M\)[10, (vi), p. 356]. Additionally, \(\sigma_{t}^{\varphi}\) is given by \(e^{itH}\) for some \(H\in B(M[\lambda_{1},\lambda_{2}])\) with \(\mathrm{sp}(H)\subset[-\ln(\lambda_{1}),\ln(\lambda_{2})]\)[10, Theorem 5.2, p. 349].
**Lemma 4.6**.: _Let \(0<\lambda_{1}<\lambda_{2}\). Define \(X\) to be the completion of \((\mathds{1}_{[\lambda_{1},\lambda_{2}]}(\Delta)L^{2}(M))\odot M[\lambda_{1}, \lambda_{2}]\) with the projective cross norm. Then the bounded operator \(T:X\to X\) defined on pure tensors by_
\[T(\eta\otimes x)=\Delta^{1/4}(\eta)\otimes\sigma_{-i/4}^{\varphi}(x)\]
_is well defined and \(\mathrm{sp}(T)\subset(0,\infty)\)._
Proof.: The spectrum of \(\Delta^{1/4}\) restricted to \(\mathds{1}_{[\lambda_{1},\lambda_{2}]}(\Delta)L^{2}(M)\) is contained in \([\lambda_{1}^{1/4},\lambda_{2}^{1/4}]\) by definition. Since the restriction of \(\sigma_{-i/4}^{\varphi}\) to \(M[\lambda_{1},\lambda_{2}]\) is \(e^{1/4H}\) for some \(H\in B(M[\lambda_{1},\lambda_{2}])\) with \(\mathrm{sp}(H)\subset[-\ln(\lambda_{1}),\ln(\lambda_{2})]\), we know that the spectrum of the restriction of \(\sigma_{-i/4}^{\varphi}\) to \(M[\lambda_{1},\lambda_{2}]\) is also contained in \([\lambda_{1}^{1/4},\lambda_{2}^{1/4}]\). Then \(\Delta^{1/4}\otimes\mathrm{id}\) and \(\mathrm{id}\otimes\sigma_{-i/4}^{\varphi}\) are well defined and have spectra contained in \([\lambda_{1}^{1/4},\lambda_{2}^{1/4}]\), so \(\Delta^{1/4}\otimes\sigma_{-i/4}^{\varphi}\) has spectrum contained in \((0,\infty)\)[11, p. 96].
**Theorem 4.7**.: _Let \((\mathcal{H}_{1},\,\mathcal{J}_{1},\,\delta_{1})\) and \((\mathcal{H}_{2},\,\mathcal{J}_{2},\,\delta_{2})\) be first order differential calculi for \(M\) such that \(\delta_{1}\) and \(\delta_{2}\) are bounded and \(\delta_{1}^{*}\delta_{1}=\delta_{2}^{*}\delta_{2}\). Then there exists a unitary bimodule map \(\Theta:\mathcal{H}_{1}\to\mathcal{H}_{2}\) intertwining \(\mathcal{J}_{1}\) and \(\mathcal{J}_{2}\) such that \(\Theta(\delta_{1}(a))=\delta_{2}(a)\) for all \(a\in L^{2}(M)\)._
Proof.: The unitary bimodule map \(\Theta\) will be given by
\[\Theta(\delta_{1}(a)b)=\delta_{2}(a)b\]
on elements of the form \(\delta_{1}(a)b\) with \(a,b\in L^{2}(M)\). The difficult part of the proof is to show that this map is isometric; the other properties will follow naturally.
Let \(0<\lambda_{1}<\lambda_{2}\) be arbitrary. Let \(X\) and \(T\) be as in Lemma 4.6. Note that \(T\) is invertible since \(0\not\in\mathrm{sp}(T)\). On \((\mathds{1}_{[\lambda_{1},\lambda_{2}]}(\Delta)L^{2}(M))\odot M[\lambda_{1}, \lambda_{2}]\subset X\) we can define the maps \(q_{1}\) and \(q_{2}\) to \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\), respectively, by
\[q_{1}(\eta\otimes x)=\delta_{1}(\eta)x\ \text{and}\ q_{2}(\eta\otimes x)=\delta_{2}( \eta)x.\]
Because \(\delta_{1}\) and \(\delta_{2}\) are bounded and right multiplication is bounded in the operator norm, we can boundedly extend \(q_{1}\) and \(q_{2}\) to \(X\).
Using Lemma 4.5 we can now show that for all \(x,y\in X\) we have
\[\langle q_{1}(T(x)),q_{1}(T^{-1}(y))\rangle_{\mathcal{H}_{1}}+ \langle q_{1}(T^{-1}(x)),q_{1}(T(y))\rangle_{\mathcal{H}_{1}}\\ =\langle q_{2}(T(x)),q_{2}(T^{-1}(y))\rangle_{\mathcal{H}_{2}}+ \langle q_{2}(T^{-1}(x)),q_{2}(T(y))\rangle_{\mathcal{H}_{2}}. \tag{4}\]
Indeed, for all \(j\in\{1,2\}\), \(\eta\), \(\xi\in\mathds{1}_{[\lambda_{1},\lambda_{2}]}(\Delta)L^{2}(M)\) and \(a,b\in M[\lambda_{1},\lambda_{2}]\) we have
\[\langle q_{j}(T(\eta\otimes a)),q_{j}(T^{-1}(\xi\otimes b))\rangle_{\mathcal{H}_{j}}+\langle q_{j}(T^{-1}(\eta\otimes a)),q_{j}(T(\xi\otimes b))\rangle_{\mathcal{H}_{j}}\] \[= \langle\delta_{j}(\Delta^{1/4}\eta)\sigma_{-i/4}^{\varphi}(a),\,\delta_{j}(\Delta^{-1/4}\xi)\sigma_{i/4}^{\varphi}(b)\rangle_{\mathcal{H}_{j}}+\langle\delta_{j}(\Delta^{-1/4}\eta)\sigma_{i/4}^{\varphi}(a),\,\delta_{j}(\Delta^{1/4}\xi)\sigma_{-i/4}^{\varphi}(b)\rangle_{\mathcal{H}_{j}}\] \[= \langle\delta_{j}(\Delta^{1/4}\eta)\sigma_{-i/4}^{\varphi}(ab^{*}),\,\delta_{j}(\Delta^{-1/4}\xi)\rangle_{\mathcal{H}_{j}}+\langle\delta_{j}(\Delta^{-1/4}\eta)\sigma_{i/4}^{\varphi}(ab^{*}),\,\delta_{j}(\Delta^{1/4}\xi)\rangle_{\mathcal{H}_{j}}\] \[= \langle\delta_{j}(\Delta^{-1/4}(\eta\varphi^{1/2}ab^{*})),\,\delta_{j}(\Delta^{1/4}\xi)\rangle_{\mathcal{H}_{j}}+\langle\delta_{j}(\Delta^{1/4}\eta),\,\delta_{j}(\Delta^{-1/4}\xi J(\Delta^{-1/4}(\varphi^{1/2}ab^{*})))\rangle_{\mathcal{H}_{j}}\] \[-\langle\delta_{j}(J(\Delta^{1/4}\xi)\Delta^{1/4}\eta),\,\delta_{j}(J(\Delta^{-1/4}(\varphi^{1/2}ab^{*})))\rangle_{\mathcal{H}_{j}},\]
where the last step follows from Lemma 4.5 and the fact that \(va=v\cdot J\pi_{r}(\varphi^{1/2}a)^{*}J\) for \(v\in\mathcal{H}_{j}\) and \(a\in M\). Since we have for all \(\eta\), \(\xi\in L^{2}(M)\) that
\[\langle\delta_{1}(\eta),\,\delta_{1}(\xi)\rangle_{\mathcal{H}_{1}}=\langle \eta,\,\delta_{1}^{*}\delta_{1}(\xi)\rangle=\langle\eta,\,\delta_{2}^{*} \delta_{2}(\xi)\rangle=\langle\delta_{2}(\eta),\,\delta_{2}(\xi)\rangle_{ \mathcal{H}_{2}},\]
we can now conclude that
\[\langle q_{1}(T(\eta\otimes a)),q_{1}(T^{-1}(\xi\otimes b)) \rangle_{\mathcal{H}_{1}}+\langle q_{1}(T^{-1}(\eta\otimes a)),q_{1}(T(\xi \otimes b))\rangle_{\mathcal{H}_{1}}\\ =\langle q_{2}(T(\eta\otimes a)),q_{2}(T^{-1}(\xi\otimes b)) \rangle_{\mathcal{H}_{2}}+\langle q_{2}(T^{-1}(\eta\otimes a)),\,q_{2}(T(\xi \otimes b))\rangle_{\mathcal{H}_{2}}.\]
By linearity and density of \((\mathds{1}_{\{\lambda_{1},\lambda_{2}\}}(\Delta)L^{2}(M))\odot M[\lambda_{1},\lambda_{2}]\) in \(X\) we find that (4) holds.
The next part of the proof is to show that (4) implies that
\[\langle q_{1}(x),\,q_{1}(y)\rangle_{\mathcal{H}_{1}}=\langle q_{2}(x),\,q_{2} (y)\rangle_{\mathcal{H}_{2}}\]
for \(x,\,y\in X\). For this, we consider the operator \(Te^{-sT^{2}}\) for \(s>0\), defined by holomorphic functional calculus. We start with the observation that
\[\langle q_{j}(T(Te^{-sT^{2}}(x))),q_{j}(T^{-1}(Te^{-sT^{2}}(y))) \rangle_{\mathcal{H}_{j}}+\langle q_{j}(T^{-1}(Te^{-sT^{2}}(x))),q_{j}(T(Te^{-sT ^{2}}(y)))\rangle_{\mathcal{H}_{j}}\] \[=-\frac{d}{ds}\langle q_{j}(e^{-sT^{2}}(x)),q_{j}(e^{-sT^{2}}(y)) \rangle_{\mathcal{H}_{j}} \tag{5}\]
for \(j\in\{1,2\}\) and \(x,y\in X\). Since \(T\) is bounded and \(\operatorname{sp}(T)\subset(0,\infty)\), we know that \(\lim_{s\to\infty}\|e^{-sT^{2}}\|=0\)[10, Theorem 6.24]. Consequently, we have for \(j\in\{1,2\}\) and \(x,y\in X\) that
\[-\lim_{r\to\infty}\int_{0}^{r}\frac{d}{ds}\langle q_{j}(e^{-sT^{2}}(x)),q_{j}(e ^{-sT^{2}}(y))\rangle_{\mathcal{H}_{j}}\,ds=\langle q_{j}(x),\,q_{j}(y) \rangle_{\mathcal{H}_{j}}.\]
Since the integrand of the above integral is equal for \(j=1\) and \(j=2\) by (4) and (5), we deduce that
\[\langle q_{1}(x),\,q_{1}(y)\rangle_{\mathcal{H}_{1}}=\langle q_{2}(x),\,q_{2} (y)\rangle_{\mathcal{H}_{2}}\]
for all \(x,\,y\in X\). Therefore, we have for all \(\eta\), \(\xi\in\mathds{1}_{[\lambda_{1},\lambda_{2}]}(\Delta)L^{2}(M)\) and \(a,b\in M[\lambda_{1},\lambda_{2}]\) that
\[\langle\delta_{1}(\eta)a,\,\delta_{1}(\xi)b\rangle_{\mathcal{H}_{1}}=\langle \delta_{2}(\eta)a,\,\delta_{2}(\xi)b\rangle_{\mathcal{H}_{2}}.\]
So far we have shown that \(\Theta\) preserves the inner product on certain subsets of \(\mathcal{H}_{1}\), and the goal is to extend this to all of \(\mathcal{H}_{1}\). This takes a few steps. First, note that \(\Theta\) preserves the inner product on all of
\[\operatorname{lin}\{\delta_{1}(\eta)a\mid\lambda_{1},\lambda_{2}>0,\ \eta\in\mathds{1}_{[\lambda_{1},\lambda_{2}]}(\Delta)L^{2}(M),\ a\in M[\lambda_{1},\lambda_{2}]\},\]
since for each \(\eta_{1},\eta_{2}\in\bigcup_{\lambda_{1},\lambda_{2}>0}\mathds{1}_{[\lambda_{1},\lambda_{2}]}(\Delta)L^{2}(M)\) and \(a_{1},a_{2}\in\bigcup_{\lambda_{1},\lambda_{2}>0}M[\lambda_{1},\lambda_{2}]\) we can find \(\lambda_{1}^{\prime},\lambda_{2}^{\prime}>0\) such that \(\eta_{1},\eta_{2}\in\mathds{1}_{[\lambda_{1}^{\prime},\lambda_{2}^{\prime}]}(\Delta)L^{2}(M)\) and \(a_{1},a_{2}\in M[\lambda_{1}^{\prime},\lambda_{2}^{\prime}]\). Next, since \(\bigcup_{\lambda_{1},\lambda_{2}>0}\mathds{1}_{[\lambda_{1},\lambda_{2}]}(\Delta)L^{2}(M)\) is dense in \(L^{2}(M)\) and \(\delta_{1}\) is bounded, we can extend this
to \(\operatorname{lin}\{\delta_{1}(\eta)a\mid\eta\in L^{2}(M),\,a\in\bigcup_{\lambda_{1},\lambda_{2}>0}M[\lambda_{1},\lambda_{2}]\}\) and subsequently to \(\operatorname{lin}\{\delta_{1}(\eta)a\mid\eta\in L^{2}(M),\,a\in M\}\) because \(\bigcup_{\lambda_{1},\lambda_{2}>0}M[\lambda_{1},\lambda_{2}]\) is weak* dense in \(M\). Lastly, by property (c) of a first-order differential calculus we conclude that \(\Theta\) is isometric on all of \(\mathcal{H}_{1}\).
We will finish the proof by discussing the other desired properties of \(\Theta\). By property (c) of a first-order differential calculus, \(\operatorname{lin}\{\delta_{2}(\eta)a|\eta\in L^{2}(M),\,a\in M\}\) is dense in \(\mathcal{H}_{2}\). Because the image of an isometric map is closed, we know that \(\Theta\) is surjective and therefore that it is a linear isometric isomorphism. By property (b) of a first-order differential calculus it is a unitary bimodule map, and it is clear that it intertwines \(\mathcal{J}_{1}\) and \(\mathcal{J}_{2}\).
## 5. Derivations for Quantum Markov Semigroups with Unbounded Generators
In this section we study derivations for quantum Markov semigroups that are not necessarily uniformly continuous. In this case the generator can be unbounded, and it is convenient to work with the associated quadratic forms on \(L^{2}(M)\), which we call quantum Dirichlet forms. We show that the bounded vectors in the form domain form an algebra (Theorem 5.2), which gives a suitable domain for a derivation. We then show that there exists a (possibly unbounded) first-order differential calculus associated with our given quantum Dirichlet form (Theorem 5.4).
We keep the notation from the previous section. In particular, \(M\) is a von Neumann algebra and \(\varphi\) is a normal faithful state on \(M\).
Let us recall some basic definitions concerning quadratic forms on Hilbert spaces. A quadratic form on a Hilbert space \(H\) is a map \(q\colon H\to[0,\infty]\) such that
* \(q(\lambda\xi)=|\lambda|^{2}q(\xi)\)
* \(q(\xi+\eta)+q(\xi-\eta)=2q(\xi)+2q(\eta)\)
for all \(\xi,\eta\in H\) and \(\lambda\in\mathbb{C}\). The form \(q\) is called closed if it is lower semicontinuous and densely defined if \(\operatorname{dom}(q)=\{\xi\in H\mid q(\xi)<\infty\}\) is dense.
If \(q\) is a quadratic form, it gives rise to a bilinear form on \(\operatorname{dom}(q)\times\operatorname{dom}(q)\) by the polarization identity. Vice versa, the diagonal of a bilinear form extended by \(\infty\) to the complement of its domain is a quadratic form. We will use both viewpoints interchangeably and denote both objects by the same symbol.
The generator of a closed densely defined quadratic form \(q\) is the positive self-adjoint operator \(L\) given by
\[\operatorname{dom}(L) =\{\xi\in\operatorname{dom}(q)\mid\exists\eta\in H\,\forall\zeta \in\operatorname{dom}(q)\colon q(\xi,\zeta)=\langle\eta,\zeta\rangle\},\] \[L\xi =\eta.\]
Conversely, if \(L\) is a positive self-adjoint operator, then
\[q\colon H\to[0,\infty],\,q(\xi)=\begin{cases}\|L^{1/2}\xi\|^{2}&\text{if }\xi\in \operatorname{dom}(L^{1/2}),\\ \infty&\text{otherwise}\end{cases}\]
is a closed densely defined quadratic form with generator \(L\).
To describe the quadratic forms associated with symmetric Markov semigroups, we need the following piece of notation. For \(a\in L^{2}(M)\) let \(a\wedge\varphi^{1/2}\) be the projection of \(a\) onto the closed convex cone \(\varphi^{1/2}-L^{2}_{+}(M)\).
**Definition 5.1**.: A closed densely defined quadratic form \(\mathcal{E}\colon L^{2}(M)\to[0,\infty]\) is called _conservative Dirichlet form_ if
* \(\mathcal{E}(Ja)=\mathcal{E}(a)\) for all \(a\in L^{2}(M)\),
* \(\mathcal{E}(a\wedge\varphi^{1/2})\leq\mathcal{E}(a)\) for all \(a\in L^{2}(M)\),
* \(\mathcal{E}(\varphi^{1/2})=0\).
The form \(\mathcal{E}\) is called conservative completely Dirichlet form or _quantum Dirichlet form_ if for every \(n\in\mathbb{N}\) the quadratic form
\[\mathcal{E}^{(n)}\colon L^{2}(M_{n}(M))\to[0,\infty],\ \mathcal{E}^{(n)}([a_{jk}])= \sum_{j,k=1}^{n}\mathcal{E}(a_{jk})\]
is a conservative Dirichlet form.
There is a one-to-one correspondence between quantum Dirichlet forms on \(L^{2}(M)\) and KMS-symmetric quantum Markov semigroups on \(M\) (see [21, Theorem 4.11], [19, Theorem 5.7]): If \((\Phi_{t})\) is a KMS-symmetric quantum Markov semigroup on \(M\) and \(\mathcal{L}_{2}\) the KMS implementation of its generator on \(L^{2}(M)\), then the quadratic form associated with \(\mathcal{L}_{2}\) is a quantum Dirichlet form. Vice versa, every quantum Dirichlet form arises this way.
**Theorem 5.2**.: _Let \(M\) be a von Neumann algebra, \(\varphi\) a normal faithful state on \(M\) and \(\mathcal{E}\) a quantum Dirichlet form on \(L^{2}(M)\). If \(a\in\operatorname{dom}(\mathcal{E})\cap M\varphi^{1/2}\), \(b\in\operatorname{dom}(\mathcal{E})\cap\varphi^{1/2}M\), then \(ab\in\operatorname{dom}(\mathcal{E})\) and_
\[\mathcal{E}(ab)^{1/2}\leq\|\pi_{l}(a)\|\mathcal{E}(b)^{1/2}+\mathcal{E}(a)^{1/2}\|\pi_{r}(b)\|.\]
_In particular, \(\operatorname{dom}(\mathcal{E})\cap M\varphi^{1/2}\cap\varphi^{1/2}M\) with the involution \(J\) is a \(*\)-algebra._
Proof.: Let \((T_{t})\) be the strongly continuous semigroup associated with \(\mathcal{E}\) and let \(\mathcal{E}_{t}(a)=\frac{1}{t}\langle a,a-T_{t}a\rangle\) for \(a\in L^{2}(M)\). By the spectral theorem, \(\mathcal{E}_{t}(a)\nearrow\mathcal{E}(a)\) as \(t\searrow 0\).
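In more detail, if \(L\) denotes the generator of \((T_{t})\) and \(E^{L}\) its spectral measure, then
\[\mathcal{E}_{t}(a)=\frac{1}{t}\langle a,(1-e^{-tL})a\rangle=\int_{[0,\infty)}\frac{1-e^{-t\lambda}}{t}\,d\langle a,E^{L}_{\lambda}a\rangle,\]
and for every \(\lambda\geq 0\) the integrand increases to \(\lambda\) as \(t\searrow 0\), so monotone convergence yields \(\mathcal{E}_{t}(a)\nearrow\|L^{1/2}a\|^{2}=\mathcal{E}(a)\).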
Let \((\Phi_{t})\) be the KMS-symmetric quantum Markov semigroup on \(M\) associated with \((T_{t})\). Since \(\Phi_{t}\) is completely positive, \(\frac{1}{t}(I-\Phi_{t})\) is conditionally completely negative. Thus \(\frac{1}{t}(I-\Phi_{t})\) generates a quantum Markov semigroup on \(M\), which is clearly KMS-symmetric, and the associated symmetric Markov semigroup on \(L^{2}(M)\) has generator \(\frac{1}{t}(I-T_{t})\).
By Theorem 4.2 there exists a bounded first-order differential calculus \((\mathcal{H}_{t},\ \mathcal{J}_{t},\delta_{t})\) such that \(\mathcal{E}_{t}(a)=\|\delta_{t}(a)\|_{\mathcal{H}_{t}}^{2}\) for \(a\in L^{2}(M)\). Thus, if \(a\in\operatorname{dom}(\mathcal{E})\cap M\varphi^{1/2}\) and \(b\in\operatorname{dom}(\mathcal{E})\cap\varphi^{1/2}M\), then
\[\mathcal{E}_{t}(ab)^{1/2} =\|\delta_{t}(ab)\|_{\mathcal{H}_{t}}\] \[=\|\pi_{l}(a)\delta_{t}(b)+\delta_{t}(a)\cdot J\pi_{r}(b)^{*}J\|_{\mathcal{H}_{t}}\] \[\leq\|\pi_{l}(a)\|\|\delta_{t}(b)\|_{\mathcal{H}_{t}}+\|\delta_{t}(a)\|_{\mathcal{H}_{t}}\|\pi_{r}(b)\|\] \[\leq\|\pi_{l}(a)\|\mathcal{E}_{t}(b)^{1/2}+\mathcal{E}(a)^{1/2}\|\pi_{r}(b)\|.\]
The claim follows by taking the limit \(t\searrow 0\) of both sides.
_Remark 5.3_.: In contrast to the case of GNS-symmetric quantum Markov semigroups [22, Theorem 6.3] we could not show that \(\operatorname{dom}(\mathcal{E})\cap M\varphi^{1/2}\cap\varphi^{1/2}M\) is a form core for \(\mathcal{E}\). The space \(\operatorname{dom}(\mathcal{E})\cap\varphi^{1/4}M\varphi^{1/4}\) is always a form core, but we do not expect it to be an algebra in general.
**Theorem 5.4**.: _Let \(M\) be a von Neumann algebra and \(\varphi\) a faithful normal state on \(M\). If \(\mathcal{E}\) is a quantum Dirichlet form on \(L^{2}(M)\), then there exists a Hilbert space \(\mathcal{H}\) with commuting left
_and right actions of \(M\), an anti-unitary involution \(\mathcal{J}\colon\mathcal{H}\to\mathcal{H}\) such that_
\[\mathcal{J}(x\xi y)=y^{*}(\mathcal{J}\xi)x^{*}\]
_for \(x,y\in M\) and \(\xi\in\mathcal{H}\), a closed operator \(\delta\colon\ \operatorname{dom}(\mathcal{E})\to\mathcal{H}\) such that \(\mathcal{J}\delta=\delta J\) and_
\[\delta(ab)=\pi_{l}(a)\cdot\delta(b)+\delta(a)\cdot J\pi_{r}(b)^{*}J\]
_for \(a\in\operatorname{dom}(\mathcal{E})\cap M\varphi^{1/2}\), \(b\in\operatorname{dom}(\mathcal{E})\cap\varphi^{1/2}M\), and_
\[\mathcal{E}(a,b)=\langle\delta(a),\delta(b)\rangle_{\mathcal{H}}\]
_for \(a,b\in\operatorname{dom}(\mathcal{E})\)._
_Remark 5.5_.: By commuting left and right actions of \(M\) on \(\mathcal{H}\) we mean unital \(\star\)-homomorphisms \(\pi_{l}^{\mathcal{H}}\colon M\to B(\mathcal{H})\), \(\pi_{r}^{\mathcal{H}}\colon M^{\operatorname{op}}\to B(\mathcal{H})\) with commuting images. As usual, we write \(x\xi y\) for \(\pi_{l}^{\mathcal{H}}(x)\pi_{r}^{\mathcal{H}}(y^{\operatorname{op}})\xi\).
We do not claim that the actions of \(M\) on \(\mathcal{H}\) are normal so that \(\mathcal{H}\) is in general not a correspondence. This is not only an artefact of our proof or a feature of KMS symmetry specifically, but happens even for symmetric Markov semigroups on commutative von Neumann algebras. For GNS-symmetric quantum Markov semigroups, the normality of the action is linked to the existence of the carré du champ (see [23, Section 7]).
Proof.: Let \((T_{t})\) be the strongly continuous semigroup associated with \(\mathcal{E}\). As in the proof of Theorem 5.2, we consider the quantum Dirichlet form \(\mathcal{E}_{t}\) given by \(\mathcal{E}_{t}(a)=\frac{1}{t}\langle a,a-T_{t}a\rangle\) and the associated first-order differential calculus \((\mathcal{H}_{t},\mathcal{J}_{t},\delta_{t})\). We will use an ultraproduct construction to define \((\mathcal{H},\mathcal{J},\delta)\) for \(\mathcal{E}\).
Choose \(\omega\in\beta\mathbb{N}\setminus\mathbb{N}\) and a null sequence \((t_{n})\) in \((0,\infty)\) and let \(\mathcal{H}\) be the ultraproduct \(\prod_{n\to\omega}\mathcal{H}_{t_{n}}\). We write \([\xi_{n}]\) for the equivalence class of \((\xi_{n})\) in \(\mathcal{H}\).
We can define commuting left and right actions of \(M\) on \(\mathcal{H}\) by
\[x[\xi_{n}]y=[x\xi_{n}y].\]
As noted in the previous remark, these actions are not necessarily normal.
Moreover, let
\[\mathcal{J}\colon\mathcal{H}\to\mathcal{H},\ \mathcal{J}[\xi_{n}]=[\mathcal{J}_{t_ {n}}\xi_{n}].\]
It is easy to verify that \(\mathcal{J}\) is an anti-unitary involution such that \(\mathcal{J}(x\xi y)=y^{*}(\mathcal{J}\xi)x^{*}\) for \(x\), \(y\in M\) and \(\xi\in\mathcal{H}\).
Finally, if \(a\in\operatorname{dom}(\mathcal{E})\), let \(\delta(a)=[\delta_{t_{n}}(a)]\). Since
\[\|\delta_{t_{n}}(a)\|_{\mathcal{H}_{t_{n}}}^{2}=\mathcal{E}_{t_{n}}(a)\leq \mathcal{E}(a),\]
the map \(\delta\colon\ \operatorname{dom}(\mathcal{E})\to\mathcal{H}\) is well-defined. Furthermore,
\[\langle\delta(a),\delta(b)\rangle_{\mathcal{H}}=\lim_{n\to\omega}\langle \delta_{t_{n}}(a),\delta_{t_{n}}(b)\rangle_{\mathcal{H}_{t_{n}}}=\lim_{n\to \omega}\mathcal{E}_{t_{n}}(a,b)=\mathcal{E}(a,b)\]
The operator \(\delta\) is closed because \(\mathcal{E}\) is closed. The other properties of \(\delta\) can be checked componentwise.
_Remark 5.6_.: At this point we do not know if the triple \((\mathcal{H},\mathcal{J},\delta)\) is uniquely determined. Given the known results for tracially symmetric or more generally GNS-symmetric quantum Markov semigroups, it seems reasonable to suspect that the left and right action are not uniquely determined on \(M\), but only on some \(*\)-subalgebra.
In the case of tracially symmetric or more generally GNS-symmetric quantum Markov semigroups it can be useful to realize the bimodule \(\mathcal{H}\) inside \(L^{2}(\hat{M})\) for some von Neumann algebra \(\hat{M}\) containing \(M\) with (faithful normal) expectation (see [11, 12]). We will show that in certain cases such a von Neumann algebra \(\hat{M}\) can also be found for KMS-symmetric semigroups, although the construction seems less canonical.
Clearly, a necessary condition for the existence is that \(M\) acts normally on \(\mathcal{H}\). Since we do not have a criterion for this to happen in terms of the semigroup, we formulate the result purely in terms of correspondences.
**Proposition 5.7**.: _Let \(M\) be a von Neumann algebra, \(\varphi\) a faithful normal state on \(M\) and \((\mathcal{H},\mathcal{J})\) a self-dual \(M\)-\(M\)-correspondence. If \(M\) is semi-finite or \(M\) has finite-dimensional center and \(\mathcal{H}\) has finite index, then there exists a von Neumann algebra \(\hat{M}\) containing \(M\), a faithful normal conditional expectation \(E\colon\hat{M}\to M\), and an isometric bimodule map \(V\colon\mathcal{H}\to L^{2}(\hat{M})\) such that \(V\mathcal{J}=J_{\varphi\circ E}V\),_
Proof.: We will show that in either case there is a strongly continuous unitary group \((\mathcal{U}_{t})\) on \(\mathcal{H}\) such that \(\mathcal{U}_{t}^{\mathcal{H}}(x\xi y)=\sigma_{t}^{\varphi}(x)(\mathcal{U}_{t} ^{\mathcal{H}}\xi)\sigma_{t}^{\varphi}(y)\) for \(x,y\in M\), \(\xi\in\mathcal{H}\), \(t\in\mathbb{R}\) and \(\mathcal{J}\mathcal{U}_{t}^{\mathcal{H}}=\mathcal{U}_{t}^{\mathcal{H}} \mathcal{J}\) for \(t\in\mathbb{R}\). This makes \((\mathcal{H},\mathcal{J},(\mathcal{U}_{t}^{\mathcal{H}}))\) into a Tomita correspondence in the terminology of [12], and the result follows from [12, Section 4.2].
First assume that \(M\) is semi-finite and let \(\tau\) be a normal semi-finite faithful trace on \(M\). There exists \(\rho\in L^{1}_{+}(M,\tau)\) such that \(\varphi=\tau(\cdot\,\rho)\). Let \(\mathcal{U}_{t}^{\mathcal{H}}\xi=\rho^{it}\xi\rho^{-it}\). Since the actions of \(M\) on \(\mathcal{H}\) are normal, \((\mathcal{U}_{t}^{\mathcal{H}})\) is a strongly continuous unitary group. Moreover,
\[\mathcal{U}_{t}^{\mathcal{H}}(x\xi y)=\rho^{it}x\xi y\rho^{-it}=\sigma_{t}^{ \varphi}(x)\rho^{it}\xi\rho^{-it}\sigma_{t}^{\varphi}(y)=\sigma_{t}^{\varphi} (x)(\mathcal{U}_{t}^{\mathcal{H}}\xi)\sigma_{t}^{\varphi}(y)\]
and
\[\mathcal{U}_{t}^{\mathcal{H}}\mathcal{J}\xi=\rho^{it}(\mathcal{J}\xi)\rho^{- it}=\mathcal{J}(\rho^{it}\xi\rho^{-it})=\mathcal{J}\mathcal{U}_{t}^{\mathcal{H }}\xi\]
for \(x,y\in M\), \(\xi\in\mathcal{H}\) and \(t\in\mathbb{R}\). This settles the claim for semi-finite von Neumann algebras.
Now assume that \(M\) has finite-dimensional center and \(\mathcal{H}\) has finite index. In this case we use Longo's construction from [13], which we briefly recall.
Write \(\pi_{l}^{\mathcal{H}}\) and \(\pi_{r}^{\mathcal{H}}\) for the left and right action of \(M\) on \(\mathcal{H}\) and let \(\varepsilon\colon\pi_{r}^{\mathcal{H}}(M)^{\prime}\to\pi_{l}^{\mathcal{H}}(M)\) be the minimal conditional expectation. Define \(\Delta_{\mathcal{H}}\) to be the spatial derivative of \(\varphi\circ(\pi_{l}^{\mathcal{H}})^{-1}\circ\varepsilon\) with respect to \(\varphi\circ(\pi_{r}^{\mathcal{H}})^{-1}\), that is,
\[\Delta_{\mathcal{H}}=\frac{d(\varphi\circ(\pi_{l}^{\mathcal{H}})^{-1}\circ \varepsilon)}{d(\varphi\circ(\pi_{r}^{\mathcal{H}})^{-1})}.\]
Here \((\pi_{l}^{\mathcal{H}})^{-1}\) and \((\pi_{r}^{\mathcal{H}})^{-1}\) are to be understood as follows: There exist central projections \(p,q\in M\) such that \(\ker\pi_{l}^{\mathcal{H}}=(1-p)M\), \(\ker\pi_{r}^{\mathcal{H}}=(1-q)M\). Then \(\pi_{l}^{\mathcal{H}}\) restricted to \(pM\) (resp. \(\pi_{r}^{\mathcal{H}}\) restricted to \(qM\)) is injective, and we write \((\pi_{l}^{\mathcal{H}})^{-1}\) (resp. \((\pi_{r}^{\mathcal{H}})^{-1}\)) for the inverse of this restriction.
By definition of the spatial derivative, \(\Delta_{\mathcal{H}}\) is a non-singular positive self-adjoint operator on \(\mathcal{H}\).
Since \(M\) has finite-dimensional center, there are central projections \(e_{1},\ldots,e_{n}\in M\) such that \(M\cap M^{\prime}=\bigoplus_{j}\mathbb{C}e_{j}\). Let \(\mathcal{H}_{ij}=e_{i}\mathcal{H}e_{j}\). This is an \(e_{i}M\)-\(e_{j}M\)-correspondence
and \(\mathcal{H}=\bigoplus_{i,j}\mathcal{H}_{ij}\) as Hilbert spaces. Let \(d_{ij}=\sqrt{\operatorname{Ind}(\mathcal{H}_{ij})}\) and
\[D_{\mathcal{H}}=\sum_{i,j}d_{ij}\pi_{l}^{\mathcal{H}}(e_{i})\pi_{r}^{\mathcal{ H}}(e_{j}^{\operatorname{op}}).\]
Finally let \(\mathcal{U}_{t}^{\mathcal{H}}=\Delta_{\mathcal{H}}^{it}D_{\mathcal{H}}^{it}\). The unitary group \((\mathcal{U}_{t})\) satisfies
\[\mathcal{U}_{t}(x\xi y)=\sigma_{t}^{\varphi}(x)(\mathcal{U}_{t}\xi)\sigma_{t}^ {\varphi}(y)\]
for \(x,y\in M\), \(\xi\in\mathcal{H}\) and \(t\in\mathbb{R}\) by [18, Theorem 2.3].
Moreover, let
\[T\colon\mathcal{H}\to\overline{\mathcal{H}},\ \xi\mapsto\overline{\mathcal{J} \,\xi}.\]
By definition of \(\mathcal{J}\), the map \(T\) is a unitary bimodule map. It follows from [18, Theorem 2.3] that
\[\overline{\mathcal{J}\mathcal{U}_{t}^{\mathcal{H}}\xi}=T\mathcal{U}_{t}^{\mathcal{H}}\xi=\mathcal{U}_{t}^{\overline{\mathcal{H}}}T\xi=\mathcal{U}_{t}^{\overline{\mathcal{H}}}\overline{\mathcal{J}\,\xi}=\overline{\mathcal{U}_{t}^{\mathcal{H}}\mathcal{J}\,\xi}\]
for \(\xi\in\mathcal{H}\) and \(t\in\mathbb{R}\). Thus \(\mathcal{J}\) commutes with \((\mathcal{U}_{t}^{\mathcal{H}})\).
## Appendix A Alternative proof of Theorem 2.5
The proof of Theorem 2.5 shows that this result can be easily obtained from Theorem 2.4, and therefore that one can prove it in a basis-independent fashion. On the other hand, it gives very little intuition for the \(V_{j}\) mentioned in the theorem. To provide a bit more feeling for the \(V_{j}\), we include an alternative proof of the result.
Recall that by [1, Theorem 4.4] the generator \(\mathcal{L}\) of a KMS-symmetric quantum Markov semigroup on \(M_{n}(\mathbb{C})\) can be written as
\[\mathcal{L}(A)=(1+\sigma_{-i/2})^{-1}(\Psi(I_{n}))A+A(1+\sigma_{i/2})^{-1}(\Psi(I_{n}))-\Psi(A)\tag{$\ast$}\]
with a KMS-symmetric completely positive map \(\Psi\colon M_{n}(\mathbb{C})\to M_{n}(\mathbb{C})\).
Alternative proof of Theorem 2.5.: Let \(\Psi\) be a KMS-symmetric completely positive map such that (\(\ast\)) holds, i.e.
\[\mathcal{L}(A)=(1+\sigma_{-i/2})^{-1}(\Psi(I_{n}))A+A(1+\sigma_{i/2})^{-1}( \Psi(I_{n}))-\Psi(A).\]
By Lemma 2.3(ii), \(\tilde{\Psi}\) is also completely positive and KMS-symmetric. Next, we define the completely positive map \(\Xi\) by
\[\Xi(A)=\rho^{1/4}\tilde{\Psi}(\rho^{-1/4}A\rho^{-1/4})\rho^{1/4}\]
for all \(A\in M_{n}(\mathbb{C})\). Now for all \(A,B\in M_{n}(\mathbb{C})\) we have
\[\operatorname{tr}(A\Xi(B))=\operatorname{tr}(\rho^{-1/4}A\rho^{-1/4}\rho^{1/2 }\tilde{\Psi}(\rho^{-1/4}B\rho^{-1/4})\rho^{1/2})=\operatorname{tr}(\Xi(A)B)\]
by the KMS-symmetry of \(\tilde{\Psi}\).
Let \(V_{1},\ldots,V_{N}\) be the Kraus representation of \(\Xi\), meaning that
\[\Xi(A)=\sum_{j=1}^{N}V_{j}^{*}AV_{j}\]
for all \(A\in M_{n}(\mathbb{C})\). Since \(\operatorname{tr}(A\Xi(B))=\operatorname{tr}(\Xi(A)B)\), we can assume without loss of generality that for each \(j\) there exists an index \(j^{*}\) such that \(V_{j}^{*}=V_{j^{*}}\) and \((j^{*})^{*}=j\). Note that
\(\sigma_{-i/4}(V_{1}),\ldots,\sigma_{-i/4}(V_{N})\) is a Kraus representation of \(\tilde{\Psi}\). Calculating \(\sum_{j}\langle[V_{j},A],[V_{j},B]\rangle\) then gives
\[\sum_{j}\langle[V_{j},A],[V_{j},B]\rangle_{\rho} =\sum_{j}\operatorname{tr}(\rho^{1/2}(A^{*}V_{j}^{*}-V_{j}^{*}A^{*})\rho^{1/2}(V_{j}B-BV_{j}))\] \[=\sum_{j}\operatorname{tr}(\rho^{1/2}A^{*}\rho^{1/2}(\sigma_{i/2}(V_{j}^{*})V_{j}B+BV_{j}\sigma_{-i/2}(V_{j}^{*})\] \[\qquad-\sigma_{i/2}(V_{j}^{*})BV_{j}-V_{j}B\sigma_{-i/2}(V_{j}^{*})))\] \[=\sum_{j}\operatorname{tr}(\rho^{1/2}A^{*}\rho^{1/2}(\sigma_{i/2}(V_{j}^{*})V_{j}B+BV_{j}^{*}\sigma_{-i/2}(V_{j})\] \[\qquad-\sigma_{i/2}(V_{j}^{*})BV_{j}-V_{j}^{*}B\sigma_{-i/2}(V_{j}))).\]
We observe that
\[\sum_{j}\sigma_{i/2}(V_{j}^{*})BV_{j}+V_{j}^{*}B\sigma_{-i/2}(V_{j}) =\sum_{j}\sigma_{i/4}(\sigma_{-i/4}(V_{j})^{*})B\sigma_{i/4}(\sigma_{-i/4}(V_{j}))\] \[\quad+\sigma_{-i/4}(\sigma_{-i/4}(V_{j})^{*})B\sigma_{-i/4}(\sigma_{-i/4}(V_{j}))\] \[= \mathcal{W}(\tilde{\Psi})(B)\]
and consequently that
\[(1+\sigma_{i/2})(\sum_{j}V_{j}^{*}\sigma_{-i/2}(V_{j}))=\sum_{j}\sigma_{i/2}(V_{j}^{*})V_{j}+V_{j}^{*}\sigma_{-i/2}(V_{j})=\mathcal{W}(\tilde{\Psi})(I_{n}).\]
Therefore we have
\[\sum_{j}V_{j}^{*}\sigma_{-i/2}(V_{j}) =(1+\sigma_{i/2})^{-1}(\Psi(I_{n})),\] \[\sum_{j}\sigma_{i/2}(V_{j}^{*})V_{j} =(1+\sigma_{-i/2})^{-1}(\Psi(I_{n})).\]
Coming back to our original expression we find
\[\sum_{j}\langle[V_{j},A],[V_{j},B]\rangle_{\rho} =\operatorname{tr}(\rho^{1/2}A^{*}\rho^{1/2}((1+\sigma_{-i/2})^{ -1}(\Psi(I_{n}))B\] \[\quad+B(1+\sigma_{i/2})^{-1}(\Psi(I_{n}))-\Psi(B)))\] \[=\langle\mathscr{L}(A),B\rangle_{\rho}.\qed\]
|
2305.17208 | A Categorical Representation Language and Computational System for
Knowledge-Based Planning | Classical planning representation languages based on first-order logic have
preliminarily been used to model and solve robotic task planning problems.
Wider adoption of these representation languages, however, is hindered by the
limitations present when managing implicit world changes with concise action
models. To address this problem, we propose an alternative approach to
representing and managing updates to world states during planning. Based on the
category-theoretic concepts of $\mathsf{C}$-sets and double-pushout rewriting
(DPO), our proposed representation can effectively handle structured knowledge
about world states that support domain abstractions at all levels. It
formalizes the semantics of predicates according to a user-provided ontology
and preserves the semantics when transitioning between world states. This
method provides a formal semantics for using knowledge graphs and relational
databases to model world states and updates in planning. In this paper, we
conceptually compare our category-theoretic representation with the classical
planning representation. We show that our proposed representation has
advantages over the classical representation in terms of handling implicit
preconditions and effects, and provides a more structured framework in which to
model and solve planning problems. | Angeline Aguinaldo, Evan Patterson, James Fairbanks, William Regli, Jaime Ruiz | 2023-05-26T19:01:57Z | http://arxiv.org/abs/2305.17208v2 | # A Categorical Representation Language and Computational System for Knowledge-Based Planning
###### Abstract
Classical planning representation languages based on first-order logic have been extensively used to model and solve planning problems, but they struggle to capture implicit preconditions and effects that arise in complex planning scenarios. To address this problem, we propose an alternative approach to representing and transforming world states during planning. Based on the category-theoretic concepts of C-sets and double-pushout rewriting (DPO), our proposed representation can effectively handle structured knowledge about world states that support domain abstractions at all levels. It formalizes the semantics of predicates according to a user-provided ontology and preserves the semantics when transitioning between world states. This method provides a formal semantics for using knowledge graphs and relational databases to model world states and updates in planning. In this paper, we compare our category-theoretic representation with the classical planning representation. We show that our proposed representation has advantages over the classical representation in terms of handling implicit preconditions and effects, and provides a more structured framework in which to model and solve planning problems.
## 1 Introduction
In real-world planning problems, tracking all the implicit effects and relationships in the world state can be very challenging, especially when working in complex domains. Classical planning representations often fall short in this regard by making it difficult to model actions that account for implicit effects and relationships in the world state. As a result, planning systems often rely on heuristics and simplifying assumptions, which can lead to suboptimal or even incorrect solutions [11, 12]. To address these challenges, researchers have sought to combine techniques from the fields of knowledge representation and automated planning which has formed the subfield called _knowledge-based planning_[11]. The goal of this research is to develop more structured and efficient representations of the world state that can capture the complex relationships and dependencies that arise in real-world problems. For example, some approaches use structured knowledge representations, such as ontologies [1, 13, 14] and knowledge graphs [1, 15, 16, 17, 18, 19, 20, 21], to model the world state. It is natural to conclude that knowledge-based planning, as it stands, may be better suited to solve the problem of tracking implicit preconditions and effects. On the contrary, knowledge-based planning has not been widely adopted because it requires translating these rich and complex representations into propositional facts so that they can be interpreted using a classical planning representation. Because of this, they become computationally intensive search problems in the absence of search space reduction methods.
As a result, knowledge-based planning is met with the challenge of finding the right balance between abstraction and representation that is both comprehensive and computationally efficient. Domain abstractions should ideally simplify world state and action models without losing essential characteristics. Representations, on the other hand, influence these domain abstractions according to their expressivity and structure. In particular, they dictate how to express criteria that must be met for actions to be taken and how the state will be updated. An ideal knowledge-based planning representation should be able to preserve the semantics of the domain when updating states. Formal languages like propositional and first-order logic are often used to define these representations [11]; however, they lack the structure to express more complex relationships such as hierarchy and composition making them ill-equipped to solve the problems faced by knowledge-based planning representations.
To address these concerns, we propose a world state representation based on the category-theoretic concepts of C-sets [16] and double-pushout (DPO) rewriting [1]. Our representation not only manages structured knowledge about the world state, but also formalizes the semantics of predicates according to a user-provided ontology, ensuring that the semantics of the world state are preserved and implicit preconditions and effects are handled when transitioning between states.
### Outline
The paper begins in Section 2 with preliminaries about three key topics: the frame problem, the model of planning as a state transition system, and category theory. The frame
problem is the well-known difficulty of representing how the world changes as a result of an action, which is a crucial aspect of planning. State transition systems are a mathematical model commonly used to represent the states and actions of a planning problem. For our purposes, category theory is a mathematical framework that will be used to define a planning representation that contrasts the classical representation. In Sections 3 and 4, the paper delves into the features of both the classical and categorical representations based on state transition systems. We then compare how these two representations handle structured knowledge, satisfiability of pre-conditions, and implicit effects in Section 5. In Section 6, the paper discusses related work. The paper concludes in Sections 7 and 8 by discussing future work and summarizing our key points and contributions.
## 2 Preliminaries
In this section, we motivate the planning representation problem by discussing the frame problem, explaining the mathematical model for planning as a state transition system, and briefly discussing the use of category theory.
### The Frame Problem
The frame problem is the long-standing problem in the field of artificial intelligence of capturing, among the possible effects of actions, all and only those that are relevant to the problem at hand McCarthy and Hayes (1969). In other words, the frame problem is the problem of determining which aspects of the world state are affected by a given action and which aspects remain unchanged.
The frame problem can be particularly challenging for classical planners, which use a state-based representation of the world to generate plans. In this representation, the world state is a snapshot of the current state of the world, called a frame, and actions are represented as functions that transform the world state into a new state. The frame problem arises because in many cases, the effects of an action go beyond what is explicitly stated in the action description to include implicit or background changes to the world state. Consider, for example, the seemingly simple action of moving a block from one location to another. In addition to the explicit change in the block's location, the action may also involve changes to the state of the surface it was resting on, the forces acting on the block, and other aspects of the world state that are not immediately obvious.
Classical planners typically rely on a set of predefined axioms to determine which aspects of the world state are affected by an action and which are not Thiebaux et al. (2005); Ivankovic and Haslum (2015). However, these axioms are often incomplete or cumbersome to articulate, and can lead to incorrect plans or inefficient planning. Two commonly used axioms are the law of inertia and the closed world assumption. The law of inertia states that all the propositions that were true in the preceding frame remain true Dascal (2008). The closed world assumption says that all propositions within a frame are true, and all other propositions are false Russell et al. (2021). These assumptions can be problematic in scenarios where an external event causes facts to be created and deleted in the world state. To remedy these situations, a human expert must step in to redefine the domain and execute the planner again.
To address the frame problem, researchers have developed a variety of alternative planning approaches that go beyond the traditional state-based representation used by classical planners. These include approaches based on action languages and situation calculus Batusov and Soutchanski (2019); Levesque et al. (1997) and other formalisms that are designed to explicitly capture the effects of actions and their dependencies on the world state. These methods are still subject to the same limitations as frame axioms. One major limitation is the difficulty of specifying domain knowledge in these formalisms, as it often requires expert knowledge and can be time-consuming. Additionally, these approaches may not be easily interpretable or explainable to human users, making it difficult to understand the rationale behind a planning decision.
### Planning as a State Transition System
The state transition system model of planning is a formal representation of a planning problem that describes the state space of the problem and the possible actions that can be taken to move between states. In this model Ghallab et al. (2004), a _planning problem_ can be defined as a tuple \(P=\langle S,A,\gamma\rangle\), where:
* The _state space_\(S=\{s_{0},s_{1},s_{2},...\}\) is the set of all possible states. A state \(s\in S\) represents a snapshot of the world at a particular point in time. It ideally includes all relevant information about the state of the world, such as the location of objects and their properties.
* The _action space_\(A=\{a_{0},a_{1},a_{2},...\}\) is the set of all possible actions. An action represents a transition from one state to another state.
* The _transition function_\(\gamma:A\times S\to S\) is a partial function that, where it is defined, maps an action and a state to the next state.
An action \(a\in A\) is _applicable_ at state \(s\in S\) if \(\gamma(a,s)\) is defined. A _plan_, \(\pi=\langle a_{1},a_{2},\ldots,a_{n}\rangle,a_{i}\in A\), is any sequence of actions. It is a _solution_ to the planning problem if it transitions from the initial state \(s_{0}\) to the goal state \(s_{g}\), i.e., \(\gamma(a_{i},s_{i-1})=s_{i}\) for \(i=1,\ldots,n\) and \(s_{n}=s_{g}\).
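To make these definitions concrete, the sketch below encodes a toy state transition system and checks whether a plan is a solution; the state names, action names, and the `GAMMA` table are invented purely for illustration.

```python
# A minimal state transition system <S, A, gamma> with plan validation.
# States and actions are strings; the partial transition function gamma is
# a dictionary keyed by (action, state) pairs.
GAMMA = {
    ("pick", "on_table"): "in_hand",
    ("place", "in_hand"): "on_shelf",
}

def applicable(action, state):
    """An action is applicable at a state iff gamma is defined there."""
    return (action, state) in GAMMA

def is_solution(plan, initial_state, goal_state):
    """A plan is a solution if applying its actions in order transforms the
    initial state into the goal state."""
    state = initial_state
    for action in plan:
        if not applicable(action, state):
            return False
        state = GAMMA[(action, state)]
    return state == goal_state

print(is_solution(["pick", "place"], "on_table", "on_shelf"))  # True
```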
The state transition system model of planning is often used in automated planning systems, which use search algorithms to explore the state space and find a sequence of actions that achieves the goal. The search algorithms typically use heuristics to guide the search and improve its efficiency. The state transition system model of planning provides a structured and formal way to represent planning problems and reason about the possible sequences of actions that can be taken to achieve a goal. The manner in which states, actions, and transition functions are represented both classically and categorically is the focus of this paper.
### Category Theory
Category theory is a mathematical framework that aids in understanding the structure of, and relationships between,
different mathematical objects. In recent years, it has found important applications in a variety of fields, including systems engineering and design [1, 16, 17, 18], robotics and planning [1, 19, 20], and physics [1, 2]. For the purposes of planning, category theory provides an alternative mathematical language, which in turns leads to an alternative computational representation, that are together well-suited to address the aforementioned problems. For readers not familiar with category theory, we have included an appendix that briefly introduces certain key concepts and definitions. This will help readers understand the category theory-based planning representation proposed in this paper.
## 3 Classical Representations
The literature on classical and neoclassical planning encompasses a variety of approaches to modeling the state of the world. Among these, the most widely employed domain-independent representations are the set-theoretic, classical, and state-variable representations [12] and Traverseo [1].
The set-theoretic representation takes propositional logic as its basic formalism. Propositional logic is a simple logic in which models are mappings of propositions to truth values. The _set-theoretic representation_ describes the state of the world using a set of propositions. To augment the set-theoretic approach with typed objects, the _state-variable representation_ is employed. In this approach, objects are defined according to a typed ontology, and relations of arbitrary arity can be defined between objects. Relations are also known as predicates or atomic formulas in first-order logic (FOL). The _classical representation_, a restricted version of first-order logic, is the most popular choice for representing world models. It is equivalent in expressivity to the state-variable representation.
**Existing Representations for World States, \(S\).** The classical representation of world states is based on a restricted form of first-order logic in which the world states are represented as a conjunction of literals, where a literal is an atomic proposition or its negation. These propositions can encode facts such as the location of objects, the state of various sensors, and other relevant information. Logical operators such as AND, OR, and NOT are used to combine these propositions into more complex expressions that can be used to describe the relationships between different parts of the world state. World states can be lifted to abstract states consisting of a conjunction of predicates, where predicates are \(n\)-ary relations between variable symbols. For example, the predicate \(\mathrm{on}(x,y)\) might represent the fact that something represented by \(x\) is on top of another thing represented by \(y\) or, equally plausibly, the converse. It is important to note that the semantics of the predicates must be established informally, say through external documentation or word of mouth. A predicate becomes a grounded literal when the variable symbols, like \(x\) and \(y\), are assigned to symbols that are constant. For instance, the grounded literal \(\mathrm{on}(\mathrm{b}_{1},\mathrm{b}_{2})\) could represent a block called \(\mathrm{b}_{1}\) that is on top of a block called \(\mathrm{b}_{2}\).
**Existing Representations for Actions, \(A\).** The classical representation defines action models in terms of preconditions, effects, and parameters. Preconditions describe the conditions that must be true for the action to be applicable, while effects describe the changes that the action makes to the state of the world. Parameters are variables that can take on different values depending on the context in which the action is used. The classical planning representation assumes that actions are deterministic, meaning that their effects are completely specified and do not depend on any probabilistic factors. Additionally, the classical representation assumes that actions can be executed in any order, allowing for a wide variety of possible plans to be generated. In practice these action models can be augmented with types; however, types are not a formal feature of either logical system.
**Applicability.** In classical planning, the applicability of actions to a world state is determined by a set of preconditions specified for each action. In the classical representation, preconditions are represented as conjunctions of literals. To check whether an action is applicable in a particular world state, the planner examines whether the preconditions of the action are satisfied by the current state. To do this, a planner typically uses logical inference techniques within propositional logic, first-order logic, or other logical formalisms. In the case of propositional logic, the world state and the preconditions of the action can be represented as sets of logical propositions. The planner can then check whether the precondition is a subset of the world state, which means that the preconditions are satisfied in the world state. In the case of first-order logic, the world state and the preconditions of the action can be represented as sentences in a formal language. The planner can then use logical inference algorithms to determine whether the world state logically entails the preconditions, which means that the preconditions are satisfied in the world state. Such logical inference algorithms for the classical representation are PSPACE-complete [10]; therefore, heuristics and other optimizations are often employed to make finding solutions more tractable.
**Transition, \(\gamma\).** A state transition occurs when an action is applied to a world state, yielding a new world state. To apply an action to a world state, the world state is interpreted as a set of literals, as opposed to a conjunction of literals. When an action is applied, the new state is identified by applying set operations to the set representing the current world state. Positive effects add literals to the set and negative effects, those prefixed with NOT, subtract literals from the set to produce the new world state.
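As a concrete reading of this set-based view, the hypothetical sketch below stores a world state as a set of ground literals and applies a STRIPS-style action by subset checking and set addition/subtraction; the predicates, blocks, and action are invented for illustration.

```python
# World state as a set of ground literals (tuples of predicate and arguments).
state = {("on", "b1", "b2"), ("clear", "b1"), ("on_table", "b2")}

# A ground action with preconditions, positive (add) and negative (delete) effects.
move_b1_to_table = {
    "pre": {("on", "b1", "b2"), ("clear", "b1")},
    "add": {("on_table", "b1"), ("clear", "b2")},
    "del": {("on", "b1", "b2")},
}

def applicable(action, state):
    """Preconditions are satisfied when they form a subset of the state."""
    return action["pre"] <= state

def transition(action, state):
    """Negative effects are subtracted from the state, positive effects added."""
    assert applicable(action, state)
    return (state - action["del"]) | action["add"]

new_state = transition(move_b1_to_table, state)
print(("on_table", "b1") in new_state, ("on", "b1", "b2") in new_state)  # True False
```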
### Pddl
PDDL (Planning Domain Definition Language) is a language used to define planning problems in artificial intelligence. A PDDL problem specifies actions, objects, and goals, as well as the initial state of the problem. Currently, PDDL is the most widely used specification language for planning problems and domains. A difficulty with PDDL is
that it does not have a formal semantics: there is no precise mathematical definition of its meaning. This lack of a formal semantics makes it difficult to consistently interpret PDDL specifications across different planners. In other words, two planners may interpret the same PDDL specification in different ways, leading to different results. Such discrepancies can make it difficult to transfer planning solutions from one application domain to another.
PDDL also includes numerous language extensions that are not necessarily grounded in formal logic. For example, PDDL has been extended with features such as preferences [1] and temporal constraints [2] which do not map directly onto the first-order logic formalism that PDDL is based on. A relevant example of such an extension is the typing of objects and relations found in PDDL 1.2 [10]. Another example, also found in PDDL 1.2, is the handling of negative literals, which allows for specifying conditions that must not hold in the state. While these extensions make PDDL a more powerful and flexible language for specifying planning problems and domains, they also introduce additional complexity and possibilities for inconsistency when used with planning algorithms that have different interpretations of these extensions. Therefore, developing a language with clear and consistent formal semantics for planning is crucial for improving the reliability and transferability of planning solutions across different domains.
## 4 Categorical Representation
To effectively manage world states during planning and plan execution, we propose to adopt categorical logic as a logical formalism. An essential feature of categorical logic is to capture the relationship between logical syntax (theories) and semantics (models) using functors, through a paradigm known as _functorial semantics_. This differs from the standard formalism of first order logic because it gives the syntax of the language status as an algebraic object independent of its semantics.
### World States as Objects in C-Set
World states in planning can be effectively modeled using knowledge graphs or scene graphs, which are structured representations of the objects, relationships, and attributes based on a domain-specific ontology. Knowledge graphs typically encode information in the form of nodes, edges, and properties, with nodes representing entities such as people, objects, or concepts, and edges representing the relationships between them [1]. Scene graphs, on the other hand, focus on the visual aspects of a scene, with nodes representing objects and edges representing the spatial relationships between them [1]. Both knowledge graphs and scene graphs can be used to capture a wide range of information about a world state, including the physical and semantic properties of objects, their interactions with each other, and the context in which they exist.
Our proposed representation provides a sort of denotational semantics for knowledge or scene graphs using the category C-Set. Let C denote a small category, called a _schema_. A C-_set_, also known as a _copresheaf on C_, is a functor1 from C to the category Set. The schema is a category whose objects we interpret as types and whose morphisms describe "is-a" and other functional relationships between types. The category Set is the category of sets and functions. Thus, a C-set is a functor that sends types to sets and type relationships to functions. On this interpretation, C-sets are a simple but useful model of relational databases [2].
Footnote 1: We direct the unfamiliar reader to Appendix A.1 and A.2 for the definitions of categories and functors.
In practice, the schema is a category that is finitely presented by generators and relations. As an example, let \(\mathsf{Gr}=\{E\rightrightarrows V\}\) be the category freely generated by two objects, \(E\) and \(V\), and two parallel morphisms, \(\mathsf{src},\mathsf{tgt}:E\to V\). Then a Gr-set is a graph, which would be called by graph theorists a "directed multigraph." So, another interpretation of C-sets is that they are a generalization of graphs to a broad class of combinatorial data structures.
The category of elements (see Appendix A.6) of a C-set \(X\), denoted \(\int X\), packages the data of \(X\) into a category resembling a knowledge graph. Specifically, a morphism in the category of elements can be interpreted as a Resource Description Framework (RDF) triple [14], which is a common text-based serialization format for knowledge and scene graphs. A C-set is depicted in this style in Figure 1.
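As an illustration of how such data can be stored, the following hypothetical Python sketch encodes a Gr-set in the spirit described above (one finite set per object of the schema, one function per generating morphism) and enumerates its RDF-like triples; the vertex and edge names are invented.

```python
# A Gr-set (directed multigraph): a set for each schema object and a
# dictionary for each generating morphism src, tgt: E -> V.
graph = {
    "E": {"e1", "e2"},
    "V": {"v1", "v2", "v3"},
    "src": {"e1": "v1", "e2": "v2"},
    "tgt": {"e1": "v2", "e2": "v3"},
}

def triples(cset, morphisms=("src", "tgt")):
    """List (element, morphism, image) triples, analogous to the RDF-style
    morphisms in the category of elements."""
    return [(e, f, cset[f][e]) for f in morphisms for e in sorted(cset[f])]

print(triples(graph))
# [('e1', 'src', 'v1'), ('e2', 'src', 'v2'), ('e1', 'tgt', 'v2'), ('e2', 'tgt', 'v3')]
```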
**Definition 4.1** (Category of C-sets, C-Set).: For a given schema C, the _category of C-sets_ is the functor category \(\mathsf{C\mbox{-}Set}:=\mathsf{Set}^{\mathsf{C}}\), whose objects are functors from C to Set and whose morphisms are natural transformations between those.
The category of C-sets is a topos, an especially well-behaved kind of category in which, for example, all limits and colimits exist.
### Actions as Spans in C-Set
_Action rules_ are specified as spans in C-Set. Action rules are made up of components similar to those of action schemas in classical representation. Specifically, actions rules are spans (\(I\leftarrow K\to O\)) in C-Set that consists of the _precondition_, \(I\),
Figure 1: An example C-set, \(G\), that stores data about people’s favorite pet. The category of elements contains triples analogous to RDF triples.
on the left-hand side, the _effects_, \(O\), on the right-hand side, and the _glue_, \(K\), in the middle, which gives the data that remains unchanged between the input and the output.
As in the classical representation, the choice needs to be made of how an abstract structure, namely C-Set, should be presented in order to make it computable. We choose to present spans of C-sets as colimits of representable functors. This conversion is possible because, for every C-set, \(F\), in the action rule, a natural transformation exists from \(F\) to a representable functor on C, as per the Yoneda embedding (see Lemma 1 in Appendix A.3). Covariant representable functors, as defined in Definition 4.2, map objects, \(A\in\mathsf{C}\) to the set of morphisms that have \(A\) as its source object. Aside from the benefit of computability, this conversion also ensures that implicit substructure is taken into account when an object, such as \(A\), is explicitly identified.
**Definition 4.2** (Representable Functors).: A (covariant)2 representable functor, \(H^{A}:\mathsf{C}\to\mathsf{Set}\), is functor that sends:
Footnote 2: A representable functor can be defined from a covariant (\(H^{A}:\mathsf{C}\to\mathsf{Set}\)) or a contravariant (\(H_{A}:\mathsf{C}^{op}\to\mathsf{Set}\)) view (MacLane, 1971).
* (objects) \(x\in\mathsf{C},x\mapsto\mathsf{C}(A,x)\)
* (morphisms) \(x\xrightarrow{f}y\mapsto\mathsf{C}(A,x)\xrightarrow{H^{A}(f)}\mathsf{C}(A,y)\),
where \(\mathsf{C}(A,x)\) is the set of all morphisms from \(A\) to \(x\) in the category \(\mathsf{C}\). \(H^{A}(f)\) postcomposes morphisms in \(\mathsf{C}(A,x)\) with \(f\). The functor \(H^{A}\) is an object of the functor category \([\mathsf{C},\mathsf{Set}]\).
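For instance, over the graph schema \(\mathsf{Gr}\), the representable functor on \(E\) is the "walking edge": a single edge together with the two endpoint vertices forced by \(\mathsf{src}\) and \(\mathsf{tgt}\), while the representable on \(V\) is a single isolated vertex. Declaring an edge in a rule therefore implicitly declares its endpoints as well. A hypothetical sketch in the encoding used earlier:

```python
# Representable Gr-set on E ("walking edge"): H^E(E) = {id_E} and
# H^E(V) = {src, tgt}, so asking for an edge implicitly asks for its endpoints.
walking_edge = {
    "E": {"e"},
    "V": {"s", "t"},
    "src": {"e": "s"},
    "tgt": {"e": "t"},
}

# Representable Gr-set on V: a single vertex and no edges.
walking_vertex = {"E": set(), "V": {"v"}, "src": {}, "tgt": {}}
```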
The categorical rule specification differs from the classical one in that it takes a declarative approach by not articulating what atoms should be added and removed from the state, but rather discussing what should be in the state and resolving conflicts using the double-pushout (DPO) rewriting procedure which is discussed in Section 4.4.
### Applicability Using Monomorphisms
Recall that in classical planning (Section 3), a precondition is satisfied by a world state when its propositions are a subset of the world state or if there exists a logical entailment between the precondition and the world state. In the category-theoretic context, these notions are generalized via monomorphisms.
Monomorphisms generalize the concept of an injective function to arbitrary categories. In \(\mathsf{Set}\), monomorphisms are precisely injective functions. In \(\mathsf{C}\)-\(\mathsf{Set}\), monomorphisms are natural transformations such that every component is an injective function, e.g., a monomorphism between graphs is a graph homomorphism such that the vertex and edge maps are both injective. The monic condition (see Appendix A.3) applied to morphisms is relevant for applicability because it checks that two entities in the precondition cannot be mapped to the same entity in the world state.
### Transition Using Double-Pushout (DPO) Rewriting
The action rules are exactly double-pushout (DPO) rewriting rules. Double-pushout (DPO) rewriting is a type of graph rewriting that is particularly well-suited for algebraic approaches to graph transformation. In fact, DPO rewriting generalizes directly from graph rewriting to C-set rewriting (Brown et al., 2021). The DPO method, described below, is used to compute all possible matches of the preconditions, and to determine which matches are compatible with the effects. The result is a set of transformation steps that can be applied to the target graph.
DPO rewriting relies on the fundamental concept of a pushout. A pushout is a colimit (See Appendix A.5) of a diagram having the shape of a span (\(\bullet\leftarrow\bullet\to\bullet\)). Given a span \(R\gets Q\to S\), a pushout produces an object that resembles the union of \(R\) and \(S\) joined along \(Q\), \((R\cup S)/Q\). A pushout in \(\mathsf{C}\)-Set is computed by taking the disjoint union of the sets being pushed out, adding the relations between sets based on \(\mathsf{C}\), and quotienting by \(Q\).
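A set-level pushout can be computed exactly as described; the hypothetical sketch below glues two finite sets along a shared part using a disjoint union followed by a union-find quotient (the element names are illustrative).

```python
def pushout(Q, R, S, f, g):
    """Pushout of the span R <-f- Q -g-> S in Set: take the disjoint union of
    R and S, then identify f(q) with g(q) for every q in Q (union-find)."""
    elems = [("R", r) for r in R] + [("S", s) for s in S]
    parent = {e: e for e in elems}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for q in Q:
        parent[find(("R", f[q]))] = find(("S", g[q]))

    classes = {}
    for e in elems:
        classes.setdefault(find(e), set()).add(e)
    return list(classes.values())

# Glue {1, 2, 3} and {3, 4, 5} along their shared element 3.
result = pushout(Q={"q"}, R={1, 2, 3}, S={3, 4, 5}, f={"q": 3}, g={"q": 3})
print(len(result))  # 5 classes, matching the union {1, 2, 3, 4, 5}
```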
Pseudocode for the DPO rewriting procedure is given in Algorithm 1. The first step is to find a monomorphism, \(m\), that matches \(I\) in \(X\), as described in the previous subsection. The pushout complement, \(f\), is computed from the morphisms \(l\) and \(m\). A pushout complement is a map that manages the deletion of entities that form the complement \(K/I\). Because \(l\) is a monomorphism and we assume the identification and dangling conditions (Brown et al., 2021) are met, the pushout complement exists and is unique up to isomorphism. Having constructed the three sides of the square, \(l,m,f\), the pushout square can be completed by a unique map \(g\). Then, to compute the new world state, the right pushout square is computed from \(f\) and \(r\).
```
Require: (action rule) I ← K → O in C-Set
Require: (world) X in C-Set
  m = FindHomomorphism(I, X)
  f = ComputePushoutComplement(l, m)
  g = CompletePushout(l, f, m)
  Y = ComputePushout(f, r)
```
**Algorithm 1** Double-Pushout (DPO) Rewriting
The time complexity of the \(\mathsf{FindHomomorphism}()\) subroutine is \(O(n^{k})\) where \(k\) is the size of \(I\) and \(n\) is the size of the relevant substructure in \(X\) which is dictated by the objects in \(I\)(Brown et al., 2021). The time complexity of the remaining subroutines is the same as that of computing pushouts in \(\mathsf{Set}\) which is \(O(p)\) where \(p\) is the sum of the sizes of the sets involved in the span.
As an example, Figure 2 shows how an action rule changes the initial state to the final state. In this figure, the rule states that the circle in the initial state should be replaced by a trapezoid and the square and triangle should persist. The initial state, bottom-left, satisfies the precondition because it contains a matching pattern involving the square, triangle, and circle. This means that a monomorphism can
be identified from the precondition to the initial state. The intermediate state is the result of identifying a pattern that, when joined with the precondition along the glue, produces the initial state. This removes the circle from the initial state. The final state is then constructed by taking the pushout for the span (\(\mathrm{Intermediate}\leftarrow\mathrm{Glue}\rightarrow\mathrm{Effect}\)). This produces a pattern where a trapezoid is added in place of the original circle. The example shown in Figure 2 demonstrates that rules can involve both the creation and destruction of entities in the world if the precondition contains an entity and the effect does not contain that entity. This allows for a non-monotonicity in updates of the world state.
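The rewrite of Figure 2 can be replayed in miniature over a one-object schema (plain sets), where the legs of the rule are inclusions and the pushout complement and final pushout reduce to set difference and union; the sketch below mirrors Algorithm 1 for that special case and is purely illustrative, with invented element names.

```python
def dpo_rewrite_sets(I, K, O, X, match):
    """One-object-schema DPO step: K is included in both I and O, and `match`
    is an injective map I -> X.  The pushout complement removes the matched
    copies of I \\ K; the final pushout adjoins fresh copies of O \\ K."""
    assert K <= I and K <= O and len(set(match.values())) == len(match)
    deleted = {match[i] for i in I - K}
    intermediate = X - deleted
    created = {f"new_{o}" for o in O - K}
    return intermediate | created

I = {"square", "triangle", "circle"}          # precondition
K = {"square", "triangle"}                    # glue
O = {"square", "triangle", "trapezoid"}       # effect
X = {"sq0", "tri0", "circ0", "extra"}         # world state
match = {"square": "sq0", "triangle": "tri0", "circle": "circ0"}
print(sorted(dpo_rewrite_sets(I, K, O, X, match)))
# ['extra', 'new_trapezoid', 'sq0', 'tri0']
```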
**Rules With Negative Preconditions and Effects.** The categorical representation is capable of handling constraints in the form of negative preconditions and effects using DPO rewriting with negative application conditions (NACs) [1]. NACs are a way of restricting the application of an action rule by specifying conditions that must not be present in the world state before or after the rule is applied. In other words, a negative application condition specifies a set of patterns that must not match any part of the world state before or after the rule is applied. The use of NACs in action rules allows for more precise and flexible specification of handling negative preconditions and effects.
## 5 Comparison
In this section, we discuss the differences between the classical and the categorical representations. The comparison is summarized in Table 1. We also walk through an example, shown in Figure 3, that highlights the limitations of the classical representation in tracking implicit effects. In this example domain, a bread loaf (\(\mathrm{bread}\) loaf) and slices of the loaf (\(\mathrm{slice\_0}\), \(\mathrm{slice\_1}\), \(\mathrm{slice\_2}\)) are on a countertop (\(\mathrm{countertop}\)). The goal is to move the bread loaf, and implicitly, all its slices, from the countertop to the kitchen table (kitchentable).
### Handling Structured Knowledge
This example presents a few noteworthy semantic features. A planning representation in this domain must be able to encode the following facts, explicitly or implicitly, at different points in time:
1. the bread slices are part of the bread loaf
2. the bread loaf is on the countertop
3. the bread slices are on the countertop
4. the bread loaf is on the kitchen table
5. the bread slices are on the kitchen table
In the case of the categorical representation, fact (a) is captured by the morphism \(\mathrm{is\,part\,of}:\mathrm{BreadSlices}\rightarrow\mathrm{BreadLoaf}\) in the schema. Fact (b) about the bread loaf being on the countertop is reified through an abstract object called \(\mathrm{Object}\). This is done so that the morphism \(\mathrm{on}:\mathrm{Object}\rightarrow\mathrm{Object}\) can represent the general notion of an object being on top of another object, instead of the more specific relation of a bread loaf being on a countertop. Fact (c) is then captured by the composite morphism \(\mathrm{on}\circ\mathrm{is}\circ\mathrm{is\,part\,of}:\mathrm{BreadSlices}\rightarrow\mathrm{Object}\). In the classical representation, fact (a) is captured by the propositions \(\mathrm{partOf}(\mathrm{slice\_n},\mathrm{bread})\) for \(n=0,1,2\). Facts (b) and (d) are captured by the propositions \(\mathtt{on}(\mathrm{bread},\mathrm{countertop})\) and \(\mathtt{on}(\mathrm{bread},\mathrm{kitchentable})\). Facts (c) and (e) are captured by the propositions \(\mathtt{on}(\mathrm{slice\_n},\mathrm{countertop})\) and \(\mathtt{on}(\mathrm{slice\_n},\mathrm{kitchentable})\). Intuitively, because the bread loaf is on the countertop, the slices that make up the loaf are also on the countertop. In the categorical representation, this is captured using a composite morphism. In the classical representation, this is done by explicitly stating, for each slice, that it is on the countertop.
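To make the contrast concrete, the hypothetical sketch below stores the categorical facts as functions, so fact (c) is obtained by composition and never has to be asserted separately, while the classical encoding must list it literal by literal (the identifiers mirror the example above).

```python
# Categorical-style encoding: facts (a) and (b) as functions; fact (c) is the
# composite on . is_part_of and stays consistent automatically.
is_part_of = {"slice_0": "breadloaf", "slice_1": "breadloaf", "slice_2": "breadloaf"}
on = {"breadloaf": "countertop"}

def slice_location(s):
    return on[is_part_of[s]]  # composite morphism applied to a slice

# Classical-style encoding: facts (c) and (e) must be spelled out per slice.
classical_state = {
    ("partOf", "slice_0", "breadloaf"), ("partOf", "slice_1", "breadloaf"),
    ("partOf", "slice_2", "breadloaf"), ("on", "breadloaf", "countertop"),
    ("on", "slice_0", "countertop"), ("on", "slice_1", "countertop"),
    ("on", "slice_2", "countertop"),
}

on["breadloaf"] = "kitchentable"   # move the loaf
print(slice_location("slice_1"))   # kitchentable, updated implicitly
# The classical state still contains ("on", "slice_1", "countertop") unless
# every slice literal is updated explicitly.
```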
### Handling Applicability of Actions
In the categorical representation, applicability of an action is determined by the existence of a monomorphism in \(\mathsf{C}\)-\(\mathsf{Set}\) from the rule input to the world state. Recall that a (covariant) representable functor maps an object, \(x\in\mathsf{C}\), to the set of morphisms that have \(x\) as their source. When you present an action using colimits of representables, there are both explicit conditions given by the representables and implicit conditions that appear when the representable is computed. This provides a mechanism for having implicit conditions in rules. In this example, the initial state satisfies the input action rule involving the \(\mathrm{BreadLoaf}\), \(\mathrm{Object}\), and \(\mathrm{Countertop}\). In the classical representation, applicability of the action moveObj(bread, countertop, kitchentable) is determined by whether or not the world state contains the element on(bread, countertop) in the set of propositions.

\begin{table}
\begin{tabular}{l l l}
\hline \hline
**Representation** & **Categorical** & **Classical** \\
\hline
State, \(S\) & Object in category \(\mathsf{C}\)-\(\mathsf{Set}\) & Conjunction of propositions \\
Action, \(A\) & Spans in \(\mathsf{C}\)-\(\mathsf{Set}\) containing: Preconditions, Glue, Effects & Action model containing: Parameters, Preconditions, Effects \\
Applicability & Monomorphisms in \(\mathsf{C}\)-\(\mathsf{Set}\) & Subset inclusion \\
Transition, \(\gamma\) & DPO rewriting & Set-based addition and subtraction \\
\hline \hline
\end{tabular}
\end{table}

Table 1: A summary of the differences between the classical representation and the categorical representation aligned to the state transition system model for planning

Figure 2: An illustration of how DPO rewriting is executed on a \(\mathsf{C}\)-set where \(\mathsf{C}\) defines the shapes and the arrows between them. Each shape in the figure is assigned to a color to help with readability.
### Handling the Frame Problem
The action of moving the bread loaf from the countertop to the kitchen table is applicable in this example. In the categorical representation, this action is modeled by a span in C-Set whose left foot includes knowledge about the bread loaf being on the countertop and whose right foot includes knowledge about the bread loaf being on the kitchen table. The apex of the span states that the bread loaf itself is preserved throughout this change. In the classical representation, the action schema describes the generic action of moving an object \(x\) from one location, s, to another, t. The action operator is grounded by assigning x to the breadloaf, s to the countertop, and t to the kitchentable. Once this action is applied, a desirable outcome would be for the new state to account for the movement of the bread slices from the countertop to the kitchen table because they are part of the breadloaf. In the categorical representation, the same composite morphism that existed in the initial state exists in the final state; however, the target of the morphism has changed from the \(\mathrm{countertop}\) to the \(\mathrm{kitchentable}\). This captures the implicit change that occurred to the bread slice locations. In the classical representation, the new state captures the fact that the breadloaf is on the kitchen table, but does not capture the fact that the bread slices are on the kitchen table. This is due to the inertia frame axiom, which states that all facts that are not explicitly accounted for in the effect remain true after the action is applied. In practice, this error would likely cause a planner to instruct an agent to move each bread slice individually due to an inconsistency in the world state.
## 6 Related Work
We foresee the nearest application of our approach as being robotic task planning using scene graphs. Scene graphs are a specialized version of knowledge graphs that restrict its objects, attributes, and relations to facts obtained through vision-based perception and inference [10]. In scene graphs, ontologies are often used to align scene data to class hierarchies. In planning, these ontologies can be used to enrich facts in the planning domain; however, these ontologies are often integrated in the planning decisions in an ad hoc way. For example, Galindo et al present a two-part knowledge representation system, which includes (i) spatial information about the robot environment in the
Figure 3: A comparison of states, actions, and inferences made between the categorical representation and the classical representation for an example that moves a loaf of bread from a countertop to a kitchen table. This example illustrates a failure of the classical representation to preserve the global semantics of the world when actions act on only a part of the world. Using the double-pushout method of C-set rewriting, the categorical representation is able to do so.
form of a scene graph, and (ii) an ontology that describes the hierarchical relationships between concepts [1]. A function mapping objects in the scene graph to concepts in the ontology is defined. Both facts obtained through the scene graph and the facts obtained through ontology are translated into propositions in the domain. Planning proceeds as usual using existing planners that assume a classical planning representation. This approach results in an explosion of facts that requires a pruning step in order to be tractable for classical planning approaches.
Miao et al take an alternate approach to using scene graphs to describe world states and action models [13]. In their approach, action operators are specified in terms of an initial state subgraph, a final state subgraph, and an intermediate subgraph. For each object and relation in these subgraphs, the global scene graph is updated by adding objects into the scene graph that are discussed in the action model. If objects are referenced in the final state subgraph that are not present in the scene, they are introduced as isolated vertices. These vertices are connected to the global scene graph by consulting an external knowledge base that contains a type hierarchy. Their procedure searches for a matching object type, identifies a parent type that exists in the scene graph, and defines an edge from that object to the existing type. This is a fragile approach to resolving changes in the world state because it relies on an external knowledge base to be correctly and completely instantiated in order for new information to be properly integrated.
Evidently, scene graphs are a useful abstraction for representing an environment perceived by a robot, but are often too complex, with numerous vertices and edges, making them difficult to reason over at scale. To mitigate this, planners can employ procedures that determine which attributes of the scene are most relevant while also preserving the semantics provided by the class hierarchy and object features. An example of a planner designed for this purpose is the SCRUB planner [1]. SCRUB is a planner-agnostic procedure that prunes the state space to include only the relevant facts within a scene graph. It is paired with SEEK [1], a planner-agnostic procedure that scores objects in the scene based on an importance score produced by a graph neural network [15]. All objects that are ancestors to the relevant objects, according to some threshold, are preserved as facts in the state. Both the SCRUB and SEEK procedures dramatically reduce the number of facts needed to characterize the world state. The facts in the world are translated into binary predicates and passed to a classical planner. This provides a heuristic-based measure for identifying relevant facts which can, like most heuristics, produce inaccurate approximations. The combinatorial approach we propose uses the existing semantic structure to determine relevance.
Overall, using scene graphs as a representation for world states during planning has gained attention. These methods satisfy the need to support rich and complex representations of the world using ontologies but still suffer in their ability to integrate ontological information in a way that does not cause a state explosion. In our approach, the ontology (schema) is an integral part of the formalism and with it comes specialized tooling for manipulating such data.
## 7 Future Work
In ongoing and future work, we are exploring the full range of utility that the categorical approach has to offer.
* _Empirically comparing domain-independent planners_. Various structures and canonical planning algorithms are currently being implemented using AlgebraicJulia [12], a Julia-based programming ecosystem. Our goal is to compare the performance of existing planning algorithms using both the categorical and classical representations.
* _Enabling online planning_. Action rules can be thought of as rewriting operations for a knowledge-base. With this in mind, we aim to demonstrate that it is possible to construct these rules in real-time and apply them to the world state via human intervention rather than relying solely on an automated planner. This approach would enable interleaved human and machine input in real-time using a common language, eliminating the need to redefine the planning domain when unexpected changes occur.
* _Transferring plans between domains_. The rule shown in Figure 2 is isomorphic to any rule, in an arbitrary domain, that destroys an object and constructs a new one. Therefore, in future work, we aim to show that it is possible to define a collection of rules with generic patterns from which plans can be constructed. Transferring plans between domains would then become a matter of defining maps from the rule pattern to the application domain.
## 8 Conclusion
The limitations of classical planning representation languages in tracking implicit preconditions and effects have motivated us to propose an alternative world state representation based on the category-theoretic concepts of C-sets and DPO rewriting. Our categorical representation accommodates structured knowledge about the world state and formalizes a model of the application domain using a user-provided ontology. This method provides formal semantics for using knowledge graphs and relational databases to model world states and updates in planning. Our comparison between the classical and categorical planning representation languages demonstrates that our proposed representation is more structured and has advantages over the classical one in handling complex planning scenarios. We believe that our proposed representation has the potential to significantly enhance the effectiveness and efficiency of planning systems in various domains.
## Acknowledgments
This work was partially funded by the U.S. Defense Advanced Research Projects Agency under contract #HR00112220004. Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect these agencies' views. We thank Kristopher Brown and Owen Lynch from
the Topos Institute for heavily contributing to the development of DPO rewriting and C-sets in AlgebraicJulia. We also thank the students of the Ruiz HCI Lab--Alexander Barquero, Rodrigo Calvo, Niriksha Regmi, Daniel Delgado, Andrew Tompkins--who built the task guidance platform that inspired the examples in this paper. Lastly, we thank Dana Nau for valuable feedback about this paper.
## Appendix A Category Theory
### Categories and Functors
**Definition A.1** (Category).: A _category_\(\mathsf{C}\) consists of
* a collection of _objects_, denoted \(\operatorname{Ob}(\mathsf{C})\);
* for every pair of objects \(x,y\in\operatorname{Ob}(\mathsf{C})\), a collection \(\operatorname{Hom}_{\mathsf{C}}(x,y)\) of _morphisms from \(x\) to \(y\)_, whose elements \(f\in\operatorname{Hom}_{\mathsf{C}}(x,y)\) are denoted \(f:x\to y\);
* a _composition_ operation, defining for each pair of morphisms \(f:x\to y\) and \(g:y\to z\), a _composite_ morphism \(g\circ f:x\to z\);
* for every object \(x\), an _identity_ morphism \(1_{x}:x\to x\);
satisfying the _associativity_ law \(h\circ(g\circ f)=(h\circ g)\circ f\) and _unitality_ laws \(f\circ 1_{x}=f\) and \(1_{y}\circ f=f\) whenever these equations make sense.
Categories are themselves the objects of a category, whose morphisms are called _functors_. A functor consists of compatible maps between the objects and between the morphisms that preserve composition and identities.
**Definition A.2** (Functors).: A _functor_\(F\) from a category \(\mathsf{C}\) to another category \(\mathsf{D}\), denoted \(F:\mathsf{C}\to\mathsf{D}\), consists of
* a map between objects \(F:\operatorname{Ob}(\mathsf{C})\to\operatorname{Ob}(\mathsf{D})\), and
* for every pair of objects \(x,y\in\mathsf{C}\), a map between homsets \(F:\operatorname{Hom}_{\mathsf{C}}(x,y)\to\operatorname{Hom}_{\mathsf{D}}(F(x), F(y))\),
such that the following equations hold:
* \(F(g\circ f)=F(g)\circ F(f)\) for every \(x\xrightarrow{f}y\xrightarrow{g}z\) in \(\mathsf{C}\);
* \(F(1_{x})=1_{F(x)}\) for every \(x\in\mathsf{C}\).
**Definition A.3** (Monomorphism).: A morphism \(f:x\to y\) in a category \(\mathsf{C}\) is a _monomorphism_, or _monic_ for short, if for any other object \(z\) and any pair of morphisms \(g_{1},g_{2}:z\to x\), we have \(g_{1}=g_{2}\) whenever \(f\circ g_{1}=f\circ g_{2}\).
In other words, a morphism \(f\) is monic if whenever two morphisms have the same post-composite with \(f\), then they must be equal.
### Universal Properties
Objects in categories can satisfy universal properties such as being limits or colimits [10].
Limits and colimits are taken over diagrams in a category. Let \(\mathsf{J}\) be a small category. A _diagram of shape_\(\mathsf{J}\) in a category \(\mathsf{C}\) is a functor \(D:\mathsf{J}\to\mathsf{C}\).
**Definition A.4** (Limit).: Let \(D:\mathsf{J}\to\mathsf{C}\) be a diagram.
* A _cone_ over \(D\) is an object \(x\in\mathsf{C}\) and a family of arrows in \(\mathsf{C}\), \((x\xrightarrow{f_{j}}D(j))_{j\in\mathsf{J}}\), such that the triangle commutes, i.e. \(D(u)\circ f_{j}=f_{k}\), for every morphism \(u:j\to k\) in \(\mathsf{J}\).
* A _limit_ of \(D\) is a cone \((L\xrightarrow{\pi_{j}}D(j))_{j\in\mathsf{J}}\) over \(D\) with the property that for any cone over \(D\) as above, there exists a unique map \(f:x\to L\) such that \(\pi_{j}\circ f=f_{j}\) for all \(j\in\mathsf{J}\). Colimits are defined dually to limits.
**Definition A.5** (Colimit).: Let \(D:\mathsf{J}\to\mathsf{C}\) be a diagram.
* A _cocone_ under \(D\) is an object \(x\in\mathsf{C}\) and a family of arrows in \(\mathsf{C}\), \((D(j)\xrightarrow{f_{j}}x)_{j\in\mathsf{J}}\) such that the triangle commutes, i.e. \(f_{k}\circ D(u)=f_{j}\), for every morphism \(u:j\to k\) in \(\mathsf{J}\).
* A _colimit_ of \(D\) is a cocone \((D(j)\xrightarrow{\iota_{j}}C)_{j\in\mathsf{J}}\) under \(D\) with the property that for any cocone under \(D\) as above, there exists a unique map \(f:C\to x\) such that \(f\circ\iota_{j}=f_{j}\) for all \(j\in\mathsf{J}\).
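Since each DPO rewrite is assembled from pushouts, a concrete example may help: the sketch below (illustrative Python, not the AlgebraicJulia implementation) computes a pushout of finite sets along a span \(B\xleftarrow{f}A\xrightarrow{g}C\) as the disjoint union of \(B\) and \(C\) glued along the images of \(A\).

```python
def pushout(A, B, C, f, g):
    """Quotient the disjoint union of B and C by f(a) ~ g(a) for every a in A."""
    elems = [("B", b) for b in B] + [("C", c) for c in C]
    parent = {e: e for e in elems}
    def find(e):
        while parent[e] != e:
            e = parent[e]
        return e
    for a in A:                                   # glue the two images of a
        parent[find(("B", f[a]))] = find(("C", g[a]))
    classes = {}
    for e in elems:
        classes.setdefault(find(e), set()).add(e)
    return list(classes.values())

A, B, C = {"a"}, {"b1", "b2"}, {"c1", "c2"}
f, g = {"a": "b1"}, {"a": "c1"}
print(pushout(A, B, C, f, g))
# three classes: {('B','b1'), ('C','c1')} glued together, plus {('B','b2')} and {('C','c2')}
```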
### Categorical Constructions
Of the many constructions that can be performed using categories and functors, we mention just a few that are used in the paper.
**Definition A.6** (Category of Elements).: Let \(F:\mathsf{C}\to\mathsf{Set}\) be a functor from a small category \(\mathsf{C}\) to the category of sets and functions. The _category of elements_ of \(F\), denoted \(\int F\), is the category whose
* objects are pairs \((c,x)\), where \(c\in\operatorname{Ob}(\mathsf{C})\) and \(x\in F(c)\);
* morphisms from \((c,x)\) to \((d,y)\) are morphisms \(f:c\to d\) in \(\mathsf{C}\) such that \(F(f)(x)=y\).
Composition and identities in the category of elements are inherited from those in \(\mathsf{C}\)[10].
**Lemma 1** (Yoneda Embedding).: The _Yoneda embedding_ is the full and faithful functor \(\mathsf{C}^{\mathrm{op}}\to[\mathsf{C},\mathsf{Set}]\) that sends:
* (objects) \(c\mapsto H^{c}\), where \(H^{c}(d):=\operatorname{Hom}_{\mathsf{C}}(c,d)\) is the representable functor;
* (morphisms) \((c\xrightarrow{f}d)\mapsto(H^{d}\xrightarrow{H^{f}}H^{c})\), where \(H^{f}\) is the natural transformation between representable functors given by precomposition with \(f\). |
2308.00950 | Beta-trees: Multivariate histograms with confidence statements | Multivariate histograms are difficult to construct due to the curse of
dimensionality. Motivated by $k$-d trees in computer science, we show how to
construct an efficient data-adaptive partition of Euclidean space that
possesses the following two properties: With high confidence the distribution
from which the data are generated is close to uniform on each rectangle of the
partition; and despite the data-dependent construction we can give guaranteed
finite sample simultaneous confidence intervals for the probabilities (and
hence for the average densities) of each rectangle in the partition. This
partition will automatically adapt to the sizes of the regions where the
distribution is close to uniform. The methodology produces confidence intervals
whose widths depend only on the probability content of the rectangles and not
on the dimensionality of the space, thus avoiding the curse of dimensionality.
Moreover, the widths essentially match the optimal widths in the univariate
setting. The simultaneous validity of the confidence intervals allows to use
this construction, which we call {\sl Beta-trees}, for various data-analytic
purposes. We illustrate this by using Beta-trees for visualizing data and for
multivariate mode-hunting. | Guenther Walther, Qian Zhao | 2023-08-02T05:16:27Z | http://arxiv.org/abs/2308.00950v1 | # Beta-trees: Multivariate histograms with confidence statements
###### Abstract
Multivariate histograms are difficult to construct due to the curse of dimensionality. Motivated by \(k\)-d trees in computer science, we show how to construct an efficient data-adaptive partition of Euclidean space that possesses the following two properties: With high confidence the distribution from which the data are generated is close to uniform on each rectangle of the partition; and despite the data-dependent construction we can give guaranteed finite sample simultaneous confidence intervals for the probabilities (and hence for the average densities) of each rectangle in the partition. This partition will automatically adapt to the sizes of the regions where the distribution is close to uniform. The methodology produces confidence intervals whose widths depend only on the probability content of the rectangles and not on the dimensionality of the space, thus avoiding the curse of dimensionality. Moreover, the widths essentially match the optimal widths in the univariate setting. The simultaneous validity of the confidence intervals allows to use this construction, which we call Beta-trees, for various data-analytic purposes. We illustrate this by using Beta-trees for visualizing data and for multivariate mode-hunting.
## 1 Introduction
This paper is concerned with constructing multivariate data summaries for inference. The classical example of such a data summary is the histogram, which approximates a distribution with a distribution that is piecewise uniform over rectangles. The two main purposes of a histogram are to approximate the probability content of subregions and, especially in a lower dimensional setting, to visualize the data. Another important purpose of the histogram is statistical inference. A relevant example, which is addressed in some detail in Section 6, is the analysis of flow cytometry data. Such data represent multiple parameters of a single cell and an important task in the analysis of such data is to detect and isolate subpopulations of cells. One standard approach to this problem is to construct a multivariate histogram or density estimate and to identify subpopulations with high density regions [32]. However, such a density estimate does not provide any confidence statement about the presence of high density regions separated by a region of low density. It is a notoriously difficult problem to construct density estimates in a multivariate setting due to the 'curse of dimensionality'; this problem is compounded by the need to find corresponding standard errors and to adjust for data snooping when searching for high density regions. In fact, while the vast amount of flow cytometry data which has become available
via modern high-throughput instrumentation has spurred the development of a large number of algorithms to automate this analysis [51, 3, 7, 36], those algorithms generally do not provide any confidence statements about their findings, and there is a widely recognized need for a principled analysis based on statistical guarantees [7, 31, 12].
Section 6 shows how the methodology introduced in this paper can be applied to the cytometry problem to provide guaranteed finite-sample confidence bounds that allow the detection of such high density regions. A key point is that the widths of these confidence intervals depend essentially only on the probability content of the region under consideration and not on the dimensionality of the space, thus avoiding the curse of dimensionality. Developing such statistical methodology is crucial for the quest of the flow cytometry community to discover biological insights by increasing the dimension of the measurements without having to incur a large penalty in power compared to performing statistical analyses in a lower dimensional space.
Another example where multivariate density estimates are being used extensively is in databases, where histograms constitute the most common tool for the succinct approximation of data [24, 47, 20, 23, 1]. This is motivated by the fact that often a dataset cannot be stored in its entirety, so it is necessary to construct a summary (synopsis). Databases typically summarize data by means of a histogram, and the summary is then used to answer various types of queries in the same way the original data would have been used [24]. Since such a summary of data via a histogram will result in some loss of information, it is critically important to provide error bounds ('quality guarantees') for these histogram estimates. This has led to a recent active research effort in the computer science community to derive quality guarantees for histograms [1, 2, 11, 14, 13, 47]. The methodology developed here can be evolved to produce better quality guarantees in that context, and we will report on this aspect in a different paper.
There are a number of other areas where multivariate histograms play an increasingly important role, most notably in astronomy and particle physics, see [37, 40, 35] for surveys. While there has been an intensive effort in the statistical research community during the last 40 years to develop increasingly sophisticated density estimation methods, the histogram continues to be surprisingly popular. The following statement from the astronomical overview paper [40] is illuminating as to why the histogram is a preferred tool in modern astronomical research:
"For example, while smoothed plots of pulses within gamma-ray bursts (GRBs) make pretty pictures, one is really interested in pulse locations, lags,... All of these quantities can be determined directly from the locations, heights, and widths of the blocks [of the histogram] - accurately and free of any smoothness assumptions."
Implicit in this statement is the claim that an appropriately constructed histogram gives a simple summary of the data (in terms of a piecewise constant function) while still allowing to infer the relevant features of the data. Of course, the key point here is that the histogram needs to be constructed appropriately, i.e. by choosing the partition (the number and location of the bins) appropriately. The definition of the histogram does not specify these parameters and leaves that choice to the data analyst [17]. The main contribution of this paper is to provide such a specification which results in favorable statistical properties. Our method is motivated by \(k\)-d trees in computer science, which produce an efficient partition of space that adapts to the data. It turns out that important statistical properties of such \(k\)-d trees can be described by the beta distribution. We then show how this fact can be used to prune the \(k\)-d tree in a data-adaptive way such that the resulting partition has the following two properties: First, with high confidence the distribution is close to uniform on each rectangle of the partition. Second, despite the data-dependent construction we can give guaranteed finite sample simultaneous confidence bounds for the probabilities (and hence for the average densities) of each rectangle in the partition. These two properties show
that the resulting histogram is an appropriate summary of multivariate data that allows finite sample inference for the tasks described above. Moreover, using the multi-scale Bonferroni adjustment of [50] results in widths of these confidence intervals that do not depend on the dimension of the space, and furthermore the widths match the optimal widths in a univariate setting. In that sense the methodology avoids the curse of dimensionality.
\(k\)-d trees are a popular data structure in computer science that is effective for several important applications involving multivariate data. In contrast, its statistical properties in terms of the beta distribution do not yet seem to be available in the literature about \(k\)-d trees. These statistical properties together with the multi-scale Bonferroni adjustment of [50] are the key components for the proposed methodology, which we therefore call Beta-tree.
Before describing our method in Section 2, we give a review of prior work that is relevant for the problem considered here. In the univariate setting, [16] and [6] derive rules for choosing the number of bins in a histogram when the bin widths are of equal size. The common approach is to regard the histogram as a density estimator and to minimize the asymptotic mean integrated square error, which is of order \(n^{-2/3}\). In contrast, in the \(d\)-dimensional setting this optimal error is of the order \(O(n^{-\frac{2}{2+d}})\), [41, Section 3.4]. Analogous results obtain when employing other standard density estimators, such as the kernel density estimator, see [45]. These results show that in order to achieve the same mean integrated square error as in the univariate case, the number of observations needs to increase _exponentially_ with the dimension \(d\). This phenomenon is known as the curse of dimensionality. While the name was introduced by [4] in connection with computational effort, the statistical version of the curse of dimensionality refers to the phenomenon that data become sparse in high dimensional space. For instance, if one samples \(n\) observations from a uniform distribution in a \(d\)-dimensional unit cube, then the number of points in a sub-cube with side length \(r\) is about \(nr^{d}\). Hence in order to obtain the same number of observations in the sub-cube as in the univariate case, the number of observations needs to increase exponentially with the dimension \(d\).
There are several proposals in the literature that involve an adaptive partition of multivariate space, see [34, 25, 37, 30, 28, 29, 26]. These proposals use a penalty criterion or maximum likelihood in order to select a partition out of a collection of candidate partitions, where the candidate partitions are obtained from a starting rectangle by recursively subdividing according to Lebesgue measure. The main statistical results of these papers are rates of convergence for the density estimator resulting from the adaptive partition. For example, [28] show that when the underlying density can be approximated well by functions that are piecewise constant on a dyadic partition, then the rate of convergence does not depend on the dimension of the space. However, none of these proposals provide statistical guarantees such as confidence bounds, which are required for statistical inference such as the mode-hunting problem addressed below. A key distinction in our construction is that the recursive partitioning is done according to empirical measure rather than Lebesgue measure. This allows to obtain finite sample confidence bounds for the probability content of the resulting rectangles despite the fact that the construction was performed in a data-dependent way.
### Contributions of this paper
The prior methods reviewed above assess histogram accuracy in terms of _aggregated_ accuracy over the entire space, such as Hellinger distance and KL-divergence. The resulting rates of convergence do not provide any statistical guarantees such as confidence bounds. In this paper, we consider the original purpose of the histogram: providing good simultaneous estimates for probabilities over rectangles. We construct a concise summary of multivariate data in terms of a histogram such that with high confidence the distribution is close to uniform on each rectangle in the partition. We obtain simultaneous confidence intervals for the probabilities of these rectangles that satisfy finite sample guarantees. We also show that the lengths of these confidence intervals are
close to the optimal lengths in the univariate setting, so this method pays only a very small price for analyzing multivariate data and therefore avoids the curse of dimensionality. These theoretical results are derived assuming only that the distribution is continuous.
The Beta-tree histogram can be seen as the statistical counterpart of the \(k\)-d tree, and it shares the advantageous property of a compact representation of the data. The Beta-tree prunes subtrees of the \(k\)-d tree such that the resulting histogram has fewer regions in the partition while still passing an appropriate goodness-of-fit test. As in the case of the \(k\)-d tree, the computational complexity of the Beta-tree is essentially linear in the sample size, irrespective of the dimension.
Our approach is motivated by the essential histogram of [27], which is the univariate histogram with the fewest number of bins that still passes the generalized likelihood ratio test. The essential histogram also provides statistical guarantees, but it is not clear how to extend the essential histogram to the multivariate setting. The key difficulty is to construct an adaptive partition; furthermore, it is not clear how to carry out the likelihood ratio test on such a data-adaptive partition. The Beta-tree addresses the first problem by pruning subtrees of the \(k\)-d tree rather than adding splits to the partition based on an optimization problem. It addresses the second problem by applying the multiscale Bonferroni adjustment of [50] to the resulting exact beta distributions rather than applying a scale-dependent penalty on the likelihood ratio statistic. The Beta-tree may therefore have some superfluous splits, but we submit that there is not much gained by insisting on the minimum number of splits. In turn, the Beta-tree is much faster to compute than the univariate essential histogram and moreover appears to produce tighter confidence bounds in the univariate setting.
In the remainder of this article, we will describe our method in Section 2 and illustrate our method for data visualization in Section 4, for mode hunting in Section 5, and with a real data example in Section 6. In particular, Section 5 shows how the simultaneous inference may be used for multivariate cluster analysis, which may be of independent interest. Proofs are deferred to Section 8.
## 2 Constructing the Beta-tree
The construction of the Beta-tree proceeds as follows: We grow a \(k\)-d tree and then on each node (i.e. rectangle) that is bounded, we perform a goodness-of-fit test to decide whether the data on that rectangle follow a uniform distribution; if so, we cut the sub-tree below that rectangle.
The goodness-of-fit test checks uniformity on a rectangle \(R\) by examining the empirical density on the sub-rectangles of \(R\) in the \(k\)-d tree. This analysis requires to construct simultaneous confidence intervals for the probability contents of all rectangles in the \(k\)-d tree. It turns out that it is possible to derive such confidence intervals with an exact finite sample level, despite the data-adaptive nature of the \(k\)-d tree. Moreover, by applying a certain weighted multiscale Bonferroni adjustment, it is possible to construct the simultaneous confidence intervals such that their widths match the optimal univariate widths.
The details for these various items are given in the following subsections.
### Building a space partition with a \(k\)-d tree
Given \(d\)-dimensional data \(X_{i}=(X_{i1},\ldots,X_{id}),\;i=1,\ldots,n\), we apply the following recursive space-partitioning scheme. Apart from some details, this amounts to building a \(k\)-d tree, see [5, 19]:
We split \(R_{0}:=\mathbf{R}^{d}\) into two halfspaces by cutting the \(p\)th coordinate (starting with \(p=1\)) at an order statistic
of \(\{X_{ip},i=1,\ldots,n\}\) such that (about) half of the observations fall in each of the resulting two halfspaces:
\[R_{1}:=\{x\in R_{0}:\;x_{p}<X_{(\lceil\frac{n}{2}\rceil),p}\},\;\;\;R_{2}:=\{x \in R_{0}:\;x_{p}>X_{(\lceil\frac{n}{2}\rceil),p}\}.\]
Then we recursively apply this partitioning scheme in \(R_{1}\), using only the observations that fall into \(R_{1}\) when computing the median, and likewise for \(R_{2}\). Thus the rectangle \(R_{k}\) is split into two children rectangles \(R_{2k+1}\) and \(R_{2k+2}\) at a marginal median of the data in \(R_{k}\). Note that the observations that determine the boundaries do not belong to any \(R_{k}\). We let \(p\) cycle through \(\{1,\ldots,d\}\) as we progress through the recursion. That is, we set \(p=D\mod d+1\), where \(D=\lfloor\log_{2}(k+1)\rfloor\) is the tree depth of \(R_{k}\). We stop splitting \(R_{k}\) once it has fewer than \(4\log n\) observations.
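The following Python sketch (ours, for illustration; it ignores the bounding-box variant described below and uses zero-based coordinates, so \(p=D\bmod d\)) implements this recursive median partition.

```python
import numpy as np

def kd_partition(X, idx=None, depth=0, min_pts=None):
    """Recursive median splits; n_k depends only on n and k, never on the values."""
    if idx is None:
        idx = np.arange(X.shape[0])
    if min_pts is None:
        min_pts = 4 * np.log(X.shape[0])
    node = {"n_k": idx.size, "depth": depth}
    if idx.size < min_pts:
        return node
    p = depth % X.shape[1]                          # cycle through the coordinates
    order = idx[np.argsort(X[idx, p])]
    m = int(np.ceil(idx.size / 2)) - 1              # the splitting order statistic
    node["split"] = (p, X[order[m], p])             # this observation joins neither child
    node["left"] = kd_partition(X, order[:m], depth + 1, min_pts)
    node["right"] = kd_partition(X, order[m + 1:], depth + 1, min_pts)
    return node

rng = np.random.default_rng(0)
tree = kd_partition(rng.standard_normal((1000, 2)))
print(tree["n_k"], tree["left"]["n_k"], tree["right"]["n_k"])   # 1000 499 500
```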
Some rectangles \(R_{k}\) in the \(k\)-d tree will be unbounded, whereas the construction of a histogram is necessarily restricted to bounded rectangles. The following modification produces a \(k\)-d tree with only bounded rectangles, which allows to extend the construction of a histogram further out into the tails of the data: We create a bounding box by discarding the observations with the smallest and with the largest order statistic in the first coordinate, and we use these two order statistics (or some other order statistics if one wishes to cut a larger fraction of the observations) as bounding values in the first coordinate. We iterate this process through all \(d\) coordinates. Then we run the above space-partitioning algorithm on the remaining observations with \(R_{0}\) equaling the bounding box. Importantly, all of the statistical methodology described below continues to hold with this modification, in particular the crucial finite sample result given in Proposition 1.
If the data \(X_{1p},\ldots,X_{np}\) are distinct for each coordinate \(p\), then the number of observations \(n_{k}\) in \(R_{k}\) is a function of \(n\) and \(k\) only. It is convenient to define the empirical measure as \(F_{n}(R_{k})=\frac{n_{k}+1}{n}\) rather than \(\frac{n_{k}}{n}\). We discuss this as well as other relevant aspects of \(k\)-d trees in Appendix A.
### Deriving exact confidence bounds for the rectangles in the \(k\)-d tree
We denote the distribution of the \(X_{i}\) by \(F\). The following proposition is the starting point for our construction. It shows that \(F(R_{k})\) follows a beta distribution whose parameters depend only on \(n_{k}\) and the sample size \(n\):
**Proposition 1**: _Let \(X_{i}\), \(i=1,\ldots,n\), be i.i.d. \(F\), where \(F\) is a continuous distribution on \(\mathbf{R}^{d}\). Then every rectangle \(R_{k}\) generated by the partitioning scheme in Section 2.1 is a random set containing a deterministic number \(n_{k}\) of observations in its interior and the random variable \(F(R_{k})\) satisfies_
\[F(R_{k})\sim\text{Beta}\,(n_{k}+1,n-n_{k}).\]
This result is related to early results by Wald (1943) and Tukey (1947), see the comments in the proof in Section 8. We note that Proposition 1 remains valid for other ways of choosing the axis \(p\) for the split, e.g. choosing \(p\) randomly, as well as for other ways to set \(n_{k}\), as long as these do not depend on the data \(\{X_{i}\}_{i=1}^{n}\).
An important consequence of Proposition 1 is that \(F(R_{k})\) is a pivotal quantity, i.e. its distribution does not depend on \(F\). Furthermore, this distribution is known exactly. This is a multivariate generalization of the well known fact that in the univariate case \(F((X_{(i)},X_{(j)}))\sim\text{Beta}\,(j-i,n+1-(j-i))\), see Shorack and Wellner (1986) [43, Chapter 3.1]. We note that this property depends crucially on employing the data adaptive collection \(\{R_{k}\}\) rather than constructing a partition by splitting according to Euclidean distance as is usually done in the literature.
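As a quick sanity check of Proposition 1 (our illustration, not part of the formal development), one can simulate a single median split and compare \(F(R_{1})\) to the stated beta distribution; here \(F\) is standard bivariate normal, so \(F(R_{1})=\Phi(\text{cut})\) for a cut in the first coordinate.

```python
import numpy as np
from scipy.stats import beta, kstest, norm

rng = np.random.default_rng(1)
n, reps = 101, 2000
k = int(np.ceil(n / 2))                       # split at the ceil(n/2)-th order statistic
n_1 = k - 1                                   # observations strictly below the cut
samples = []
for _ in range(reps):
    x = rng.standard_normal((n, 2))
    cut = np.sort(x[:, 0])[k - 1]
    samples.append(norm.cdf(cut))             # F(R_1) for the half-space x_1 < cut
print(kstest(samples, beta(n_1 + 1, n - n_1).cdf).pvalue)   # typically large: Beta(51, 51) fits
```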
Proposition 1 implies an exact \((1-\alpha)\) level confidence interval for \(F(R_{k})\):
\[C_{k}(\alpha):=\ \Big{(}q\mathrm{Beta}(\frac{\alpha}{2},n_{k}+1,n-n_{k}),\,q \mathrm{Beta}(1-\frac{\alpha}{2},n_{k}+1,n-n_{k})\Big{)} \tag{1}\]
where \(q\mathrm{Beta}(\alpha,\cdot,\cdot)\) denotes the \(\alpha\)-quantile of the beta distribution with the given degrees of freedom. Strictly speaking, this is a prediction interval or a tolerance region since \(F(R_{k})\) is a random variable, which measures the probability content of the random set \(R_{k}\). Likewise, an exact \((1-\alpha)\) level confidence interval for the average density \(f(R_{k})=F(R_{k})/|R_{k}|\) is given by dividing the bounds in (1) by the volume \(|R_{k}|\), see (3) below.
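In code, the interval (1) and the corresponding bounds for the average density are one quantile call away (a sketch using scipy; \(n_{k}\) and the volume \(|R_{k}|\) are assumed to be known for the rectangle at hand).

```python
from scipy.stats import beta

def rectangle_ci(n_k, n, volume, alpha=0.1):
    """Exact 1-alpha interval for F(R_k), and for f(R_k) after dividing by |R_k|."""
    lo = beta.ppf(alpha / 2, n_k + 1, n - n_k)
    hi = beta.ppf(1 - alpha / 2, n_k + 1, n - n_k)
    return (lo, hi), (lo / volume, hi / volume)

print(rectangle_ci(n_k=50, n=1000, volume=2.5))   # F-interval roughly (0.040, 0.063)
```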
### Constructing simultaneous confidence bounds with a multiscale Bonferroni adjustment
In order to construct confidence intervals for the \(F(R_{k})\) and \(f(R_{k})=F(R_{k})/|R_{k}|\) that are simultaneously valid for all rectangles \(R_{k}\) in the \(k\)-d tree, we use the weighted Bonferroni adjustment that [50] propose for univariate scan statistics. The motivation for using that weighted adjustment is to obtain good statistical performance across all scales, which in this context means across all tree depths \(D\). The prescription given in [50] is to assign the same significance level to each interval at a given scale, and to weigh the significance level across scales according to a harmonic sequence so that the smallest scale is weighted with a factor \(\frac{1}{2}\), the second smallest scale with a factor \(\frac{1}{3}\) etc. Translated to the setting of a \(k\)-d tree with \(R_{0}\) being a bounding box, this prescription assigns each of the \(N_{D}\) bounded rectangles at tree depth \(D\) the significance level
\[\alpha_{D}=\frac{\alpha}{N_{D}(D_{max}-D+2)\sum_{B=2}^{D_{max}+1}\frac{1}{B}}, \ D\geq 1, \tag{2}\]
where \(D_{max}\) is the maximum depth of the Beta-tree, and \(\alpha_{0}=0\). Therefore, if \(R_{k}\) has tree depth \(D\), then
\[\mathrm{lower}(R_{k})=\frac{q\text{Beta}\;(\frac{\alpha_{D}}{2},n_{k}+1,n-n_ {k})}{|R_{k}|},\quad\mathrm{upper}(R_{k})=\frac{q\text{Beta}\;(1-\frac{ \alpha_{D}}{2},n_{k}+1,n-n_{k})}{|R_{k}|}, \tag{3}\]
provide lower and upper confidence bounds for \(f(R_{k})=F(R_{k})/|R_{k}|\), and these confidence bounds have simultaneous coverage level \(1-\alpha\) for all bounded \(R_{k}\) since the corresponding \(\alpha_{D}\) sum to \(\alpha\).
Likewise, using \(C_{k}(\alpha_{D})\) in (1) gives simultaneous confidence bounds for the \(F(R_{k})\).
If \(R_{0}\) equals \(\mathbf{R}^{d}\) rather than a bounding box, then there will be no bounded rectangles at the smallest depths \(D\) and (2) changes accordingly. Appendix A gives the details.
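The depth-dependent levels in (2) are easy to tabulate; the sketch below assumes the bounding-box case with \(N_{D}=2^{D}\) bounded rectangles at depth \(D\) (the fully split situation) and checks that the levels sum back to \(\alpha\).

```python
def depth_levels(alpha, D_max, N=lambda D: 2 ** D):
    """Weighted Bonferroni levels alpha_D of (2) for depths D = 1, ..., D_max."""
    harmonic = sum(1.0 / B for B in range(2, D_max + 2))
    return {D: alpha / (N(D) * (D_max - D + 2) * harmonic)
            for D in range(1, D_max + 1)}

levels = depth_levels(alpha=0.1, D_max=5)
print(sum(2 ** D * levels[D] for D in levels))     # ~0.1: the levels exhaust alpha
```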
### Pruning the \(k\)-d tree by checking goodness-of-fit
A key principle for constructing a histogram is to find a parsimonious representation which still gives an good approximation to the distribution, see e.g. the discussion in [27]. This principle can be readily implemented for a nested partition as follows: We keep the largest rectangles \(R_{k}\) for which we are confident that the distribution of the data on \(R_{k}\) is the uniform distribution specified by a histogram that uses \(R_{k}\) as a bin1. Assessing whether the distribution is uniform on a rectangle \(R_{k}\) amounts to a goodness-of-fit test. Such a test can be readily implemented for the multiscale partition given by the \(k\)-d tree since the rectangles \(R\subset R_{k}\) constitute an appropriate collection of test sets for which we can compare the empirical density to that on \(R_{k}\). This simply amounts to checking whether the empirical density on \(R_{k}\) lies in all the confidence intervals (3) for the \(f(R)\), \(R\subset R_{k}\), i.e.
in the intersection of these confidence intervals. This intersection is given by \((\widetilde{\text{lower}}(R_{k}),\widetilde{\text{upper}}(R_{k}))\), where these bounds are defined recursively as follows:
\[\widetilde{\text{lower}}(R_{k}) =\max\Big{(}\text{lower}(R_{k}),\,\widetilde{\text{lower}}(1st \,\text{child}\,\text{of}\,R_{k}),\,\widetilde{\text{lower}}(2nd\,\text{child }\,\text{of}\,R_{k})\Big{)} \tag{4}\] \[\widetilde{\text{upper}}(R_{k}) =\min\big{(}\text{upper}(R_{k}),\,\widetilde{\text{upper}}(1st \,\text{child}\,\text{of}\,R_{k}),\,\widetilde{\text{upper}}(2nd\,\text{child }\,\text{of}\,R_{k})\big{)}\]
if \(R_{k}\) has children, otherwise \(\widetilde{\text{lower}}(R_{k})=\text{lower}(R_{k})\) and \(\widetilde{\text{upper}}(R_{k})=\text{upper}(R_{k})\).
Now we can define the Beta-tree as the collection of the maximal (with respect to inclusion) bounded rectangles of the \(k\)-d tree that pass the goodness-of-fit test. That is, we take all rectangles \(R_{k}\) that are bounded and satisfy \(\widetilde{\text{lower}}(R_{k})\leq h_{k}\leq\widetilde{\text{upper}}(R_{k})\) while none of the ancestors of \(R_{k}\) satisfy these two conditions. Here \(h_{k}\) is the empirical average density of \(R_{k}\):
\[h_{k}:=\frac{F_{n}(R_{k})}{|R_{k}|}=\frac{n_{k}+1}{n|R_{k}|} \tag{5}\]
The tree structure makes it easy to find these maximal rectangles recursively or iteratively, see Appendix A.
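A recursive sketch of this pruning step is given below; each node of the \(k\)-d tree is assumed to carry its bounds from (3) as lower and upper, its empirical density (5) as h, a bounded flag, and a list of children (the names are ours, not a published interface).

```python
def tight_bounds(node):
    """The recursion (4): intersect the node's interval with its children's."""
    lo, hi = node["lower"], node["upper"]
    for child in node.get("children", []):
        c_lo, c_hi = tight_bounds(child)
        lo, hi = max(lo, c_lo), min(hi, c_hi)
    node["lower_tilde"], node["upper_tilde"] = lo, hi
    return lo, hi

def beta_tree(node, out=None):
    """Collect the maximal bounded rectangles passing the goodness-of-fit check."""
    if out is None:
        out = []
        tight_bounds(node)
    if node["bounded"] and node["lower_tilde"] <= node["h"] <= node["upper_tilde"]:
        out.append(node)                           # keep it and prune everything below
    else:
        for child in node.get("children", []):
            beta_tree(child, out)
    return out
```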
The Beta-tree histogram is the histogram constructed using the rectangles in the Beta-tree. That is, on the rectangle \(R_{k}\) the histogram has height \(h_{k}\) given by (5). The Beta-tree histogram lies in the \((1-\alpha)\) confidence set for \(F\) that is given by the multiscale goodness-of-fit test, and it is the most parsimonious distribution in that confidence set among histograms that use rectangles in the \(k\)-d tree as potential bins.
## 3 The simultaneous confidence intervals attain the optimal univariate widths
The Beta-tree satisfies a key goal of the histogram: It provides a parsimonious summary of the data while still giving a good approximation to the distribution. Importantly, the data-adaptive construction using the goodness-of-fit test described in the previous section does not invalidate the statistical guarantees (3) for \(f(R_{k})\), since those confidence bounds are simultaneous for all \(R_{k}\). This raises the question whether this simultaneity results in overly conservative confidence bounds. It turns out that this is not the case: In fact, the widths of the confidence intervals essentially match the optimal simultaneous widths in the _univariate_ setting. This shows that this data-adaptive construction avoids the curse of dimensionality, and there is only an asymptotically negligible price to pay compared to the univariate setting for effectively summarizing multivariate data with a histogram.
To make this precise, we first summarize the relevant lower bound in the univariate case. It is well known that the empirical measure of an interval \(I\), \(F_{n}(I)\), estimates \(F(I)\) with precision \(\sqrt{n}\frac{|F_{n}(I)-F(I)|}{\sqrt{F(I)(1-F(I))}}=O_{p}(1)\). If one wishes to estimate \(F(I)\) simultaneously for all intervals \(I\), then there is an unavoidable penalty of size \(\sqrt{2\log\frac{e}{F(I)}}\): Theorem 1 in [27] shows that if \(\mathcal{J}_{n}=\bigcup_{i}I_{i,n}\) is a partition of the line such that \(F(I_{i,n})=p_{n}\), \(i=1,\ldots,\lfloor\frac{1}{p_{n}}\rfloor\) and \(\frac{\log^{2}n}{n}\leq p_{n}\to 0\), then
\[\mathbf{P}_{F}\Big{(}\text{for some }I\in\mathcal{J}_{n}:\sqrt{n}\frac{|F(I)-F_{n} (I)|}{\sqrt{F(I)(1-F(I))}}\ \geq\ \Big{(}\sqrt{2}-\epsilon_{n}\Big{)}\,\sqrt{\log\frac{e}{F(I)}}\Big{)}\ \to 1 \qquad(n\to\infty) \tag{6}\]
with \(\epsilon_{n}\to 0\) at a certain rate. Moreover, this penalty cannot be improved with any other estimator in place of \(F_{n}\). The constant \(\sqrt{2}\) is important as it measures the difficulty of the estimation problem, see [50]. The lower
bound (6) implies a lower bound for any confidence interval \(C\) for \(F(I)\), because if \(|F(I)-F_{n}(I)|\) satisfies a lower bound, then the radius \(\sup_{G\in C}|G-F_{n}(I)|\) must also satisfy this bound.
Theorem 1 shows that despite the data-adaptive construction in multivariate space, the simultaneous confidence intervals \(C_{k}(\alpha_{D})\) attain the critical constant \(\sqrt{2}\) of the univariate lower bound (6), up to a term \(\epsilon_{n}\to 0\):
**Theorem 1**: _If \(F\) is a continuous distribution on \(\mathbf{R}^{d}\), then for every \(k\) with \(n_{k}\in[\log^{2}n,n^{q}]\), \(q\in(0,1)\), the confidence interval \(C_{k}(\alpha_{D})\) for \(F(R_{k})\) satisfies_
\[\sup_{G\in C_{k}(\alpha_{D})}\sqrt{n}\frac{|G-F_{n}(R_{k})|}{\sqrt{G(1-G)}}\ \leq\ \left(\sqrt{2}+\frac{4}{\sqrt{\log n}}\right)\sqrt{\log\frac{e}{G}}\]
Some remarks:
1. Note that the theorem applies to all confidence intervals in the \(k\)-d tree, not just those of the pruned Beta-tree.
2. The inequality in the theorem is deterministic: While \(R_{k}\) is random, \(F_{n}(R_{k})=\frac{n_{k}+1}{n}\) as well as \(C_{k}(\alpha)\) are deterministic. In particular, the above inequality holds uniformly in \(k\).
3. The theorem applies to rectangles \(R_{k}\) that are not very large, i.e. \(n_{k}\leq n^{q}\). A different Bonferroni weighting would allow to extend the theorem to all rectangle sizes, but the discussion in [50] for the univariate regression setting suggests that it is worthwhile to trade off the optimality for large rectangles in order to obtain a better finite sample performance for smaller rectangles, which are typically more relevant in practice.
## 4 Summarizing data using the Beta-tree histogram
In this section we apply the Beta-tree histogram to summarize two- and three-dimensional data from various distributions. The left plot in Figure 1 shows the Beta-tree for a sample from a two-dimensional normal distribution with correlation coefficient 0.5, while the right plot shows the Beta-tree with a bounding box for a bivariate uniform distribution. Both samples have sizes \(n=1000\) and the bounding box is constructed as described in Section 2.1 by excluding 0.5% of the data in each tail of each coordinate.
Figure 1: Beta-tree histograms for samples from a bivariate normal (left) and a bivariate uniform (right), \(n=1000\).
Figure 2 shows the Beta-tree histogram with and without bounding box for a larger sample of size \(n=20000\) from the same bivariate normal distribution. Due to the construction via the multiscale goodness-of-fit test, the Beta-tree histogram produces larger bins where the density does not change much and smaller bins where the density changes quickly. This is an important and necessary feature in order to obtain a summary that is both parsimonious and accurate. In particular, the uniform distribution in Figure 1 results in a single bin for the Beta-tree. This desirable outcome is notoriously difficult to achieve with other histogram rules.
In order to evaluate the Beta-tree for more complex distributions we consider two- and three- dimensional data from mixtures of multivariate Gaussian distributions. We consider the following two scenarios:
In the first scenario, we sample \(n=2000\) observations from the following two-dimensional mixture:
\[\frac{2}{5}\mathcal{N}\left(\begin{pmatrix}-1.5\\ 0.6\end{pmatrix},\begin{pmatrix}1&0.5\\ 0.5&1\end{pmatrix}\right)+\frac{2}{5}\mathcal{N}\left(\begin{pmatrix}2\\ -1.5\end{pmatrix},\begin{pmatrix}1&0\\ 0&1\end{pmatrix}\right).\]
In the second scenario, we sample \(n=20,000\) observations from the three-dimensional mixture
\[\frac{2}{5}\mathcal{N}\left(\begin{pmatrix}-1.5\\ 0.6\\ 1\end{pmatrix},\begin{pmatrix}1&0.5&0.5\\ 0.5&1&0.5\\ 0.5&0.5&1\end{pmatrix}\right)+\frac{2}{5}\mathcal{N}\left(\begin{pmatrix}2\\ -1.5\\ 0\end{pmatrix},\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&1\end{pmatrix}\right)\\ +\frac{1}{5}\mathcal{N}\left(\begin{pmatrix}-2.6\\ -3\\ -2\end{pmatrix},\begin{pmatrix}1&-0.4&0.6\\ -0.4&1&0\\ 0.6&0&1\end{pmatrix}\right).\]
We chose these two distributions because some of the components are correlated and because individual components are relatively disjoint from each other. For example, if we were to classify which component an observation is sampled from using a Bayes classifier, then the accuracy is over 98% in both scenarios.
We compare three ways to visualize the data. First, we use a kernel density estimate. In more detail, we use a Gaussian kernel and select the bandwidth with biased cross-validation [39] in two dimensions and with the plug-in bandwidth estimator [42, 52, 10] in three dimensions. Both estimates are implemented in the R package ks. The kernel density estimates are shown in Figures 3(a) and 4(a). In Figure 4 we plot the estimated density along
Figure 2: Beta-tree histograms without a bounding box (left) and with a bounding box (right), \(n=20000\).
the plane \(z=1\) as well as observations that lie in a slab where \(0.8\leq z\leq 1.2\).
Second, we plot a histogram with a fixed number of 15 equally sized bins in each dimension, see Figures 3(b) and 4(b). For the three-dimensional mixture the resulting histogram has \(15^{3}=3375\) bins, of which only 834 are not empty.
Third, we show the Beta-tree histogram with confidence level \(1-\alpha=90\%\) in Figures 3(c) and 4(c). The Beta-tree histogram consists of only 25 rectangles in the two-dimensional setting and 125 rectangles in the three-dimensional setting. This exemplifies how the Beta-tree histogram yields a more parsimonious summary of the data compared to a histogram with a fixed number of bins.
Figures 3(d) and 4(d) show the Beta-tree histogram with bounding box, which is obtained by excluding 0.5% of the data in each tail of each coordinate. The Beta-tree histogram with bounding box consists of 36 rectangles in the two-dimensional setting and 315 rectangles in the three-dimensional setting.
We note that Figure 4 suggests that all four methods are able to distinguish the first and the second components in the mixture. However, only the two Beta-tree methods provide a confidence statement to this effect. Section 5 gives more details for such multivariate mode hunting with finite sample guarantees.
Figure 3: Density estimate and histograms for a mixture of two-dimensional Gaussians, \(n=2000\).
Figure 4: Density estimate and histograms for a mixture of three-dimensional Gaussians, \(n=20000\). The scatterplot shows all observations within a slab perpendicular to the \(z\) coordinate with \(0.8\leq z\leq 1.2\). We plot the histograms and kernel density estimates along a slice where \(z=1\), and we only show rectangles where the empirical density is at least \(2\times 10^{-4}\) so that we do not plot empty rectangles.
## 5 Multivariate mode hunting
This section gives an example of how the Beta-tree can be used for inference, namely to perform multivariate mode detection with finite sample guarantees. The interest in multivariate mode hunting derives from the important problem of detecting subpopulations in a distribution. One prominent approach to this problem identifies such subpopulations with high density regions, see e.g. [22, 21, 44, 33, 18, 34, 9, 46]. This gives rise to the problem of finding confidence bounds for the number and location of modes in a density. Multivariate mode hunting is considered to be a difficult statistical problem due to the inherent multiple testing and due to the curse of dimensionality. Indeed, the statistical statements in the above references are typically of approximate or asymptotic nature.
Here we show how mode hunting can be performed by using the Beta-tree as a summary of the data. The advantage of using the Beta-tree for this task is that the analysis inherits the statistical guarantees that come with the Beta-tree.
In order to check that a density \(f\) has two separate modes at locations \(x\) and \(y\), it is necessary to check that on every (possibly curved) path that connects \(x\) and \(y\), there is a point \(z\) with \(f(z)<\min(f(x),f(y))\). This poses not only a statistical problem due to the simultaneous estimation of \(f\), but also a computational problem since all paths would need to be checked. This has motivated various approximations proposed in the literature, such as checking only along convex combinations \(z_{\alpha}=\alpha x+(1-\alpha)y\), \(0\leq\alpha\leq 1\), see [9]. A Beta-tree makes it possible to avoid this restriction: Since the Beta-tree segments \(\mathbf{R}^{d}\) into rectangles for which we are confident that \(f\) is approximately constant, it is sufficient to check all paths of adjacent2 rectangles in the Beta-tree. This is typically a manageable computational task since the number of rectangles in the Beta-tree is not large. For example, the groups of rectangles in dark green in the top left and lower right corners in Figure 3(c) have higher density compared to the rectangles in between. Indeed, the underlying mixture distribution has two modes at \((-1.5,0.6)\) and \((2,-1.5)\). Moreover, the Beta-tree provides simultaneous confidence bounds (3) for the average density \(f(R)=\int_{R}f(x)\mathrm{d}x/|R|\) (which equals \(f\) on \(R\) if \(f\) is constant on \(R\)). This makes it straightforward to check the condition \(f(R_{1})<\min(f(R_{2}),f(R_{3}))\) by checking whether the upper confidence bound for \(f(R_{1})\) is smaller than the lower confidence bounds for \(f(R_{2})\) and \(f(R_{3})\). Since these confidence bounds are simultaneous, any statement involving such inequalities along multiple paths will inherit the finite sample confidence level \(1-\alpha\). In particular, it is possible to claim the existence of a certain number of modes with a finite sample guarantee.
Footnote 2: Since \(R_{i}=\times_{p=1}^{d}(l_{ip},u_{ip})\), \(R_{i}\) and \(R_{j}\) are adjacent iff \([l_{ip},u_{ip}]\cap[l_{jp},u_{jp}]\neq\emptyset\) for all \(p=1,\ldots,d\).
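In code, the adjacency test of the footnote is a one-liner over the per-coordinate intervals (a sketch; each rectangle is assumed to be stored as a list of \((l_{p},u_{p})\) pairs).

```python
def adjacent(R_i, R_j):
    """Closed intervals must intersect in every coordinate."""
    return all(l_i <= u_j and l_j <= u_i
               for (l_i, u_i), (l_j, u_j) in zip(R_i, R_j))

print(adjacent([(0, 1), (0, 1)], [(1, 2), (0.5, 1.5)]))   # True: shared face
print(adjacent([(0, 1), (0, 1)], [(2, 3), (0.0, 1.0)]))   # False
```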
We summarize our algorithm in Algorithm 1. In short, we first tag the rectangle with highest empirical density as a mode. Then, we iterate through the rectangles in the Beta-tree in descending order of their empirical density. For each rectangle \(R_{i}\), we iterate through every path from \(R_{i}\) to every mode that has been tagged so far, and if we find a path where _no_ rectangle along the path has lower density than both endpoints, then \(R_{i}\) will not be tagged as a mode.
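A compact Python sketch of this procedure follows; the rectangles are assumed to carry their empirical density h and the simultaneous bounds lower and upper from (3), and paths(R, M, max_len) is a hypothetical helper that enumerates paths of adjacent rectangles from R to M.

```python
def find_modes(rectangles, paths, max_len=6):
    """Tag modal rectangles in descending order of empirical density."""
    modes = []
    for R in sorted(rectangles, key=lambda r: r["h"], reverse=True):
        is_mode = True
        for M in modes:
            for path in paths(R, M, max_len):
                separation = min(R["lower"], M["lower"])
                # a path with no rectangle dipping below both endpoints kills the candidate
                if not any(Q["upper"] < separation for Q in path):
                    is_mode = False
                    break
            if not is_mode:
                break
        if is_mode:
            modes.append(R)
    return modes
```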
We now apply our procedure to identify modes in the two mixture scenarios considered in Section 4. For faster computation we only considered paths with lengths at most 6. In the two-dimensional Gaussian mixture we identify two modes, whose corresponding rectangles are striped in Figure 5 (left). The true modes (shown as red asterisks) are close to these two rectangles. Figure 5 (right) shows the confidence intervals for the \(f(R)\) along the shortest path (in terms of number of rectangles) between the two identified modes. The plot shows that there exists a rectangle whose upper confidence bound is below the minimum of the lower bounds for the two modal rectangles, which is marked by a dashed line.
For the three-dimensional mixture we are able to identify three modes. We report the locations of true modes
and the centers of the modal rectangles identified by the algorithm in Table 1.
The estimated modes are again close to the true modes.
## 6 A real data example
We now apply our approach to visualize and identify cell populations in flow cytometry data. In [8], flow cytometry was used to analyze peripheral blood samples collected from patients who underwent a bone marrow transplant. The objective was to identify biomarkers which indicate graft-versus-host disease (GvHD), which occurs in allogeneic hematopoietic stem cell transplant recipients when donor-immune cells attack tissue of the recipient. Researchers initially identified 121 subpopulations from 6 biomarkers, among which they pinpointed the population identified as CD3+ CD4+ CD8b+ to have the highest correlation with the development of acute GvHD.
A subset of the data from this research is publicly available as GvHD in the R package mclust. The data contain 9083 observations from a patient with GvHD and 6809 observations from a control patient. The data include four biomarkers CD4, CD8b, CD3, and CD8. Since the sample size is limited, we will construct a histogram using only the first two variables CD4 and CD8b.
We constructed a Beta-tree histogram using the data of the patient who developed GvHD (whom we refer to as the case patient) and Algorithm 1 identified two modes, which indicates the presence of two cell populations, see Figure 6a. We report the centers of the two modal rectangles in the column "Center" in Table 2. If these two
\begin{table}
\begin{tabular}{|c|c|c|} \hline Index & Estimated modes & True modes \\ \hline
1 & (-2.7,-3.0,-2.4) & (-2.6, -3, -2) \\
2 & (-1.0,0.8,1.7) & (-1.5, 0.6, 1) \\
3 & (1.7,-1.6,0.3) & (2, -1.5, 0) \\ \hline \end{tabular}
\end{table}
Table 1: The coordinates of the centers of the modal rectangles identified by the algorithm (“Estimated modes”) and of the true modes.
populations are indicative of GvHD, then the empirical density in these regions from the control patient, which is given in the column "Density (control)", should be lower compared to that of the case patient. In this example, the empirical density of the control patient is indeed well below the confidence intervals for the case patient in both regions, suggesting that these regions might be specific to GvHD.
Finally, we compute a bounded Beta-tree histogram for CD4, CD8b, CD3 and visualize one slice of the histogram along the axis \(CD3=1.0\). We also plot observations within the slab \(0.8\leq CD3\leq 1.2\). The Beta-tree histogram captures one cluster of observations characterized by high values of CD4, CD8b and CD3, see Figure 6b.
## 7 Discussion
This paper introduces Beta-trees for summarizing multivariate data. The Beta-tree possesses several important properties: It provides a compact summary of the data by partitioning the sample space into rectangles on which the distribution is close to uniform. The partition is adaptive to the data, which is key for avoiding the curse of dimensionality. The probability content of each rectangle in the partition has a sampling distribution that is known exactly and thus allows one to set finite sample confidence bounds. A multiscale Bonferroni adjustment results in simultaneous confidence bounds whose widths do not depend on the dimension and match the optimal univariate widths, thus avoiding the curse of dimensionality. The simultaneous validity of the confidence intervals allows
\begin{table}
\begin{tabular}{|c|c|c|} \hline Center & CI (Case) & Density (Control) \\ \hline (-0.13,0.01) & (0.42, 0.74) & 0.009 \\ (1.87, 1.59) & (0.03, 0.06) & 0.0012 \\ \hline \end{tabular}
\end{table}
Table 2: Column “Center” shows the coordinates of the center of the modal rectangles in the sample collected from the case patient. Column “CI (Case)” shows the confidence intervals for the average densities of these two modal regions. Column “Density (Control)” reports the empirical density of the data collected from the control patient.
Figure 5: Left: Beta-tree histogram of a two-dimensional Gaussian mixture with confidence level \(1-\alpha=0.9\). The modal rectangles are indicated by stripes and the red stars mark the two modes of the mixture distribution. Right: The confidence intervals for \(f(R)\) for every rectangle along the shortest path between the two modes. The points indicate the empirical density in each rectangle and the dashed line shows the minimum of the lower confidence bounds for the two modal rectangles.
to use Beta-trees for various data-analytic tasks. As an example, we showed how Beta-trees can be used for multivariate mode hunting with finite sample guarantees. We illustrated this with flow cytometry data.
## 8 Proofs
### Proof of Proposition 1
The proof of Proposition 1 is based on the following result:
**Proposition 2**: _Let \(X_{1},\ldots,X_{n}\in\mathbf{R}^{d}\) be i.i.d. with distribution \(F\) and let \(g,h:\mathbf{R}^{d}\to\mathbf{R}\) be measurable functions. Write \(g(X)_{(k)}\) for the kth order statistic of the \(g(X_{i})\), \(i=1,\ldots,n\). For fixed \(j\) and \(k\) with \(1\leq j<k\leq n\) set_
\[R := \Big{\{}x\in\mathbf{R}^{d}:\;g(x)<g(X)_{(k)}\Big{\}},\] \[\{Y_{1},\ldots,Y_{k-1}\} := \{X_{1},\ldots,X_{n}\}\cap R,\] \[S := \Big{\{}x\in R:\;h(x)<h(Y)_{(j)}\Big{\}}.\]
_Assume that \(F,g,h\) are such that the \(g(X_{i})\) and the \(h(Y_{i})\) have a continuous cdf, so \(R\) contains \(\#R=k-1\) observations a.s. and \(\#S=j-1\) a.s. Then_
1. \(F(R)\sim\text{Beta }(\#R+1,n-\#R)\)__
2. \(\frac{F(S)}{F(R)}\sim\text{Beta }(\#S+1,\#R-\#S)\)__
3. \(\frac{F(S)}{F(R)}\) _and_ \(F(R)\) _are independent_
4. \(F(S)\sim\text{Beta }(\#S+1,n-\#S)\)__
Figure 6: Beta-tree histograms of data from the case patient, \(1-\alpha=0.9\). (a) A two-dimensional histogram of CD8b vs. CD4. The two identified modal rectangles are marked by red stripes. A random sample of 2000 observations is also shown. (b) A three-dimensional bounded Beta-tree histogram of CD4, CD8b, and CD3. The display shows rectangles in the Beta-tree histogram that intersect the plane \(\mathrm{CD3}=1.0\). The displayed points are observations within the slab \(0.8\leq\mathrm{CD3}\leq 1.2\).
* _The results (a)-(d) continue to hold when the above construction is iterated, starting with the above_ \(S\) _in place of_ \(R\) _and using prescribed functions_ \(g\) _and_ \(h\) _that may be different from the initial functions._
We point out that this result requires that \(k\) and \(j\), and hence \(\#R\) and \(\#S\), are prescribed (i.e. do not depend on the data), and that the inequalities in the definition of \(R\) and \(S\) are strict as using '\(\leq\)' for \(R\) will add one observation to \(R\) which will generally invalidate the beta distribution for \(F(S)\).
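For intuition, claims (a), (c) and (d) are easy to check by simulation. The following is a minimal Monte Carlo sketch in Python (using numpy and scipy; an illustrative check, not part of the formal argument) for \(F\) the uniform distribution on the unit square with \(g(x)=x_{1}\) and \(h(x)=x_{2}\), so that \(F(R)=g(X)_{(k)}\) and \(F(S)=g(X)_{(k)}\cdot h(Y)_{(j)}\):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, k, j, reps = 200, 50, 20, 2000
FR, FS = np.empty(reps), np.empty(reps)
for r in range(reps):
    X = rng.uniform(size=(n, 2))        # F = Unif([0,1]^2), so F(R_t) = t and F(S) = t*s
    t = np.sort(X[:, 0])[k - 1]         # g(X)_(k) with g(x) = x_1
    Y = X[X[:, 0] < t]                  # the k-1 observations falling in R
    s = np.sort(Y[:, 1])[j - 1]         # h(Y)_(j) with h(x) = x_2
    FR[r], FS[r] = t, t * s
print(stats.kstest(FR, stats.beta(k, n + 1 - k).cdf))    # claim (a): should not reject
print(stats.kstest(FS, stats.beta(j, n + 1 - j).cdf))    # claim (d): should not reject
print(np.corrcoef(FR, FS / FR)[0, 1])                    # claim (c): close to zero
```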
The first such result about the beta distribution of \(F(S)\) for multivariate \(S\) constructed from order statistics appears to be due to Wald (1943), who considered the special case where \(g\) and \(h\) are univariate marginals [49]. Wald's proof has an important gap which was patched by Tukey (1947) with a lemma that he called 'Wald's Principle' [48]. The works of Wald and Tukey are hampered by the methodology available in the 1940s. We provide a short and elementary proof of Proposition 2 and a more general version of Wald's Principle in Lemma 1.
**Lemma 1**: _(Wald's Principle) Let \(X_{1},\ldots,X_{n}\in\mathbf{R}^{d}\) be i.i.d. with distribution \(F\) and let \(g:\mathbf{R}^{d}\rightarrow\mathbf{R}\) be a measurable function. Suppose \(F\) and \(g\) are such that the univariate random variables \(g(X_{i})\) have a continuous cdf. Fix \(k\in\{1,\ldots,n\}\) and write \(g(X)_{(k)}\) for the \(k\)th order statistic of the \(g(X_{i})\), \(i=1,\ldots,n\). Then divide the \(X_{i}\) into two groups according to whether \(g(X_{i})\) is smaller or larger than \(g(X)_{(k)}\):_
\[\{Y_{1},\ldots,Y_{k-1}\} :=\{X_{i}:\ g(X_{i})<g(X)_{(k)}\}\] \[\{Z_{1},\ldots,Z_{n-k}\} :=\{X_{i}:\ g(X_{i})>g(X)_{(k)}\}\]
_where the \(Y_{i}\) and the \(Z_{i}\) are enumerated in the original order of outcome among the \(X_{i}\)._
_Then, conditional on \(g(X)_{(k)}=t\):_
* _The_ \(\{Y_{i}\}\) _and the_ \(\{Z_{i}\}\) _are independent._
* _The_ \(Y_{1},\ldots,Y_{k-1}\) _are i.i.d. with distribution_ \(G(\cdot)=\frac{F(\cdot\cap R_{t})}{F(R_{t})}\) _and the_ \(Z_{1},\ldots,Z_{n-k}\) _are i.i.d. with distribution_ \(K(\cdot)=\frac{F(\cdot\cap R_{t}^{c})}{F(R_{t}^{c})}\)_, where_ \(R_{t}:=\{x\in\mathbf{R}^{d}:\ g(x)<t\}\)_._
The lemma can be seen as a generalization of the following well known fact about univariate order statistics: Conditional on \(X_{(k)}=t\), the vectors \((X_{(1)},\ldots,X_{(k-1)})\) and \((X_{(k+1)},\ldots,X_{(n)})\) are independent and the joint law of \((X_{(1)},\ldots,X_{(k-1)})\) is that of the order statistics of \(k-1\) i.i.d. random variables with distribution \(\frac{F(\cdot\cap(-\infty,t))}{F((-\infty,t))}\), see Theorem 1.8.1 in [38]. Lemma 1 generalizes this result by establishing that the unordered \(X_{i}\) corresponding to \((X_{(1)},\ldots,X_{(k-1)})\) are i.i.d., and by generalizing this result to multivariate \(X_{i}\) ordered via \(g\).
**Proof of Lemma 1:** We first note that \(g(X_{1})\) having a continuous cdf implies that \(g(X_{i})=g(X)_{(k)}\) for exactly one index \(i\) a.s., hence there are a.s. \(k-1\) observations \(Y_{i}\) and \(n-k\) observations \(Z_{i}\).
For Borel sets \(B_{i},\tilde{B}_{i}\in\mathbf{R}^{d}\):
\[\begin{split}&\mathrm{I\!P}\Big{(}Y_{1}\in B_{1},\ldots,Y_{k-1} \in B_{k-1},Z_{1}\in\tilde{B}_{1},\ldots,Z_{n-k}\in\tilde{B}_{n-k}\Big{|}g(X)_ {(k)}\in[t,t+dt)\Big{)}\\ &=\frac{\mathrm{I\!P}\Big{(}\mathcal{A}:=\Big{\{}Y_{i}\in B_{i} \cap R_{t}\text{ for }i=1,\ldots,k-1,Z_{j}\in\tilde{B}_{j}\cap R_{t+dt}^{c}\text{ for }j=1,\ldots,n-k,g(X)_{(k)}\in[t,t+dt)\Big{\}}\Big{)}}{ \mathrm{I\!P}\Big{(}g(X)_{(k)}\in[t,t+dt)\Big{)}}\end{split} \tag{7}\]
We now write \({\cal A}\) in terms of the \(X_{i}\): Let \(T_{k,n}\) be the set of permutations \(\tau\) of \(\{1,\ldots,n\}\) such that \(\tau(1)<\tau(2)<\ldots<\tau(k-1)\) and \(\tau(k+1)<\tau(k+2)<\ldots<\tau(n)\). Since the \(Y_{i}\) and the \(Z_{j}\) are enumerated in the original order of outcome among the \(X_{i}\) we must have \(Y_{i}=X_{\tau(i)}\) and \(Z_{j}=X_{\tau(k+j)}\) for some \(\tau\in T_{k,n}\). In fact, it is readily seen that \({\cal A}=\bigcup_{\tau\in T_{k,n}}B_{\tau}\), where
\[B_{\tau}\,:=\,\Big{\{}X_{\tau(i)}\in B_{i}\cap R_{t}\mbox{ for }i=1,\ldots,k-1,\,X_ {\tau(k+j)}\in\tilde{B}_{j}\cap R_{t+dt}^{c}\mbox{ for }j=1,\ldots,n-k,\,X_{\tau(k)}\in R_{t+dt}\! \setminus\!R_{t}\Big{\}}.\]
The \(B_{\tau}\) are mutually disjoint because different \(\tau\) result in different assignments of the \(X_{i}\) to \(R_{t}\), \(R_{t+dt}\setminus R_{t}\) and \(R_{t+dt}^{c}\). There are \({n\choose k-1}(n-k+1)={n!\over(k-1)!(n-k)!}\) permutations in \(T_{k,n}\): there are \({n\choose k-1}\) ways to choose \(\tau(1)<\ldots<\tau(k-1)\) and \(n-k+1\) possibilities for \(\tau(k)\), which then uniquely determine \(\tau(k+1)<\ldots<\tau(n)\). Therefore
\[{\rm I\!P}({\cal A})=\sum_{\tau\in T_{k,n}}{\rm I\!P}(B_{\tau})={n!\over(k-1)! (n-k)!}\left(\prod_{i=1}^{k-1}F(B_{i}\cap R_{t})\right)\left(\prod_{j=1}^{n-k} F(\tilde{B}_{j}\cap R_{t+dt}^{c})\right)\ dF_{g}(t) \tag{8}\]
where \(F_{g}\) denotes the cdf of \(g(X_{1})\) and \(dF_{g}(t)=F_{g}(t+dt)-F_{g}(t)\). As for the denominator of (7), since \(F_{g}\) is continuous it is known that the univariate \(k\)th order statistic \(g(X)_{(k)}\) has the following density w.r.t. the cdf \(F_{g}\):
\[{\rm I\!P}\Big{(}g(X)_{(k)}\in[t,t+dt)\Big{)}={n!\over(k-1)!(n-k)!}\Big{(}F_{ g}(t)\Big{)}^{k-1}\Big{(}1-F_{g}(t)\Big{)}^{n-k}dF_{g}(t),\]
see Theorem 1.5.1 in [38]. Using \(F_{g}(t)=F(R_{t})\) and (8) shows that (7) equals
\[\prod_{i=1}^{k-1}{F(B_{i}\cap R_{t})\over F(R_{t})}\ \prod_{j=1}^{n-k}{F(\tilde{B}_{j} \cap R_{t}^{c})\over F(R_{t}^{c})}\]
which establishes the claims of the lemma.
As an aside we note that it appears to be surprisingly difficult to establish this result via conditional distributions rather than by calculating probabilities of \([t,t+dt)\). In the univariate setting, Theorem 1.8.1 in [38] shows that conditional on \(X_{(k)}\), the order statistics to the left of \(X_{(k)}\) are independent of those to the right, and the proof of Corollary 1.8.4 and problem 1.33 can be used to extend this result to the unordered observations as in claim (a) of the Lemma. Already in that univariate setting these proofs with conditional distributions are rather complicated. \(\Box\)
**Proof of Proposition 2:** We will use the following two well known facts:
**Fact 1**: _If the univariate \(Z_{1},\ldots,Z_{n}\) are i.i.d. with a continuous cdf \(G\), then \(G(Z_{(k)})\sim\ Beta\ (k,n+1-k)\)._
**Fact 2**: _If \(V\sim\ Beta\ (\alpha,\beta)\) and \(W\sim\ Beta\ (\alpha+\beta,\gamma)\) are independent, then \(VW\sim\ Beta\ (\alpha,\beta+\gamma)\)._
Write \(F_{g}\) for the univariate cdf of \(g(X_{1})\). Then \(F(R)=F_{g}\Big{(}g(X)_{(k)}\Big{)}\sim\ Beta\ (k,n+1-k)\) by Fact 1, proving (a).
By Lemma 1, conditional on \(g(X)_{(k)}\) the \(Y_{1},\ldots,Y_{k-1}\) are i.i.d. with distribution \(G(\cdot)={F(\cdot\cap R)\over F(R)}\). Write
\(G_{h}\) for the cdf of \(h(Y_{1})\), so for real \(t\):
\[G_{h}(t)\;=\;G\Big{(}\{x\in{\bf R}^{d}:\;h(x)\leq t\}\Big{)}\;=\;\frac{F\Big{(} \{x\in{\bf R}^{d}:h(x)\leq t\}\cap R\Big{)}}{F(R)}.\]
By the definition of \(S\):
\[\frac{F(S)}{F(R)}\;=\;G_{h}\Big{(}h(Y)_{(j)}\Big{)}\sim\;\mbox{Beta }(j,k-j)\]
by Fact 1. Since this conditional distribution given \(g(X)_{(k)}\), i.e. given \(R\), does not depend on \(R\) as \(\#R=k-1\) is fixed, it follows that this result also holds unconditionally and that \(\frac{F(S)}{F(R)}\) and \(F(R)\) are independent, proving (b) and (c).
(d) follows from Fact 2 and (a)-(c): set \(\alpha:=j\), \(\beta:=k-j\), \(\gamma:=n+1-k\) in Fact 2.
As for (e), the above proof goes through if one iterates this construction starting with \(S\) in place of \(R\). In particular, applying Lemma 1 again to the conditional distribution given \(R\), \(G(\cdot)=\frac{F(\cdot\cap R)}{F(R)}\), shows that the conditional distribution given \(S\) is
\[\frac{\frac{F(\cdot\cap S)}{F(R)}}{\frac{F(S\cap R)}{F(R)}}\;=\;\frac{F(\cdot \;\cap S)}{F(S)}\quad\mbox{since }S\subset R\;.\;\Box\]
**Proof of Proposition 1**: This follows from Proposition 2 by taking \(g,h\) to be functions of the form \(x\mapsto\pm x_{p}\) for \(p\in\{1,\ldots,d\}\), i.e. by selecting a certain univariate marginal and choosing the sign of \(x_{p}\) to select the observations to the left or to the right of the order statistic of that marginal. Note that \(F\) being continuous implies that these \(g(X_{i})\) and \(h(Y_{i})\) have a continuous cdf. This proposition also applies to selecting observations inside a bounding box if that box is constructed as described in Section 2.1. \(\Box\)
### Proof of Theorem 1
Consider \(R_{k}\) at tree level \(D\) and write \(p_{D}:=\frac{n_{k}+1}{n+1}\). Then Proposition 1 gives \(F(R_{k})\sim\;\mbox{Beta }((n+1)p_{D},(n+1)(1-p_{D}))\). Let \(x_{up}:=q\mbox{Beta }\left(1-\frac{\alpha_{D}}{2},(n+1)p_{D},(n+1)(1-p_{D})\right)\) be the upper confidence bound of \(C_{k}(\alpha_{D})\). Then
\[\frac{\alpha_{D}}{2}\;=\;\mbox{IP}\left(F(R_{k})\geq x_{up}\right)\;\leq\; \exp\Bigl{(}-(n+1)\Psi(x_{up},p_{D})\Bigr{)}\]
by Proposition 2.1 in [15], where \(\Psi(x,p):=p\log\frac{p}{x}+(1-p)\log\frac{1-p}{1-x}\). Hence \(\Psi(x_{up},p_{D})\leq\frac{1}{n+1}\log\frac{2}{\alpha_{D}}\). Using the inequality at the end of said proposition, this implies
\[x_{up}-p_{D}\;\leq\;\sqrt{2p_{D}(1-p_{D})\frac{\log\frac{2}{\alpha_{D}}}{n+1} }+(1-2p_{D})^{+}\frac{\log\frac{2}{\alpha_{D}}}{n+1}\]
In the same way one finds an (even tighter) inequality for the lower confidence bound \(x_{low}\) of \(C_{k}(\alpha_{D})\):
\[x_{low}-p_{D}\;\geq\;-\sqrt{2p_{D}(1-p_{D})\frac{\log\frac{2}{\alpha_{D}}}{n+1}}\]
Therefore
\[\sup_{x\in C_{k}(\alpha_{D})}\sqrt{n}\frac{|x-p_{D}|}{\sqrt{p_{D}(1-p_{D})}}\,\leq \,\sqrt{2\log\frac{2}{\alpha_{D}}}+\frac{\log\frac{2}{\alpha_{D}}}{\sqrt{np_{D} (1-p_{D})}} \tag{9}\]
Using \(2^{D}\sim\frac{1}{p_{D}}\), \(D_{max}\sim\log_{2}n\) and \(\sum_{B=2}^{D_{max}}\frac{1}{B}\sim\log\log_{2}n\), we obtain \(\frac{2}{\alpha_{D}}\leq\frac{2(\log_{2}n)(\log\log_{2}n)}{\alpha}\), hence
\[\log\frac{2}{\alpha_{D}}\leq(1+\epsilon_{n})\log\frac{e}{p_{D}},\qquad\text{ with }\epsilon_{n}:=\frac{\log\Bigl{(}\frac{2}{\alpha}(\log_{2}n)(\log\log_{2}n)\Bigr{)}}{ \log n^{1-q}}\]
since \(p_{D}\leq n^{q-1}\). Furthermore, \(p_{D}\geq\frac{\log^{2}n}{n}\) yields
\[\frac{\log\frac{2}{\alpha_{D}}}{\sqrt{np_{D}(1-p_{D})}}\leq\frac{\frac{3}{2}\log \frac{e}{p_{D}}}{\log n}\leq\frac{3}{2}\sqrt{\frac{\log\frac{e}{p_{D}}}{\log n}}\]
This allows to bound (9) as follows:
\[\sup_{x\in C_{k}(\alpha_{D})}\sqrt{n}\frac{|x-p_{D}|}{\sqrt{p_{D }(1-p_{D})}} \leq\sqrt{2(1+\epsilon_{n})\log\frac{e}{p_{D}}}+\frac{3}{2}\sqrt{ \frac{\log\frac{e}{p_{D}}}{\log n}}\] \[\leq\Bigl{(}\sqrt{2}+\epsilon_{n}+\frac{3}{2}\sqrt{\frac{1}{\log n }}\Bigr{)}\sqrt{\log\frac{e}{p_{D}}} \tag{10}\]
This essentially establishes the claim apart from having \(p_{D}\) in place of \(x\) in the denominator. However, (10) yields \(\sup_{x\in C_{k}(\alpha_{D})}|x-p_{D}|\leq\sqrt{\frac{p_{D}}{n}}\sqrt{3\log \frac{e}{p_{D}}}\), so
\[\sup_{x\in C_{k}(\alpha_{D})}\left|\frac{x}{p_{D}}-1\right|\,\leq\,\sqrt{ \frac{3\log\frac{e}{p_{D}}}{np_{D}}}\,\leq\,\sqrt{\frac{3}{\log n}}\]
which gives \(\sup_{x\in C_{k}(\alpha_{D})}\sqrt{\frac{p_{D}(1-p_{D})}{x(1-x)}}\leq 1+\sqrt{\frac{3}{\log n}}\). Therefore
\[\sup_{x\in C_{k}(\alpha_{D})}\sqrt{n}\frac{|x-p_{D}|}{\sqrt{x(1-x )}} \leq\Bigl{(}\sqrt{2}+\epsilon_{n}+\frac{3}{2\sqrt{\log n}}\Bigr{)} \left(1+\sqrt{\frac{3}{\log n}}\right)\sqrt{\log\frac{e}{p_{D}}}\] \[\leq\Bigl{(}\sqrt{2}+\frac{4}{\sqrt{\log n}}\Bigr{)}\sqrt{\log \frac{e}{p_{D}}}\]
for \(n\) large enough. \(\Box\)
|
2305.08328 | FedAds: A Benchmark for Privacy-Preserving CVR Estimation with Vertical
Federated Learning | Conversion rate (CVR) estimation aims to predict the probability of
conversion event after a user has clicked an ad. Typically, online publisher
has user browsing interests and click feedbacks, while demand-side advertising
platform collects users' post-click behaviors such as dwell time and conversion
decisions. To estimate CVR accurately and protect data privacy better, vertical
federated learning (vFL) is a natural solution to combine two sides' advantages
for training models, without exchanging raw data. Both CVR estimation and
applied vFL algorithms have attracted increasing research attentions. However,
standardized and systematical evaluations are missing: due to the lack of
standardized datasets, existing studies adopt public datasets to simulate a vFL
setting via hand-crafted feature partition, which brings challenges to fair
comparison. We introduce FedAds, the first benchmark for CVR estimation with
vFL, to facilitate standardized and systematical evaluations for vFL
algorithms. It contains a large-scale real world dataset collected from
Alibaba's advertising platform, as well as systematical evaluations for both
effectiveness and privacy aspects of various vFL algorithms. Besides, we also
explore to incorporate unaligned data in vFL to improve effectiveness, and
develop perturbation operations to protect privacy well. We hope that future
research work in vFL and CVR estimation benefits from the FedAds benchmark. | Penghui Wei, Hongjian Dou, Shaoguo Liu, Rongjun Tang, Li Liu, Liang Wang, Bo Zheng | 2023-05-15T03:34:42Z | http://arxiv.org/abs/2305.08328v1 | # FedAs: A Benchmark for Privacy-Preserving CVR Estimation with Vertical Federated Learning
###### Abstract.
Conversion rate (CVR) estimation aims to predict the probability of conversion event after a user has clicked an ad. Typically, online publisher has user browsing interests and click feedbacks, while demand-side advertising platform collects users' post-click behaviors such as dwell time and conversion decisions. To estimate CVR accurately and protect data privacy better, vertical federated learning (vFL) is a natural solution to combine two sides' advantages for training models, without exchanging raw data. Both CVR estimation and applied vFL algorithms have attracted increasing research attentions. However, standardized and systematical evaluations are missing: due to the lack of standardized datasets, existing studies adopt public datasets to _simulate_ a vFL setting via hand-crafted feature partition, which brings challenges to fair comparison. We introduce FedAds, the first benchmark for CVR estimation with vFL, to facilitate standardized and systematical evaluations for vFL algorithms. It contains a large-scale real world dataset collected from Alibaba's advertising platform, as well as systematical evaluations for both effectiveness and privacy aspects of various vFL algorithms. Besides, we also explore to incorporate unaligned data in vFL to improve effectiveness, and develop perturbation operations to protect privacy well. We hope that future research work in vFL and CVR estimation benefits from the FedAds benchmark.
Ad Ranking, Vertical Federated Learning, Deep Generative Model
## 1. Introduction

Vertical federated learning (vFL) enables training models with data from multiple participants via exchanging _intermediate results_ (e.g., hidden representations and gradients) rather than _raw data_ (e.g., features and labels). Figure 1 (b) shows the vFL framework for training a neural network based CVR estimation model. There are two participants, where the **non-label party** is the online publisher and the **label party** is the advertising platform that owns conversion labels. The training data for vFL is **feature-partitioned**: before model training, the two participants first perform private set intersection (PSI) (Krizhevsky et al., 2014) to obtain an aligned sample ID set, and each sample's features come from both the non-label party (e.g., behaviors on the publisher page) and the label party (e.g., behaviors on the ad page).
The whole model is split into two submodels owned by non-label and label parties respectively. During the _forward_ pass, for a given input sample, the non-label party's submodel sends a _hidden representation_ to the label party. Then the label party combines such representation with its own one, and produces the predicted conversion probability and cross-entropy loss according to the sample's conversion label. During the _backward_ propagation, the label party computes _gradient_ w.r.t. the non-label party's hidden representation and sends it back, thus the update of non-label party's submodel parameters depends on the label party.
Both CVR estimation (Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014) and applied vFL algorithms (Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014) have attracted increasing research attention. Specifically, to improve privacy-preserving CVR estimation, two research problems in vFL should be tackled. The first is about **effectiveness**: the traditional vFL training procedure employs only _aligned_ samples among multiple participants; however, the size of aligned samples is quite limited, which restricts model performance. Various approaches based on self-supervised learning (Krizhevsky et al., 2014; Krizhevsky et al., 2014) and semi-supervised learning (Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014) have been proposed to explore the potential usage of the large amount of unaligned samples owned by each party for improving vFL. The second is about **privacy**: although vFL only exchanges intermediate results rather than raw features and labels, recent studies revealed that it still suffers from privacy leakage risks such as label inference attacks, which means that an honest-but-curious non-label party can successfully infer private labels. To defend against such attacks, many studies focus on random perturbation based approaches (Krizhevsky et al., 2014; Krizhevsky et al., 2014; Krizhevsky et al., 2014) for protecting private label information.
However, standardized datasets and systematical evaluations for vFL algorithms are missing: due to the lack of standardized datasets, existing studies usually adopt public datasets to _simulate_ a vFL experiment setting via hand-crafted feature partition, rather than adopting datasets from real-world vFL applications. This situation brings challenges to fair comparison of various models and hinders further research on vFL and privacy-preserving CVR estimation. As a result, the above-mentioned vFL algorithms are currently not compared under the same dataset and evaluation procedure. Therefore, there is a great need for a comprehensive benchmark to facilitate vFL research.
In this paper we introduce FedAds, the first benchmark for privacy-preserving CVR estimation with vFL, to facilitate systematical evaluations for vFL algorithms. It contains 1) a large-scale real-world dataset from our online advertising platform, collected from an ad delivery business relying on vFL-based ranking models, as well as 2) systematical evaluations for both effectiveness and privacy aspects of various neural network based vFL algorithms through extensive experiments. Besides, to improve vFL effectiveness, we explore incorporating unaligned data by generating unaligned samples' feature representations with generative models. To protect privacy, we also develop a perturbation approach based on mixup and projection operations. We hope that future research work in both vFL algorithms and CVR estimation benefits from our FedAds benchmark.
The main contributions of this work are:
* We provide a real-world CVR estimation dataset collected from our ad delivery business relying on vFL-based ranking models. To our knowledge, this is the first large-scale dataset for vFL research.
* We conduct systematical evaluations of recently proposed vFL algorithms on the proposed dataset, for the effectiveness and privacy aspects respectively, which promotes fair comparison across studies.
* We propose two approaches for incorporating unaligned samples and protecting private label information in vFL respectively, and experiments on the proposed dataset verify their performance.
## 2. Preliminaries
### Conversion Rate Estimation
The goal of a post-click conversion rate (CVR) estimation model \(f(\cdot)\) is to produce the conversion probability if a user has clicked an ad: \(\hat{p}_{\text{CVR}}=f(\mathbf{x})\), where \(\mathbf{x}\) denotes the input feature of a sample. The model \(f(\cdot)\) is trained using the click log \(\mathcal{D}=\{(\mathbf{x},y)\}\) with cross-entropy loss, where \(y\in\{0,1\}\) is the sample's binary conversion label. CVR estimation models are usually deployed in the ranking module of online advertising and recommendation systems, and they are crucial for improving user experiences, satisfying advertiser demands and increasing the revenue of the advertising platform.
### Vertical Federated Learning
We first give a brief introduction to two-party vFL framework, and then discuss the issues in traditional vFL.
Figure 1. (a) The feedback behaviors after a user browses an ad. (b) Vertical federated learning framework for training conversion rate estimation model. Online publisher and advertising platform provide the collected user feedback data and collaboratively train the model.
#### 2.2.1. Two-Party vFL
We consider the overall framework in Figure 1 (b). Without loss of generality, we assume that there are two separate participants, namely the non-label party and the label party. They cooperate with each other to learn a model \(f:\mathcal{X}\rightarrow\mathcal{Y}\). The feature space \(\mathcal{X}=\mathcal{X}_{\mathrm{N}}\cup\mathcal{X}_{\mathrm{L}}\) is composed of two parts, where \(\mathcal{X}_{\mathrm{N}}\) / \(\mathcal{X}_{\mathrm{L}}\) represents the feature subspace owned by the non-label party / label party. The label party owns the one-hot label space \(\mathcal{Y}\).
The whole model \(f(\cdot)\) is split into two submodels \(f_{\mathrm{N}}(\cdot)\) and \(f_{\mathrm{L}}(\cdot)\) which are owned by non-label party and label-party respectively:
\[f\coloneqq f_{\mathrm{N}}\circ f_{\mathrm{L}}\,, \tag{1}\]
here \(f_{\mathrm{N}}(\cdot)\) is the non-label party's submodel which produces hidden representation for each sample and sends it to label party, while \(f_{\mathrm{L}}(\cdot)\) is the label party's submodel that produces the predicted probability distribution.
Before model training, the two parties first perform PSI to obtain an **aligned** sample set \(\mathcal{D}_{\mathrm{aligned}}=\{(\mathbf{x}_{\mathrm{N}},\mathbf{x}_{ \mathrm{L}},\mathbf{y})\}\), where \(\mathbf{x}_{\mathrm{N}}\) and \(\mathbf{x}_{\mathrm{L}}\) denote the sample's feature provided by two parties respectively, and its label \(\mathbf{y}\) is a one-hot vector. Consider the forward pass, for each sample the model \(f(\cdot)\) outputs the predicted distribution \(\hat{\mathbf{y}}\), and then computes the cross-entropy loss \(\mathcal{L}\):
\[\mathbf{h}_{\mathrm{N}}=f_{\mathrm{N}}(\mathbf{x}_{\mathrm{N}})\,,\quad\mathbf{h}_{\mathrm{L}}=f_{\mathrm{L}}^{(\mathrm{b})}(\mathbf{x}_{\mathrm{L}}) \tag{2}\]
\[\mathbf{l}=f_{\mathrm{L}}^{(\mathrm{t})}(\mathbf{h}_{\mathrm{N}},\mathbf{h}_{\mathrm{L}})\,,\quad\hat{\mathbf{y}}=\operatorname{softmax}(\mathbf{l}) \tag{3}\]
\[\mathcal{L}=-\mathbf{y}^{\top}\log\hat{\mathbf{y}} \tag{4}\]
where \(\mathbf{h}_{\mathrm{N}}\) is the hidden representation that the non-label party sends to the label party, and the output layer of \(f_{\mathrm{N}}(\cdot)\) is also known as **cut layer**. The label party's submodel \(f_{\mathrm{L}}(\cdot)\) is composed of a bottom part \(f_{\mathrm{L}}^{(\mathrm{b})}(\cdot)\) and a top part \(f_{\mathrm{L}}^{(\mathrm{t})}(\cdot)\).
For the backward pass, the label party's submodel \(f_{\mathrm{L}}(\cdot)\) is updated normally based on the gradient of loss \(\mathcal{L}\) w.r.t. the submodel parameters. To update the parameters of non-label party's submodel \(f_{\mathrm{N}}(\cdot)\), the label party further computes the gradient w.r.t. the hidden representation \(\mathbf{h}_{\mathrm{N}}\), and sends it back to the non-label party:
\[\mathbf{g}\coloneqq\nabla_{\mathbf{h}_{\mathrm{N}}}\mathcal{L}=\frac{\partial\mathbf{l}}{\partial\mathbf{h}_{\mathrm{N}}}\frac{\partial\mathcal{L}}{\partial\mathbf{l}}=\frac{\partial\mathbf{l}}{\partial\mathbf{h}_{\mathrm{N}}}(\hat{\mathbf{y}}-\mathbf{y})\,. \tag{5}\]
After receiving the gradient \(\mathbf{g}\), the non-label party computes the gradients of the remaining submodel parameters using chain rule, and thus continues the backward pass to update the submodel \(f_{\mathrm{N}}(\cdot)\).
In the above training process, the two participants do not share their raw data (features \(\mathbf{x}_{\mathrm{N}}\), \(\mathbf{x}_{\mathrm{L}}\) and label \(\mathbf{y}\)) to each other. On the contrary, they only exchange intermediate results \(\mathbf{h}_{\mathrm{N}}\) and \(\mathbf{g}\).
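For concreteness, the following is a minimal PyTorch sketch of one such training step (layer sizes and the synthetic batch are illustrative assumptions; this is not the EFLS implementation): the non-label party sends a detached copy of \(\mathbf{h}_{\mathrm{N}}\), the label party finishes the forward pass and returns only the gradient \(\mathbf{g}\) of Equation 5, and the non-label party back-propagates \(\mathbf{g}\) through \(f_{\mathrm{N}}(\cdot)\).

```python
import torch
import torch.nn as nn

# Toy two-party vFL step; layer sizes and batch data are made up for illustration.
f_N = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))         # non-label party
f_L_bottom = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))  # label party, bottom
f_L_top = nn.Linear(8, 1)                                                   # label party, top

x_N, x_L = torch.randn(32, 8), torch.randn(32, 8)   # each party's features
y = torch.randint(0, 2, (32, 1)).float()            # labels, owned by the label party

# Non-label party: forward pass; only a detached copy of h_N crosses the party boundary.
h_N = f_N(x_N)
h_N_sent = h_N.detach().requires_grad_(True)

# Label party: finish the forward pass, compute the loss, and return the grad w.r.t. h_N.
h_L = f_L_bottom(x_L)
logit = f_L_top(torch.cat([h_N_sent, h_L], dim=1))
loss = nn.functional.binary_cross_entropy_with_logits(logit, y)
loss.backward()            # fills label-party parameter grads and h_N_sent.grad
g = h_N_sent.grad          # the only signal sent back to the non-label party

# Non-label party: continue the backward pass through f_N with the received gradient.
h_N.backward(g)            # after this, both parties can apply their optimizers
```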
#### 2.2.2. Issues of Effectiveness and Privacy
vFL has been successfully applied in healthcare informatics (Zhu et al., 2019), computational advertising (Zhu et al., 2019) and many other domains. To further improve vFL algorithms, recent studies pay more attention to the following perspectives: effectiveness and privacy.
**Effectiveness: Limited Aligned Samples.**
The training procedure of traditional vFL algorithms relies on aligned, feature-partitioned data. That is, each participant first provides its own sample ID set, then a PSI process is adopted to align samples from all participants, which finally produces the _intersection_ sample set, namely the aligned samples. In the aligned data, each participant owns a part of each sample's features, and all participants need to collaboratively train the vFL model.

We can see that the vFL model performance greatly depends on the size of aligned samples. However, this size is usually limited, which restricts the effectiveness of vFL models. To tackle this, a direction is to make use of the local samples of each participant, namely unaligned samples. In the case of CVR estimation, the advertising platform usually also collects conversion events from in-station ads (which are not displayed on the extra publishers). These local data can be used as auxiliary training samples to enhance the model trained on aligned samples only. To exploit such unaligned samples, which only have partial features, within the vFL training framework, we focus on how to synthesize the features of the other participants.
**Privacy: Potential Label Leakage.**
vFL is often considered to be privacy-oriented, because during the training process participants only exchange intermediate hidden representations and gradients rather than raw features and labels. However, recent studies revealed that it still suffers from potential privacy leakage risks: (1) label inference attack (Zhu et al., 2019; Zhu et al., 2019; Zhu et al., 2019), which means that an honest-but-curious non-label party can successfully infer private labels, and (2) input reconstruction attack (Zhu et al., 2019; Zhu et al., 2019), which means that the label party can reconstruct the raw input features of the non-label party.

In this work we focus on defending against label inference attacks, aiming to guarantee that the private label information owned by the label party is protected. Specifically, although the label party only sends gradients to the non-label party during training, from Equation 5 we can observe that the mathematical expression of the gradient \(\mathbf{g}\) w.r.t. the hidden representation contains the label information \(\mathbf{y}\), which results in potential label leakage.
During the training procedure of vFL models, privacy-preserving computing techniques can be applied to protect private information. For instance, the open-source framework EFLS1 from Alibaba Group provides APIs of cryptography-based homomorphic encryption (Zhu et al., 2019) and perturbation-based differential privacy (Beng et al., 2019). In this work we focus on random perturbation-based algorithms.
Footnote 1: [https://github.com/alibaba/Elastic-Federated-Learning-Solution](https://github.com/alibaba/Elastic-Federated-Learning-Solution)
## 3. Proposed Benchmark
Currently, the lack of a comprehensive benchmark brings challenges to fair comparison of various algorithms, and also hinders further research for tacking the effectiveness and privacy issues of vFL. To address this, we introduce FedAds, the first benchmark for privacy-preserving CVR estimation with vFL, to facilitate systematical evaluations for vFL algorithms.
### Overview
The FedAds benchmark contains:
* A large-scale dataset from Alibaba's advertising platform, collected from the log of an ad delivery business relying on vFL-based ranking models. Details in Section 3.2.
* Systematical evaluations for both effectiveness and privacy aspects of various vFL algorithms. Details in Section 5.
We release the FedAds benchmark at the page [https://github.com/alibaba/Elastic-Federated-Learning-Solution/tree/FedAds](https://github.com/alibaba/Elastic-Federated-Learning-Solution/tree/FedAds).
### Real-World Dataset Construction
We first introduce the background of the collected data, and then give statistics and features of the dataset.
#### 3.2.1. **Data Description**
The dataset is built upon the click log of our e-commerce and delivery business, in which both the online publisher and the advertising platform belong to Alibaba Group. Although the two parties belong to the same company, they still cannot share user behavior information to each other. Specifically, the online publisher is a mobile app that contains ad positions. As shown in Figure 2, the advertising platform bids for ad impressions in real-time, and for each request the predicted CVR score \(\hat{p}_{\mathrm{CVR}}\) is a key factor in the bid price. If an ad from the advertising platform wins a bid, it will be displayed to the user. The user will arrive at another e-commerce mobile app that manages the ad landing page if he/she clicks on the ad, and may take further behaviors such as add-to-wishlist and purchase.
The above ad delivery business is a typical application of vFL, where the online publisher and the advertising platform collaboratively train the CVR estimation model for ranking candidate ads and improving the delivery performance. Therefore, we believe that further vFL research can benefit from our benchmark.
#### 3.2.2. **Dataset Construction**
We built the dataset based on the above collected data. Specifically, we collect 1-month consecutive user click events of the delivery business, and each sample in the dataset is corresponding to a unique click event. We record context information for each sample, such as the timestamps of request and click event. Generally, the dataset is composed of features from both parties, and conversion labels from label party.
**Conversion label.**
A sample's label is set to 1 if the user purchases the item described by the clicked ad, where the attribution window is set to 24 hours. Here we employ last-touch attribution, which means that if a user clicks on the ad multiple times and finally purchases the item, we attribute this conversion event to the last click event.
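As an illustration, the labeling rule can be sketched in pandas as follows (column names such as `user_id`, `ad_id`, `click_ts`, and `pay_ts` are hypothetical; the released dataset already contains the final labels):

```python
import pandas as pd

ATTRIBUTION_WINDOW = 24 * 3600  # 24 hours, in seconds

def label_last_touch(clicks: pd.DataFrame, purchases: pd.DataFrame) -> pd.DataFrame:
    """Attach a binary conversion label to each click via last-touch attribution."""
    clicks = clicks.copy()
    clicks["label"] = 0
    for _, p in purchases.iterrows():
        # candidate clicks: same user and ad, click before purchase, within the window
        mask = (
            (clicks["user_id"] == p["user_id"])
            & (clicks["ad_id"] == p["ad_id"])
            & (clicks["click_ts"] <= p["pay_ts"])
            & (clicks["click_ts"] >= p["pay_ts"] - ATTRIBUTION_WINDOW)
        )
        if mask.any():
            last_click = clicks.loc[mask, "click_ts"].idxmax()  # last-touch attribution
            clicks.loc[last_click, "label"] = 1
    return clicks
```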
**Features and processing.**
The feature set for each sample consists of two parts: one part from the label party (i.e., advertising platform) and another one from the non-label party (i.e., online publisher). Specifically,
* Features from label party: we construct user-side, ad-side and context features.
* The user-side features contain user profile information (e.g., user ID and activity level), as well as purchase-related behaviors such as the user's historical purchased items.
* Context feature is the timestamp of conversion event.
* Features from non-label party: similarly, we construct user-side, ad-side and context features.
* The user-side features are click-related behaviors such as the user's historical clicked ads.
* The ad-side features are statistical information such as the historical impression count level.
* Context feature is the timestamp of click event.
In summary, there are 16 features owned by the label party and 7 features owned by the non-label party. For the considerations of fair comparison and removing personally identifiable information, in our dataset we release the processed features rather than the original values. Specifically, for discrete features we map the original values to IDs. For each continuous feature, we perform equi-frequency discretization to transform the original values to bin IDs.
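A minimal sketch of this feature processing (assuming pandas; the column lists and bin count are illustrative) is:

```python
import pandas as pd

def process_features(df: pd.DataFrame, discrete_cols, continuous_cols, n_bins=100):
    """Map discrete values to IDs and equi-frequency-discretize continuous values."""
    out = pd.DataFrame(index=df.index)
    for col in discrete_cols:
        # map each distinct raw value to an integer ID
        out[col] = df[col].astype("category").cat.codes
    for col in continuous_cols:
        # equi-frequency bins: each bin receives roughly the same number of samples
        out[col] = pd.qcut(df[col], q=n_bins, labels=False, duplicates="drop")
    return out
```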
#### 3.2.3. **Statistics**
Table 1 lists the statistics of our constructed dataset. In total, the dataset contains 11.3 million samples, and to our knowledge it is the largest public dataset for evaluating CVR estimation models and vFL algorithms. We split it into a training set and a test set based on click timestamps, where the last week's samples are selected for the test set. Details about how to use the dataset for evaluating effectiveness and privacy are introduced in Section 5.
We compare our proposed dataset with current commonly-used datasets in vFL research in Table 2. More importantly, our dataset is constructed from real world applications, thus we do not need to simulate a vFL experiment setting.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Split & \# Samples & \# Users & \# Ads & CVR \\ \hline Training+Test & 11.3 mil. & 4.1 mil. & 1.3 mil. & 0.6\% \\ Training & 10.0 mil. & 3.7 mil. & 1.2 mil. & 0.6\% \\ Test & 1.3 mil. & 0.7 mil. & 0.4 mil. & 0.6\% \\ \hline \hline \end{tabular}
\end{table}
Table 1. Statistics of the dataset.
\begin{table}
\begin{tabular}{c c c} \hline \hline Dataset & \# Samples & Type \\ \hline BHI (Zhou et al., 2017) & 0.3 mil. & Image \\ Yahoo Answers (Zhou et al., 2017) & 1.5 mil. & Text \\ Give Me Some Credit (Zhou et al., 2017) & 0.3 mil. & Tabular \\ Epsilon (Zhou et al., 2017) & 0.5 mil. & Tabular \\ Avazu (Zhou et al., 2017) & 4 mil. & Tabular \\ Criteo (Zhou et al., 2017) & 4.5 mil. & Tabular \\ FedAds (Ours) & 11.3 mil. & Tabular \\ \hline \hline \end{tabular}
\end{table}
Table 2. Comparison of datasets.
Figure 2. Brief illustration of the ranking stage in ad delivery procedure and user behaviors. The proposed dataset is built upon the click log collected from this procedure.
## 4. Methodology
Before the systematical evaluations of existing vFL models, we propose two approaches for improving effectiveness and privacy.
### Exploiting Label Party's Unaligned Samples
As stated in Section 2.2.2, the limited size of aligned samples \(\mathcal{D}_{\mathrm{aligned}}=\{(\mathbf{x}_{\mathrm{N}},\mathbf{x}_{\mathrm{L}}, \mathbf{y})\}\) restricts the effectiveness of vFL models. From the perspective of the advertising platform, a natural way to tackle this problem is to incorporate its local, unaligned samples \(\mathcal{D}_{\mathrm{unaligned}}=\left\{\left(\mathbf{x}_{\mathrm{L}}^{\mathrm{u}},\mathbf{y}^{\mathrm{u}}\right)\right\}\) into the vFL training procedure.
However, the challenge of exploiting the label party's unaligned samples in vFL is that they do not have the non-label party's features (i.e., \(\mathbf{x}_{\mathrm{N}}^{\mathrm{u}}\) is missing). We propose Diffu-AT, an enhanced vFL training framework which first generates the missing features with a diffusion model, and then performs alternative training to incorporate unaligned samples into the traditional vFL framework.
#### 4.1.1. **Generating Federated Embedding \(\tilde{\mathbf{h}}_{\mathrm{N}}^{\mathrm{u}}\) with Conditional Diffusion Model**
As shown in Figure 1, the key of traditional vFL training procedure is that the non-label party sends the representation \(\mathbf{h}_{\mathrm{N}}\) (named **federated embedding** in this work) to enhance the prediction. To effectively incorporate unaligned samples into vFL, we employ deep generative models to synthesize federated embeddings \(\tilde{\mathbf{h}}_{\mathrm{N}}^{\mathrm{u}}\) for those samples.
**Problem formulation.**
We formulate the synthesis process as learning a generation model \(\tilde{f}_{\mathrm{N}}(\cdot)\), which can generate a federated embedding \(\tilde{\mathbf{h}}_{\mathrm{N}}^{\mathrm{u}}\) given the label party's feature \(\mathbf{x}_{\mathrm{L}}^{\mathrm{u}}\) and label \(\mathbf{y}^{\mathrm{u}}\), namely \(\tilde{\mathbf{h}}_{\mathrm{N}}^{\mathrm{u}}=\tilde{f}_{\mathrm{N}}\left(\mathbf{x}_{\mathrm{L}}^{\mathrm{u}},\mathbf{y}^{\mathrm{u}}\right)\).
**Step 1: vFL pretraining.**
To this end, we first perform vFL training, that is, pretrain a vFL model \(f(\cdot)\) using aligned samples \(\mathcal{D}_{\mathrm{aligned}}\) with the loss in Eq. 4. Based on the pretrained vFL model, for each aligned sample we obtain its federated embedding \(\mathbf{h}_{\mathrm{N}}\) and the label party's representation \(\mathbf{h}_{\mathrm{L}}\). Similarly, for each unaligned sample, we obtain its label party's representation \(\mathbf{h}_{\mathrm{L}}^{\mathrm{u}}\).
Next, we use the data \(\{(\mathbf{h}_{\mathrm{N}},\mathbf{h}_{\mathrm{L}},\mathbf{y})\}\) of aligned samples to learn the generation model \(\tilde{f}_{\mathrm{N}}(\cdot)\), so as to perform inference on the data \(\left\{\left(\mathbf{h}_{\mathrm{L}}^{\mathrm{u}},\mathbf{y}^{\mathrm{u}}\right)\right\}\) of unaligned samples to generate \(\{\tilde{\mathbf{h}}_{\mathrm{N}}^{\mathrm{u}}\}\).2
Footnote 2: Note that compared to previous problem formulation, we simplify the synthesis process via replacing the input \(\mathbf{x}_{\mathrm{L}}^{\mathrm{u}}\) with \(\mathbf{h}_{\mathrm{L}}^{\mathrm{u}}\).
**Step 2: Learning a conditional diffusion model as \(\tilde{f}_{\mathrm{N}}(\cdot)\)**.
Inspired by recent studies on diffusion-based deep generative models ((Glorini et al., 2018; Glorini et al., 2019; Chen et al., 2020; Wang et al., 2020)), we propose to synthesize unaligned samples' federated embeddings via a conditional diffusion model.
Formally, we regard the concatenation of label party's representation \(\mathbf{h}_{\mathrm{L}}\) and label \(\mathbf{y}\) as the condition \([\mathbf{y};\mathbf{h}_{\mathrm{L}}]\). We define the forward (noising) process with \(T\) steps as:
\[\begin{split} q\left(\mathbf{h}_{\mathrm{N},t}\mid\mathbf{h}_{\mathrm{N },t-1}\right)&=\mathcal{N}\left(\mathbf{h}_{\mathrm{N},t};\sqrt{1- \beta_{t}}\mathbf{h}_{\mathrm{N},t-1},\beta_{t}\mathrm{I}\right)\\ & z_{t}&\coloneqq[\mathbf{y};\mathbf{h}_{\mathrm{L}};\mathbf{h}_ {\mathrm{N},t}]\end{split} \tag{6}\]
where \(\mathbf{h}_{\mathrm{N},0}\coloneqq\mathbf{h}_{\mathrm{N}}\) and \(z_{0}\coloneqq[\mathbf{y};\mathbf{h}_{\mathrm{L}};\mathbf{h}_{\mathrm{N}}]\), which means that at each step we incrementally add Gaussian noise on the part of \(\mathbf{h}_{\mathrm{N}}\) only, while keeping the condition part unchanged. \(\beta_{t}\) is a hyperparameter that controls the degree of noise at each step, and after \(T\) steps the \(\mathbf{h}_{\mathrm{N},T}\) is approximately Gaussian.
Further, the reverse (denoising) process is to reconstruct the original \(z_{0}\) given \(\mathbf{z}_{T}\):
\[\begin{split} p_{\Theta}(\mathbf{z}_{t-1}\mid\mathbf{z}_{t})=\mathcal{N} \left(\mathbf{z}_{t-1};\mathbf{\mu}_{\Theta}(\mathbf{z}_{t},t),\Sigma_{\Theta}(\mathbf{z}_{t},t )\right)\end{split} \tag{7}\]
where \(\mathbf{\mu}_{\Theta}(\mathbf{z}_{t},t)\) and \(\Sigma_{\Theta}(\mathbf{z}_{t},t)\) are the predicted mean and standard deviation of \(p_{\Theta}(\mathbf{z}_{t-1}\mid\mathbf{z}_{t})\) respectively, parameterized by learnable \(\Theta\). The objective is to maximize the marginal likelihood \(\mathbb{E}_{q(\mathbf{z}_{0})}\left[\log p_{\Theta}(\mathbf{z}_{0})\right]\), and it can be optimized using the variational lower bound (Wang et al., 2020). In practice, we follow DDPM (Glorini et al., 2018) and use a simplified form whose training stability has been empirically verified:
\[\begin{split}\mathcal{L}_{\mathrm{DDPM}}=\sum_{t=1}^{T}\mathbb{E} _{q(\mathbf{z}_{t}|\mathbf{z}_{0})}\left[\|\tilde{\mathbf{\mu}}_{t}(\mathbf{z}_{t},\mathbf{z}_{0} )-\mathbf{\mu}_{\Theta}(\mathbf{z}_{t},t)\|^{2}\right]\end{split} \tag{8}\]
where \(\tilde{\mathbf{\mu}}_{t}(\mathbf{z}_{t},\mathbf{z}_{0})=\frac{\sqrt{\alpha_{t}}(1-\bar{\alpha}_{t-1})}{1-\bar{\alpha}_{t}}\mathbf{z}_{t}+\frac{\sqrt{\bar{\alpha}_{t-1}}\,\beta_{t}}{1-\bar{\alpha}_{t}}\mathbf{z}_{0}\) is the mean of the posterior \(q(\mathbf{z}_{t-1}\mid\mathbf{z}_{t},\mathbf{z}_{0})\), and here \(\alpha_{t}=1-\beta_{t}\), \(\bar{\alpha}_{t}=\prod_{i=1}^{t}\alpha_{i}\).
After learning the diffusion model, we perform conditional generation to obtain each unaligned sample's federated embedding \(\tilde{\mathbf{h}}_{\mathrm{N}}^{\mathrm{u}}\) given the conditional input \(\mathbf{z}_{T}=\left[\mathbf{y}^{\mathrm{u}};\mathbf{h}_{\mathrm{L}}^{\mathrm{u}};\mathbf{h}_{T}\right]\), where \(\mathbf{h}_{T}\sim\mathcal{N}(0,\mathrm{I})\) is the initial state. The generation repeats \(T\) steps of the denoising operation using the learned model \(\Theta\), sampling \(\mathbf{z}_{T-1},\mathbf{z}_{T-2},\ldots,\mathbf{z}_{0}\) via \(\mathbf{z}_{t-1}\sim p_{\Theta}(\mathbf{z}_{t-1}\mid\mathbf{z}_{t})\). Note that at each step, we replace the condition part of the generated \(\mathbf{z}_{t}\) with the original condition \(\left[\mathbf{y}^{\mathrm{u}};\mathbf{h}_{\mathrm{L}}^{\mathrm{u}}\right]\). Finally, at the last step we obtain \(\tilde{\mathbf{h}}_{\mathrm{N}}^{\mathrm{u}}\coloneqq\mathbf{h}_{0}\) as the synthesized federated embedding, where \(\mathbf{h}_{0}\) is the corresponding part of \(\mathbf{z}_{0}\).
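A minimal sketch of this condition-clamped sampling loop (in PyTorch; the placeholder denoiser, the timestep encoding, and the choice \(\sigma_{t}=\sqrt{\beta_{t}}\) are simplifying assumptions rather than our exact implementation) is:

```python
import torch
import torch.nn as nn

T, d_y, d_L, d_N = 1000, 2, 8, 8
betas = torch.linspace(1e-4, 0.02, T)        # linear beta schedule (an assumption)

# Placeholder denoiser predicting the posterior mean; the real mu_Theta is trained with Eq. 8.
mu_theta = nn.Sequential(
    nn.Linear(d_y + d_L + d_N + 1, 64), nn.ReLU(), nn.Linear(64, d_y + d_L + d_N)
)

y_u = torch.tensor([[1.0, 0.0]])             # one unaligned sample's label (condition)
h_L_u = torch.randn(1, d_L)                  # its label-party representation (condition)
h = torch.randn(1, d_N)                      # h_T ~ N(0, I), the part being denoised

with torch.no_grad():
    for t in reversed(range(T)):
        z_t = torch.cat([y_u, h_L_u, h], dim=1)        # condition part re-imposed each step
        t_enc = torch.full((1, 1), t / T)               # simplistic timestep encoding
        mean = mu_theta(torch.cat([z_t, t_enc], dim=1))
        noise = torch.randn_like(mean) if t > 0 else torch.zeros_like(mean)
        z_prev = mean + betas[t].sqrt() * noise          # sigma_t = sqrt(beta_t), an assumption
        h = z_prev[:, d_y + d_L:]                        # keep only the h_N block

h_N_synth = h    # synthesized federated embedding for the unaligned sample
```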
#### 4.1.2. **Alternative Training Framework**
Now we have aligned samples \(\mathcal{D}_{\mathrm{aligned}}=\{(\mathbf{x}_{\mathrm{N}},\mathbf{x}_{\mathrm{L}}, \mathbf{y})\}\) as well as unaligned samples with synthesized federated embeddings \(\tilde{\mathcal{D}}_{\mathrm{unaligned}}=\left\{\left(\tilde{\mathbf{h}}_{ \mathrm{N}}^{\mathrm{u}},\mathbf{x}_{\mathrm{L}}^{\mathrm{u}},\mathbf{y}^{\mathrm{u}} \right)\right\}\).
To effectively fuse them for learning an enhanced federated model that improves effectiveness, we propose to combine the two
Figure 3. (a) Learning a conditional diffusion model for generating federated embeddings of label party’s unaligned samples. (b) The Diffu-AT framework for exploiting unaligned samples in vFL training.
sample sets in an alternative training fashion. As shown in Figure 3 (b), the label party augments an auxiliary submodel \(\tilde{f}_{\text{L}}(\cdot)\) composed of \(\left(\tilde{f}_{\text{L}}^{(\text{b})}\text{,}\tilde{f}_{\text{L}}^{(\text{t})}\text{,}\tilde{f}_{\text{N}}\right)\) to exploit unaligned samples, and recall that \(\tilde{f}_{\text{N}}(\cdot)\) is the learned diffusion model.
Our proposed vFL framework named Diffu-AT contains a federated branch \(f(\cdot)\) and a local branch \(\tilde{f}_{\text{L}}(\cdot)\) for learning from aligned samples \(\mathcal{D}_{\text{aligned}}\) and unaligned samples \(\tilde{\mathcal{D}}_{\text{unaligned}}\) respectively in an alternative training fashion, and their bottom parts \(f_{\text{L}}^{(\text{b})}\) and \(\tilde{f}_{\text{L}}^{(\text{b})}\) share the parameter set. Specifically, at each training iteration, we randomly sample a mini-batch from \(\mathcal{D}_{\text{aligned}}\) or \(\tilde{\mathcal{D}}_{\text{unaligned}}\), and then update the parameters of the corresponding branch. The sampling probability of selecting a mini-batch from \(\mathcal{D}_{\text{aligned}}\) is set to \(p=|\mathcal{D}_{\text{aligned}}|/(|\mathcal{D}_{\text{aligned}}|+| \tilde{\mathcal{D}}_{\text{unaligned}}|)\).
Putting it all together, Algorithm 1 shows the training procedure of our Diffu-AT. Note that for large-scale deep learning based estimation models in online advertising and recommendation systems, the number of epochs is usually set to one.
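A compact sketch of the alternation described above (Python; `update_federated_branch` and `update_local_branch` are hypothetical stand-ins for one optimizer step on the corresponding branch) is:

```python
import random

def diffu_at_training(D_aligned, D_unaligned_synth, batch_size,
                      update_federated_branch, update_local_branch, n_iters):
    """Alternative training over aligned and synthesized unaligned data (lists of tuples)."""
    p = len(D_aligned) / (len(D_aligned) + len(D_unaligned_synth))
    for _ in range(n_iters):
        if random.random() < p:
            batch = random.sample(D_aligned, batch_size)          # (x_N, x_L, y) tuples
            update_federated_branch(batch)                        # updates f_N, f_L^(b), f_L^(t)
        else:
            batch = random.sample(D_unaligned_synth, batch_size)  # (h_N_synth, x_L, y) tuples
            update_local_branch(batch)                            # updates the local top and shared f_L^(b)
```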
#### 4.1.3. **Online Inference**
During online inference, only the federated branch \(f(\cdot)\) is needed for producing the real-time predictions, and the local branch \(\tilde{f}_{\text{L}}(\cdot)\) is dropped.
We notice that some studies focus on performing online inference with a local model owned by the label party, obtained by distilling knowledge from the federated model, motivated by reducing the response time of receiving the federated embedding from the non-label party (Kumar et al., 2018; Kumar et al., 2018). However, in practice we found that the performance of the distilled local model drops drastically compared to the federated model, which is unacceptable in Alibaba's advertising business.
### Defending Label Inference Attack
As stated in Section 2.2.2, because the mathematical expression of the gradient \(\mathbf{g}\) w.r.t. the federated embedding contains the label information \(\mathbf{y}\), vFL models may suffer from potential label leakage, which means that an honest-but-curious non-label party can infer private labels. We focus on employing random perturbation methods to protect label information during vFL training.
Given the fact that label leakage mainly comes from differences in the magnitudes and directions of sample gradients, an intuitive way to address this problem is to apply random convex combinations of gradients while transmitting them at the cut layer, also known as mixup (Kumar et al., 2018). To alleviate label leakage, we propose a simple-yet-effective gradient mixup and projection approach named MixPro, which performs convex combinations and projections on in-batch sample gradients to protect private label information. It employs the mixup operation as an initial perturbation of the original gradients, and then performs projection to further remove useless information contained in the original gradients.

MixPro does not make any assumption about the gradient distribution and can be seamlessly integrated into a neural network based vFL framework. Next we first introduce the gradient mixup strategy, then adopt gradient projection to further modify the gradient directions to achieve better privacy-preserving performance during vFL model training.
#### 4.2.1. **Gradient Mixup**
For a batch of samples during training, we denote \(\{\mathbf{g}_{i}\}_{i=1}^{B}\) to be the collection of gradients w.r.t. federated embeddings at the cut layer, where \(B\) is the batch size. We formulate the mixup-based perturbed gradient of the \(i\)-th sample to be the convex combination of two sample gradients in the form below:
\[\mathbf{g}_{mixed,i}=\lambda\cdot\mathbf{g}_{i}+(1-\lambda)\cdot\mathbf{g}_{r} \tag{9}\]
where \(\mathbf{g}_{r}\) is a random sample's gradient from the given batch. As stated in the original mixup method (Kumar et al., 2018), we choose \(\lambda\sim\mathrm{Beta}(\alpha,\alpha)\) and set \(\lambda>0.5\), where \(\alpha\) is a hyperparameter.
Note that we set \(\lambda>0.5\) since the perturbed gradients should preserve more information from the original gradient including magnitude and direction to maintain the prediction performance of the vFL model. We only use the convex combination between two sample gradients in order to simplify the calculation, and we empirically found that this strategy also achieves similar performance with more gradients to be mixed.
#### 4.2.2. **Gradient Projection**
The above gradient mixup strategy may still suffer from the label leakage problem induced by the directions of gradients, since the direction information of the original gradient remains in the perturbed gradient to some extent.
In order to keep the directions of gradients confined to a smaller region, we propose to further perform gradient projection on the mixed gradients. This is inspired by studies on multi-task learning models (Zhu et al., 2017; Wang et al., 2019), in which the gradients for different tasks are projected along the direction of the main one to avoid conflicts in gradient directions and thus achieve better performance. Given the intuition that higher similarity of gradients in orientation will effectively alleviate the label leakage from sample gradients, such a projection technique from multi-task optimization can be adopted for defending against label leakage in vFL.
Specifically, we denote \(\bar{\mathbf{g}}=\frac{1}{B}\sum_{i=1}^{B}\mathbf{g}_{i}\) as the average gradient for a batch of samples. We require that the cosine similarity between an in-batch gradient \(\mathbf{g}_{mixed,i}\) and the average gradient \(\bar{\mathbf{g}}\) be larger than a pre-defined threshold \(\phi_{goal}\in[-1,1]\):
\[\phi_{i}=\cos\left(\mathbf{g}_{mixed,i},\bar{\mathbf{g}}\right)\geq\phi_{goal}\;. \tag{10}\]
For a sample where the similarity goal is not achieved (that is, \(\phi_{goal}>\phi_{i}\)), the following projection operation will be applied and we obtain the projected gradient:
\[\mathbf{g}_{projected,i}=\mathbf{g}_{mixed,i}+\frac{\|\mathbf{g}_{mixed,i}\|\left(\phi_{goal}\sqrt{1-(\phi_{i})^{2}}-\phi_{i}\sqrt{1-(\phi_{goal})^{2}}\right)}{\|\bar{\mathbf{g}}\|\sqrt{1-(\phi_{goal})^{2}}}\cdot\bar{\mathbf{g}}. \tag{11}\]
And finally, the operation of our MixPro is:
\[\mathbf{g}_{perturbed,i}=\begin{cases}\mathbf{g}_{mixed,i}\;,&\text{if }\phi_{i}\geq\phi_{goal}\;,\\ \mathbf{g}_{projected,i}\;,&\text{if }\phi_{i}<\phi_{goal}\;.\end{cases} \tag{12}\]
During training process, based on our defense approach MixPro, for a batch of aligned training samples, the label party sends the perturbed gradients \(\left\{\mathbf{g}_{perturbed,i}\right\}_{i=1}^{B}\) rather than the original ones \(\left\{\mathbf{g}_{i}\right\}_{i=1}^{B}\) to the non-label party for updating the submodel \(f_{\text{N}}(\cdot)\), aiming to improve the privacy of the vFL model training.
As noted above, MixPro makes no assumption about the gradient distribution and can be seamlessly integrated into any neural network based vFL training framework.
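A self-contained sketch of MixPro on a batch of cut-layer gradients (in PyTorch; enforcing \(\lambda>0.5\) by taking \(\max(\lambda,1-\lambda)\) is a simplifying assumption) is:

```python
import torch
import torch.nn.functional as F

def mixpro(grads: torch.Tensor, alpha: float = 2.0, phi_goal: float = 0.0) -> torch.Tensor:
    """Perturb a (B, d) batch of cut-layer gradients via mixup (Eq. 9) and projection (Eq. 11)."""
    B = grads.shape[0]
    # Gradient mixup: convex combination with a randomly chosen in-batch partner.
    lam = torch.distributions.Beta(alpha, alpha).sample((B, 1))
    lam = torch.maximum(lam, 1.0 - lam)                 # keep lambda > 0.5
    partner = grads[torch.randint(0, B, (B,))]
    mixed = lam * grads + (1.0 - lam) * partner
    # Gradient projection: push directions whose cosine similarity to the batch mean
    # falls below phi_goal back toward the mean gradient (Eqs. 10-11).
    g_bar = grads.mean(dim=0, keepdim=True)
    phi = F.cosine_similarity(mixed, g_bar.expand_as(mixed), dim=1)
    coef = (
        mixed.norm(dim=1)
        * (phi_goal * (1 - phi**2).clamp(min=0.0).sqrt() - phi * (1 - phi_goal**2) ** 0.5)
    ) / (g_bar.norm() * (1 - phi_goal**2) ** 0.5 + 1e-12)
    projected = mixed + coef.unsqueeze(1) * g_bar
    return torch.where((phi >= phi_goal).unsqueeze(1), mixed, projected)
```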
## 5. Systematical Evaluations
As another contribution of our FedAds benchmark, we conduct systematical evaluations of various vFL models for both the effectiveness and privacy aspects. For existing vFL algorithms that focus on improving effectiveness, we evaluate their performance in Section 5.1. We then compare representative approaches for defending against label inference attacks, with results shown in Section 5.2.
### Experiments on Effectiveness
We first introduce experimental setup, evaluation metrics and comparative approaches in experiments, and then list implementation details and show experimental results.
#### 5.1.1. **Experimental Setup**
We use our proposed dataset for evaluation. Specifically, we use 20% of the training set as aligned samples \(\mathcal{D}_{\mathrm{aligned}}\), and the remaining 80% of the training set is used as unaligned samples \(\mathcal{D}_{\mathrm{unaligned}}\) by removing their non-label party features. The performance is evaluated on the test set.
This setup allows us to know the upper bound of exploiting unaligned samples in vFL on our dataset: if we use the full training data as aligned samples and train a vFL model, the model's performance is the upper bound because we "leak" the non-label party's features of \(\mathcal{D}_{\mathrm{unaligned}}\).
#### 5.1.2. **Evaluation Metrics**
We use AUC and negative log likelihood (NLL for short) as the evaluation metrics for effectiveness. The former measures ranking performance on candidates, and the latter reflects calibration performance of predicted scores.
#### 5.1.3. **Comparative Approaches**
We compare the following approaches to evaluate effectiveness:
* Local is a model trained on label party's features only, without using any features from non-label party.
* VanillaVFL is a vFL model trained on aligned samples using the loss in Equation 4.
* HeuristicVFL further exploits the label party's unaligned samples, in which the missing non-label party's features are synthesized in a heuristic way: for each unaligned sample, we retrieve the user ID from aligned samples and compute the averaged federated embedding of this user in VanillaVFL. Then we perform alternative training (see Section 4.1.2).
* SS-VFL [(4)] exploits unaligned samples with self-supervised learning. Each party first employs its local samples to perform unsupervised pretraining, and then a VanillaVFL is trained as the final model.
* FedCVT [(16)] exploits unaligned samples with both self-supervised learning and semi-supervised learning. It first performs unsupervised pretraining to learn a two-stream network. Then a similarity function is used to generate unaligned samples' features. Finally it combines unlabeled unaligned samples and labeled aligned samples with semi-supervised learning in a co-training fashion.3
Footnote 3: Note that the time complexity of the similarity computation in FedCVT is too high, and thus we re-implement it approximately by randomly dropping some candidates before computing similarities.
Figure 4. Illustration of gradient mixup and projection operations in MixPro for a training sample’s gradient w.r.t. the federated embedding.
* VFL-MPD [(22)] exploits unaligned samples with a specific self-supervised learning task, where a matched pair detection task is proposed to learn powerful representations using a large set of unaligned samples.
* FedHSSL (Krizhevsky et al., 2017) exploits unaligned samples with two-stage pretraining. The first stage is cross-party pretraining that fits the parties' learned representations to each other. The second stage is local pretraining, where the learning objective is based on data augmentation.
* JPL (Krizhevsky et al., 2017) exploits unaligned samples via synthesizing the non-label party's features by learning a mapping function between the label party's features and the non-label party's features, with several constraints such as representation equivalence and label discrimination.4
Footnote 4: Note that the original JPL approach distills the knowledge of the federated model into a local model, which results in a performance drop. For a fair comparison, we replace the distillation with the alternative training stated in Section 4.1.2.
* Diffu-AT is our proposed approach.
* ORACLE is a model trained on the full training set, where the non-label party's features of unaligned samples are known in advance. Therefore its performance is the upper bound of exploiting unaligned samples in our dataset.
#### 5.1.4. **Implementation Details**
We choose YouTube-DNN (Dun et al., 2017) as the backbone. Each feature is represented as an 8-dim embedding. The non-label party submodel \(f_{\text{N}}\) contains an embedding table and a two-layered DNN with output sizes of (Krizhevsky et al., 2017; Krizhevsky et al., 2017). The label party's submodel contains two branches \(f_{\text{L}}\) and \(\tilde{f}_{\text{L}}\), where they share the embedding table and bottom part (a two-layered DNN with output sizes of (Krizhevsky et al., 2017; Krizhevsky et al., 2017)), and each top part is a single layer with logistic function to produce predicted score. The two branches do not share the batch normalization operations. The batch size is set to 256. In the conditional diffusion model, the total step \(T\) is set to 1000 as in previous work (Krizhevsky et al., 2017). For timestep \(t\) we use sine and cosine functions to encode, and the schedule for \(\beta\) is linear schedule. We implement the models using XDL5 and EFLS.
Footnote 5: [https://github.com/allahua/~deeplearning](https://github.com/allahua/~deeplearning)
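For concreteness, below is a minimal sketch of the two diffusion-related settings mentioned above: a linear \(\beta\) schedule over \(T=1000\) steps and a sine/cosine timestep encoding. The endpoint values `1e-4` and `2e-2` and the encoding dimension are common defaults that we assume for illustration; they are not values reported here.

```python
import numpy as np

def linear_beta_schedule(T=1000, beta_start=1e-4, beta_end=2e-2):
    """Linear noise schedule beta_1, ..., beta_T for the conditional diffusion model."""
    return np.linspace(beta_start, beta_end, T)

def timestep_encoding(t, dim=64):
    """Sine/cosine encoding of a diffusion timestep t."""
    half = dim // 2
    freqs = np.exp(-np.log(10000.0) * np.arange(half) / half)
    angles = t * freqs
    return np.concatenate([np.sin(angles), np.cos(angles)])
```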
#### 5.1.5. **Experimental Results**
Table 3 shows the evaluation results for effectiveness of all the comparative approaches. Note that a 0.3% improvement in AUC can be regarded as a large improvement on large-scale industrial datasets (Dun et al., 2017). We observe that VanillaVFL outperforms Local by around 1.1% AUC, which demonstrates that incorporating the non-label party's features to perform federated training is effective for improving estimation performance. Furthermore, exploiting unaligned samples usually boosts AUC and NLL compared to VanillaVFL. For instance, the simple approach HeuristicVFL outperforms VanillaVFL by 1.0% AUC, which shows the potential of this direction. By comparing the approaches that exploit unaligned samples, we can see that the models considering label information of unaligned data (such as Diffu-AT and JPL) generally perform better than the self-supervised learning models (such as FedHSSL, VFL-MPD and SS-VFL), and thus the labeled samples from the label party are the key to improving traditional vFL.
Our proposed Diffu-AT shows the best ranking performance among all comparative approaches, verifying that the federated embeddings synthesized with the diffusion model can enhance the representation of unaligned samples. We also see that VFL-MPD performs best on calibration, and we conjecture that its pretraining objective yields a better model initialization. We leave improving Diffu-AT's calibration performance to future work.
### Experiments on Privacy
#### 5.2.1. **Evaluation Metrics for Privacy**
We first introduce how a non-label party can perform a label inference attack to steal private label information, and then give the evaluation metrics for privacy.
**Label inference attack.**
Specifically, the non-label party is the **attacker** that performs the label inference attack. The objective of the attacker is to infer the private labels \(\mathbf{y}\) owned by the label party based on the exchanged federated embeddings \(\mathbf{h}_{\text{N}}\) and/or gradients \(\mathbf{g}\). The attack can be performed between two iterations during the training phase, or after training is finished. We assume that the non-label party is honest-but-curious, which means that it _cannot_ interfere with the training process (such as sending wrong hidden representations to the label party). Under this assumption, the non-label party can employ arbitrary classifiers to infer labels.
From Equation 5 we observe that the form of the gradient \(\mathbf{g}\) actually contains the label information \(\mathbf{y}\). Besides, Sun et al. (2019) also found that the federated embeddings gradually become correlated with the labels during vFL training. We introduce two attack strategies, based on gradients and federated embeddings respectively.
1. Gradient-based attack (Krizhevsky et al., 2017). Given the observation that a model tends to be less confident about "a positive sample being positive" than "a negative sample being negative", if a sample's gradient norm \(\|\mathbf{g}\|_{2}\) is larger than a threshold, the attacker infers that it belongs to the positive class.
2. Federated embedding-based attack (Sun et al., 2019). After vFL model training, we perform clustering on the federated embeddings to place all training samples into two clusters. Given the prior knowledge that the number of positive samples is smaller than the number of negative samples, the attacker infers that the samples in the smaller cluster belong to the positive class.
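The two attackers admit a compact sketch, shown below; the decision threshold, the use of k-means for clustering, and the function names are our own illustrative choices rather than details fixed by the cited works.

```python
import numpy as np
from sklearn.cluster import KMeans

def gradient_norm_attack(sample_grads, threshold):
    """Gradient-based attack: flag a sample as positive when the norm of its
    returned gradient exceeds the threshold."""
    norms = np.linalg.norm(sample_grads, axis=1)
    return (norms > threshold).astype(int)

def embedding_cluster_attack(fed_embeddings):
    """Federated embedding-based attack: split embeddings into two clusters and
    call the smaller cluster the positive class."""
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(fed_embeddings)
    positive_cluster = int(np.argmin(np.bincount(labels)))
    return (labels == positive_cluster).astype(int)
```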
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Method** & **Ranking**: AUC & **Calibration**: NLL \\ \hline
_Label party’s features_ & \(\{(\mathbf{x}_{\text{L}},\mathbf{y})\}\) & \\
Local & 0.609 & 0.0391 \\ \hline
_+ vFL training_ & \(\{(\mathbf{x}_{\text{N}},\mathbf{x}_{\text{L}},\mathbf{y})\}\) & \\
VanillaVFL & 0.620 & 0.0389 \\ \hline
_+ Unaligned samples_ & \(\left\{(\mathbf{x}_{\text{L}}^{\text{u}},\mathbf{y}^{\text{u}})\right\}\) & \\
HeuristicVFL & 0.630 & 0.0387 \\
SS-VFL & 0.636 & 0.0381 \\
FedCVT & 0.639 & 0.0379 \\
VFL-MPD & 0.641 & **0.0373** \\
FedHSSL & 0.642 & 0.0375 \\
JPL & 0.644 & 0.0374 \\
Diffu-AT (Ours) & **0.645** & 0.0375 \\ \hline
_Upper bound_ & \(\left\{\left(\mathbf{x}_{\text{N}}^{\text{u}},\mathbf{x}_{\text{L}}^{\text{u}},\mathbf{y}^{\text{u}}\right)\right\}\) & \\
ORACLE & 0.658 & 0.0367 \\ \hline \hline
\end{table}
Table 3. Effectiveness evaluation of comparative vFL algorithms for CVR estimation.
**Evaluation metrics.**
For a vFL model equipped with a defense approach, we evaluate two aspects: _utility_ and _privacy_. The utility aspect concerns the prediction performance of the model, and we use the AUC metric from the previous experiments on effectiveness.
The privacy aspect evaluates the defense ability against the label inference attack. Because the evaluation relies on an attack strategy that may be stochastic to some extent, we design a metric named \(\Delta\mathrm{LeakAUC}\) that computes the relative improvement over a base model: 1) we first use the model without any defense approach as the base, and compute the AUC value given the labels inferred by the attacker and the true labels, namely \(\mathrm{LeakAUC}_{\mathrm{base}}\). 2) For a model with a specific defense approach, we similarly compute \(\mathrm{LeakAUC}_{\mathrm{exp}}\). 3) Finally we compute the relative improvement \(\Delta\mathrm{LeakAUC}=(\mathrm{LeakAUC}_{\mathrm{exp}}-\mathrm{LeakAUC}_{\mathrm{base}})/\mathrm{LeakAUC}_{\mathrm{base}}\) as the evaluation metric. For the LeakAUC metric, **lower** is better, because it is expected that the attacker cannot recover the true labels.
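As a concrete illustration, a minimal sketch of the metric computation (the function name is ours; `roc_auc_score` is scikit-learn's standard AUC routine):

```python
from sklearn.metrics import roc_auc_score

def delta_leak_auc(y_true, inferred_base, inferred_exp):
    """Relative change of the attacker's AUC for a defended model (LeakAUC_exp)
    against the undefended base model (LeakAUC_base); more negative is better."""
    leak_auc_base = roc_auc_score(y_true, inferred_base)
    leak_auc_exp = roc_auc_score(y_true, inferred_exp)
    return (leak_auc_exp - leak_auc_base) / leak_auc_base
```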
#### 5.2.2. **Comparative Approaches**
We compare the following random perturbation based defense approaches.
* No Defense is the vFL model that does not equip any defense approach during training.
* DP [1] employs differential privacy to obtain a generic gradient perturbation framework, in which DP is enforced on the transmitted information.
* Marvell [20] adds Gaussian noise to perturb the original gradients, where the covariance matrices of the noise distribution are computed by minimizing the leakage level under a noise power constraint. The noise form assumes that the original gradients also follow a Gaussian distribution.
* MixPro is our proposed defense approach.
#### 5.2.3. **Implementation Details**
We choose the VanillaVFL model used in the previous experiments on effectiveness as the base model to compute \(\mathrm{LeakAUC}_{\mathrm{base}}\), and the label inference attack strategy is the federated embedding-based attack. There are two key hyper-parameters in our proposed MixPro: \(\alpha\) controls the gradient mixup strategy and \(\phi_{goal}\) determines how the mixed gradient should be projected. Generally, a larger \(\alpha\) forces the mixup strategy to show less uncertainty, and the mixup weight \(\lambda\) thus becomes closer to 0.5, leading to better privacy preservation but a greater compromise on AUC. A larger \(\phi_{goal}\) also narrows down the region of gradient directions and provides better privacy performance with more trade-off on AUC. In experiments we set \(\alpha=0.6\) and \(\phi_{goal}=\sqrt{3}/2\) by default.
#### 5.2.4. **Experimental Results**
Table 4 shows the experimental results on our proposed dataset for our proposed MixPro and the other compared approaches against the label inference attack. The straightforward DP achieves only very limited privacy improvement compared to No Defense, which means that more sophisticated approaches are needed for protecting label information.
Marvell reduces the LeakAUC by a large margin and achieves an acceptable level, demonstrating that a well-designed random perturbation strategy is very effective for defending against the label inference attack. Our proposed MixPro performs much better than DP, verifying its privacy performance in vFL model training, though Marvell remains the state-of-the-art defense approach. We suggest that in MixPro the random sampling operation for mixup and projection may restrict its defense ability, because the number of positive samples is very small and thus two combined gradients usually come from the same class. We also notice that both MixPro and Marvell result in a drop of around 2.0% AUC compared to No Defense. Therefore, a better utility-privacy trade-off is a key direction for future defense approaches during vFL model training.
## 6. Related Work
There are many great benchmarks for FL, such as LEAF [3], FedML [11], Flower [2], FedScale [19], FLamby [33] and pFL-bench [5]. They mainly focus on horizontal FL and personalized FL, and to our knowledge no vFL benchmarks have been proposed for fairly comparing existing approaches, especially neural network based vFL approaches. In this work, we propose the vFL benchmark named FedAds, which provides a large-scale dataset collected from Alibaba's advertising system, as well as systematical evaluations of existing approaches. Therefore we believe that FedAds makes a good contribution to facilitating vFL research.
## 7. Conclusion and Future Work
We introduce FedAds, the first benchmark for privacy-preserving CVR estimation, to facilitate systematical evaluations of vFL algorithms. It contains 1) a large-scale real-world dataset from our online advertising platform, collected from an ad delivery business relying on vFL-based ranking models, as well as 2) systematical evaluations of both the effectiveness and privacy aspects of various neural network based vFL algorithms through extensive experiments. Besides, to improve vFL effectiveness, we explore incorporating unaligned data by generating unaligned samples' feature representations with generative models. To better protect privacy, we also develop perturbation based on mixup and projection operations. Experiments show that they achieve reasonable performance.
In future work, we shall explore the following directions: 1) Improving the calibration performance of vFL models [27; 37]. 2) Alleviating the sample selection bias issue in CVR estimation models through debiasing approaches [10; 39] for vFL models. 3) Improving vFL training efficiency. 4) Extending the usage of vFL from the ranking stage to the retrieval stage in online advertising systems.
## Acknowledgement
This work was supported by Alibaba Innovative Research project. We thank all the anonymous reviewers for their valuable comments. We also thank Jinquan Liu and Prof. Baoyuan Wu for assistance.
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Method** & **Privacy**: \(\Delta\mathrm{LeakAUC}\) & **Utility**: AUC \\ \hline
No Defense & - & **0.620** \\
DP & -5.7\% & 0.602 \\
Marvell & **-24.7\%** & 0.601 \\ \hline
MixPro (Ours) & -11.3\% & 0.602 \\ \hline \hline
\end{table}
Table 4. Privacy evaluation of comparative defense approaches to label inference attack. |
2303.17186 | Structure of cell decompositions in Extremal Szemerédi-Trotter
examples | The symmetric case of the Szemer\'edi-Trotter theorem says that any
configuration of $N$ lines and $N$ points in the plane has at most $O(N^{4/3})$
incidences. We describe a recipe involving just $O(N^{1/3})$ parameters which
sometimes (that is, for some choices of the parameters) produces a
configuration of N point and N lines. (Otherwise, we say the recipe fails.) We
show that any near-extremal example for Szemer\'edi Trotter is densely related
to a successful instance of the recipe. We obtain this result by getting
structural information on cell decompositions for extremal Szemer\'edi-Trotter
examples. We obtain analogous results for unit circles. | Nets Katz, Olivine Silier | 2023-03-30T06:49:14Z | http://arxiv.org/abs/2303.17186v1 | # Structure of cell decompositions in Extremal Szemeredi-Trotter examples
###### Abstract
The symmetric case of the Szemeredi-Trotter theorem says that any configuration of \(N\) lines and \(N\) points in the plane has at most \(O(N^{4/3})\) incidences. We describe a recipe involving just \(O(N^{1/3})\) parameters which sometimes (that is, for some choices of the parameters) produces a configuration of N point and N lines. (Otherwise, we say the recipe fails.) We show that any near-extremal example for Szemeredi Trotter is densely related to a successful instance of the recipe. We obtain this result by getting structural information on cell decompositions for extremal Szemeredi-Trotter examples. We obtain analogous results for unit circles.
## 1 Introduction
If \(l\) is a line and \(p\) a point in the real plane \(\mathbb{R}^{2}\), we say that \((l,p)\) is an incidence if \(p\in l\). The most fundamental result in the theory of incidences between points and lines in the plane is the Szemeredi-Trotter theorem [11] which bounds their number:
**Theorem 1.1** (Szemeredi-Trotter 1983).: _Let \(\mathcal{L}\) be a set of \(n\) lines in the plane and \(\mathcal{P}\) be a set of \(m\) points. Then if \(I(\mathcal{L},\mathcal{P})\) is the set of incidences between lines of \(\mathcal{L}\) and points of \(\mathcal{P}\), we have the bound_
\[|I(\mathcal{L},\mathcal{P})|\lesssim n^{\frac{2}{3}}m^{\frac{2}{3}}+n+m.\]
One thing that is remarkable about the Szemeredi-Trotter theorem is that as far as the exponents are concerned, it is sharp. A number of examples are known, but we are far from classifying all possible examples. To do so remains one of the central open problems in incidence geometry of the plane.[6] We restrict ourselves to the symmetric case where \(n=m\), although the question is interesting whenever \(\sqrt{n}<m<n^{2}\).
**Inverse Szemeredi Trotter problem** Let \(\mathcal{L}\) be a set of \(n\) lines and \(\mathcal{P}\) be a set of \(n\) points with
\[I(\mathcal{L},\mathcal{P})\geq n^{\frac{4}{3}-}.\]
What can be said about the structure of \(\mathcal{L}\) and \(\mathcal{P}\)?
A related question which motivates this study is the unit distance problem.
**Unit distance problem** Let \(\mathcal{P}\) be a set of \(n\) points in the plane. Let \(U(\mathcal{P})\) be the set of pairs of points in \(\mathcal{P}\) which are at Euclidean distance \(1\). What upper bound can one put on \(|U(\mathcal{P})|\)?
The conjectured bound in the unit distance problem is \(n^{1+}\) but the best known bound is \(n^{\frac{4}{3}}\). This is not a coincidence [forgive the pun]. Unit distances are incidences between the points of \(\mathcal{P}\) and the unit circles centered at those points. Now unit circles are not lines, but they do share some properties in common. Each unit circle is defined by two parameters and while it is not the case that unit circles intersect in at most one point, they do intersect in at most two. Essentially every technique which has been used in a proof of the Szemeredi-Trotter theorem can be applied in the case of unit distances and this is the source of the \(n^{\frac{4}{3}}\) bound.
A connection between the unit distance problem and the inverse Szemeredi-Trotter problem is that if one had an inverse theorem for unit distances at the exponent \(\frac{4}{3}\), one could gain a small improvement in the exponent by showing that the inverse cases don't exist. This is, in fact, a big part of our motivation which is why we don't mind restricting to the symmetric case in Szemeredi-Trotter.
To illustrate the source of the difficulty in obtaining an inverse Szemeredi-Trotter theorem, we describe a simpler, related problem in which the inverse theorem is fairly straightforward to obtain. We note that the Szemeredi-Trotter theorem uses a great deal more about the structure of the plane than the fact that two lines intersect at a simple point. If we had restricted ourselves to using only that fact, we would have obtained this weaker result.
**Theorem 1.2** (Cauchy-Schwarz).: _Let \(\mathcal{L}\) be a set of \(n\) lines in the plane and \(\mathcal{P}\) be a set of \(m\) points. Then if \(I(\mathcal{L},\mathcal{P})\) is the set of incidences between lines of \(\mathcal{L}\) and points of \(\mathcal{P}\), we have the bound_
\[|I(\mathcal{L},\mathcal{P})|\lesssim n^{\frac{1}{2}}m+n.\]
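For the reader's convenience, here is one standard way to see this bound (a sketch; we write \(k_{l}\) for the number of points of \(\mathcal{P}\) on the line \(l\), a notation not used elsewhere in the paper). Since two distinct points determine at most one line,

\[\sum_{l\in\mathcal{L}}k_{l}(k_{l}-1)\leq m(m-1),\qquad\text{so}\qquad\sum_{l\in\mathcal{L}}k_{l}^{2}\leq m^{2}+|I(\mathcal{L},\mathcal{P})|.\]

By the Cauchy-Schwarz inequality,

\[|I(\mathcal{L},\mathcal{P})|=\sum_{l\in\mathcal{L}}k_{l}\leq n^{\frac{1}{2}}\left(\sum_{l\in\mathcal{L}}k_{l}^{2}\right)^{\frac{1}{2}}\leq n^{\frac{1}{2}}m+n^{\frac{1}{2}}|I(\mathcal{L},\mathcal{P})|^{\frac{1}{2}},\]

and absorbing the last term when \(|I(\mathcal{L},\mathcal{P})|\gtrsim n\) gives \(|I(\mathcal{L},\mathcal{P})|\lesssim n^{\frac{1}{2}}m+n\).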
The inverse Cauchy-Schwarz problem is to describe all sets of \(n\) lines and \(n\) points with \(n^{\frac{3}{2}}\) incidences. "It's easy," the reader should exclaim, "there are none by Szemeredi-Trotter." But we will suspend disbelief and nevertheless try to describe them despite their nonexistence. What follows is a sketch.
In a configuration of \(n\) points and \(n\) lines with \(n^{\frac{3}{2}-}\) incidences, the typical point is incident to \(n^{\frac{1}{2}-}\) lines and the typical line is incident to \(n^{\frac{1}{2}-}\) points. We pick an initial point \(p_{1}\). Let \(B(p_{1})\), the "bush" of \(p_{1}\) be the set of points incident to one of the lines incident to \(p_{1}\). We should have
\[|B(p_{1})|\gtrsim n^{1-}.\]
Already, a substantial proportion of the point set \(\mathcal{P}\) belongs to \(B(p_{1})\) and lies on the lines going through the point \(p_{1}\). We can go one step further and do this twice. We can choose points \(p_{1}\) and \(p_{2}\) so that
\[|B(p_{1})\cap B(p_{2})|\gtrsim n^{1-}.\]
In other words, a substantial proportion of the point set consists of points lying on a line incident to \(p_{1}\) and a line incident to \(p_{2}\). After a projective transformation sending \(p_{1}\) and \(p_{2}\) to points at infinity, we get that a substantial portion of the point set lies on a product set \(A\times B\) with each of \(|A|\) and \(|B|\) of size \(n^{\frac{1}{2}-}\). This is not yet an inverse theorem, but it is what we refer to as a **proto-inverse
theorem**. Recall that an inverse theorem gives a complete characterization of the solution set to the inverse problem. A proto-inverse theorem on the other hand gives a looser characterization which must include all solutions to the inverse theorem but may also include non-examples. We have parametrized (a substantial portion of) the point set and whereas _a priori_ we needed \(O(n)\) parameters to describe the point set, we now need just \(O(n^{\frac{1}{2}})\) parameters. This is an important step which has hitherto not been available in the case of Szemeredi-Trotter.
To go from the proto-inverse theorem for Cauchy Schwarz to an actual inverse theorem we consider the case of \(n^{1-}\) lines having at least \(n^{\frac{1}{2}-}\) incidences each with a product set \(A\times B\) with each of \(A\) and \(B\) having size at most \(n^{\frac{1}{2}}\). By rescaling, we can have \(0,1\in A\) with \(n^{1-}\) lines having an incidence with each of \(\{0\}\times B\) and \(\{1\}\times B\). Thus the lines are identified with pairs of points to which they are incident \((0,b_{1}),(1,b_{2})\). If the same line is incident to a point of \(\{a\}\times B\), we get that \(ab_{2}+(1-a)b_{1}\in B\). Thus for a typical \(a\in A\), the quotient \(\frac{1-a}{a}\) has \(n^{\frac{3}{2}-}\) representations as a member of \(\frac{B-B}{B-B}\). This is true for at least \(n^{\frac{1}{2}-}\) choices of \(a\). For subsets of the reals, this phenomenon is ruled out by the sum-product theorem. In other settings (finite fields, the \(\delta\)-discretized setting), things are a bit more delicate because the Szemeredi-Trotter theorem isn't true. Inverse Cauchy-Schwarz, although it didn't go by that name, played an important role in the development of sum-product theory in those settings. (See [7] and [2].)
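To make the key algebraic step explicit: the line through \((0,b_{1})\) and \((1,b_{2})\) is \(y=(1-x)b_{1}+xb_{2}\), so if it also meets \(\{a\}\times B\) at a point \((a,b)\) then

\[b=(1-a)b_{1}+ab_{2}\quad\Longrightarrow\quad a=\frac{b-b_{1}}{b_{2}-b_{1}},\qquad 1-a=\frac{b_{2}-b}{b_{2}-b_{1}},\]

and therefore

\[\frac{1-a}{a}=\frac{b_{2}-b}{b-b_{1}}\in\frac{B-B}{B-B}.\]

Each line incident to \(\{0\}\times B\), \(\{1\}\times B\), and \(\{a\}\times B\) produces such a representation.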
In this paper, we obtain the first, to our knowledge, proto-inverse theorem for Szemeredi-Trotter and for the unit distance problem at the exponent \(\frac{4}{3}\).
**Theorem 1.3**.: _There is a collection \(A\) of \(N^{\frac{1}{3}}\) parameters and maps \(\mathcal{L}\) and \(\mathcal{P}\) so that for some values of the parameters \(A\), \((\mathcal{L}(A),\mathcal{P}(A))\) is a configuration of at least \(N^{1-}\) and at most \(N\) lines and points. If \((\mathcal{L},\mathcal{P})\) is an extremal configuration of between \(N^{1-}\) and \(N\) lines and points for Szemeredi-Trotter then so is \((\mathcal{L}\cap\mathcal{L}(A),\mathcal{P}\cap\mathcal{P}(A))\). The analogous result is true for unit circles._
We obtain the parametrization in the theorem (which appears in Theorems 4.1 and 5.22 in the body of the paper) by a deep study of the cell decompositions which prove the Szemeredi-Trotter theorem. There is a rather strong analogy to the proto-inverse theorem for Cauchy-Schwarz mentioned above. In the Cauchy-Schwarz setting most of the points lie on a pair of bushes, or after a projective transformation, a product set. In the Szemeredi-Trotter setting, it is the cell decomposition which is given by two bushes. We give this as Theorem 3.20 and the analogous result for unit circles as Theorem 5.21. The main idea is that we combine the ideas of cell decomposition by choosing random lines and of the crossing number inequality. By counting the crossings inside cells we are able to organize extremal examples using heuristics suggested by random selections.
## 2 Extremal examples and Cell decompositions
We shall be concerned in this paper with "extremal examples" for the Szemeredi-Trotter theorem. Our examples will be (almost) symmetric consisting of approximately \(N\) lines and approximately \(N\) points. We will use the notation that the inequality
\[A\lesssim B,\]
between two non-negative quantities \(A\) and \(B\) will mean that there is a constant \(C\) independent of \(N\) so that
\[A\leq CB.\]
We would like to allow ourselves losses of small powers of \(N\). We choose at the beginning of the paper a small exponent \(\epsilon_{0}\). Implicitly, at each line of the paper, there will be a different exponent
\(\epsilon\) depending on the line of the paper, with each \(\epsilon\) having the property \(\epsilon\lesssim\epsilon_{0}\). We will abbreviate \(A\lesssim N^{O(\epsilon_{0})}\) by \(A\lesssim N^{+}\) or \(A\lesssim N^{0+}\). Similarly we introduce \(A\gtrsim N^{-}\) to mean \(AN^{O(\epsilon_{0})}\gtrsim 1\) and \(A\sim N^{\pm}\) to mean
\[N^{-}\lesssim A\lesssim N^{+}.\]
We let exponents add in the natural way.
Our definition of an extremal example for the Szemeredi Trotter theorem will allow \(N^{+}\) errors. This will be slightly unusual for the study of point-line incidences in the plane. The reason is that the Szemeredi Trotter theorem is totally sharp in the \(\lesssim\) sense. For that reason, the main tools used in studying the problem have been honed to be sharp in the \(\lesssim\) sense. However, there are two reasons we will allow \(N^{+}\) errors. The first is that what we're really after are inverse theorems and these will be stronger and more useful if they apply to examples that fail to be sharp by \(N^{+}\). The second reason is that we will be studying the properties of probabilistically constructed cell decompositions. While these have been refined to the \(\lesssim\) level, the probabilistic construction for doing that is a bit more sophisticated, and we will be taking advantage of the ease of use of the simpler one.
**Definition 2.1** (Extremal configuration).: _With \(\mathcal{L}\) a collection of at most \(N\) and at least \(N^{1-}\) lines in the plane, \(\mathcal{P}\) a collection of at most \(N\) and at least \(N^{1-}\) points in the plane, and \(\mathcal{I}(\mathcal{L},\mathcal{P})\) denoting the set of incidences (that is, pairs \((L,P)\) of a line from \(\mathcal{L}\) and a point from \(\mathcal{P}\) with the point on the line), we say that \((\mathcal{L},\mathcal{P})\) is an_ **extremal configuration** _if_
\[|\mathcal{I}(\mathcal{L},\mathcal{P})|\gtrsim N^{\frac{4}{3}-}.\]
Sometimes, we wish to restrict our attention to only a large subset of the incidences of an extremal configuration. We introduce the notion of an extremal partial configuration.
**Definition 2.2** (Extremal partial configuration).: _With \(\mathcal{L}\) a collection of at most \(N\) and at least \(N^{1-}\) lines in the plane, \(\mathcal{P}\) a collection of at most \(N\) and at least \(N^{1-}\) points in the plane, \(\mathcal{I}(\mathcal{L},\mathcal{P})\) denoting the set of incidences (that is, pairs \((L,P)\) of a line from \(\mathcal{L}\) and a point from \(\mathcal{P}\) with the point on the line), and with_
\[\mathcal{J}(\mathcal{L},\mathcal{P})\subset\mathcal{I}(\mathcal{L},\mathcal{ P})\]
_we say that \((\mathcal{L},\mathcal{P},\mathcal{J}(\mathcal{L},\mathcal{P}))\) is an_ **extremal partial configuration** _if_
\[|\mathcal{J}(\mathcal{L},\mathcal{P})|\gtrsim N^{\frac{4}{3}-}.\]
Any pair \((\mathcal{L},\mathcal{P})\) with \(\mathcal{L}\) consisting of lines in the plane and with \(N^{1-}\lesssim|\mathcal{L}|\leq N\) and with \(\mathcal{P}\) consisting of points in the plane with \(N^{1-}\lesssim|\mathcal{P}|\leq N\) will be called a **configuration**.
**Definition 2.3** (Cell decomposition, line weighted).: _Given a configuration \((\mathcal{L},\mathcal{P})\), we say that a partition of \(\mathcal{P}\) into \(r^{2}\) disjoint subsets (called cells) \(C_{1},\dots,C_{r^{2}}\) is a **line weighted cell decomposition** if no line \(L\in\mathcal{L}\) is incident to points in \(\gtrsim r\) cells and no cell has \(\gtrsim\frac{N^{1+}}{r}\) lines of \(\mathcal{L}\) incident to any of its points. A decomposition having all these properties except the bound on the number of cells a line can be incident to points in will be called a **provisionally line weighted cell decomposition**._
**Definition 2.4** (Cell decomposition, point weighted).: _Given a configuration \((\mathcal{L},\mathcal{P})\), we say that a partition of \(\mathcal{P}\) into \(r^{2}\) disjoint subsets (called cells) \(C_{1},\dots,C_{r^{2}}\) is a **point weighted cell decomposition** if no line \(L\in\mathcal{L}\) is incident to points in \(\gtrsim r\) cells and no cell contains \(\gtrsim\frac{N^{1+}}{r^{2}}\) points of \(\mathcal{P}\)._
Next we show that any extremal configuration with a line weighted cell decomposition into approximately \(N^{2/3}\) parts can be refined into an extremal configuration with a point weighted cell decomposition using the same partition. This is true because cells with too many points do not produce enough incidences per point because of the bound on the number of lines and the Szemeredi Trotter theorem. We simply remove the points of those cells.
**Theorem 2.5**.: _Let \((\mathcal{L},\mathcal{P})\) be an extremal configuration with a line weighted cell decomposition \(C_{1},\ldots,C_{r^{2}}\) with \(N^{\frac{1}{3}-}\lesssim r\lesssim N^{\frac{1}{3}}\). Then there is a subset \(\mathcal{P}^{\prime}\) of \(\mathcal{P}\) with \(|\mathcal{P}^{\prime}|\gtrsim N^{1-}\) and \((\mathcal{L},\mathcal{P}^{\prime})\) an extremal configuration so that the nonempty elements of the list \(C_{1}\cap\mathcal{P}^{\prime},\ldots,C_{r^{2}}\cap\mathcal{P}^{\prime}\) form a point weighted cell decomposition._
Proof.: For the remainder of this proof, we fix the value of \(\epsilon\) which corresponds to the current line of the paper. We have \(|I(\mathcal{L},\mathcal{P})|\gtrsim N^{\frac{4}{3}-\epsilon}\), we have \(N^{1-\epsilon}\lesssim|\mathcal{P}|,|\mathcal{L}|\leq N\) and we have \(N^{\frac{1}{3}-\epsilon}\lesssim r\lesssim N^{\frac{1}{3}}\). We divide the cells into two classes \(\mathcal{C}_{big}\) and \(\mathcal{C}_{notsobig}\) where \(C_{j}\) is placed into \(\mathcal{C}_{big}\) if \(|C_{j}|>N^{\frac{1}{3}+10\epsilon}\) and into \(\mathcal{C}_{notsobig}\) otherwise. It suffices to take
\[\mathcal{P}^{\prime}_{\epsilon}=\bigcup_{C\in\mathcal{C}_{notsobig}}C,\]
and show that
\[|\mathcal{P}^{\prime}_{\epsilon}|\gtrsim N^{1-20\epsilon},\]
and
\[|I(\mathcal{L},\mathcal{P}^{\prime}_{\epsilon})|\gtrsim N^{\frac{4}{3}- \epsilon}.\]
[This is because at the end of the proof, we can reset the value of \(\epsilon\) to \(20\epsilon\).] We calculate
\[|I(\mathcal{L},\mathcal{P})|=\sum_{C\in\mathcal{C}_{big}}|I(\mathcal{L},C)|+ \sum_{C\in\mathcal{C}_{notsobig}}|I(\mathcal{L},C)|.\]
To bound the first term, we apply the Szemeredi-Trotter theorem to each big cell using the fact that there are at most \(N^{\frac{2}{3}+\epsilon}\) lines going through each cell obtaining
\[\sum_{C\in\mathcal{C}_{big}}|I(\mathcal{L},C)|\lesssim\sum_{C\in \mathcal{C}_{big}}N^{\frac{4}{9}+\frac{2}{3}\epsilon}|C|^{\frac{2}{3}}\]
\[\lesssim\sum_{C\in\mathcal{C}_{big}}N^{\frac{1}{3}-\frac{8}{3}\epsilon}|C| \lesssim N^{\frac{4}{3}-\frac{8}{3}\epsilon}.\]
Here the penultimate inequality uses that each \(|C|\) is at least \(N^{\frac{1}{3}+10\epsilon}\) and the last inequality uses that \(|\mathcal{P}|\lesssim N\).
Now we know that
\[N^{\frac{4}{3}-\epsilon}\lesssim I(\mathcal{L},\mathcal{P}^{\prime}_{\epsilon}),\]
and we need only show that this implies a good lower bound on \(|\mathcal{P}^{\prime}_{\epsilon}|\). But this follows immediately from the Szemeredi-Trotter theorem and the extremality of the example.
Next, we will show that for any extremal configuration together with a point weighted cell decomposition with \(N^{\frac{1}{3}-}\lesssim r\lesssim N^{\frac{1}{3}}\) there is a refinement of the set of lines preserving extremality so that each line is incident to points in \(\gtrsim N^{\frac{1}{3}-}\) cells. This is a direct application of the Cauchy Schwarz inequality. The lines we remove don't account for many incidences.
**Theorem 2.6**.: _Let \((\mathcal{L},\mathcal{P})\) be an extremal configuration. Let \(C_{1},\ldots,C_{r^{2}}\) be a point weighted cell decomposition with \(r\sim N^{\frac{1}{3}\pm}\). Then there is a refinement \(\mathcal{L}^{\prime}\subset\mathcal{L}\) so that \(|I(\mathcal{L}^{\prime},\mathcal{P})|\gtrsim N^{\frac{4}{3}-}\) and every \(L\in\mathcal{L}^{\prime}\) is incident to points in \(\gtrsim N^{\frac{1}{3}-}\) cells._
Proof.: For the remainder of the proof, we fix the value of \(\epsilon\) corresponding to this line, with \(I(\mathcal{L},\mathcal{P})\gtrsim N^{\frac{4}{3}-\epsilon}\).
Consider \(\mathcal{L}_{\epsilon}\), the set of lines intersecting fewer than \(r^{1-20\epsilon}\) cells. It suffices to show that \(|I(\mathcal{L}_{\epsilon},\mathcal{P})|\) is considerably smaller than \(N^{\frac{4}{3}-\epsilon}\). For each line \(L\), let \(C_{L}\) denote the set of cells in which \(L\) is incident to a point. For \(L\) a line and \(P\) a point, we let \(I_{L,P}\) be the indicator function of incidence, namely \(I_{L,P}=1\) if \(P\) is incident to \(L\) and \(0\) otherwise.
We calculate
\[|I(\mathcal{L}_{\epsilon},\mathcal{P})| =\sum_{L\in\mathcal{L}_{\epsilon}}\sum_{C\in C_{L}}\sum_{P\in C}I_{LP}\] \[\lesssim N^{\frac{1}{2}}r^{\frac{1-20\epsilon}{2}}(\sum_{L\in\mathcal{L}_{\epsilon}}\sum_{C\in C_{L}}(\sum_{P\in C}I_{LP})^{2})^{\frac{1}{2}}\] \[\lesssim N^{\frac{1}{2}}r^{\frac{1-20\epsilon}{2}}(I(\mathcal{L}_{\epsilon},\mathcal{P})+\sum_{C}\sum_{P_{1}\in C}\sum_{P_{2}\in C,P_{2}\neq P_{1}}\sum_{L}I_{LP_{1}}I_{LP_{2}})^{\frac{1}{2}}\] \[\lesssim N^{\frac{1}{2}}r^{\frac{1-20\epsilon}{2}}(N^{\frac{4}{3}+2\epsilon})^{\frac{1}{2}}\] \[\lesssim N^{\frac{4}{3}-\frac{4\epsilon}{3}+O(\epsilon^{2})}\]
Since this is smaller than \(N^{\frac{4}{3}-\epsilon}\) by a power of \(N\), the lines of \(\mathcal{L}_{\epsilon}\) account for a negligible fraction of the incidences, and we may take \(\mathcal{L}^{\prime}=\mathcal{L}\setminus\mathcal{L}_{\epsilon}\).
The goal of the next theorem is to say that for an extremal example having a point weighted cell decomposition with \(r\) just on the low side of \(N^{\frac{1}{3}}\), most of the incidences come from cells having around \(\frac{N}{r^{2}}\) points, and around \(\frac{N}{r}\) lines making just a few incidences with these points, but at least two. As a corollary, we will obtain a kind of inverse theorem for the lines having a few incidences with the points of a cell that will prove useful later. (The idea is that for any set of points, the number of lines intersecting two of them is controlled by the square of the number of points.)
**Theorem 2.7**.: _Let \((\mathcal{L},\mathcal{P})\) be an extremal configuration. Specifically let \(|I(\mathcal{L},\mathcal{P})|=N^{\frac{4}{3}-\epsilon}\) with \(\epsilon\) fixed. Let \(C_{1},\ldots,C_{r^{2}}\) be a point-weighted cell decomposition for \((\mathcal{L},\mathcal{P})\) with \(N^{\frac{1}{3}-5\epsilon}\leq r\leq\frac{|I(\mathcal{L},\mathcal{P})|}{100| \mathcal{L}|}\). Then there is a set of incidences \(J(\mathcal{L},\mathcal{P})\subset I(\mathcal{L},\mathcal{P})\) so that \(|J(\mathcal{L},\mathcal{P})|\gtrsim N^{\frac{4}{3}-\epsilon}\), but for every line \(L\) and cell \(C\) for which there is \(P\in C\) with \((L,P)\in J(\mathcal{L},\mathcal{P})\), we have that_
\[2\leq|I(\{L\},C)|\lesssim N^{+}.\]
Proof.: The way this proof will work is that we will remove from \(I(\mathcal{L},\mathcal{P})\) all incidences that would violate the conditions for \(J(\mathcal{L},\mathcal{P})\) and observe that we have removed less than half of the set \(I(\mathcal{L},\mathcal{P})\).
First, for any point P for which there are more than \(N^{\frac{1}{3}+10\epsilon}\) lines incident, we can remove all these incidences and applying Szemeredi-Trotter, we see that since there are \(\lesssim N^{1-10\epsilon}\) many such points, we have removed \(\lesssim N^{\frac{4}{3}-\frac{20\epsilon}{3}+}\) incidences.
For any line \(L\) and cell \(C\) for which there is a unique point \(P\) with \((L,P)\) an incidence, we remove these incidences and we have removed at most \(r|\mathcal{L}|\) incidences since each line has incidences with at most \(r\) cells.
Finally for the incidences which remain, for any cell \(C\), all lines are incident to at least two points of the cell (which define the line) so there are at most \(|C|^{2}\) such lines. We can remove all
incidences from cells that do not contribute at least \(\gtrsim N^{-}|C|^{2}\) incidences. We note that especially rich lines cannot contribute most of the incidences. The number of lines passing through \(k\) of the points is at most \(\frac{|C|^{2}}{k^{3}}\) contributing at most \(\frac{|C|^{2}}{k^{2}}\) incidences (this follows from Szemeredi-Trotter), so we remove these for \(k\) which are not \(\lesssim N^{+}\).
**Corollary 2.8**.: _Let \((\mathcal{L},\mathcal{P})\) be an extremal configuration. Let \(C_{1},\ldots,C_{r^{2}}\) be a point-weighted cell decomposition for \((\mathcal{L},\mathcal{P})\) with \(r\geq N^{\frac{1}{3}-}\) but \(r\leq\frac{|I(\mathcal{L},\mathcal{P})|}{100|\mathcal{L}|}\). Then there is a set \(\mathcal{C}\) of \(\gtrsim r^{2-}\) cells so that for each \(C\in\mathcal{C}\), there is a set of lines \(\mathcal{L}_{C}\) with_
\[|\mathcal{L}_{C}|\gtrsim|C|^{2-},\]
_and with each \(L\in\mathcal{L}_{C}\) incident to at least \(2\) but \(\lesssim N^{+}\) points in \(C\). Each set \(\mathcal{L}_{C}\) has density \(\gtrsim N^{-}\) in the set of lines intersecting two points in \(C\)._
## 3 The Probabilistic method and cell decompositions
There are several different methods known for producing cell decompositions for proving the Szemeredi-Trotter theorem. The most modern is polynomial partitioning. In that method, the boundaries of cells are given by the zero set of a polynomial. If a cell decomposition is thus obtained, it is naturally point-weighted. This is called the cellular case. The alternative is that the points all lie in the zero set of a fairly low-degree polynomial. This is called the structured case. In this case we obtain the Szemeredi-Trotter theorem by bounding the intersection of a curve of bounded degree and a line by the curve's degree. However, in the structured case we don't have a cell decomposition of the set of points. The method of polynomial partition was first introduced by Larry Guth and the first author in resolving the Erdos distinct distances problem in the plane. [5]
An older approach is to define a cell decomposition by randomly selecting lines from the configuration's line set. That approach most naturally produces a cell decomposition that is line-weighted. This seems to have been first developed in the seminal paper of Clarkson _et al._ [3] as a simplification and improvement of a deterministic construction found in the original paper of Szemeredi and Trotter [11]. This always produces a cell decomposition, at least if we started with a configuration containing no overly rich lines.
Our present aim is to use cell decompositions in order to learn about the properties of extremal configurations. For this purpose, the deterministic nature of polynomial partitioning is unhelpful. The cells are chosen by the Borsuk-Ulam theorem in a somewhat mysterious way. Not many choices are available. We will work with the probabilistic method where almost every selection of lines yields an acceptable cell decomposition. The fact that an extremal configuration behaves much the same for each selection of lines seems to yield a lot of information on extremal configurations. Our original plan for how to carry out our arguments used this observation heavily. But for technical reasons, it turns out to be beneficial to use a different classic approach to the Szemeredi-Trotter theorem, the crossing number inequality. Principally, we will use this inequality to say that the lines in a typical point weighted cell of an extremal configuration behave in the way that you would expect from a random selection of lines.
We largely follow Terry Tao's blogpost [12] on probabilistic constructions of cell decompositions. Given \(\mathcal{L}\), a collection of \(N\) lines, we choose a random subset \(\mathcal{L}_{r}\) with each line of \(\mathcal{L}\) chosen independently with probability \(\frac{r}{N}\).
We use one probabilistic calculation repeatedly:
**Lemma 3.1**.: _Let \(\mathcal{S}\subset\mathcal{L}\) be any subset of \(\frac{CN\log N}{r}\) lines with \(C\) a sufficiently large constant. The probability that none of the lines in \(\mathcal{S}\) is selected is bounded by \(\frac{1}{N^{100}}\)._
Proof.: By independence of the individual lines, the probability that no line in \(\mathcal{S}\) is selected is exactly
\[(1-\frac{r}{N})^{\frac{CN\log N}{r}}.\]
Since \(1-x\leq e^{-x}\), this is at most \(e^{-C\log N}=N^{-C}\), which is bounded by \(\frac{1}{N^{100}}\) once \(C\geq 100\).
We would like to choose a random selection \(\mathcal{L}_{r}\) containing at least \(\frac{r}{2}\) lines and no more than \(2r\) lines. By Chernoff bounds, the probability that this fails is exponentially small in \(N\). Moreover, we'll make a list of \(O(N^{4})\) events controlled by Lemma 3.1 that we would like not to occur. We obtain a set of lines which satisfies our requirements with probability at least \(1-O(N^{-96})\).
This might be a good moment to review how Chernoff bounds work so that when we later need to use them a bit more seriously, we'll be better prepared.
**Lemma 3.2**.: _Let \(X_{1},\ldots,X_{M}\) be independent Bernoulli variables equal to \(1\) with probability \(p\) and zero otherwise. Let \(pM>M^{\delta}\) for some \(\delta>0\). Then \(P\), the probability that \(X\) is larger than \(10pM\), where_
\[X=X_{1}+X_{2}+\ldots X_{M},\]
_is bounded by_
\[e^{-8pM}\]
Proof.: Observe that since the \(X_{j}\)'s are independent and identically distributed, we have
\[E(e^{X})=E(e^{X_{1}})^{M}=(1+p(e-1))^{M}\sim e^{p(e-1)M}.\]
But \(Pe^{10pM}\leq E(e^{X})\sim e^{(e-1)pM}.\) Hence
\[P\lesssim e^{-8pM}.\]
For each line \(l\in\mathcal{L}\), we establish an ordering on \(\mathcal{L}\backslash\{l\}\) based on the position of their intersection with \(l\). (It might be at infinity.) The ordering is ill-defined when multiple lines are concurrent at a point of \(l\), but we order concurrent lines arbitrarily. For each choice of \(l^{\prime}\in\mathcal{L}\), with \(l^{\prime}\neq l\) we would like to exclude the event that none of the \(\frac{CN\log N}{r}\) consecutive lines following \(l^{\prime}\) in the order induced by \(l\) are selected for \(\mathcal{L}_{r}\). This is a list of \(O(N^{2})\) events governed by Lemma 3.1.
One then chooses a direction different from that of all lines in \(\mathcal{L}\) which we will refer to as vertical. At each point \(p\) of intersection of two of the lines of \(\mathcal{L}\), the vertical line at \(p\) induces an order on the lines of \(\mathcal{L}\). We would like to exclude the case that none of the first \(\frac{CN\log N}{r}\) lines above \(p\) are chosen for \(\mathcal{L}_{r}\) and none of the first \(\frac{CN\log N}{r}\) lines below \(p\) are chosen for \(\mathcal{L}_{r}\). This is a list of \(O(N^{2})\) events governed by Lemma 3.1.
For future reference, we would also like to rule out one other kind of event. Given two points \(p_{1}\) and \(p_{2}\) each of which are at the intersection of two lines of \(\mathcal{L}\) for which at least \(\frac{CN\log N}{r}\) lines of \(\mathcal{L}\) intersect the open line segment between \(p_{1}\) and \(p_{2}\), at least one of those lines will be selected for \(\mathcal{L}_{r}\). This is a list of \(O(N^{4})\) events governed by Lemma 3.1.
With probability \(1-O(N^{-96})\), a random selection of lines satisfies our specifications. Let \(\mathcal{L}_{r}\) be such a selection.
**Definition 3.3**.: _A **funnel decomposition** is obtained from a cell decomposition by breaking each cell into trapezoids by taking a vertical line segment from each vertex of the cell until the point where it intersects some edge. (These trapezoids are called funnels in [3] where this construction was invented.)_
**Lemma 3.4**.: _With \(r\sim N^{\frac{1}{3}\pm}\), the funnel decomposition produces a provisionally line-weighted cell decomposition with probability at least \(1-O(N^{-96})\)._
Proof.: We let \(\mathcal{L}_{r}\) be a selection obeying the above specifications. We start with the cells given by the configuration \(\mathcal{L}_{r}\) and break it into a funnel decomposition. Because none of the adverse events occur, at most \(\frac{CN\log N}{r}\) lines enter any given funnel from each of its 4 sides.
For our purposes, both the above construction and the slightly refined one of Matousek [8] seem unsatisfactory because of the introduction of edges which are not on lines of \(\mathcal{L}\). We would like to be able to recognize the lines entering a cell as lines that intersect a fixed selected line of intersection \(L\) between consecutive points of intersection with other selected lines. For this reason, we find cell decompositions that come from just selecting \(r\) random lines and making no further adjustments most natural. This is problematic because such a decomposition is no longer line weighted due to cells with too many edges. We will deal with this by bounding the number of points that can be contained in such cells and refining the point set \(\mathcal{P}\) to only include points in cells with \(\lesssim N^{+}\) edges.
The main ingredient in our bound will be the theorem of Clarkson _et. al._[3] on the complexity of cell decompositions given by general families of lines.
**Theorem 3.5**.: _Let \(\mathcal{L}\) be a set of \(r\) lines. It divides projective space into \(O(r^{2})\) cells. Let \(\mathcal{C}\) be any subcollection of \(m\) of these cells. Then the total number of edges of cells in \(\mathcal{C}\) is \(O(r^{\frac{2}{3}}m^{\frac{2}{3}}+r)\)._
**Corollary 3.6**.: _Let \(\mathcal{L}_{r}\) be any set of \(r\) lines and let \(\mathcal{C}\) be the set of cells which they define having \(>s\) and \(\leq 2s\) edges. Then if \(s\leq r^{\frac{1}{2}}\), we have_
\[|\mathcal{C}|\lesssim\frac{r^{2}}{s^{3}},\]
_and if \(s\geq r^{\frac{1}{2}}\)_
\[|\mathcal{C}|\lesssim\frac{r}{s}.\]
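For the reader's convenience, here is one way to deduce the corollary from Theorem 3.5. Each cell of \(\mathcal{C}\) has more than \(s\) edges, so the total number of edges of cells in \(\mathcal{C}\) is at least \(s|\mathcal{C}|\), while Theorem 3.5 bounds it by \(O(r^{\frac{2}{3}}|\mathcal{C}|^{\frac{2}{3}}+r)\). Thus

\[s|\mathcal{C}|\lesssim r^{\frac{2}{3}}|\mathcal{C}|^{\frac{2}{3}}+r.\]

If the first term on the right dominates, then \(|\mathcal{C}|^{\frac{1}{3}}\lesssim r^{\frac{2}{3}}/s\), that is \(|\mathcal{C}|\lesssim\frac{r^{2}}{s^{3}}\); if the second dominates, then \(|\mathcal{C}|\lesssim\frac{r}{s}\). Since \(\frac{r^{2}}{s^{3}}\geq\frac{r}{s}\) exactly when \(s\leq r^{\frac{1}{2}}\), the two cases of the corollary follow.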
Having now obtained a bound on the number of cells with a certain number of edges, we now control the number of rich points in such a cell. Let \((\mathcal{L},\mathcal{P})\) be an extremal configuration in which each point \(p\) of \(\mathcal{P}\) is at least \(N^{\frac{1}{3}-}\) rich.
**Lemma 3.7**.: _Let \(K\) be a cell coming from an acceptable selection of lines \(\mathcal{L}_{N^{\frac{1}{3}}}\). Suppose \(K\) has \(s\) sides. Then \(K\) contains at most \(sN^{\frac{1}{3}+}\) points of \(\mathcal{P}\)._
Proof.: Following the construction in [8], for each cell we choose a vertex and divide the cell into triangles by adding edges between the chosen vertex and all non-adjacent cell vertices. Then \(K\) is divided into \(s-2\) triangles. Each triangle has at most \(\frac{3N^{\frac{2}{3}}\log N}{2}\) lines entering it. Suppose there are \(P\) points of \(\mathcal{P}\) in a triangle \(T\). Then there are at least \(PN^{\frac{1}{3}-}\) incidences in \(T\). The Szemeredi Trotter theorem guarantees that \(P\leq N^{\frac{1}{3}+}\). Thus \(K\) contains at most \((s-2)N^{\frac{1}{3}+}\) points of \(\mathcal{P}\), which was to be shown.
To extract structure from our configurations we will choose to throw out undesirable points and lines, keeping only those that enjoy the desirable properties.
**Definition 3.8**.: _Given an extremal partial configuration \((\mathcal{L},\mathcal{P},\mathcal{J}(\mathcal{L},\mathcal{P}))\) we say \((\mathcal{L}^{\prime},\mathcal{P}^{\prime})\) is a **refinement** if \(|\mathcal{L}|\sim|\mathcal{L}^{\prime}|\) and \(|\mathcal{P}|\sim|\mathcal{P}^{\prime}|\) and \(|\mathcal{J}(\mathcal{L},\mathcal{P})|\sim|\mathcal{J}(\mathcal{L}^{\prime},\mathcal{P}^{\prime})|\)._
Finally, we combine Corollary 3.6 with Lemma 3.7 to bound the number of points of \(\mathcal{P}\) contained in cells with between \(s\) and \(2s\) sides. If \(s\leq r^{\frac{1}{2}}\) with \(r=N^{\frac{1}{3}}\), we obtain the bound \(\frac{r^{2}}{s^{3}}sN^{\frac{1}{3}+}\leq\frac{N^{1+}}{s^{2}}\). If \(s\geq r^{\frac{1}{2}}\), we obtain the bound \(\frac{r}{s}sN^{\frac{1}{3}+}\leq N^{\frac{2}{3}+}\). As long as \(s\) is much bigger than \(N^{+}\), we do not capture a significant number of points. We conclude the following theorem.
**Theorem 3.9**.: _Let \((\mathcal{L},\mathcal{P})\) be an extremal configuration with each point of \(\mathcal{P}\) being at least \(N^{\frac{1}{3}-}\) rich. Then for each acceptable random selection \(\mathcal{L}_{N^{\frac{1}{3}}}\) there is a refinement \(\mathcal{P}^{\prime}\subset\mathcal{P}\) with \(|\mathcal{P}^{\prime}|\geq\frac{1}{2}|\mathcal{P}|\) so that no point of \(\mathcal{P}^{\prime}\) is contained in a cell with more than \(N^{+}\) sides, and in light of Lemma 3.7, each such cell has at most \(N^{\frac{1}{3}+}\) points of \(\mathcal{P}^{\prime}\) so that we have obtained a point weighted decomposition for the extremal configuration \((\mathcal{L},\mathcal{P}^{\prime})\)_
The main power of Theorem 3.9 for us will be that we can use it to deduce strong properties of extremal examples without reference to any cell decomposition.
Next we're going to use Corollary 2.8 to get a structuring result for extremal configurations in which many lines have points bounding intervals that approximately \(N^{\frac{2}{3}}\) lines cross. Furthermore, large subsets of these groups of about \(N^{\frac{2}{3}}\) lines are structured: they each intersect two of a set of not much more than \(N^{\frac{1}{3}}\) points. A key ingredient in proving this will be the standard crossing number inequality, which we state here.
**Lemma 3.10**.: _[_10_]_ _Let \(G(V,E)\) be a planar graph with \(v\) vertices and \(e\) edges. Suppose that \(e\geq 10v\). Then the number of crossings between edges is \(\gtrsim\frac{e^{3}}{v^{2}}\)._
We will write down a corollary describing the usual way that we use this. Any cell where a large number of lines are incident to at least two points must have a lot of crossings.
**Corollary 3.11**.: _Let \(\mathcal{P}_{c}\) be a collection of \(N^{\frac{1}{3}+\delta_{1}}\) points in a convex region \(R\) of the plane. Let \(\mathcal{L}_{c}\) be a collection of \(N^{\frac{2}{3}-\delta_{2}}\) lines each of which is incident to at least \(M+1\) points of \(\mathcal{P}_{c}\). Then \(\gtrsim M^{3}N^{\frac{4}{3}-\delta_{1}-3\delta_{2}}\) pairs of lines from \(\mathcal{L}_{c}\) cross in the region \(R\)._
Proof.: Define a graph \(G\) with \(\mathcal{P}_{c}\) as its vertex set and, for each line, take as edges \(M\) consecutive pairs of points of \(\mathcal{P}_{c}\) to which the line is incident. Because the set \(R\) is convex, the edges lie in \(R\). Now apply Lemma 3.10.
We make a definition of a structured set of lines.
**Definition 3.12**.: _We say that a set \(\mathcal{L}_{1}\) of at least \(N^{\frac{2}{3}-}\) lines is **structured** if there is a set \(\mathcal{P}_{1}\) of at most \(N^{\frac{1}{3}+}\) points so that each line of \(\mathcal{L}_{1}\) is incident to at least two points of \(\mathcal{P}_{1}\). We call this set of points **structuring**._
Note that since \(\lesssim N^{\frac{2}{3}+}\) lines go through at least two among \(\lesssim N^{\frac{1}{3}+}\) points, the structuring points essentially define the structured lines. Now we're ready to state our structuring theorem.
**Theorem 3.13**.: _Let \((\mathcal{L},\mathcal{P},\mathcal{J})\) be an extremal partial configuration. Then there is a refinement \((\mathcal{L}^{\prime},\mathcal{P},\mathcal{J}^{\prime})\) so that for each line \(l\in\mathcal{L}^{\prime}\) there are points \(p_{1},\dots,p_{M}\) of \(\mathcal{P}\) with \((l,p_{j})\in\mathcal{J}^{\prime}\) and the \(p_{j}\)'s in order of their position on \(l\) and with \(M\gtrsim N^{\frac{1}{3}-}\) so that for each consecutive pair of points \(p_{j},p_{j+1}\), there is a structured set of lines \(\mathcal{L}_{j}\) so that each \(l^{\prime}\) in \(\mathcal{L}_{j}\) intersects \(l\) in the open interval bounded by the points \(p_{j}\) and \(p_{j+1}\). We say the lines in \(\mathcal{L}^{\prime}\) **organize** \(\mathcal{P}\)._
Proof.: Starting with the extremal partial configuration \((\mathcal{L},\mathcal{P},\mathcal{J})\) we remove all points with fewer than \(N^{\frac{1}{3}-}\) incidences in \(\mathcal{J}\) obtaining a refinement \(\mathcal{P}^{\prime}\subset\mathcal{P}\) so that \((\mathcal{L},\mathcal{P}^{\prime},\mathcal{J}^{\prime})\) is still an extremal partial configuration (with \(\mathcal{J}^{\prime}\) the intersection of \(\mathcal{J}\) with the Cartesian product of \(\mathcal{L}\) and \(\mathcal{P}^{\prime}\)) but satisfies the hypotheses of Theorem 3.9. We apply Theorem 3.9 to obtain a further refinement \(\mathcal{P}^{\prime\prime}\subset\mathcal{P}^{\prime}\) so that we have a point weighted cell decomposition \(\mathcal{C}=\{C_{1},\ldots,C_{r^{2}}\}\) for the extremal configuration \((\mathcal{L},\mathcal{P}^{\prime\prime})\) with \(r\gtrsim N^{\frac{1}{3}-}\) so that \((\mathcal{L},\mathcal{P}^{\prime\prime})\) together with \(\mathcal{C}\) satisfy the hypotheses of Corollary 2.8. From Corollary 2.8 we obtain a refinement \(\mathcal{C}^{\prime}\) of \(\mathcal{C}\) with \(|\mathcal{C}^{\prime}|\gtrsim N^{\frac{2}{3}-}\) and with a subset \(\mathcal{L}_{C}\) of the lines of \(\mathcal{L}\) going through any cell \(C\) of \(\mathcal{C}^{\prime}\) being a structured set. Hence, any subset \(\mathcal{L}_{1}\subset\mathcal{L}_{C}\) with \(|\mathcal{L}_{1}|\gtrsim N^{\frac{2}{3}-}\) is also a structured set.
We now examine a fixed cell \(C\in\mathcal{C}^{\prime}\), the set of points \(\mathcal{P}_{C}\) consisting of points of \(\mathcal{P}^{\prime\prime}\) which lie inside the cell \(C\) and the structured set \(\mathcal{L}_{C}\) from the previous paragraph. We define a graph \(G(V,E)\) whose vertex set consists of the points of \(\mathcal{P}_{C}\) and whose edges are the pairs of consecutive points of \(\mathcal{P}_{C}\) which are incident to any line of \(\mathcal{L}_{C}\). Because \(\mathcal{L}_{C}\) is a structured set (structured by the point set \(\mathcal{P}_{C}\) because of the application of Corollary 2.8), there is at least one edge of \(G\) for each line of \(\mathcal{L}_{C}\). Thus, \(v=|V|\lesssim N^{\frac{1}{3}+}\) while \(e=|E|\gtrsim N^{\frac{2}{3}-}\). We conclude from Lemma 3.10 that the number of crossings for the graph is \(\gtrsim\frac{e^{3}}{v^{2}}\gtrsim N^{\frac{1}{3}-}\). But the crossings of the graph are nothing other than intersections between lines of \(\mathcal{L}_{C}\) which occur inside the cell \(C\). For each line \(l\) in \(\mathcal{L}_{C}\) which intersects at least \(N^{\frac{2}{3}-}\) lines of \(\mathcal{L}_{C}\), we associate the interval \(I\) which is the intersection of the line with the cell. This interval contains at least one (in fact, two) points of \(\mathcal{P}^{\prime\prime}\) and intersects a structured subset \(\mathcal{L}_{I}\) from \(\mathcal{L}_{C}\). We will count pairs \((l,I)\). Because the lines of \(\mathcal{L}_{C}\) all must intersect the cell \(C\) which has fewer than \(N^{+}\) edges, it must be that \(|\mathcal{L}_{C}|\lesssim N^{\frac{2}{3}+}\) and therefore each cell \(C\) of \(\mathcal{C}^{\prime}\) generates at least \(N^{\frac{2}{3}-}\) pairs \((l,I)\). Summing over all the cells of \(\mathcal{C}^{\prime}\), we obtain at least \(N^{\frac{4}{3}-}\) many such pairs. For each line \(l\), the different intervals \(I\) for which \((l,I)\) are such pairs are disjoint. Since each interval is crossed by a structured set of lines, there are at most \(N^{\frac{1}{3}+}\) intervals \(I\) for each line \(l\). Thus there must be at least \(N^{1-}\) lines \(l\) with at least \(N^{\frac{1}{3}-}\) pairs \((l,I)\). We will call this set of lines \(\mathcal{L}^{\prime}\).
Figure 1: Six structuring points structure the set of black lines. A generic pair of structuring points must define a line in the structured set but not all the pairs must.
For each line \(l\in\mathcal{L}^{\prime}\), we order the intervals \(I\) for which \((l,I)\) is a pair as \(I_{1},I_{2},\ldots,I_{M}\). For each \(j=2k-1\) odd, we pick a point \(p\) from \(\mathcal{P}\) among the at least two which lie in the interval and call it \(p_{k}\). For each \(j=2k\) even, we pick the structured set \(\mathcal{L}_{k}\) which intersects \(I_{j}\).
We'd now like to take advantage of our result. Theorem 3.13 is a method of associating to each extremal configuration a refinement which is rather nicely parametrized. We will do this by applying point-line duality to the result of Theorem 3.13. The result of the theorem gives us many lines \(l\) which are incident to particular sets of points \(p_{1},\ldots,p_{M}\) with \(M\gtrsim N^{\frac{1}{3}-}\) so that we have \(\gtrsim N^{\frac{2}{3}-}\) lines intersecting \(l\) between adjacent points which are structured. We use point-line duality, applying Theorem 3.13 to lines and points instead of points and lines.
**Theorem 3.14**.: _Let \((\mathcal{L},\mathcal{P},\mathcal{J})\) be an extremal partial configuration. Then there is a refinement \((\mathcal{L},\mathcal{P}^{\prime},\mathcal{J}^{\prime})\) so that for each point \(p\in\mathcal{P}^{\prime}\) there are lines \(l_{1},\ldots,l_{M}\) of \(\mathcal{L}\) which are incident to \(p\) in \(\mathcal{J}^{\prime}\) in order of their direction and with \(M\gtrsim N^{\frac{1}{3}-}\) so that each sector bounded by consecutive pairs of lines \(l_{j},l_{j+1}\) contains a structured set of \(\gtrsim N^{\frac{2}{3}-}\) points \(\mathcal{P}_{j}\). We say the points in \(\mathcal{P}^{\prime}\)**organize**\(\mathcal{L}\)._
For any point \(p\in\mathcal{P}^{\prime}\) with lines \(l_{1},\ldots l_{M}\) incident to it, there are \(\gtrsim N^{\frac{2}{3}-}\) points in each of
Figure 2: An organizing line \(l\in\mathcal{L}^{\prime}\) is shown with two intervals bounded respectively by points \(p_{1},p_{2}\) and \(p_{2},p_{3}\). Each of these intervals is crossed by a structured set of (gray) lines. The two gray points on \(l\) are elements of \(\mathcal{P}\) which are not included in our refined construction as the incidence between \(l\) and the gray points are not included in \(\mathcal{J}^{\prime}\).
the \(\gtrsim N^{\frac{1}{3}-}\) sectors for a total of \(N^{1-}\) points. We take this set of points as a refinement \(\mathcal{P}^{\prime}\) of our original set of points \(\mathcal{P}\). What is particularly pleasant about this structure is that each of the \(\gtrsim N^{\frac{2}{3}-}\) points of \(\mathcal{P}^{\prime}\) between two adjacent lines \(l_{j}\) and \(l_{j+1}\) lie on at least two of the \(\gtrsim N^{\frac{1}{3}-}\) structuring lines.
Structuring lines seem very odd precisely because all of the points on them lie in a particular sector between an \(l_{j}\) and \(l_{j+1}\). But this is not as odd as it seems. We see from the proof of Theorem 3.13 that the structuring lines for \(p\) are dual to the points of cells through which the line dual to \(p\) passes. Every point lies in a cell, and every cell has \(N^{2/3\pm}\) lines going through it, so by duality the set of \(N^{1/3}\) structuring lines defines the \(N^{2/3\pm}\) points in a sector.
We're going to show that for any choice of \(p\in\mathcal{P}^{\prime}\) a typical line \(l\) will have incidences in most of the sectors between consecutive lines \(l_{j}\) and \(l_{j+1}\). To do this we first need to introduce a refinement of the configuration endowed with a cell decomposition whose boundary lines include the bush through \(p\).
**Theorem 3.15** (bush construction).: _For any extremal configuration \((\mathcal{L},\mathcal{P})\) there exists a subset \(\mathcal{P}^{\prime}\) of \(\gtrsim N^{1-}\) points in \(\mathcal{P}\) which are organizing with \(\sim N^{\frac{1}{3}\pm}\) sectors and a refined configuration \((\mathcal{L}^{\prime},\mathcal{P}^{\prime})\) such that the \(\gtrsim N^{1-}\) lines in \(\mathcal{L}^{\prime}\) organize \(\mathcal{P}^{\prime}\). Also for any \(p\in\mathcal{P}^{\prime}\) the refinement \((\mathcal{L}^{\prime},\mathcal{P}_{p})\) where \(\mathcal{P}_{p}\) are the points in \(\mathcal{P}\) organized by \(p\), has a refinement \((\mathcal{L}^{\prime},\mathcal{P}^{\prime}_{p})\) which is an extremal configuration with a point-weighted cell decomposition where each cell is contained in a sector. Moreover, any
Figure 3: An organizing point \(p\in\mathcal{P}^{\prime}\) is shown with two sectors \(s_{1}\) and \(s_{2}\) bounded respectively by lines \(l_{1},l_{2}\) and \(l_{2},l_{3}\). Sectors \(s_{1}\) and \(s_{2}\) each contain a structuring set of (gray) lines. The two dotted black lines through \(p\) are elements of \(\mathcal{L}\) which are not included in our refined construction as the incidence between \(p\) and the dotted lines are not included in \(\mathcal{J}^{\prime}\).
line \(l\in\mathcal{L}^{\prime}\) which crosses exactly \(N^{\frac{2}{3}+\alpha}\) lines of \(\mathcal{L}^{\prime}\) within the sector \(s\) with \(\alpha>k\epsilon\) for \(k\) sufficiently large will not enter more than \(N^{\alpha+}\) cells in \(s\)._
Proof.: _Refinement properties:_ First we keep only points from \(\mathcal{P}\) which are \(\sim N^{\frac{1}{3}\pm}\) rich. This does not significantly affect the number of incidences. We then apply Theorem 3.14 and obtain the refinement \((\mathcal{L},\mathcal{P}^{\prime})\) of organizing points. Then we apply Theorem 3.13 to \((\mathcal{L},\mathcal{P}^{\prime})\) obtaining the refinement \((\mathcal{L}^{\prime},\mathcal{P}^{\prime})\) where the lines in \(\mathcal{L}^{\prime}\) organize \(\mathcal{P}^{\prime}\). Now we have shown the first claim of the theorem.
_Cell decomposition:_ Let \(p\in\mathcal{P}^{\prime}\) and \(\mathcal{P}_{p}\) be the set of points in \(\mathcal{P}\) organized by \(p\). Note there are \(\gtrsim N^{1-}\) organized points each \(\gtrsim N^{\frac{1}{3}-}\) rich so \((\mathcal{L}^{\prime},\mathcal{P}_{p})\) is an extremal configuration which we work in for this paragraph. We label the bush of \(M\sim N^{\frac{1}{3}\pm}\) lines intersecting \(p\) as \(l_{1},\ldots,l_{M}\). Now we pick random \(l_{1}^{\prime},\ldots,l_{K}^{\prime}\) with \(K\sim N^{\frac{1}{3}}\) from \(\mathcal{L}\). Our cell decomposition will be made from the lines \(l_{1},\ldots,l_{M}\) together with the lines \(l_{1}^{\prime},\ldots,l_{K}^{\prime}\). So each cell is contained in a simple sector.
Now, the randomly selected lines \(l_{1}^{\prime},\ldots,l_{K}^{\prime}\) can be chosen to be in the very likely case where they separate the points on each structuring line of a sector into mostly distinct cells. For each structuring line \(l_{s}\), there are points \(p_{1},\ldots,p_{L}\) from \(\mathcal{P}_{p}\) with \(L\gtrsim N^{\frac{1}{3}-}\) so that between each consecutive pair \(p_{k},p_{k+1}\) there are \(\gtrsim N^{\frac{2}{3}-}\) lines of \(\mathcal{L}\) which cross the line \(l_{s}\). Applying Lemma 3.1, we get a bound of \(N^{-100}\) on the probability that any \(N^{\frac{2}{3}+}\) consecutive lines of \(\mathcal{L}^{\prime}\) in the order they intersect \(l_{s}\) don't include one of the \(l^{\prime}\)s. We can select the \(l^{\prime}\)'s so that none of these events happen. Thus each cell contains at most \(N^{+}\) points per structuring line. Since there are \(\lesssim N^{\frac{1}{3}+}\) structuring lines in each sector, our cell decomposition is point weighted.
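A back-of-the-envelope version of the estimate that Lemma 3.1 makes precise (treating the \(l^{\prime}\)'s as \(K\sim N^{\frac{1}{3}}\) independent uniform draws from \(\mathcal{L}\)): the probability that a fixed block of \(N^{\frac{2}{3}+}\) consecutive lines avoids every selected line is about
\[\left(1-\frac{N^{\frac{2}{3}+}}{N}\right)^{N^{\frac{1}{3}}}\leq\exp\left(-N^{\frac{1}{3}}\cdot N^{-\frac{1}{3}+}\right)=\exp\left(-N^{+}\right)\ll N^{-100},\]
and a union bound over the \(\lesssim N^{2}\) relevant blocks keeps the total failure probability negligible.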
To get the claim about lines \(l\) with \(N^{\frac{2}{3}+\alpha}\) crossings simply apply Lemma 3.2 to control the probability that more than \(10\) times the expected number of lines crossing \(l\) are selected as random lines.
When trying to show that a typical line in a configuration \((\mathcal{L},\mathcal{P})\) will have incidences in most of the sectors of an organizing point, the enemy case is lines which take too many points in a given sector. A typical line has \(N^{2/3}\) crossings between each pair of points. So if we show that lines with too many crossings in a sector do not contribute significantly to the total number of incidences in that sector, then most incidences come from lines taking \(\lesssim N^{+}\) points in that sector and we win.
**Theorem 3.16**.: _There exist \(\gtrsim N^{1-}\) organizing points \(p\) in \((\mathcal{L},\mathcal{P})\) with \(\sim N^{\frac{1}{3}\pm}\) sectors and some integer \(k\) such that for every sector \(\gtrsim N^{1-}\) of its incidences come from lines taking fewer than \(N^{k\epsilon}\) points in that sector._
Proof.: We choose a point \(p\) to be the center of our bush and use the bush construction \((\mathcal{L}^{\prime},\mathcal{P}^{\prime}_{p})\) from Theorem 3.15. For a large enough constant \(k_{1}\), we toss out sectors that have \(\gtrsim N^{\frac{5}{3}+k_{1}\epsilon}\) line-line crossings. There are at most \(N^{\frac{1}{3}-k_{1}\epsilon}\) such sectors because there are only \(\lesssim N^{2}\) line-line crossings in total. So we still kept \(\sim N^{\frac{1}{3}\pm}\) sectors which each have \(\sim N^{\frac{5}{3}\pm}\) line-line crossings and each sector contributes \(\gtrsim N^{1-}\) incidences. So this refinement still yields a configuration \((\mathcal{L}^{\prime},\mathcal{P}^{\prime})\). Similarly, our cell decomposition from Theorem 3.15 has \(\lesssim N^{\frac{2}{3}}\) cells, so we can remove any sectors with more than \(N^{\frac{1}{3}+k_{1}\epsilon}\) cells.
By Theorem 2.7 we may choose a subset \(J(\mathcal{L}^{\prime},\mathcal{P}^{\prime})\subset I(\mathcal{L}^{\prime}, \mathcal{P}^{\prime})\) such that \(|J(\mathcal{L}^{\prime},\mathcal{P}^{\prime})|\gtrsim N^{\frac{4}{3}-}\) and every line has \(\lesssim N^{+}\) incidences from \(J(\mathcal{L}^{\prime},\mathcal{P}^{\prime})\) per cell.
**Definition 3.17** (Fast lines).: _We say that a line is \(\alpha-\)fast for a sector \(s\) if it has \(\sim N^{\frac{2}{3}+\alpha\pm}\) crossings with lines in \(\mathcal{L}\)._
This is the enemy case. Our goal is to show these do not contribute significantly to the number of incidences in the sector. Note that since a fast line crosses \(N^{\frac{2}{3}+\alpha\pm}\) lines of \(\mathcal{L}\) in \(s\), it must cross at least \(N^{\alpha-}\) of the randomly selected lines \(l^{\prime}\) and therefore must enter at least \(N^{\alpha-}\) cells. Similarly by Theorem 3.15 each \(\alpha\) fast line enters no more than \(N^{\alpha+}\) cells in \(s\).
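Heuristically, each line of \(\mathcal{L}\) is selected among the \(K\sim N^{\frac{1}{3}}\) random lines with probability about \(N^{\frac{1}{3}}/N=N^{-\frac{2}{3}}\), so an \(\alpha\)-fast line expects to meet
\[N^{\frac{2}{3}+\alpha\pm}\cdot N^{-\frac{2}{3}}=N^{\alpha\pm}\]
of the randomly selected lines; Lemma 3.1 and Lemma 3.2 control the deviations below and above this expectation, which is where the \(N^{\alpha-}\) and \(N^{\alpha+}\) bounds on the number of cells entered come from.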
**Definition 3.18** (Slow lines).: _We say that a line is slow for a sector \(s\) if it has \(\lesssim N^{\frac{2}{3}+k_{2}\epsilon\pm}\) crossings with lines in \(\mathcal{L}\)._
We want to show slow lines contribute \(\gtrsim N^{1-k_{2}\epsilon}\) incidences. Assume, for contradiction, that for some \(\alpha\) the \(\alpha\)-fast lines contribute \(\gtrsim N^{1-\frac{\alpha}{10^{10}}}\) incidences. (If no such \(\alpha\) exists, the sum of \(\alpha\)-fast contributions is \(N\sum_{\alpha=k_{2}}^{1/(3\epsilon)}N^{-\alpha\epsilon/10^{10}}\sim N^{1-k_{2}\epsilon/10^{10}}\), so by choosing the constant \(k_{2}\) large enough, the \(\alpha\)-fast lines contribute a vanishingly small fraction of the incidences and we conclude that slow lines contribute most incidences, as desired.) Note that there are \(N^{\frac{5}{3}}\) line-line crossings in \(s\), so there are \(\lesssim N^{1-\alpha}\) \(\alpha\)-fast lines, each with \(\sim N^{\frac{2}{3}+\alpha\pm}\) crossings.
If the \(\alpha\)-fast lines contribute \(\gtrsim N^{1-\frac{\alpha}{10^{10}}}\) incidences in \(s\), then there must be \(\gtrsim N^{\frac{1}{3}-\frac{\alpha}{10^{10}}}\) cells in which the \(\alpha\)-fast lines contribute \(N^{\frac{2}{3}-\frac{\alpha}{10^{10}}}\) incidences each. Since each structuring line \(l_{s}\) in the sector \(s\) passes through \(N^{\frac{1}{3}-}\) of the \(N^{\frac{1}{3}+}\) cells in \(s\), it must be that there is some structuring line \(l_{s}\) in the sector so that the \(\alpha\)-fast lines contribute \(N^{\frac{2}{3}-\frac{\alpha}{10^{10}}}\) incidences in each of \(N^{\frac{1}{3}-\frac{\alpha}{10^{10}}-}\) cells through which \(l_{s}\) passes. The cells through which \(l_{s}\) passes are ordered according to their intersections with \(l_{s}\), and each \(\alpha\)-fast line passes through a set of \(N^{\alpha\pm}\) of these. Since there are \(\lesssim N^{1-\alpha+}\) \(\alpha\)-fast lines, we can find an interval of length \(N^{\alpha+}\) of the cells that intersect \(l_{s}\) with only \(N^{\frac{2}{3}+}\) \(\alpha\)-fast lines (belonging to a set we'll call \(\mathcal{L}_{fast}\)) intersecting them, with the property that there are \(N^{\alpha(1-10^{-10})-}\) cells where the lines of \(\mathcal{L}_{fast}\) make \(N^{\frac{2}{3}-\alpha(10^{-10})-}\) incidences.
We need a quick lemma affirming the union of an interval of cells is in fact a convex set.
**Lemma 3.19**.: _Let \(l_{s}\) be a structuring line and \(C_{1},\ldots,C_{k}\) be consecutive polygonal cells through which the line \(l_{s}\) passes. Then_
\[R=\bigcup_{t=1}^{k}C_{t},\]
_is a convex region._
Proof.: The cells in the sector \(s\) are bounded by the two lines that bound the sector and the \(l^{\prime}\)'s which we selected randomly. The cells \(C_{1},\ldots,C_{k}\) are divided by consecutive \(l^{\prime}\)'s intersecting \(l_{s}\), namely \(l^{\prime}_{j_{1}},\ldots,l^{\prime}_{j_{k-1}}\). Simply removing \(l^{\prime}_{j_{1}},\ldots,l^{\prime}_{j_{k-1}}\) from our list of bounding lines, \(R\) is the cell defined by the remaining lines which contains each \(C_{j}\). Therefore it is convex.
We let \(R\) be the union of the \(N^{\alpha+}\) consecutive cells above. There are \(N^{\frac{2}{3}+}\) many \(\alpha\)-fast lines going through \(R\), each making \(N^{\alpha(1-2(10^{-10}))-}\) incidences with the \(N^{\frac{1}{3}+\alpha+}\) many points in \(R\). We apply Corollary 3.11 to conclude that there must be \(N^{\frac{4}{3}+\alpha(1-6(10^{-10}))-}\) crossings among \(\alpha\)-fast lines in \(R\), but this is a contradiction because there are at most \(N^{\frac{4}{3}+}\) many such crossings.
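Spelled out, Corollary 3.11 is applied with \(e\gtrsim N^{\frac{2}{3}+\alpha(1-2\cdot 10^{-10})-}\) edges (consecutive incident points on the \(\alpha\)-fast lines inside \(R\)) and \(v\lesssim N^{\frac{1}{3}+\alpha+}\) vertices, so that
\[\frac{e^{3}}{v^{2}}\gtrsim\frac{\left(N^{\frac{2}{3}+\alpha(1-2\cdot 10^{-10})-}\right)^{3}}{\left(N^{\frac{1}{3}+\alpha+}\right)^{2}}=N^{\frac{4}{3}+\alpha(1-6\cdot 10^{-10})-},\]
which exceeds the at most \(\left(N^{\frac{2}{3}+}\right)^{2}=N^{\frac{4}{3}+}\) pairwise crossings available among the lines of \(\mathcal{L}_{fast}\) inside the convex region \(R\) once \(\alpha\) is large compared with \(\epsilon\).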
It is worth emphasizing that Theorem 3.16 says that most (up to \(\epsilon\) loss in the exponent) lines will take \(O(1)\) points from each cell they cross. In other words, points on a line are evenly spaced among cells. This is the linchpin fact in our main result (see Section 4.1).
**Theorem 3.20**.: _Let \((\mathcal{L},\mathcal{P})\) be an extremal configuration. Then there are two organizing points \(p_{1}\) and \(p_{2}\), and two bushes \(l_{1,1},\ldots,l_{M_{1},1}\) incident to \(p_{1}\) and \(l_{1,2},\ldots,l_{M_{2},2}\) incident to \(p_{2}\) with \(M_{1},M_{2}\gtrsim N^{\frac{1}{3}-}\) and a refinement \(\mathcal{P}^{\prime}\subset\mathcal{P}\) so that the two bushes break \(\mathcal{P}^{\prime}\) into \(M_{1}M_{2}\) cells which are point
weighted and so that each sector \(s\) of the bush at \(p_{1}\) having at least \(N^{\frac{2}{3}-}\) points of \(\mathcal{P}^{\prime}\) has at least \(N^{1-}\) lines of \(\mathcal{L}\) incident to at least one point of the sector._
Proof.: Applying Theorem 3.13, we find a point \(p_{1}\) with bush \(l_{1,1},\ldots,l_{M_{1},1}\) and structuring lines holding in total \(N^{1-}\) points. We call this set of points \(\mathcal{P}_{1}\). Then \((\mathcal{L},\mathcal{P}_{1})\) is an extremal configuration. We apply Theorem 3.16 to find a refinement of the set of incidences \(J(\mathcal{L},\mathcal{P}_{1})\) with \(|J(\mathcal{L},\mathcal{P}_{1})|\gtrsim N^{\frac{4}{3}-}\) so that each line of \(\mathcal{L}\) takes only \(N^{+}\) incidences of \(J(\mathcal{L},\mathcal{P}_{1})\) in each sector of the bush at \(p_{1}\).
We restrict to those points in \(\mathcal{P}_{1}\) which are at least \(N^{\frac{1}{3}-}\) rich in incidences of \(J(\mathcal{L},\mathcal{P}_{1})\). We refer to that set as \(\mathcal{P}_{2}\). The pair \((\mathcal{L},\mathcal{P}_{2})\) is an extremal configuration. We refine the set of lines to \(\mathcal{L}_{1}\), the lines which take \(N^{+}\) incidences in \(N^{\frac{1}{3}-}\) sectors of the bush at \(p_{1}\). We let \(\mathcal{P}_{3}\) be the set of points of \(\mathcal{P}_{2}\) that are \(N^{\frac{1}{3}-}\) rich with respect to lines of \(\mathcal{L}_{1}\) incident to only \(N^{+}\) other points in the same sector. The pair \((\mathcal{L}_{1},\mathcal{P}_{3})\) is an extremal configuration. Pick an organizing point \(p_{2}\) with bush \(l_{1,2},\ldots,l_{M_{2},2}\) and structuring lines for each sector of the bush from \(\mathcal{L}_{1}\). From the structuring lines keep only the points which occur in groups of at most \(N^{+}\) in the first bush. Call this set of points \(\mathcal{P}^{\prime}\). Each structuring line has at most \(N^{+}\) points in each sector of the first bush. But since there are only at most \(N^{\frac{1}{3}+}\) structuring lines in each sector of the second bush, the cell decomposition given by the two bushes is point weighted.
Since each sector of the first bush has \(N^{1-}\) incidences coming from lines making at most \(N^{+}\) incidences in the sector, there must be \(N^{1-}\) lines incident to points of the sector.
## 4 A proto inverse Szemeredi Trotter theorem
A proto inverse Szemeredi Trotter theorem will be a recipe for constructing a configuration of points and lines which may not terminate or may not yield an extremal configuration but so that a large portion of every extremal example can be obtained using this recipe.
When considering such a recipe, an important piece of information is how many parameters one needs to specify to obtain an instantiation of the recipe.
There is a trivial recipe taking \(O(N)\) parameters. Namely use \(4N\) parameters to completely specify a set of \(N\) lines, \(\mathcal{L}\) and a set of \(N\) points \(\mathcal{P}\). Examine the set of incidences between these lines and points \(I(\mathcal{L},\mathcal{P})\). If it happens to be that \(|I(\mathcal{L},\mathcal{P})|\gtrsim N^{\frac{4}{3}-}\), then we have constructed an extremal configuration and in fact, every extremal configuration can be constructed in this way. This recipe and its proto inverse Szemeredi Trotter theorem amount to really just the definition of extremal configuration, and nothing has been gained.
Now, however, we describe a recipe using just \(O(N^{\frac{1}{3}})\) parameters. Our recipe will be based on a cell decomposition consisting of a grid of axis parallel rectangles. \(a_{1}<a_{2}<\cdots<a_{N^{\frac{1}{3}}}\) will be real numbers representing the \(x\) coordinates of the grid. \(b_{1}<b_{2}<\cdots<b_{N^{\frac{1}{3}}}\) will be the \(y\) coordinates of the grid. The final ingredients will be a set of lines \(l_{s,1}\ldots l_{s,N^{\frac{1}{3}}}\) which will serve as the structuring lines for the strip of cells between \(x\) coordinates \(a_{1}\) and \(a_{2}\).
We declare the recipe to have failed if the lines \(l_{s}\) do not have at least \(N^{\frac{2}{3}-}\) crossings with \(x\)-coordinate between \(a_{1}\) and \(a_{2}\). Otherwise, we declare the recipe to have failed unless there are at least \(N^{\frac{1}{3}-}\) values of \(j\) so that at least \(N^{\frac{1}{3}-}\) and no more than \(N^{\frac{1}{3}+}\) of the crossings with \(x\)-coordinates between \(a_{1}\) and \(a_{2}\) have \(y\) coordinates between \(b_{j}\) and \(b_{j+1}\). Otherwise, we define these crossings to be the points of the cell \([a_{1},a_{2}]\times[b_{j},b_{j+1}]\). For each cell, we find all lines which are incident to two points. We declare this set of lines to be \(\mathcal{L}\). We say that the recipe has failed unless there are at least \(N^{\frac{2}{3}-}\) choices of \((j,k)\) so that \(N^{\frac{2}{3}+}\) lines of \(\mathcal{L}\) cross the cell \([a_{j},a_{j+1}]\times[b_{k},b_{k+1}]\). Otherwise, we say that the recipe has failed unless for at least \(N^{\frac{2}{3}-}\) of these choices \((j,k)\) the lines going through
the \((j,k)\)th cell are structured. If they are structured, we refer to the structuring points as the points of the \((j,k)\)th cell. And combining all of these structuring points, we get the set \(\mathcal{P}\) and we declare that the construction has succeeded. In this case, \((\mathcal{L},\mathcal{P})\) is an extremal configuration. We denote the output of this recipe as a function of its inputs as \((\mathcal{L}(a,b,l_{s}),\mathcal{P}(a,b,l_{s}))\).
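As a rough tally (counting a line in the plane as two real parameters), the recipe above uses
\[\underbrace{N^{\frac{1}{3}}}_{a_{j}}+\underbrace{N^{\frac{1}{3}}}_{b_{j}}+\underbrace{2N^{\frac{1}{3}}}_{l_{s,j}}=O\!\left(N^{\frac{1}{3}}\right)\]
real parameters, compared with the \(4N\) parameters of the trivial recipe described above.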
**Theorem 4.1**.: _Let \((\mathcal{L},\mathcal{P})\) be an extremal configuration. Then there is a choice of the \(O(N^{\frac{1}{3}})\) parameters \((a,b,l_{s})\) and a projective transformation \(P\) so that \((P(\mathcal{L})\cap\mathcal{L}(a,b,l_{s}),P(\mathcal{P})\cap\mathcal{P}(a,b,l_ {s}))\) is an extremal configuration (of \(N^{1-}\) lines \(N^{1-}\) points and \(N^{\frac{4}{3}-}\) incidences.)_
Proof.: This is essentially a consequence of Theorem 3.20 and its proof. We will choose \(P\) to be a projective transformation sending the points \(p_{1}\) and \(p_{2}\) to the points at infinity corresponding to the \(x\) direction and \(y\) direction. We find a sector through \(p_{1}\) through which \(N^{1-}\) lines make \(N^{1-}\) incidences with the structured points and making at least two incidences per cell. (Possibly by removing all but \(N^{-}\) density of elements of each bush.) Then we choose \(a,b\) so that they give all the cells of the double bush construction with \(a_{1},a_{2}\) corresponding to the sector chosen above and we let \(l_{s}\) be the structuring lines for the sector.
**Theorem 4.2**.: _Let \((\mathcal{L},\mathcal{P})\) be an extremal configuration. Then there are \(N^{1-}\) organizing lines such that any pair of them organizes a subconfiguration \((\mathcal{L}^{\prime},\mathcal{P})\), where a **strip** of lines is the set of \(N^{2/3\pm}\) lines in \(\mathcal{L}^{\prime}\) that intersect an organizing line in a given interval between adjacent points on that organizing line, and cells are intersections of strips. Furthermore each of the strips is structured by \(N^{1/3\pm}\) structuring points._
Proof.: This is the dual case of Theorem 3.20.
**Theorem 4.3** (Mixing).: _Let \((\mathcal{L},\mathcal{P})\) be an extremal configuration with the cell decomposition given by Theorem 4.1. Then \(\gtrsim N^{\frac{4}{3}-}\) pairs of cells share \(\gtrsim N^{\frac{1}{3}-}\) lines which take at least two incidences of \(J^{\prime}\) in each of the two cells._
Proof.: Assume we have the cell decomposition from Theorem 4.1. An application of Theorem 4.2 gives us that \(\gtrsim N^{2-}\) pairs of lines are organizing.
Consider a pair of organizing lines \(l_{1},l_{2}\). We take a refinement of the line set such that each line has at least two incidences in a cell that \(l_{1}\) has incidences in and at least two incidences in a cell that \(l_{2}\) has incidences in. Note this must yield an extremal refinement for \(\gtrsim N^{2-}\) pairs of organizing lines \(l_{1},l_{2}\) because each of the \(\gtrsim N^{1-}\) organizing lines in \(\mathcal{L}^{\prime}\) has at least \(2\) incidences in \(\gtrsim N^{\frac{1}{3}-}\) cells, each of which has \(\gtrsim N^{\frac{2}{3}-}\) lines going through it, each taking at least two incidences in that cell. So each organizing line contributes \(\gtrsim N^{1-}\) lines (out of a total of \(<N\) lines), so \(\gtrsim N^{1-}\) other organizing lines must share \(\gtrsim N^{1-}\) regular lines that take at least two points in at least one cell of each of the two organizing lines.
From our initial application of Theorem 4.2 we know that \(\gtrsim N^{\frac{2}{3}-}\) pairs of intervals between adjacent points on \(l_{1}\) and adjacent points on \(l_{2}\) share \(\gtrsim N^{\frac{1}{3}-}\) lines. (Note this is still true after our refinement because we kept \(\gtrsim N^{1-}\) lines). Furthermore lines take \(\lesssim N^{+}\) incidences in a single cell so \(\gtrsim N^{\frac{2}{3}-}\) pairs of cells that \(l_{1}\) and \(l_{2}\) take incidences in share \(\gtrsim N^{\frac{1}{3}-}\) lines out of \(\lesssim N^{\frac{2}{3}}\) total pairs of cells.
Since this result holds for any generic pair of \(\gtrsim N^{2-}\) pairs of organizing lines, we conclude that any generic pair of cells must share \(\gtrsim N^{\frac{1}{3}-}\) lines.
So we conclude that our two-bush cell decomposition can be chosen to be constructed using two sets of \(\sim N^{\frac{1}{3}\pm}\) parallel lines (after a projective transformation) which form a grid. \(\gtrsim N^{\frac{2}{3}-}\) of the
rectangles in the grid are cells which contain \(\sim N^{\frac{1}{3}\pm}\) points and \(\sim N^{\frac{2}{3}\pm}\) lines which take about 2 points in the cell. Finally we have the mixing property that generic pairs of cells share \(\sim N^{\frac{1}{3}\pm}\) lines.
## 5 Cell decompositions for extremal configurations in the unit distances problem
We adapt the argument from Sections 2 to 4 to the incidence problem between points and unit circles centered at the points, which is equivalent to the unit distance problem.
Section 2 only contains counting arguments that rely on the property that two lines intersect in at most one point. Since two unit circles intersect in at most two points, the counting arguments all still hold in the unit circle case. That said, to tackle some complications due to circle cells lacking convexity, we will slightly modify the chosen number of cells and points per cell. Furthermore the first few probability results in Section 3 also rely on counting arguments so also hold for unit circles.
### Cell Decomposition Preliminaries
As before, the unit circle cell decompositions that currently exist in the literature [3] do not serve our purpose because they add additional boundary components to the cells which are not part of
Figure 4: Two organizing lines \(l_{1},l_{2}\) go through several cells including cells \(\mathcal{C}_{1},\mathcal{C}_{2}\) respectively. These cells are shown to share many lines which each take two points in \(\mathcal{C}_{1}\) and in \(\mathcal{C}_{2}\).
our original set of circles. A main ingredient in our proof will be the theorem of Clarkson et al. [3] on the complexity of cell decompositions given by general families of unit circles.
**Theorem 5.1**.: _Let \(\mathcal{C}\) be a set of \(r\) unit circles. It divides \(\mathbb{R}^{2}\) into \(O(r^{2})\) connected components called **cells**. Let \(\mathcal{S}\) be any subcollection of \(m\) of these cells. Then the total number of edges of cells in \(\mathcal{S}\) is \(O(r^{\frac{2}{3}}m^{\frac{2}{3}}\beta(r)+r)\) where \(\beta(r)\) is the inverse of the Ackermann function._
Note that the inverse Ackermann function grows notoriously slowly (much slower than \(\log(n)\)) [1].
**Corollary 5.2**.: _Let \(\mathcal{L}_{r}\) be any set of \(r\) unit circles and let \(\mathcal{C}\) be the set of cells which they define having \(>s\) and \(\leq 2s\) edges. Then if \(s\leq r^{\frac{1}{2}}/\beta(r)^{3/2}\), we have_
\[|\mathcal{C}|\lesssim\beta(r)^{3}\frac{r^{2}}{s^{3}},\]
_and if \(s\geq r^{\frac{1}{2}}/\beta(r)^{3/2}\)_
\[|\mathcal{C}|\lesssim\frac{r}{s}.\]
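A minimal sketch of how the corollary follows from Theorem 5.1: if \(m\) cells each have more than \(s\) edges, then \(ms\lesssim r^{\frac{2}{3}}m^{\frac{2}{3}}\beta(r)+r\), so
\[m\lesssim\max\left(\beta(r)^{3}\frac{r^{2}}{s^{3}},\ \frac{r}{s}\right),\]
and the two cases of the corollary record which of the two terms dominates (the threshold in \(s\) marks the transition, up to powers of \(\beta(r)\)).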
To extract structure from our configurations we will choose to throw out undesirable points and lines, keeping only those that enjoy the desirable properties.
**Definition 5.3**.: _Given an extremal partial unit circle configuration \((\mathcal{L},\mathcal{P},\mathcal{J}(\mathcal{L},\mathcal{P}))\) we say \((\mathcal{L}^{\prime},\mathcal{P}^{\prime},\mathcal{J}^{\prime}(\mathcal{L}^{\prime},\mathcal{P}^{\prime}))\) is a **refinement** if \(|\mathcal{L}|\sim|\mathcal{L}^{\prime}|\) and \(|\mathcal{P}|\sim|\mathcal{P}^{\prime}|\) and \(|\mathcal{J}(\mathcal{L},\mathcal{P})|\sim|\mathcal{J}^{\prime}(\mathcal{L}^{\prime},\mathcal{P}^{\prime})|\)._
Figure 5: A two bush cell decomposition where a pair of far away cells are shown to share many (\(\sim N^{\frac{1}{3}\pm}\)) lines.
We also exclude certain very low probability events. For each circle \(c\in\mathcal{L}\) we establish an ordering on \(\mathcal{L}_{c}\subset\mathcal{L}\), the set of circles that intersect \(c\) in two points, based on the position of their intersection with \(c\). We do this by choosing a reference point on \(c\) and a direction (clockwise or counterclockwise) and then ordering the circles of \(\mathcal{L}_{c}\) by the order of their first point of intersection with \(c\) starting at the chosen reference point and going in the chosen direction. The ordering is ill-defined when multiple circles are concurrent at a point of \(c\), but we order concurrent circles arbitrarily. For each choice of \(c^{\prime}\in\mathcal{L}\), with \(c^{\prime}\neq c\) we consider the order induced by \(c\) with reference point equal to a point of intersection of \(c\) and \(c^{\prime}\). We exclude the event that none of the \(\frac{CN\log N}{r}\) consecutive circles following \(c^{\prime}\) in this order are selected for \(\mathcal{L}_{r}\). This is a list of \(O(N^{2})\) events governed by Lemma 3.1.
At each point \(p\) of intersection of two circles of \(\mathcal{L}\), the vertical line going through \(p\) induces an order on the circle of \(\mathcal{L}\) which is just the order of the first points of intersection between circles and the vertical line (either in ascending or descending order). We would like to exclude the case that none of the first \(\frac{CN\log N}{r}\) circles above \(p\) are chosen for \(\mathcal{L}_{r}\) and none of the first \(\frac{CN\log N}{r}\) circles below \(p\) are chosen for \(\mathcal{L}_{r}\). This is a list of \(O(N^{2})\) events. So by Lemma 3.1 a uniform random selection of circles yields a cell decomposition that excludes the above undesirable events with probability \(1-O(N^{-98})\). We call a selection of unit circles which excludes the above events **acceptable**.
**Lemma 5.4**.: _Let \(K\) be a cell coming from an acceptable selection of unit circles \(\mathcal{L}_{N^{\frac{1}{3}-\alpha}}\) where \(\alpha>0\) is very small. (But \(\alpha\) large compared to the current value of \(\epsilon\).) Suppose \(K\) has \(s\) sides. Then \(K\) contains at most \(sN^{\frac{1}{3}+2\alpha+}\) many \(N^{\frac{1}{3}-}\)-rich points of \(\mathcal{P}\)._
Proof.: Following the construction in [3], at every intersection of pairs of circles in \(\mathcal{L}_{N^{\frac{1}{3}-\alpha}}\) draw the vertical segment going up until it hits the next circle from \(\mathcal{L}_{N^{\frac{1}{3}-\alpha}}\) and the vertical segment going down until it hits the next circle from \(\mathcal{L}_{N^{\frac{1}{3}-\alpha}}\). The connected components bounded by circles from \(\mathcal{L}_{N^{\frac{1}{3}-\alpha}}\) and vertical segments are called **funnels**. Each funnel is bounded by at most two circle arcs (top and bottom) and two vertical segments (left and right). Since we chose an acceptable set of unit circles \(\mathcal{L}_{N^{\frac{1}{3}-\alpha}}\) each boundary element has at most \(\frac{N}{N^{\frac{1}{3}-\alpha}}\log(N)=N^{\frac{2}{3}+\alpha}\log(N)\) circles entering it. Suppose there are \(P\) points of \(\mathcal{P}\) in the funnel \(F\). Then there are at least \(PN^{\frac{1}{3}-}\) incidences in \(F\). The Szemeredi Trotter theorem guarantees that \(P\leq N^{\frac{1}{3}+2\alpha+}\).
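Quantitatively, writing \(m\lesssim N^{\frac{2}{3}+\alpha}\log(N)\) for the number of circles entering the funnel \(F\) and using the standard unit circle incidence bound \(I\lesssim(Pm)^{\frac{2}{3}}+P+m\), the last step reads
\[P\,N^{\frac{1}{3}-}\lesssim\left(P\,N^{\frac{2}{3}+\alpha+}\right)^{\frac{2}{3}}+P+N^{\frac{2}{3}+\alpha+},\]
and in each case \(P\lesssim N^{\frac{1}{3}+2\alpha+}\) follows (the middle term can never dominate).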
If \(K\) has \(s\) sides, then it has at most \(s\) funnels so the total number of points in \(K\) is at most \(sN^{\frac{1}{3}+2\alpha+}\) points of \(\mathcal{P}\) which was to be shown.
Finally, we combine Corollary 5.2 with Lemma 5.4 to bound the number of rich points of \(\mathcal{P}\) contained in cells with between \(s\) and \(2s\) sides. If \(s\leq r^{\frac{1}{2}}/\beta(r)^{\frac{3}{2}}\) with \(r\leq N^{\frac{1}{3}-\alpha}\), we obtain the bound \(\beta(r)^{3}\frac{r^{2}}{s^{3}}sN^{\frac{1}{3}+2\alpha+}\leq\frac{N^{1+}}{s^{ 2}}\). If \(s\geq r^{\frac{1}{2}}/\beta(r)^{\frac{3}{2}}\), we obtain the bound \(\frac{r}{s}sN^{\frac{1}{3}+2\alpha+}\leq N^{\frac{2}{3}+\alpha+}\). As long as \(s\) is much bigger than \(N^{+}\), we do not capture a significant number of rich points. We conclude the following theorem.
**Theorem 5.5**.: _Let \((\mathcal{L},\mathcal{P})\) be an extremal configuration with each point of \(\mathcal{P}\) being at least \(N^{\frac{1}{3}-}\) rich. Then for each acceptable random selection \(\mathcal{L}_{N^{\frac{1}{3}-\alpha}}\), where \(\alpha>0\) is very small, there is a refinement \(\mathcal{P}^{\prime}\subset\mathcal{P}\) with \(|\mathcal{P}^{\prime}|\geq\frac{1}{2}|\mathcal{P}|\) so that no point of \(\mathcal{P}^{\prime}\) is contained in a cell with more than \(N^{+}\) sides, and in light of Lemma 5.4, each such cell has at most \(N^{\frac{1}{3}+2\alpha+}\) points of \(\mathcal{P}^{\prime}\) so that we have obtained a point weighted decomposition for the extremal configuration \((\mathcal{L},\mathcal{P}^{\prime})\)_
**Theorem 5.6**.: _Let \((\mathcal{L},\mathcal{P})\) be an extremal unit circle configuration. Specifically let \(|I(\mathcal{L},\mathcal{P})|=N^{\frac{4}{3}-\epsilon}\) with \(\epsilon\) fixed. Let \(C_{1},\ldots,C_{r^{2}}\) be a point-weighted cell decomposition for \((\mathcal{L},\mathcal{P})\) with \(N^{\frac{1}{3}-5\epsilon}\leq r\leq N^{\frac{1}{3}-4\epsilon}\). Then there is a set of incidences \(J(\mathcal{L},\mathcal{P})\subset I(\mathcal{L},\mathcal{P})\) so that \(|J(\mathcal{L},\mathcal{P})|\gtrsim N^{\frac{4}{3}-\epsilon}\), but for every circle \(L\) and cell \(C\) for which there is \(P\in C\) with \((L,P)\in J(\mathcal{L},\mathcal{P})\), we have that_
\[2\leq|I(\{L\},C)|\lesssim N^{+}\]
_and there exists some other point \(P^{\prime}\) in \(C\) such that \(P\) and \(P^{\prime}\) are adjacent on \(L\) and the circle arc \((P,P^{\prime})\) is entirely contained in \(C\)._
Proof.: The way this proof will work is that we will remove from \(I(\mathcal{L},\mathcal{P})\) all incidences that would violate the conditions for \(J(\mathcal{L},\mathcal{P})\) and observe that we have removed less than half of the set \(I(\mathcal{L},\mathcal{P})\).
For any circle \(L\) and cell \(C\) for which \(L\) has fewer than \(N^{2\epsilon}\) incidences, we remove these incidences and we have removed at most \(r|\mathcal{L}|N^{2\epsilon}\lesssim N^{\frac{4}{3}-2\epsilon}\) incidences since each circle has incidences with at most \(r\) cells. By Theorem 5.5 each cell has at most \(N^{+}\sim N^{\epsilon}\) sides. Furthermore, every circle intersects a cell side in at most two points. Thus for every circle \(L\) and cell \(C\), \(L\) has at most \(N^{\epsilon}\) disjoint connected circle arcs contained in \(C\). By the previous refinement, if \(L\) has an incidence with a point in \(C\) in \(J(\mathcal{L},\mathcal{P})\) then \(L\) has at least \(N^{2\epsilon}\) incidences in \(C\), so by the pigeonhole principle, at least two incidences with points \(P\) and \(P^{\prime}\) must occur on the same connected circle arc in \(C\).
For the incidences which remain, for any cell \(C\), all circles are incident to at least two points of the cell (which essentially determine the circle) so there are at most \(|C|^{2}\) such circles. We can remove all incidences from cells that do not contribute at least \(\gtrsim N^{-}|C|^{2}\) incidences. We note that especially rich circles cannot contribute most of the incidences. The number of circles passing through \(k\) of the points is at most \(\frac{|C|^{2}}{k^{3}}\) contributing at most \(\frac{|C|^{2}}{k^{2}}\) incidences in \(C\) (this follows from Szemeredi-Trotter). By Theorem 5.5 we have \(|C|\lesssim N^{\frac{1}{3}+2(4\epsilon)+}\) since \(\alpha<4\epsilon\) so the total number of \(k\) rich incidences is at most \(r^{2}\frac{|C|^{2}}{k^{2}}\lesssim(N^{\frac{1}{3}-4\epsilon+\frac{1}{3}+2(4\epsilon)+}/k)^{2}=N^{\frac{4}{3}+8\epsilon}/k^{2}\). So we may remove the incidences coming from circles which are \(k\)-rich in a cell for \(k\gtrsim N^{5\epsilon}\). Thus \(N^{2\epsilon}\lesssim|I(\{L\},C)|\lesssim N^{5\epsilon}\).
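For concreteness, summing the last bound dyadically over \(k\gtrsim N^{5\epsilon}\) shows that the incidences removed in this final step number at most
\[\sum_{j\geq 0}\frac{N^{\frac{4}{3}+8\epsilon}}{\left(2^{j}N^{5\epsilon}\right)^{2}}\lesssim N^{\frac{4}{3}-2\epsilon},\]
again a vanishing fraction of the \(N^{\frac{4}{3}-\epsilon}\) incidences of the extremal configuration.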
We obtain the following corollary.
**Corollary 5.7**.: _Let \((\mathcal{L},\mathcal{P})\) be an extremal unit circle configuration. Let \(C_{1},\ldots,C_{r^{2}}\) be a point-weighted cell decomposition for \((\mathcal{L},\mathcal{P})\) with \(r\sim N^{\frac{1}{3}-}\). Then there is a set \(\mathcal{C}\) of \(\gtrsim r^{2-}\) cells so that for each \(C\in\mathcal{C}\), there is a set of circles \(\mathcal{L}_{C}\) with_
\[|\mathcal{L}_{C}|\gtrsim|C|^{2-},\]
_and with each \(L\in\mathcal{L}_{C}\) incident to at least \(2\) but \(\lesssim N^{+}\) points in \(C\) such that for at least two of these points adjacent on \(L\), the circle arc between them is entirely contained in \(C\). Each set \(\mathcal{L}_{C}\) has density \(\gtrsim N^{-}\) in the set of lines intersecting two points in \(C\)._
### Crossing Numbers and Structuring Circles
As before, a key ingredient in proving this will be the crossing number inequality which we state here. This is nearly the same result as the standard crossing number inequality from Lemma 3.10 with the key difference that edges are unit circle arcs.
**Definition 5.8**.: _A crossing between unit circle arcs is a point in their intersection._
Note that a pair of circle arcs can have zero, one, or two crossings.
**Lemma 5.9**.: _[_10_]_ _Let \(\mathcal{P}\) be a set of \(n\) points in the plane and \(\mathcal{L}\) be a set of unit circles. Let \(G\) be the multigraph where \(\mathcal{P}\) is the vertex set and for every circle \(c\) in \(\mathcal{L}\) and pair of adjacent points \((p,p^{\prime})\) on \(c\) we add an edge between \(p\) and \(p^{\prime}\). Let \(e\) be the number of edges in \(G\). Then the number of crossings between edges is \(\gtrsim\frac{e^{3}}{n^{2}}\)._
We make a definition of a structured set of circles.
**Definition 5.10**.: _We say that a set \(\mathcal{L}_{1}\) of at least \(N^{\frac{2}{3}-}\) unit circles is **structured** if there is a set \(\mathcal{P}_{1}\) of at most \(N^{\frac{1}{3}+}\) points so that each circle of \(\mathcal{L}_{1}\) is incident to at least two points of \(\mathcal{P}_{1}\). We call this set of points **structuring**._
Note that since \(\lesssim N^{\frac{2}{3}+}\) circles go through at least two among \(\lesssim N^{\frac{1}{3}+}\) points, the structuring points essentially define the structured circles. Now we're ready to state our structuring theorem.
**Theorem 5.11**.: _Let \((\mathcal{L},\mathcal{P},\mathcal{J})\) be an extremal partial unit circle configuration. Then there exists a point weighted cell decomposition with \(\gtrsim N^{\frac{2}{3}-}\) cells and a refinement of the configuration \((\mathcal{L}^{\prime},\mathcal{P},\mathcal{J}^{\prime})\) so that for each circle \(c\in\mathcal{L}^{\prime}\) there are points \(p_{0},\ldots,p_{M}\) of \(\mathcal{P}\) with \((c,p_{j})\in\mathcal{J}^{\prime}\) and the \(p_{j}\)'s in order of their position on \(c\) and with \(M\gtrsim N^{\frac{1}{3}-}\) so that for each consecutive pair of points \(p_{2k},p_{2k+1}\), the circle arc bounded by \(p_{2k},p_{2k+1}\) is entirely contained in a cell, and there is a structured set of circles \(\mathcal{L}_{k}\) so that each \(c^{\prime}\) in \(\mathcal{L}_{k}\) intersects \(c\) in the open circle arc bounded by the points \(p_{2k}\) and \(p_{2k+1}\) exactly once. We say the circles in \(\mathcal{L}^{\prime}\)**organize**\(\mathcal{P}\)._
Note that we restrict our attention to crossings where the circle arcs share only one point in common instead of two.
**Definition 5.12**.: _We say two circle arcs share a **simple crossing** if they have exactly one point in their intersection. We say two circle arcs share a **double crossing** if they have exactly two points in their intersection. Similarly, two circles have a simple crossing if they each contain an arc bounded by two adjacent points such that these arcs share a simple crossing. Likewise for double crossing._
Proof.: First consider an extremal partial unit circle configuration \((\mathcal{L},\mathcal{P},\mathcal{J})\). We consider the point weighted cell decomposition \(C_{1},\ldots,C_{N^{\frac{2}{3}-}}\) guaranteed by Corollary 5.7. We obtain the refined unit circle incidence multigraph \(G\) where the vertex set is \(\mathcal{P}\) and for every pair of vertices, we add an edge if the corresponding points are adjacent on a unit circle, and the unit circle arc that they bound has length \(\lesssim N^{-\epsilon}\) and is entirely contained inside a cell, and such that each cell has \(\lesssim N^{\frac{2}{3}+}\) edges. By Corollary 5.7 we know each cell has \(\gtrsim N^{\frac{2}{3}-}\) edges of \(G\) and \(\lesssim N^{\frac{1}{3}+}\) points.
We know from Lemma 5.9 that there are \(\gtrsim N^{\frac{4}{3}-}\) crossings per cell. Now we must show that \(\gtrsim N^{\frac{4}{3}-}\) of them are _simple_ crossings.
**Lemma 5.13**.: _Every cell in a cell decomposition from Corollary 5.7 has \(\gtrsim N^{\frac{4}{3}-}\) simple crossings._
Proof.: Since there are \(\sim N^{\frac{2}{3}\pm}\) edges per cell and \(\gtrsim N^{\frac{4}{3}-}\) crossings, \(\gtrsim N^{\frac{2}{3}-}\) of the edges must have \(\gtrsim N^{\frac{2}{3}-}\) crossings each. Assume one of these edges does not contribute \(\gtrsim N^{\frac{2}{3}-}\) simple crossings. Then it must have \(\gtrsim N^{\frac{2}{3}-}\) double crossings. Recall all edges are circle arcs of length less than \(N^{-\epsilon}\), so double crossings correspond to circles being tangent with small error. Quantitatively, given a fixed edge that has double crossings with two other edges, the centers of the circles containing those two edges are at distance \(\lesssim N^{-\epsilon}\) from each other (small angle approximation).
Figure 6: Five structuring points structure the set of black unit circles. A generic pair of structuring points must define a circle in the structured set but not all the pairs must.
Then also by small angle approximation, the length of the circle arc spanning the two intersections between circles whose centers are at distance \(\lesssim N^{-\epsilon}\) is \(\sim 1\), i.e., much larger than \(N^{-\epsilon}\). Thus no edge can span both the crossings of a pair of circles with close centers. So if two edges form double crossings with a third edge, then the first two must form a simple crossing. Furthermore this simple crossing must be contained in the same cell since all edges are entirely contained in a cell.
So if one of these crossing-rich edges \(e\) in a cell does not contribute \(\gtrsim N^{\frac{2}{3}-}\) simple crossings, then every pair of edges that form double crossings with \(e\) must form a simple crossing with each other. But \(e\) is crossing-rich so it must have \(\gtrsim N^{\frac{2}{3}-}\) double crossings, which implies \(\gtrsim N^{\frac{4}{3}-}\) simple crossings in the cell. Thus every cell has \(\gtrsim N^{\frac{4}{3}-}\) simple crossings.
Each cell has \(\lesssim N^{\frac{1}{3}}\) points, so \(\lesssim N^{\frac{2}{3}}\) circle arcs between pairs of points. Dividing the number of simple crossings we obtained from the above lemma by the number of circle arcs, we get that there are \(\gtrsim N^{\frac{2}{3}-}\) circle arcs with \(\gtrsim N^{\frac{2}{3}-}\) simple crossings. We call these sector arcs because their \(\gtrsim N^{\frac{2}{3}-}\) simple crossings come from a set of \(\gtrsim N^{\frac{2}{3}-}\) structured circles which are structured by the set of points in the cell.
There are \(\gtrsim N^{\frac{2}{3}-}\) cells so \(\gtrsim N^{\frac{4}{3}-}\) sector arcs total. There are \(\lesssim N\) circles in the configuration each of which (under our refinements) has no more than one circle arc per cell so has \(\lesssim N^{\frac{1}{3}}\) sector
Figure 7: An organizing circle \(c\) has two circle arcs bounded by \(p_{0},p_{1}\) and \(p_{2},p_{3}\) respectively, where both arcs are crossed by a structured set of circles (dotted light gray with crossing arcs in solid black). The dual formulation is shown: the centers of the structured circles form a set of structured points (black) contained in the interior of the sectors defined below (hashed gray region bounded by dotted black circles which intersect at the organizing point \(p\): center of \(c\)).
arcs. Thus \(\gtrsim N^{1-}\) circles must have \(\gtrsim N^{\frac{1}{3}-}\) sector arcs.
We'd now like to take advantage of our result. Theorem 5.11 is a method of associating to each extremal unit circle configuration a refinement which is rather nicely parametrized. We will do this by applying point-circle duality to the result of Theorem 5.11. The result of the theorem gives us many circles \(c\) which are incident to particular sets of points \(p_{1},\dots,p_{M}\) with \(M\gtrsim N^{\frac{1}{3}-}\) so that we have \(\gtrsim N^{\frac{2}{3}-}\) circles intersecting \(c\) between adjacent points which are structured. We use point-circle duality, which maps a point to the unit circle centered at it and vice versa, applying Theorem 5.11 to circles and points instead of points and circles.
**Definition 5.14**.: _Consider a point \(p\) with a bush \(c_{1},\dots,c_{M}\) of circles going through it ordered according to the direction of their tangent at \(p\). Let \(d_{1},\dots,d_{M}\) be the interiors of the circles \(c_{i}\). This bush of circles defines \(M\)**sectors** where we define the \(i^{\text{th}}\) sector \(s_{i}=(d_{i}\cap d_{i+1}^{c})\cup(d_{i}^{c}\cap d_{i+1})\) where \(d^{c}\) is the complement of the disk \(d\)._
We analyze what it means for a point to be in the interior of a circle in terms of circle crossings.
**Lemma 5.15**.: _A point \(p\) is in the sector \(s_{i}\) of an organizing circle \(c\) if and only if the unit circle centered at \(p\) crosses the arc of \(c\) between \(p_{i}\) and \(p_{i+1}\) exactly once._
Proof.: Given two circles \(c_{a}\) and \(c_{b}\) and points \(p_{1}\) and \(p_{2}\) on \(c_{a}\), the property that \(c_{b}\) crosses the circle arc of \(c_{a}\) contained between \(p_{1}\) and \(p_{2}\) exactly once is equivalent to saying that exactly one of \(p_{1}\) or \(p_{2}\) is in the interior of \(c_{b}\). Taking the dual, which exchanges unit circles with their center points and vice versa, this statement is equivalent to saying that \(p_{b}\), the center of \(c_{b}\), is in exactly one of the interiors of \(c_{1}\) or \(c_{2}\), the circles centered at \(p_{1}\) and \(p_{2}\) respectively. So the statement is dual to saying that the point \(p_{b}\) (dual to \(c_{b}\)) is in the sector \(s_{1}\) centered at the point \(p_{a}\). Taking \(c_{a}\) to be the organizing circle from the lemma statement, \(c_{b}\) to be the circle centered at \(p\) from the lemma statement, and \(p_{1}\), \(p_{2}\) to be \(p_{i}\) and \(p_{i+1}\) respectively, we obtain the statement.
This motivates our need to distinguish between _simple crossings_ and _double crossings_. We obtain the dual of Theorem 5.11:
**Theorem 5.16**.: _Let \((\mathcal{L},\mathcal{P},\mathcal{J})\) be an extremal partial unit circle configuration. Then there is a refinement \((\mathcal{L},\mathcal{P}^{\prime},\mathcal{J}^{\prime})\) so that for each point \(p\in\mathcal{P}^{\prime}\) there are circles \(c_{0},\dots,c_{M}\) of \(\mathcal{L}\) which are incident to \(p\) in \(\mathcal{J}^{\prime}\) in order of their direction and with \(M\gtrsim N^{\frac{1}{3}-}\) so that each sector \(s_{j}\) bounded by consecutive pairs of circles \(c_{2j},c_{2j+1}\) contains a structured set of \(\gtrsim N^{\frac{2}{3}-}\) points \(\mathcal{P}_{j}\). We say the points in \(\mathcal{P}^{\prime}\)**organize**\(\mathcal{L}\)._
Proof.: We apply Lemma 5.15 to Theorem 5.11.
For any point \(p\in\mathcal{P}^{\prime}\) with circles \(c_{1},\ldots c_{M}\) incident to it, there are \(\gtrsim N^{\frac{2}{3}-}\) points in each of the \(\gtrsim N^{\frac{1}{3}-}\) sectors for a total of \(N^{1-}\) points. We take this set of points as a refinement \(\mathcal{P}^{\prime}\) of our original set of points \(\mathcal{P}\). What is particularly pleasant about this structure is that each of the \(\gtrsim N^{\frac{2}{3}-}\) points of \(\mathcal{P}^{\prime}\) in a sector \(s_{j}\) lie on at least two of the \(\gtrsim N^{\frac{1}{3}-}\) structuring circles.
Structuring circles seem very odd precisely because all of the points on them lie in a particular sector \(s_{j}\). But this is not as odd as it seems. We see from the proof of Theorem 5.11 that the structuring circles for \(p\) are dual to the points in cells through which the circle dual to \(p\) passes. Every point lies in a cell, and every cell has \(N^{\frac{2}{3}\pm}\) circles going through it, so by duality the set of \(N^{\frac{1}{3}}\) structuring circles defines the \(N^{\frac{2}{3}\pm}\) points in a sector.
We're going to show that for any choice of organizing point \(p\in\mathcal{P}^{\prime}\) a typical circle \(c\) will have incidences in most sectors \(s_{j}\). To do this we first need to introduce a refinement of the configuration endowed with a cell decomposition whose boundary circles include the bush through \(p\).
**Theorem 5.17** (bush construction).: _For any extremal unit circle configuration \((\mathcal{L},\mathcal{P})\) there exists a subset \(\mathcal{P}^{\prime}\) of \(\gtrsim N^{1-}\) points in \(\mathcal{P}\) which are organizing with \(\sim N^{\frac{1}{3}\pm}\) sectors and a refined configuration \((\mathcal{L}^{\prime},\mathcal{P}^{\prime})\) such that the \(\gtrsim N^{1-}\) circles in \(\mathcal{L}^{\prime}\) organize \(\mathcal{P}^{\prime}\). Also for any \(p\in\mathcal{P}^{\prime}\) the refinement \((\mathcal{L}^{\prime},\mathcal{P}_{p})\) where \(\mathcal{P}_{p}\) are the points in \(\mathcal{P}\) organized by \(p\), has a refinement \((\mathcal{L}^{\prime},\mathcal{P}^{\prime}_{p})\) which is an extremal configuration with a point-weighted cell decomposition where each cell is contained in a sector. Moreover, any circle \(c\in\mathcal{L}^{\prime}\) which has exactly \(N^{\frac{2}{3}+\alpha}\) simple crossings with circles of \(\mathcal{L}^{\prime}\) within the sector \(s\) with \(\alpha>k\epsilon\) for \(k\) sufficiently large will not enter more than \(N^{\alpha+}\) cells in \(s\)._
Note as before we split our circles into circle arcs between adjacent points so we say two circles have a simple crossing if they each have a circle arc between a pair of adjacent points such that the intersection of these circle arcs is exactly one point.
Proof.: _Refinement properties:_ We apply Theorem 5.16 and obtain the refinement \((\mathcal{L},\mathcal{P}^{\prime})\) of organizing points. Then we apply Theorem 5.11 to \((\mathcal{L},\mathcal{P}^{\prime})\) obtaining the refinement \((\mathcal{L}^{\prime},\mathcal{P}^{\prime})\) where the circles in \(\mathcal{L}^{\prime}\) organize \(\mathcal{P}^{\prime}\). Now we have shown the first claim of the theorem.
_Cell decomposition:_ Let \(p\in\mathcal{P}^{\prime}\) and \(\mathcal{P}_{p}\) be the set of points in \(\mathcal{P}\) organized by \(p\). Note there are \(\gtrsim N^{1-}\) organized points each \(\gtrsim N^{\frac{1}{3}-}\) rich so \((\mathcal{L}^{\prime},\mathcal{P}_{p})\) is an extremal configuration which we work in for this paragraph. We label the bush of \(M\sim N^{\frac{1}{3}\pm}\) circles intersecting \(p\) as \(c_{0},\ldots,c_{M}\). Now we pick random \(c^{\prime}_{1},\ldots,c^{\prime}_{K}\) with \(K\sim N^{\frac{1}{3}}\) from \(\mathcal{L}^{\prime}\). Our "cell decomposition" will be made from the circles \(c_{0},\ldots,c_{M}\) together with the circles \(c^{\prime}_{1},\ldots,c^{\prime}_{K}\). But we will not think of this collection of circles as giving a cell decomposition in the conventional sense. Recall that the set of points \(E_{c}\) in \(\mathbf{R}^{2}\) consisting of the centers of circles that simple cross \(c_{0}\) is double-covered by the sectors \(s_{i}=(d_{i}\cap d_{i+1}^{c})\cup(d_{i}^{c}\cap d_{i+1})\) defined by the adjacent circles \(c_{i}\) and \(c_{i+1}\). For each sector \(s_{i}\) we let the circles \(c^{\prime}_{1},\ldots,c^{\prime}_{K}\) subdivide \(s_{i}\) into cells. We have now obtained a collection of cells which double cover \(E_{c}\). This will essentially be all right for us. We will sometimes over count incidences, but we will at most double count them.
The randomly selected circles \(c^{\prime}_{1},\ldots,c^{\prime}_{K}\) can be chosen to be in the very likely case where they separate the points on the structuring circles of each sector into (essentially) distinct cells. For each structuring circle \(c_{s}\), there are points \(p_{1},\ldots,p_{L}\) from \(\mathcal{P}\) with \(L\gtrsim N^{\frac{1}{3}-}\) so that between each consecutive pair \(p_{k},p_{k+1}\) there are \(\gtrsim N^{\frac{2}{3}-}\) circles of \(\mathcal{L}\) which cross the circle \(c_{s}\). Applying Lemma 3.1, we get a bound of \(N^{-100}\) on the probability that any \(N^{\frac{2}{3}+}\) consecutive circles of \(\mathcal{L}\) in the order they intersect \(c_{s}\) don't include one of the \(c^{\prime}\)'s. We can select the \(c^{\prime}\)'s so that none of these events happen. Thus each cell contains at most \(N^{+}\) points per structuring circle. Since there are \(\lesssim N^{\frac{1}{3}+}\) structuring circles in each sector, our cell decomposition is point weighted.
We would like to obtain one further property of our double-covering cell decomposition, so we refine it a bit further. The circles \(c^{\prime}_{1},\ldots,c^{\prime}_{K}\) have a total of at most \(2K^{2}\sim N^{\frac{2}{3}}\) crossings. We want to restrict to sectors in which there are no more than \(N^{\frac{1}{3}+}\) crossings. This restriction requires us to remove a small proportion of the sectors. Now in each sector \(s_{j}\) that we have retained we will remove some circles from the list \(c^{\prime}_{1},\ldots,c^{\prime}_{K}\). [These are removed only for consideration in the sector \(s_{j}\).] We choose the circles to remove so that each remaining circle \(c^{\prime}_{j}\) has only \(N^{+}\) crossings with other \(c^{\prime}\)'s in the sector \(s_{j}\). The effect of the removals is that we have joined together a number of cells in the sector \(s_{j}\). This ruins the point weightedness of the cells. We remove the cells having more than \(N^{\frac{1}{3}+}\) points, but we have not removed as many as \(N^{\frac{2}{3}-}\) points because on each structuring circle of \(s_{j}\), a removed point was adjacent to a removed circle \(c^{\prime}_{k}\). Thus we retain \(N^{\frac{1}{3}-}\) points per structuring circle. Now we have a new "cell decomposition" which covers \(N^{1-}\) points at most twice and which is subordinate to the sectors, so that in each retained sector \(s_{j}\), none of the retained dividing circles \(c^{\prime}_{k}\) intersect more than \(N^{+}\) other dividing circles, so each bounds at most \(N^{+}\) cells in that sector.
To get the claim about circles \(c\) with \(N^{\frac{2}{3}+\alpha}\) simple crossings apply Lemma 3.2 to control the probability that more than 10 times the expected number of circles that share a simple crossing with \(c\) are selected as random circles.
When trying to show that a typical circle in a configuration \((\mathcal{L},\mathcal{P})\) will have incidences in most of the sectors of an organizing point, the enemy case is circles which take too many points in a given sector. A typical circle has \(N^{2/3}\) simple crossings between each pair of points. So if we show that circles with too many simple crossings in a sector do not contribute significantly to the total number of incidences in that sector, then most incidences come from circles taking \(\lesssim N^{+}\) points in that sector and we win.
**Theorem 5.18**.: _There exist \(\gtrsim N^{1-}\) organizing points \(p\) in \((\mathcal{L},\mathcal{P})\) with \(\sim N^{\frac{1}{3}\pm}\) sectors and some integer \(k\) such that for every sector \(\gtrsim N^{1-}\) of its incidences come from circles taking fewer than \(N^{k\epsilon}\) points in that sector._
Proof.: We use the bush construction \((\mathcal{L}^{\prime},\mathcal{P}^{\prime}_{p})\) from Theorem 5.17 and choose a point \(p\) to be the center of our bush. For a large enough constant \(k_{1}\), we toss out sectors that have \(\gtrsim N^{\frac{5}{3}+k_{1}\epsilon}\) simple circle arc crossings. There are at most \(N^{\frac{1}{3}-k_{1}\epsilon}\) such sectors because there are only \(\lesssim N^{2}\) simple circle arc crossings in total; on the other hand, by Lemma 5.13 there are \(\gtrsim N^{2-}\) such crossings (because there are \(\gtrsim N^{\frac{2}{3}-}\) cells with \(\gtrsim N^{\frac{4}{3}-}\) simple crossings each). So we still kept \(\sim N^{\frac{1}{3}\pm}\) sectors which each have \(\sim N^{\frac{5}{3}\pm}\) simple circle arc crossings and each sector contributes \(\gtrsim N^{1-}\) incidences. So this refinement still yields a configuration \((\mathcal{L}^{\prime},\mathcal{P}^{\prime})\). Similarly, our cell decomposition from Theorem 5.17 has \(\lesssim N^{\frac{2}{3}}\) cells, so we can remove any sectors with more than \(N^{\frac{1}{3}+k_{1}\epsilon}\) cells.
By Corollary 5.7 we may choose a subset \(J(\mathcal{L}^{\prime},\mathcal{P}^{\prime})\subset I(\mathcal{L}^{\prime}, \mathcal{P}^{\prime})\) such that \(|J(\mathcal{L}^{\prime},\mathcal{P}^{\prime})|\gtrsim N^{\frac{4}{3}-}\) and every circle has \(\lesssim N^{+}\) incidences from \(J(\mathcal{L}^{\prime},\mathcal{P}^{\prime})\) per cell.
**Definition 5.19** (Fast lines).: _We say that a circle is \(\alpha-\)fast for a sector \(s\) if it has \(\sim N^{\frac{2}{3}+\alpha\pm}\) simple crossings with circles in \(\mathcal{L}\)._
This is the enemy case. Our goal is to show these do not contribute significantly to the number of incidences in the sector. Note that since a fast circle crosses \(N^{\frac{2}{3}+\alpha\pm}\) circles of \(\mathcal{L}\) in \(s\), it must cross at least \(N^{\alpha-}\) of the randomly selected circles \(l^{\prime}\) and therefore must enter at least \(N^{\alpha-}\) cells. Similarly by Theorem 5.17 each \(\alpha\) fast line enters no more than \(N^{\alpha+}\) cells in \(s\). So \(\alpha-\)fast circles go through \(\sim N^{\alpha\pm}\) cells in this sector.
**Definition 5.20** (Slow lines).: _We say that a circle is slow for a sector \(s\) if it has \(\lesssim N^{\frac{2}{3}+k_{2}\epsilon\pm}\) simple crossings with circles in \(\mathcal{L}\)._
We want to show that slow circles contribute \(\gtrsim N^{1-k_{2}\epsilon}\) incidences. Assume to the contrary that for some \(\alpha\) the \(\alpha-\)fast circles contribute \(\gtrsim N^{1-\frac{\alpha}{10^{10}}}\) incidences. Note that there are \(N^{\frac{5}{3}}\) simple circle arc crossings in \(s\), so there are \(\lesssim N^{1-\alpha}\) many \(\alpha-\)fast circles, which each have \(\sim N^{\frac{2}{3}+\alpha\pm}\) crossings. If not, the sum of \(\alpha-\)fast contributions is \(N\sum_{\alpha=k_{2}}^{1/(3\epsilon)}N^{-\alpha\epsilon/10^{10}}\sim N^{1-k_{2}\epsilon/10^{10}}\), so by choosing the constant \(k_{2}\) large enough, the \(\alpha-\)fast circles contribute a vanishingly small fraction of the incidences, and we conclude that slow circles contribute most incidences.
If the \(\alpha-\)fast circles contribute \(\gtrsim N^{1-\frac{\alpha}{10^{10}}}\) incidences in \(s\), then there must be \(\gtrsim N^{\frac{1}{3}-\frac{\alpha}{10^{10}}}\) cells in which \(\alpha-\)fast circles contribute \(N^{\frac{2}{3}-\frac{\alpha}{10^{10}}}\) incidences each. We can pick a structuring line of the sector \(s\) and use it to "double"-order the retained dividing circles \(c^{\prime}_{k}\) which were retained for the sector \(s\). This is an ordered list in which each \(c^{\prime}_{k}\) appears at most twice. Within the sector \(s\), this ordering does not change much depending on which structuring line we pick since dividing lines intersect at most \(O(N^{+})\) other dividing lines in \(s\). Thus if an \(\alpha\)-fast line \(c\) intersects a particular \(c^{\prime}_{k}\), any \(c^{\prime}_{l}\) it intersects must be in one of 4 intervals of length \(N^{\alpha+}\) in the ordering for \(s\). (It's 4 intervals because each dividing line appears twice in the ordering and because there are two disconnected pieces of the sector.) Each interval is associated to the cells which its \(c^{\prime}_{k}\) bound. In this way, we can find an interval of length \(N^{\alpha+}\) associated to \(N^{\frac{2}{3}+}\) many \(\alpha\)-fast lines.
We let \(R\) be the union of the \(N^{\alpha+}\) consecutive cells above. There are \(N^{\frac{2}{3}+}\) many \(\alpha\)-fast circles going through \(R\), each making \(N^{\alpha(1-2(10^{-10}))-}\) incidences with the \(N^{\frac{1}{3}+\alpha+}\) many points in \(R\). We apply Szemeredi-Trotter (for unit circles) and obtain a contradiction.
It is worth emphasizing that Theorem 5.18 says that most (up to \(\epsilon\) loss in the exponent) circles will take \(O(1)\) points from each cell they cross. In other words, points on a circle are evenly spaced among cells. This is the linchpin fact in our main result (see Section 5.3).
**Theorem 5.21**.: _Let \((\mathcal{L},\mathcal{P})\) be an extremal unit circle configuration. Then there are \(\gtrsim N^{2-}\) pairs of organizing points \(p_{1}\) and \(p_{2}\), with bushes \(c_{1,1},\ldots,c_{M_{1},1}\) incident to \(p_{1}\) and \(c_{1,2},\ldots,c_{M_{2},2}\) incident to \(p_{2}\) with \(M_{1},M_{2}\sim N^{\frac{1}{3}\pm}\) and a refinement \(\mathcal{P}^{\prime}\subset\mathcal{P}\) so that the two bushes break \(\mathcal{P}^{\prime}\) into \(M_{1}M_{2}\) cells which are point weighted and so that each sector \(s\) of the bush at \(p_{1}\) having at least \(N^{\frac{2}{3}-}\) points of \(\mathcal{P}^{\prime}\) has at least \(N^{1-}\) circles of \(\mathcal{L}\) incident to at least two points in \(s\)._
Proof.: Applying Theorem 5.11, we find a point \(p_{1}\) with bush \(c_{1,1},\ldots,c_{M_{1},1}\) and structuring circles holding in total \(N^{1-}\) points. We call this set of points \(\mathcal{P}_{1}\). Then \((\mathcal{L},\mathcal{P}_{1})\) is an extremal configuration. We apply Theorem 5.18 to find a refinement of the set of incidences \(J(\mathcal{L},\mathcal{P}_{1})\) with \(|J(\mathcal{L},\mathcal{P}_{1})|\gtrsim N^{\frac{4}{3}-}\) so that each circle of \(\mathcal{L}\) takes only \(N^{+}\) incidences of \(J(\mathcal{L},\mathcal{P}_{1})\) in each sector of the bush at \(p_{1}\).
We restrict to those points in \(\mathcal{P}_{1}\) which are at least \(N^{\frac{1}{3}-}\) rich in incidences of \(J(\mathcal{L},\mathcal{P}_{1})\). We refer to that set as \(\mathcal{P}_{2}\). The set \((\mathcal{L},\mathcal{P}_{2})\) is an extremal configuration. We refine the set of circles to \(\mathcal{L}_{1}\) which take \(N^{+}\) incidences in \(N^{\frac{1}{3}-}\) sectors of the bush at \(p_{1}\). We let \(\mathcal{P}_{3}\) be the set of points of \(\mathcal{P}_{2}\) that are \(N^{\frac{1}{3}-}\) rich with respect to circles of \(\mathcal{L}_{1}\) incident to only \(N^{+}\) other points in the same sector. The pair \((\mathcal{L}_{1},\mathcal{P}_{3})\) is an extremal configuration. Pick an organizing point \(p_{2}\) with bush \(c_{1,2},\ldots,c_{M_{2},2}\) and structuring circles for each sector of the bush from \(\mathcal{L}_{1}\). From the structuring circles keep only the points which occur in groups of at most \(N^{+}\) in the first bush. Call this set of points \(\mathcal{P}^{\prime}\). Each structuring circle has at most \(N^{+}\) points in each sector of the first bush. But since there are only at most \(N^{\frac{1}{3}+}\) structuring circles in each sector of the second bush, the cell decomposition given by the two bushes is point weighted.
Since each sector of the first bush has \(N^{1-}\) incidences coming from circles making at most \(N^{+}\) incidences in the sector, there must be \(N^{1-}\) circles incident to points of the sector.
### A proto inverse theorem for unit distances
Here we just state the analog for unit circles of Theorem 4.1.
**Theorem 5.22**.: _Let \((\mathcal{L},\mathcal{P})\) be an extremal configuration. Then there is a choice of the \(O(N^{\frac{1}{3}})\) parameters \((a,b,l_{s})\) so that \((\mathcal{L}\cap\mathcal{L}(a,b,l_{s}),\mathcal{P}\cap\mathcal{P}(a,b,l_{s}))\) is an extremal configuration (of \(N^{1-}\) circles \(N^{1-}\) points and \(N^{\frac{4}{3}-}\) incidences.)_
The proof is just the same as the proof of Theorem 4.1. We let \(a\) and \(b\) encode the location of two bushes for the two-bush cell decomposition and let \(l_{s}\) encode the structuring lines for one of the sectors of the first bush.
|
2302.11806 | PLU-Net: Extraction of multi-scale feature fusion | Deep learning algorithms have achieved remarkable results in medical image
segmentation in recent years. These networks are unable to handle with image
boundaries and details with enormous parameters, resulting in poor segmentation
results. To address the issue, we develop atrous spatial pyramid pooling (ASPP)
and combine it with the Squeeze-and-Excitation block (SE block), as well as
present the PS module, which employs a broader and multi-scale receptive field
at the network's bottom to obtain more detailed semantic information. We also
propose the Local Guided block (LG block) and also its combination with the SE
block to form the LS block, which can obtain more abundant local features in
the feature map, so that more edge information can be retained in each down
sampling process, thereby improving the performance of boundary segmentation.
We propose PLU-Net and integrate our PS module and LS block into U-Net. We put
our PLU-Net to the test on three benchmark datasets, and the results show that
by fewer parameters and FLOPs, it outperforms on medical semantic segmentation
tasks. | Weihu Song | 2023-02-23T06:34:05Z | http://arxiv.org/abs/2302.11806v1 | # PLU-Net: Extraction of multi-scale feature fusion
###### Abstract
Deep learning algorithms have achieved remarkable results in medical image segmentation in recent years. Despite their enormous numbers of parameters, these networks are unable to handle image boundaries and details well, resulting in poor segmentation results. To address the issue, we develop atrous spatial pyramid pooling (ASPP) and combine it with the Squeeze-and-Excitation block (SE block), presenting the PS module, which employs a broader and multi-scale receptive field at the network's bottom to obtain more detailed semantic information. We also propose the Local Guided block (LG block) and combine it with the SE block to form the LS block, which can obtain more abundant local features in the feature map, so that more edge information can be retained in each down-sampling process, thereby improving the performance of boundary segmentation. We propose PLU-Net, which integrates our PS module and LS block into U-Net. We put PLU-Net to the test on three benchmark datasets, and the results show that, with fewer parameters and FLOPs, it achieves superior performance on medical semantic segmentation tasks.
Keywords: Semantic segmentation, U-Net, deep learning, medical image
## 1 Introduction
The significance of image analysis is rising in parallel with the successful application of imaging in clinical medicine. Image segmentation is a key image analysis technology which plays an essential role in imaging medicine. Deep learning technology, mainly based on deep convolutional neural networks (DCNNs), has solved various semantic segmentation difficulties of medical images in recent times. Although the performance of subsequent improved methods based on the U-Net has improved, some issues have emerged, such as increases in parameters and FLOPs, and segmentation of image boundaries and details that is still not good enough. In this article, we propose PLU-Net, a simple and effective network model which utilizes U-Net as a baseline to solve these problems. To start, the LS block is employed to obtain rich local information in order to improve boundary segmentation performance. Second, with its broad and multi-scale receptive field, the PS module is added to the bottom of the network to collect richer detail information. The combination of the two modules allows for excellent acquisition of image boundary and detail information. Finally, the network depth is reduced by one layer, greatly increasing the model's inference speed.
## 2 Related work
Other CNN models appeared in the years after the ILSVRC [22] competition began in 2012, including AlexNet [15], VGG [25], GoogLeNet [26], ResNet [9], and SENet [10]. These models are mostly utilized in image classification tasks at the image level, and many fields require more detailed image classification. This is especially true in medical imaging, where precision and speed are more important than in other fields.
**deep convolutional networks**: In 2015, the Fully Convolutional Network (FCN) [17] replaced the fully connected layer with convolution layers to output spatial mappings, allowing the model to handle images of varying sizes and considerably boosting segmentation accuracy over traditional methods. However, it still has significant flaws, such as the model's poor recognition efficiency in particular cases and the omission of global context information. At this time, U-Net was born. It uses a completely symmetrical model structure and a feature fusion technique altogether different from FCN's: concatenation. Meanwhile, it reduces the size of the model and delivers excellent results with little training data, which is essential for medical segmentation. Many semantic segmentation models employ U-Net as the basis for improvement because of its superior performance. U-Net++ [30] improves accuracy by adding deep supervision to each layer's sub-network and better capturing some feature information lost in down-sampling and up-sampling operations.
**multi-scale feature extraction**: PSPNet[29] proposes to use the pyramid pooling module to aggregate the context information of different regions, so as to improve the extraction ability of feature information. DeepLab[4] uses ASPP to aggregate more convolution kernels of different scales to improve the multi-scale feature extraction ability.Res-UNet [28] and Dense-UNet [11], respectively, incorporate ResNet and DenseNet concepts into the U-Net, ResNet's residual block and DenseNet's dense block are used to effectively reduce feature information loss during transmission.
**attention modules**: For each up-sampling, Attention U-Net [18] inserts the attention gate into U-Net, which eliminates feature redundancy caused by the repetitive employment of low-level features in multiple convolution processing. R2U-Net [1] combines the RNN and ResNet structures into a U-Net structure, allowing the structure to gain more characteristic information after each convolution. Additionally, because transformer has a global receptive field and can acquire feature information from all pixels in an image, numerous recent works [8][23][3][20][16] have merged transformer and U-Net in various ways. The models performance has improved to some level, but it has also introduced a slew of new issues. On the one
hand, a transformer structure will dramatically increase the size of the model, stifling inference speed and necessitating higher hardware needs. On the other hand, it frequently requires the blessing of the pre-trained model, therefore a solid pre-trained model is critical to the model's performance. To summarize, the model's accuracy can be enhanced by learning additional feature information or lowering feature information loss during the feature map calculation. In addition, during the model design process, the long-running time induced by the growth of the model size must be taken into account.
## 3 Methodology
### LS block
The Conv block in the original U-Net network consists of two 3x3 convolution operations, two batch normalization operations, and two nonlinear activations (ReLU). However, we notice that this structure suffers a loss of local information, so we propose the Local Guided block (LG block, shown in Fig.1), which is divided into two branches and is made up of two 3x3 dilated convolution operations with dilation rates equal to 1 and 3. The results of the two dilated convolution operations are then concatenated to enhance feature propagation. Then, to achieve cross-channel information fusion, a 1x1 convolution operation is employed, and nonlinear features
Figure 1: Comparison of three different blocks.
are added on the assumption that the size of the feature map remains unchanged. To achieve channel information adaptation, we insert an SE block after the LG block to form the LS block (shown in Fig.1), similar to the PS module. In comparison to the original convolution block, the LG block reduces the amount of calculation while obtaining richer feature information with a large receptive field, thanks to the addition of the double-branch structure and dilated convolution. It further realizes the adaptation of channel feature information by adding the SE block.
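To make the construction concrete, the following is a minimal PyTorch sketch of the LG block and LS block as described above. It is an illustration only: the channel widths, the SE reduction ratio, and the placement of batch normalization are our own assumptions and are not taken from any released implementation.

```python
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Squeeze-and-Excitation: learn per-channel weights and re-scale the feature map."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        weights = self.fc(x.mean(dim=(2, 3))).view(b, c, 1, 1)  # squeeze, then excite
        return x * weights


class LSBlock(nn.Module):
    """LG block (two 3x3 branches with dilation rates 1 and 3, concatenated and fused
    by a 1x1 convolution) followed by an SE block, as described in the text."""

    def __init__(self, c_in, c_out):
        super().__init__()
        self.branch_d1 = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1, dilation=1),
            nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))
        self.branch_d3 = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=3, dilation=3),
            nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * c_out, c_out, 1),            # cross-channel fusion
            nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))
        self.se = SEBlock(c_out)

    def forward(self, x):
        y = torch.cat([self.branch_d1(x), self.branch_d3(x)], dim=1)
        return self.se(self.fuse(y))


print(LSBlock(3, 64)(torch.randn(1, 3, 96, 96)).shape)  # torch.Size([1, 64, 96, 96])
```

Padding is set equal to the dilation rate in each branch so that, as stated above, the spatial size of the feature map remains unchanged.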
### PS module
ASPP was first proposed in DeepLabv1 [4] and then improved in DeepLabv2 [5] and DeepLabv3 [6], as seen in Fig.2. DeepLabv3's ASPP, which comprises one 1x1 convolution, three dilated convolutions, and one global average pooling operation, is used as the foundation. According to experimental verification, the employment of the global average pooling operation in up-sampling produces duplicate information and degrades prediction performance. As a result, we remove the global average pooling and replace the ordinary dilated convolution with the depth-wise separable convolution [24], resulting in a reduction of roughly five times the number of parameters compared to the original ASPP. The ASPP's multi-scale structure allows it to gather more feature information and utilise larger receptive fields; however, more feature information may contain duplicated data, lowering the performance of the system. To alleviate the impact of redundant information, we choose to employ the SE block from SENet [10] to increase the weight of important channel information while decreasing the weight of worthless channel information. The SE block uses two processes, squeeze and excitation, to learn the importance of each channel's features, and then strengthens the relevant channels while weakening the idle channels to achieve adaptive feature channel calibration. As a result, a PS module is proposed (shown in Fig.2). After obtaining additional feature information and employing the large receptive field, this module can combine the advantages of the ASPP module and the SE block to suppress redundant information and strengthen the important channel feature information.
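For illustration, here is a minimal PyTorch sketch of the PS module as described above: four parallel depth-wise separable 3x3 atrous convolutions (with the dilation rates 1, 6, 12, and 18 used later in Section 4.2, and no global average pooling), concatenation, a 1x1 fusion convolution, and SE-style channel re-weighting. The channel counts and the SE reduction ratio are assumptions made for the sketch.

```python
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """3x3 depthwise convolution (with the given dilation) followed by a 1x1 pointwise one."""

    def __init__(self, c_in, c_out, dilation):
        super().__init__()
        self.dw = nn.Conv2d(c_in, c_in, 3, padding=dilation, dilation=dilation,
                            groups=c_in, bias=False)
        self.pw = nn.Conv2d(c_in, c_out, 1, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pw(self.dw(x))))


class PSModule(nn.Module):
    """Four parallel atrous branches (no global average pooling), concatenation,
    a 1x1 fusion convolution, and SE-style channel re-weighting."""

    def __init__(self, c_in, c_out, dilations=(1, 6, 12, 18), reduction=16):
        super().__init__()
        self.branches = nn.ModuleList(
            [DepthwiseSeparableConv(c_in, c_out, d) for d in dilations])
        self.fuse = nn.Sequential(
            nn.Conv2d(len(dilations) * c_out, c_out, 1),
            nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(c_out, c_out // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c_out // reduction, c_out, 1), nn.Sigmoid())

    def forward(self, x):
        y = self.fuse(torch.cat([branch(x) for branch in self.branches], dim=1))
        return y * self.se(y)


print(PSModule(256, 512)(torch.randn(1, 256, 12, 12)).shape)  # torch.Size([1, 512, 12, 12])
```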
### Network Architecture
Our PLU-Net, as shown in Fig.3, improves on the original U-Net network architecture by substituting the LS block for the convolution block in the down-sampling and up-sampling pathways. The LS block's double-branch structure effectively ensures that information loss is minimized at each layer of operation, while feature reuse and successful propagation are ensured by the bigger receptive field. In addition, the U-Net network's up-sampling and down-sampling stages were reduced from four to three, and a PS module was added at the end of the down-sampling. By dropping the last down-sampling and up-sampling stage of the U-Net network, whose channel width is 1024, we can considerably reduce the amount of calculation and make the model more lightweight. We employ four branches and a greater dilation rate at the same time to obtain a bigger receptive field and consequently richer feature information. The up-sampling step that follows
can be completed efficiently. Our network structure can now achieve better performance with fewer parameters and FLOPs thanks to the combination of these enhancements.
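A hypothetical skeleton of this three-stage encoder-decoder layout is sketched below. The `TinyBlock` stand-ins mark where the LS blocks would go and the bottleneck marks where the PS module would go; all channel widths and the use of transposed convolutions for up-sampling are our own assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class TinyBlock(nn.Module):
    """Stand-in for the LS block (and, at the bottleneck, the PS module) sketched above."""

    def __init__(self, c_in, c_out):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1),
            nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.body(x)


class PLUNetSkeleton(nn.Module):
    """Three down-sampling stages, a bottleneck, and three up-sampling stages with skips."""

    def __init__(self, in_ch=3, out_ch=1, base=64):
        super().__init__()
        chs = [base, base * 2, base * 4]
        self.encoders = nn.ModuleList([TinyBlock(in_ch, chs[0]),
                                       TinyBlock(chs[0], chs[1]),
                                       TinyBlock(chs[1], chs[2])])
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = TinyBlock(chs[2], chs[2] * 2)          # PS module goes here
        self.ups = nn.ModuleList([nn.ConvTranspose2d(chs[2] * 2, chs[2], 2, stride=2),
                                  nn.ConvTranspose2d(chs[2], chs[1], 2, stride=2),
                                  nn.ConvTranspose2d(chs[1], chs[0], 2, stride=2)])
        self.decoders = nn.ModuleList([TinyBlock(chs[2] * 2, chs[2]),
                                       TinyBlock(chs[1] * 2, chs[1]),
                                       TinyBlock(chs[0] * 2, chs[0])])
        self.head = nn.Conv2d(chs[0], out_ch, 1)

    def forward(self, x):
        skips = []
        for encoder in self.encoders:
            x = encoder(x)
            skips.append(x)          # keep features for the skip connections
            x = self.pool(x)
        x = self.bottleneck(x)
        for up, decoder, skip in zip(self.ups, self.decoders, reversed(skips)):
            x = decoder(torch.cat([up(x), skip], dim=1))
        return torch.sigmoid(self.head(x))


print(PLUNetSkeleton()(torch.randn(1, 3, 96, 96)).shape)  # torch.Size([1, 1, 96, 96])
```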
## 4 Experiments and Results
### Datasets
Because both the PS module and the LS block are modular, they can simply be utilized to substitute convolution processes in various network architectures. To demonstrate the robustness of our model, we designed three models, PU-Net (conv+PS module), LU-Net (LS block), and PLU-Net (LS block+PS module), in addition to the original U-Net (conv+null; here A+B means A for the down-sampling and up-sampling pathways and B for the module at the bottom of the network, the same below). We evaluated the models on three biomedical image segmentation datasets in the study.
#### 4.1.1 Polyp Segmentation
CVC-ClinicDB [2] (CVC for short) is from colonoscopy videos and contains 612 polyp images. We use the original image size of 384x288, and the dataset is randomly split into a train set (60%), a validation set (20%), and a test set (20%). Also, we scale the original images equally (resizing them from \(512\times 512\) to \(256\times 256\)).
#### 4.1.2 Nuclei Segmentation
In most cancer grading schemes, nuclei segmentation has far-reaching significance because nuclear morphology is one of the important components. The dataset is
Figure 2: Comparison of different feature extractors.
derived from Kaggle 2018 Data Science Bowl1 (DSB2018 for short). It contains 670 nucleus images and is randomly split into train set (60%), validation set (20%), and test set (20%). Also, we resize the original images to \(96\times 96\).
Footnote 1: [https://www.kaggle.com/c/data-science-bowl-2018/data](https://www.kaggle.com/c/data-science-bowl-2018/data)
#### 4.1.3 Skin Lesion Segmentation
Computer-aided automatic diagnosis of skin cancer is an inevitable trend, and skin lesion segmentation, as the first step, is urgently needed. The dataset is from the MICCAI 2018 Workshop - ISIC2018: Skin Lesion Analysis Towards Melanoma Detection [7][27] (ISIC2018 for short). It contains 2594 images and is randomly split into a train set (60%), a validation set (20%), and a test set (20%). For better model training and result display, we resize all the original images to \(224\times 224\).
### Experimental Setup
We use three datasets to compare the U-Net, PU-Net, LU-Net, U-Net++, MultiResUnet[12], DoubleUNet[13], and PLU-Net architectures. We chose U-Net because of its widespread use and relevance in medical image segmentation, as well as the fact that it serves as the foundation for numerous network architectures. The kernel size is set to \(3\times 3\) with dilation values of 1 and 3 correspondingly in the LS block, followed by batch normalization and ReLU. Furthermore, the PS module employs depth-wise separable convolution, the results of which are fed into four atrous convolutions with
Figure 3: Proposed PLU-Net architecture.
kernel sizes of \(3\times 3\) and dilation values of 1, 6, 12, and 18 respectively. The output size is determined by concatenating the results of four atrous convolutions using 1x1 convolution. For the DSB2018 dataset, we used a batch size of 16, four for the ISIC2018 dataset, and two for the CVC dataset. The optimizer is Adam [14], and the two momentum terms are 0.5 and 0.999, with a learning rate of 0.0003. The epoch is set to 100, and the loss function is Binary CrossEntropy Loss(BCELoss). All of the experiments are run on four NVIDIA TITAN V GPUs with 12GB of RAM each, using PyTorch [19].
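As a minimal sketch of this training configuration (Adam with momentum terms 0.5 and 0.999, learning rate 0.0003, BCELoss), the snippet below uses a placeholder model and random tensors in place of the actual PLU-Net and data loaders.

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 1, 1)        # placeholder for a PLU-Net instance
optimizer = torch.optim.Adam(model.parameters(), lr=0.0003, betas=(0.5, 0.999))
criterion = nn.BCELoss()          # BCELoss expects probabilities in [0, 1]

# Dummy batch at the DSB2018 resolution (batch size shrunk for the sketch).
images = torch.rand(4, 3, 96, 96)
masks = torch.randint(0, 2, (4, 1, 96, 96)).float()

for epoch in range(2):            # the paper trains for 100 epochs
    optimizer.zero_grad()
    loss = criterion(torch.sigmoid(model(images)), masks)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```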
### Result and Discussion
To better show the experimental results, we considered several performance metrics, including Precision (PC, Eq.1), Sensitivity (SE, Eq.2), F1-score(F1, which is also known as Dice coefficient, DC, Eq.3) and Jaccard similarity (JS, Eq.4). Variables involved in these formulas are True Positive (TP), False Positive (FP),
Figure 4: Qualitative comparison of segmentation results for nuclei, colon, and skin lesion datasets, from top to bottom are Image, U-Net, U-Net++, MultiResUnet, DoubleUNet, PLU-Net, Ground Truth
True Negative (TN), False Negative (FN), Ground Truth (GT), and Segmentation Result (SR).
\[PC=\frac{TP}{TP+FP} \tag{1}\]
\[SE=\frac{TP}{TP+FN} \tag{2}\]
\[F1=2\frac{SE*PC}{SE+PC}=2\frac{|GT\cap SR|}{|GT|+|SR|}=DC \tag{3}\]
\[JS=\frac{|GT\cap SR|}{|GT\cup SR|} \tag{4}\]
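These metrics can be computed directly from binary masks. The following NumPy sketch is illustrative only; in particular, thresholding of soft predictions and the handling of empty masks are left out.

```python
import numpy as np


def segmentation_metrics(gt, sr):
    """PC, SE, F1 (Dice), and JS from binary ground-truth and segmentation masks."""
    gt, sr = gt.astype(bool), sr.astype(bool)
    tp = np.logical_and(gt, sr).sum()
    fp = np.logical_and(~gt, sr).sum()
    fn = np.logical_and(gt, ~sr).sum()
    pc = tp / (tp + fp)                      # Eq. (1)
    se = tp / (tp + fn)                      # Eq. (2)
    f1 = 2 * tp / (2 * tp + fp + fn)         # Eq. (3), equal to the Dice coefficient
    js = tp / (tp + fp + fn)                 # Eq. (4)
    return pc, se, f1, js


gt = np.array([[1, 1, 0], [0, 1, 0]])
sr = np.array([[1, 0, 0], [0, 1, 1]])
print(segmentation_metrics(gt, sr))          # approximately (0.67, 0.67, 0.67, 0.5)
```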
Table.1 illustrates the results of our experiments using our proposed model and various state-of-the-art U-Net models, such as U-Net++, MultiResUnet, and DoubleUNet, while Fig.4 demonstrates the segmentation outcomes of three different biomedical image segmentation tasks. Table.1 shows that our proposed models LU-Net, PU-Net, and PLU-Net are all superior to U-Net; in particular, both their F1 and JS scores exceed those of U-Net. On CVC, our model outperforms U-Net by more than 6 and 8 points in F1 and JS, respectively. LU-Net and PU-Net results, on the other hand, reveal that they are superior to U-Net in JS and F1, with PLU-Net outperforming all other models, proving the superiority of the LG block and the capability of the PS module. Furthermore, the segmentation results of the three segmentation tasks in Fig.4 show the model's advantages. In nucleus segmentation, our model performs better on the edges, and in polyp segmentation, the model presented in this paper greatly outperforms other models in terms of segmentation performance. Unlike other models with smooth boundary processing, our model has more refined boundary processing in skin lesion segmentation.
\begin{table}
\begin{tabular}{c c c c c c c} \hline Dataset & Methods & PC & SE & F1 & JS & Params(M) & FLOPs(G) \\ \hline DSB2018 & U-Net[21] & 0.8965 & 0.9064 & 0.9014 & 0.8205 & 34.53 & 9.21 \\ & U-Net++[30] & 0.8892 & 0.9184 & 0.9036 & 0.8237 & 36.62 & 19.41 \\ & MultiResInet[12] & 0.9432 & 0.8401 & 0.8887 & 0.7977 & 7.24 & 2.11 \\ & DoubleUNet[13] & 0.8808 & 0.9298 & 0.9046 & 0.8249 & 18.84 & 6.21 \\ & LU-Net & 0.9067 & 0.9015 & 0.9040 & 0.8258 & 29.29 & 6.79 \\ & PU-Net & 0.8912 & 0.9157 & 0.9032 & 0.8234 & 38.19 & 9.32 \\ & PLU-Net & 0.9025 & 0.9099 & **0.9062** & **0.8279** & **6.22** & **4.99** \\ \hline CVC & U-Net[21] & 0.8001 & 0.9087 & 0.8509 & 0.7385 & 34.53 & 110.49 \\ & U-Net++[30] & 0.7973 & 0.9632 & 0.8724 & 0.7706 & 36.62 & 232.92 \\ & MultiResUnet[12] & 0.7929 & 0.9495 & 0.8641 & 0.7562 & 7.24 & 25.3 \\ & DoubleUNet[13] & 0.8637 & 0.9222 & 0.8920 & 0.8249 & 18.84 & 74.52 \\ & LU-Net & 0.8591 & 0.9351 & 0.8954 & 0.8102 & 29.29 & 81.42 \\ & PU-Net & 0.8727 & 0.9066 & 0.8807 & 0.7979 & 38.19 & 111.85 \\ & PLU-Net & 0.9139 & 0.8832 & **0.8983** & **0.8125** & **6.22** & **59.9** \\ \hline ISIC2018 & U-Net[21] & 0.8449 & 0.9038 & 0.8734 & 0.7665 & 34.53 & 50.13 \\ & U-Net++[30] & 0.8342 & 0.9156 & 0.8730 & 0.7688 & 36.62 & 105.68 \\ & MultiResUnet[12] & 0.8223 & 0.9340 & 0.8746 & 0.7732 & 7.24 & 11.48 \\ & DoubleUNet[13] & 0.8567 & 0.9007 & 0.8781 & 0.7779 & 18.84 & 33.18 \\ & LU-Net & 0.8678 & 0.8993 & 0.8804 & 0.7802 & 29.29 & 36.94 \\ & PU-Net & 0.8556 & 0.8981 & 0.8763 & 0.7771 & 38.19 & 50.75 \\ & PLU-Net & 0.8774 & 0.9152 & **0.8959** & **0.8061** & **6.22** & **27.18** \\ \hline \end{tabular}
\end{table}
Table 1: Evaluation of proposed PLU-Net
## 5 Conclusion
In this paper, we propose an LS block for learning local feature information with a large receptive field and a PS module for learning deeper information from a wider receptive field. Furthermore, based on the Local Guided block and the PS module, we design a lightweight network, PLU-Net, with fewer parameters and FLOPs, which can handle boundaries and details well for medical images. Experiments on colon cancer, nuclei, and skin lesion segmentation demonstrate the advantages of the proposed PLU-Net for generating high-quality segmentation results.
## Conflict of interest
The authors report no conflict of interest.
## Data availability statement
The data that support the findings of this study are available from the corresponding author upon reasonable request.
|
2304.03154 | Counting wildly ramified quartic extensions with prescribed discriminant
and Galois closure group | Given a $2$-adic field $K$, we give formulae for the number of totally
ramified quartic field extensions $L/K$ with a given discriminant valuation and
Galois closure group. We use these formulae to prove a refinement of Serre's
mass formula, which will have applications to the arithmetic statistics of
number fields. | Sebastian Monnet | 2023-04-06T15:35:38Z | http://arxiv.org/abs/2304.03154v2 | # Counting wildly ramified quartic extensions with fixed automorphism group
###### Abstract.
Given a \(2\)-adic field \(K\), we give formulae for the number of totally ramified quartic field extensions \(L/K\) with a given discriminant valuation and automorphism group. We use these formulae to prove a refinement of Serre's mass formula, which will have applications to the arithmetic statistics of number fields.
## 1. Introduction
Throughout this paper, we use the term _\(2\)-adic field_ for a finite field extension of the \(2\)-adic numbers \(\mathbb{Q}_{2}\). Fix a \(2\)-adic field \(K\). Write \(\Sigma\) for the set of isomorphism classes of totally ramified quartic field extensions \(L/K\). For a finite group \(G\), write \(\Sigma^{G}\) for the set of \(L\in\Sigma\) such that the automorphism group \(\operatorname{Aut}(L/K)\) is isomorphic to \(G\). For a positive integer \(m\), write \(\Sigma_{m}\) for the set of \(L\in\Sigma\) with \(v_{K}(d_{L/K})=m\), and finally write \(\Sigma_{m}^{G}=\Sigma^{G}\cap\Sigma_{m}\). Using his eponymous lemma, Krasner [10, Theoreme 1] found a formula for the size of the set \(\Sigma_{m}\). More recently, Sinclair [14] and Pauli-Sinclair [13] gave refinements of Krasner's formula, enumerating (among other things) the elements of \(\Sigma_{m}\) with a prescribed ramification polygon. In this paper, we give different refinements of Krasner's result by finding formulae for the sizes of the sets \(\Sigma_{m}^{G}\).
In Section 2, we use a result of Serre to relate \(\#\Sigma_{m}^{\{1\}}\) to the density of the set of Eisenstein polynomials defining extensions in \(\Sigma_{m}^{\{1\}}\). We then find explicit congruence conditions for this set of Eisenstein polynomials and use them to compute the required density, obtaining the following result:
**Theorem 1.1**.: _If \(\Sigma_{m}^{\{1\}}\) is nonempty, then \(m\) is an even integer with \(4\leq m\leq 6e_{K}+2\) and_
\[\#\Sigma_{m}^{\{1\}}=q^{\lfloor\frac{m}{3}\rfloor-1}(q-1)\Big{(}1+\mathbb{1}_ {6|m}\cdot\big{(}\frac{1-2q}{3q}\big{)}\Big{)}.\]
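For concreteness, the formula in Theorem 1.1 can be evaluated directly. The following Python sketch is purely illustrative and plays no role in the proofs; here \(q\) is the size of the residue field and \(e_{K}\) the absolute ramification index of \(K\).

```python
from fractions import Fraction


def count_trivial_aut(q, e_K, m):
    """#Sigma_m^{1} as in Theorem 1.1; zero unless m is even with 4 <= m <= 6*e_K + 2."""
    if m % 2 or not (4 <= m <= 6 * e_K + 2):
        return Fraction(0)
    correction = Fraction(1 - 2 * q, 3 * q) if m % 6 == 0 else Fraction(0)
    return Fraction(q) ** (m // 3 - 1) * (q - 1) * (1 + correction)


# For q = 2 and e_K = 1 the formula gives counts 1, 1, 2 at m = 4, 6, 8 and 0 elsewhere.
print([int(count_trivial_aut(2, 1, m)) for m in range(3, 10)])  # [0, 1, 0, 1, 0, 2, 0]
```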
The case \(C_{2}\times C_{2}\) was addressed by Tunnell in [12]. We repackage his result in Section 3 as the following theorem:
**Theorem 1.2**.: _If \(\Sigma_{m}^{(C_{2}\times C_{2})}\) is nonempty, then \(m\) is an even integer with \(6\leq m\leq 6e_{K}+2\) and_
\[\#\Sigma_{m}^{C_{2}\times C_{2}}=2(q-1)q^{\frac{m-4}{2}}\Big{(}q^{-\lfloor \frac{m}{6}\rfloor}(1+\mathbb{1}_{3|m}\cdot\frac{q-2}{3})-\mathbb{1}_{m\leq 4e _{K}+2}\cdot q^{-\lfloor\frac{m-2}{4}\rfloor}\Big{)}.\]
The bulk of our work goes into the \(C_{4}\) case. In [1], Cohen, Diaz y Diaz, and Olivier obtain asymptotic formulae for the number of \(C_{4}\)-extensions of a number field. We adapt their methods to compute the size of \(\Sigma_{m}^{C_{4}}\). Our formula for \(\#\Sigma_{m}^{C_{4}}\) depends on an invariant \(t_{0}\) of \(K\), which we define to be the smallest nonnegative integer \(t\) such that
\[N_{K/\mathbb{Q}_{2}}(U_{K}^{(2t)})\subseteq 1+4\mathbb{Z}_{2}.\]
**Theorem 1.3**.: _If \(\Sigma_{m}^{C_{4}}\) is nonempty, then either \(m=8e_{K}+3\) or \(m\) is an even integer with \(8\leq m\leq 8e_{K}\). For even \(m\) with \(8\leq m\leq 8e_{K}\), the number \(\#\Sigma_{m}^{C_{4}}\) is the sum of the following quantities:_
1. \(\mathbbm{1}_{8\leq m\leq 5e_{K}-2}\cdot\mathbbm{1}_{m\equiv 3\pmod{5}}\cdot 2q^{ \frac{3m-14}{10}}(q-1)\)_._
2. \(\mathbbm{1}_{4e_{K}+4\leq m\leq 5e_{K}+2}\cdot 2q^{\frac{m}{2}-e_{K}-2}(q-1)\)_._
3. \(\mathbbm{1}_{5e_{K}<m\leq 8e_{K}}\cdot\mathbbm{1}_{m\equiv 2e_{K}\pmod{3}} \cdot 2q^{\frac{4e_{K}+m}{6}-1}(1+\mathbbm{1}_{m\leq 8e_{K}-6t_{0}})(q-1- \mathbbm{1}_{m=8e_{K}-6t_{0}+6})\)_._
4. \(\mathbbm{1}_{10\leq m\leq 5e_{K}}\cdot 2(q-1)\big{(}q^{\lceil\frac{3m}{10}\rceil-1}-q^{\max\{\lceil\frac{m+2}{4}\rceil,\frac{m}{2}-e_{K}\}-2}\big{)}\)_._
_We also have_
\[\#\Sigma_{8e_{K}+3}^{C_{4}}=\begin{cases}4q^{2e_{K}}&\text{if $-1\in K^{ \times 2}$},\\ 2q^{2e_{K}}&\text{if $K(\sqrt{-1})/K$ is quadratic and totally ramified},\\ 0&\text{if $K(\sqrt{-1})/K$ is quadratic and unramified}.\end{cases}\]
Finally, we can use the results above to compute \(\#\Sigma_{m}^{C_{2}}\). In the following formula, we have opted not to expand the terms \(\#\Sigma_{m}^{G}\) for \(G\in\{\{1\},C_{4},C_{2}\times C_{2}\}\), since they are given explicitly above.
**Theorem 1.4**.: _If \(\Sigma_{m}^{C_{2}}\) is nonempty, then \(m\) is an integer with \(3\leq m\leq 8e_{K}+3\), and one of the following holds:_
1. \(m\) _is even._
2. \(m\equiv 1\pmod{4}\) _and_ \(4e_{K}+3\leq m\leq 8e_{K}+3\)_._
3. \(m=8e_{K}+3\)_._
_Moreover, we have_
\[\#\Sigma_{m}^{C_{2}}=2(1+\mathbbm{1}_{m=8e_{K}+3}(q-2))q^{\lceil \frac{m}{2}\rceil-2-1_{m>4e_{K}}(\lfloor\frac{m-1}{4}\rfloor-e_{K})}-2\# \Sigma_{m}^{\{1\}}-\frac{1}{2}\#\Sigma_{m}^{C_{4}}-\frac{1}{2}\#\Sigma_{m}^{ C_{2}\times C_{2}}.\]
For any set \(S\) of quartic etale algebras over \(K\), define the _pre-mass_ of \(S\) to be the quantity
\[\widetilde{m}(S)=\sum_{L\in S}\frac{1}{\#\operatorname{Aut}(L/K)}\cdot\frac{1 }{q^{v_{K}(d_{L/K})}},\]
where \(q=\#\mathbb{F}_{K}\) is the size of the residue field of \(K\). The pre-mass has important applications in the statistics of number fields; it essentially tells us how likely a "randomly selected" \(S_{4}\)-quartic number field is to have its \(2\)-adic completion in the set \(S\). See [1, Theorem 2]
for a general statement, or our paper [14, Theorem 2.11] for the special case we have in mind. We use Theorems 1.1-1.4 to obtain formulae for the pre-masses \(\widetilde{m}(\Sigma^{G})\), which we state in Corollaries 2.16, 3.4, 4.4, and 5.1. These formulae are refinements of Serre's mass formula [10, Theorem 2], which states that \(\widetilde{m}(\Sigma)=q^{-3}\).
### Notation
We fix the following notation.
1. For a \(2\)-adic field \(F\), write:
    1. \(\mathcal{O}_{F}\) for its ring of integers.
    2. \(\pi_{F}\) for a uniformiser of \(\mathcal{O}_{F}\).
    3. \(\mathfrak{p}_{F}\) for the maximal ideal of \(\mathcal{O}_{F}\).
    4. \(\mathbb{F}_{F}\) for the residue field \(\mathcal{O}_{F}/\mathfrak{p}_{F}\).
    5. \(e_{F}\) for the absolute ramification index \(e(F/\mathbb{Q}_{2})\).
    6. \(f_{F}\) for the inertia degree \(f(F/\mathbb{Q}_{2})\).
    7. \(v_{F}\) for the unique \(2\)-adic valuation on \(F\), normalised such that \(v_{F}(\pi_{F})=1\).
    8. \(U_{F}^{(i)}\) for the group \(1+\mathfrak{p}_{F}^{i}\) in the unit filtration, where \(i>0\).
    9. \(U_{F}^{(0)}\) for the unit group \(\mathcal{O}_{F}^{\times}\).
2. Given an extension \(E/F\) of \(2\)-adic fields, write \(d_{E/F}\) for its discriminant ideal.
3. For a group \(G\) and positive integer \(m\), we write \(\Sigma_{m}^{G}\) for the set of totally ramified quartic extensions \(L/K\) with \(v_{K}(d_{L/K})=m\) and \(\operatorname{Aut}(L/K)\cong G\).
### Acknowledgements
I am grateful to my supervisor, Rachel Newton, for her support and enthusiasm throughout the project. Thanks also to Melanie Matchett Wood and Takehiko Yasuda for helpful suggestions, especially in the Galois cases. I am particularly indebted to Lee Berry, Ross Paterson, and Tim Santens for some very helpful conversations that got me over the finish line. None of this work would have been possible without funding from the EPSRC, University College London, and the Heilbronn Institute for Mathematical Research.
## 2. The case \(G=1\)
Write \(P\) for the set of quartic Eisenstein polynomials in \(K[X]\). For \(f\in P\), let \(L_{f}\) be the field \(K[X]/(f)\), which is a totally ramified quartic extension of \(K\). Given a finite group \(G\), let \(P^{G}\) be the set of \(f\in P\) such that \(\operatorname{Aut}(L_{f}/K)\cong G\). For any integer \(m\), let \(P_{m}\) be the set of \(f\in P\) such that \(v_{K}(d_{L_{f}/K})=m\), or equivalently such that \(v_{K}(\operatorname{disc}(f))=m\). Finally, write \(P_{m}^{G}\) for the intersection \(P^{G}\cap P_{m}\).
The quartic Eisenstein polynomials in \(K[X]\) embed naturally into \(\mathcal{O}_{K}^{4}\) via
\[X^{4}+a_{3}X^{3}+a_{2}X^{2}+a_{1}X+a_{0}\mapsto(a_{3},a_{2},a_{1},a_{0}).\]
Write \(\mu\) for the Haar measure on \(\mathcal{O}_{K}^{4}\), normalised such that \(\mu(\mathcal{O}_{K}^{4})=1\). We will apply this Haar measure to sets of Eisenstein polynomials, viewed as subsets of \(\mathcal{O}_{K}^{4}\) via the embedding described above.
**Lemma 2.1**.: _For every finite group \(G\) and every integer \(m\), we have_
\[\#\Sigma_{m}^{G}=\frac{q^{m+2}\cdot\#G}{q-1}\cdot\mu(P_{m}^{G}).\]
Proof.: This follows easily from [11, Equation 13].
### Congruence conditions for \(P_{m}^{\{1\}}\)
In [10, Theorem 2.9], Lbekkouri gives congruence conditions for a quartic Eisenstein polynomial \(f(X)\in\mathbb{Q}_{2}[X]\) to define a Galois extension. We extend his methods to Eisenstein polynomials over arbitrary base fields, to obtain congruence conditions for the set \(P_{m}^{\{1\}}\), which we will state in Lemma 2.6 and Corollary 2.9.
**Remark 2.2**.: It should be noted that Lbekkouri's statement of [10, Theorem 2.9] is incorrect. In items (2i) and (2ii), both instances of "\(a_{0}+a_{2}\)" should read "\(a_{0}+2\)". This mistake is first introduced in the statement of Proposition 2.8 and carried over into Theorem 2.9.
**Remark 2.3**.: We tried to extend the method to find congruence conditions for \(L_{f}/K\) to be Galois. This was a partial success, and we came up with an algorithm that can compute such congruence conditions given a choice of base field \(K\). Our results are not organised enough to be publishable, but the interested reader should get in touch!
For \(f\in P\), we will always denote the coefficients of \(f\) by \(f(X)=X^{4}+a_{3}X^{3}+a_{2}X^{2}+a_{1}X+a_{0}\). Whenever we refer to the coefficients \(a_{i}\), the choice of polynomial \(f\) will be clear. Let \(\pi_{f}=X+(f)\) be the natural uniformiser of \(L_{f}\). When the polynomial \(f\) is clear, we will drop the subscript and denote \(\pi_{f}\) by \(\pi\). Write \(v_{\pi}\) for the \(2\)-adic valuation on \(L_{f}\), normalised such that \(v_{\pi}(\pi)=1\). Fix an algebraic closure \(\overline{K}\) of \(L_{f}\), and let
\[\sigma_{i}:L_{f}\to\overline{K},\quad i=1,2,3,4\]
be the four embeddings of \(L_{f}\), where \(\sigma_{1}\) is the identity embedding.
**Lemma 2.4**.: _For all \(f\in P^{\{1\}}\), the three valuations_
\[v_{K}(\sigma_{i}(\pi)-\pi),\quad i=2,3,4\]
_are all equal._
Proof.: Suppose that \(f\in P\) and the quantities \(v_{K}(\sigma_{i}(\pi)-\pi)\) are not all equal for \(i=2,3,4\). Reordering the \(\sigma_{i}\) if necessary, we have
\[v_{K}(\sigma_{2}(\pi)-\pi)\neq v_{K}(\sigma_{i}(\pi)-\pi)\]
for \(i=3,4\). The cubic polynomial \(X^{-1}f(X+\pi)\in L_{f}[X]\) has roots
\[\sigma_{i}(\pi)-\pi,\quad i=2,3,4.\]
The minimal polynomial of \(\sigma_{2}(\pi)-\pi\) over \(L_{f}\) therefore divides \(X^{-1}f(X+\pi)\), and all its roots have the same valuation, so
\[\sigma_{2}(\pi)-\pi\in L_{f},\]
and therefore \(f\) has at least two roots in \(L_{f}\), so \(f\not\in P^{\{1\}}\).
For each even integer \(4\leq m\leq 6e_{K}+2\), define \(T_{m}\) to be the set of \(f\in P\) such that
\[\begin{cases}v_{K}(a_{1})=\frac{m}{4},\quad v_{K}(a_{2})\geq\frac{m}{6},\quad v _{K}(a_{3})\geq\frac{m}{4},\quad\text{if }m\equiv 0\pmod{4},\\ v_{K}(a_{1})\geq\frac{m+2}{4},\quad v_{K}(a_{2})\geq\frac{m}{6},\quad v_{K}(a_ {3})=\frac{m-2}{4},\quad\text{if }m\equiv 2\pmod{4}.\end{cases}\]
**Lemma 2.5**.: _The following two statements are true:_
1. _Let_ \(m\) _be an even integer with_ \(4\leq m\leq 6e_{K}+2\) _and let_ \(f\in P_{m}\)_. Then_ \(f\in T_{m}\) _if and only if_ \[v_{K}(\sigma_{i}(\pi)-\pi)=\frac{m}{12}\] _for_ \(i=2,3,4\)_._
2. _Let_ \(m\) _be a positive integer. If_ \(P_{m}^{\{1\}}\) _is nonempty then_ \(m\) _is even,_ \(4\leq m\leq 6e_{K}+2\)_, and_ \(P_{m}^{\{1\}}\subseteq T_{m}\)_._
Proof.: Let \(f\in P_{m}\) for any \(m\), not necessarily even. Define the polynomial
\[g(X):=X^{-1}f(X+\pi),\]
and write \(g(X)=\sum_{i=0}^{3}b_{i}X^{i}\) for \(b_{i}\in L_{f}\). It is easy to see that
\[b_{0} =a_{1}+2\pi a_{2}+3\pi^{2}a_{3}+4\pi^{3},\] \[b_{1} =a_{2}+3\pi a_{3}+6\pi^{2},\] \[b_{2} =a_{3}+4\pi.\]
Since the \(v_{\pi}(a_{i})\) are all multiples of \(4\), we have
\[v_{\pi}(b_{0}) =\min\{v_{\pi}(a_{1}),v_{\pi}(2\pi a_{2}),v_{\pi}(3\pi^{2}a_{3}), v_{\pi}(4\pi^{3})\},\] \[v_{\pi}(b_{1}) =\min\{v_{\pi}(a_{2}),v_{\pi}(3\pi a_{3}),v_{\pi}(6\pi^{2})\},\] \[v_{\pi}(b_{2}) =\min\{v_{\pi}(a_{3}),v_{\pi}(4\pi)\}\]
The polynomial \(g(X)\in L_{f}[X]\) has roots \(\sigma_{i}(\pi)-\pi\) for \(i=2,3,4\). Suppose that
\[v_{K}(\sigma_{i}(\pi)-\pi)=\frac{m}{12}\]
for each \(i\). Then the Newton polygon of \(g(X)\) consists of one line segment \((0,m)\leftrightarrow(3,0)\), so
( \[*\] ) \[\begin{cases}m=\min\{v_{\pi}(a_{1}),v_{\pi}(2\pi a_{2}),v_{\pi}(3\pi^{2}a_{3} ),v_{\pi}(4\pi^{3})\},\\ \frac{2m}{3}\leq\min\{v_{\pi}(a_{2}),v_{\pi}(3\pi a_{3}),v_{\pi}(6\pi^{2})\}, \\ \frac{m}{3}\leq\min\{v_{\pi}(a_{3}),v_{\pi}(4\pi)\}.\end{cases}\]
and for even \(m\) this implies membership of \(T_{m}\). Reversing the argument, it is easy to see that for even \(m\), every \(f\in T_{m}\) has
\[v_{K}(\sigma_{i}(\pi)-\pi)=\frac{m}{12},\quad i=2,3,4.\]
Thus we have proven (1). Now, let \(f\in P_{m}^{\{1\}}\) for some positive integer \(m\). Then \((*)\) tells us that
\[m =\min\{v_{\pi}(a_{1}),v_{\pi}(2\pi a_{2}),v_{\pi}(a_{3})+2,8e_{K}+3\},\] \[\frac{2m}{3} \leq\min\{v_{\pi}(a_{2}),4e_{K}+2\}.\]
Since \(f\) is Eisenstein, \(v_{\pi}(a_{i})\geq 4\) for each \(i\), and therefore we obtain \(4\leq m\leq 6e_{K}+3\). Moreover, \(v_{\pi}(a_{2})\geq\frac{2m}{3}\) implies that \(v_{\pi}(2\pi a_{2})>m\). Since \(m<8e_{K}+3\), we obtain
\[m=\min\{v_{\pi}(a_{1}),v_{\pi}(a_{3})+2\},\]
which implies that \(m\) is even. Finally, Lemma 2.4 tells us that the valuations \(v_{K}(\sigma_{i}(\pi)-\pi)\) are all equal, so
\[v_{K}(\sigma_{i}(\pi)-\pi)=\frac{m}{12}\]
by definition of discriminant, and therefore (2) follows from (1).
**Lemma 2.6**.: _Let \(m\) be an even integer with \(4\leq m\leq 6e_{K}+2\). If \(m\) is not a multiple of \(3\), then \(P_{m}^{\{1\}}=T_{m}\)._
Proof.: Let \(f\in T_{m}\). Lemma 2.5 tells us that
\[v_{K}(\sigma_{i}(\pi)-\pi)=\frac{m}{12},\quad i=2,3,4,\]
so \(\sigma_{i}(\pi)\not\in L_{f}\), since \(\frac{m}{3}\) is not an integer. The other direction is also part of Lemma 2.5.
From now on, fix a system of representatives \(\mathcal{R}\) for \((\mathcal{O}_{K}/\mathfrak{p}_{K})^{\times}\). For each \(u\in\mathcal{R}\), and given an implicit \(f\in P_{m}\), define the polynomial
\[g^{(u)}(X):=f(X+\pi+u\pi^{\lfloor\frac{m}{4}\rfloor}),\]
and write \(g^{(u)}(X)=\sum_{i=0}^{4}b_{i}^{(u)}X^{i}\) for \(b_{i}^{(u)}\in L_{f}\).
**Lemma 2.7**.: _Let \(4\leq m\leq 6e_{K}+2\) be a multiple of \(6\). Let \(f\in T_{m}\) and \(u\in\mathcal{R}\). We have:_
1. \(v_{K}(b_{3}^{(u)})\geq\frac{m-2}{4}\)_._
2. \(v_{K}(b_{2}^{(u)})\geq\frac{m}{6}\)_._
3. \(v_{K}(b_{1}^{(u)})=\frac{m}{4}\)_._
4. \[v_{K}(b_{0}^{(u)})\begin{cases}\geq\frac{m}{3}+1&\text{if $4\mid m$ and $a_{1}+ua_{2}a_{0}^{\frac{m}{12}}+u^{3}a_{0}^{\frac{m}{4}}\equiv 0 \pmod{\mathfrak{p}_{K}^{\frac{m}{4}+1}}$},\\ \geq\frac{m}{3}+1&\text{if $4\nmid m$ and $a_{3}+ua_{2}a_{0}^{\lfloor\frac{m}{12} \rfloor}+u^{3}a_{0}^{\lfloor\frac{m}{4}\rfloor}\equiv 0\pmod{\mathfrak{p}_{K}^{ \lfloor\frac{m}{4}\rfloor+1}}$},\\ =\frac{m}{3}&\text{otherwise}.\end{cases}\]
Proof.: It is easy so see that for each \(i\) and \(u\), we have
\[b_{i}^{(u)}=\sum_{j=i}^{4}\binom{j}{i}a_{j}(\pi+u\pi^{\lfloor\frac{m}{4} \rfloor})^{j-i},\]
where we adopt the convention that \(a_{4}=1\). Using this formula for the \(b_{i}^{(u)}\), along with the congruence conditions defining \(T_{m}\), gives us the following three congruences:
\[b_{3}^{(u)} \equiv a_{3}\pmod{\pi^{m+1}}.\] \[b_{2}^{(u)} \equiv a_{2}\pmod{\pi^{\frac{2m}{3}+1}}.\] \[b_{1}^{(u)} \equiv\begin{cases}a_{1}\pmod{\pi^{m+1}}\quad\text{if $m\equiv 0 \pmod{4}$},\\ 3\pi^{2}a_{3}\pmod{\pi^{m+1}}\quad\text{if $m\equiv 2\pmod{4}$}.\end{cases}\]
We can read off the first three claims from these congruences. Expanding the formula for \(b_{0}^{(u)}\) and ignoring the high-valuation terms, we obtain
\[b_{0}^{(u)}\equiv\begin{cases}a_{1}u\pi^{\frac{m}{3}}+a_{2}u^{2}\pi^{\frac{2m} {3}}+u^{4}\pi^{\frac{4m}{3}}\pmod{\pi^{\frac{4m}{3}+1}}\quad\text{if $m\equiv 0 \pmod{4}$},\\ u^{2}a_{2}\pi^{\frac{2m}{3}}+ua_{3}\pi^{\frac{m}{3}+2}+u^{4}\pi^{\frac{4m}{3 }}\pmod{\pi^{\frac{4m}{3}+1}}\quad\text{if $m\equiv 2\pmod{4}$}.\end{cases}\]
It follows that \(v_{K}(b_{0}^{(u)})\geq\frac{m}{3}\), and \(v_{K}(b_{0}^{(u)})\geq\frac{m}{3}+1\) if and only if
\[\begin{cases}a_{1}+ua_{2}\pi^{\frac{m}{3}}+u^{3}\pi^{m}\equiv 0\pmod{\pi^{m+1}} \quad\text{if $m\equiv 0\pmod{4}$},\\ a_{3}+ua_{2}\pi^{\frac{m}{3}-2}+u^{3}\pi^{m-2}\equiv 0\pmod{\pi^{m-1}}\quad\text{if $m\equiv 2 \pmod{4}$}.\end{cases}\]
The result then follows from the fact that, for any positive integer \(k\), we have
\[\pi^{4k}\equiv(-a_{0})^{k}\pmod{\pi^{4k+\frac{2m}{3}-2}}.\]
**Lemma 2.8**.: _Let \(4\leq m\leq 6e_{K}+2\) be a multiple of \(6\) and let \(f\in T_{m}\). Then \(f\not\in P_{m}^{\{1\}}\) if and only if \(v_{K}(b_{0}^{(u)})\geq\frac{m}{3}+1\) for some \(u\in\mathcal{R}\)._
Proof.: Suppose that \(f\not\in P_{m}^{\{1\}}\). Then \(f\) has at least two roots in \(L_{f}\). Reordering the \(\sigma_{i}\) if necessary, we may assume that \(\sigma_{2}(\pi)\in L_{f}\). Since \(f\in T_{m}\), Lemma 2.5 tells us that \(v_{K}(\sigma_{2}(\pi)-\pi)=\frac{m}{12}\), so
\[\sigma_{2}(\pi)=\pi+\tilde{u}\pi^{\frac{m}{12}}\]
for some \(\tilde{u}\in\mathcal{O}_{L_{f}}^{\times}\). Since \(L_{f}/K\) is totally ramified, there is some \(u\in\mathcal{R}\) with \(u\equiv\tilde{u}\pmod{\pi}\), which means that
\[v_{K}(\sigma_{2}(\pi)-\pi-u\pi^{\frac{m}{12}})>\frac{m}{12}.\]
The other three roots of \(g^{(u)}\) all have valuation at least \(\frac{m}{12}\), so
\[v_{K}(b_{0}^{(u)})\geq\frac{m}{3}+1.\]
Suppose conversely that \(v_{K}(b_{0}^{(u)})\geq\frac{m}{3}+1\). Lemma 2.7 tells us that \(v_{K}(b_{1}^{(u)})=\frac{m}{4}\) and \(v_{K}(b_{2}^{(u)})\geq\frac{m}{6}\), so the Newton polygon of \(g^{(u)}\) tells us that it has exactly one root \(\sigma_{i}(\pi)-\pi-u\pi^{[\frac{m}{4}]}\) with
\[v_{\pi}(\sigma_{i}(\pi)-\pi-u\pi^{[\frac{m}{4}]})\geq\frac{m}{3}+1.\]
Therefore we have
\[\sigma_{i}(\pi)-\pi-u\pi^{[\frac{m}{4}]}\in L_{f},\]
so \(\sigma_{i}(\pi)\in L_{f}\), which means that \(f\not\in P^{\{1\}}\).
**Corollary 2.9**.: _Let \(m\) be a multiple of \(6\) with \(4\leq m\leq 6e_{K}+2\), and let \(f\in T_{m}\). The following are equivalent:_
1. _We have_ \(f\not\in P_{m}^{\{1\}}\)_._
2. _There is some_ \(u\in\mathcal{R}\) _such that_ \[\begin{cases}a_{1}+ua_{2}a_{0}^{\lfloor\frac{m}{12}\rfloor}+u^{3}a_{0}^{\lfloor\frac{m}{4}\rfloor}&\equiv 0\pmod{\mathfrak{p}_{K}^{\lfloor\frac{m}{4}\rfloor+1}}\quad\text{if }m\equiv 0\pmod{4},\\ a_{3}+ua_{2}a_{0}^{\lfloor\frac{m}{12}\rfloor}+u^{3}a_{0}^{\lfloor\frac{m}{4}\rfloor}&\equiv 0\pmod{\mathfrak{p}_{K}^{\lfloor\frac{m}{4}\rfloor+1}}\quad\text{if }m\equiv 2\pmod{4}.\end{cases}\]
Proof.: This is immediate from Lemmas 2.7 and 2.8.
### Computing the densities
**Lemma 2.10**.: _Let \(m\) be an even integer with \(4\leq m\leq 6e_{K}+2\). Then_
\[\mu(T_{m})=q^{-\frac{m}{2}-\lceil\frac{m}{6}\rceil-3}(q-1)^{2}.\]
Proof.: This is easy to see from the definition of \(T_{m}\).
Since \(\mathbb{F}_{K}\cong\mathbb{F}_{2^{f_{K}}}\), the trace map \(\operatorname{Tr}_{\mathbb{F}_{K}/\mathbb{F}_{2}}:\mathbb{F}_{K}\to\mathbb{F} _{2}\) is given by
\[\operatorname{Tr}_{\mathbb{F}_{K}/\mathbb{F}_{2}}(x)=x+x^{2}+\ldots+x^{2^{f_{ K}-1}}.\]
**Lemma 2.11**.: _Let \(\alpha,\beta,\gamma\in\mathbb{F}_{K}\) with \(\alpha\neq 0\), and let \(g\) be the polynomial \(\alpha X^{2}+\beta X+\gamma\in\mathbb{F}_{K}[X]\). The number of roots of \(g\) in \(\mathbb{F}_{K}\) is_
\[\begin{cases}1&\text{if }\beta=0,\\ 2&\text{if }\beta\neq 0\text{ and }\operatorname{Tr}_{\mathbb{F}_{K}/\mathbb{F}_{2}}( \alpha\gamma/\beta^{2})=0,\\ 0&\text{if }\beta\neq 0\text{ and }\operatorname{Tr}_{\mathbb{F}_{K}/\mathbb{F}_{2}}( \alpha\gamma/\beta^{2})=1.\end{cases}\]
Proof.: The case with \(\beta=0\) is clear, so assume \(\beta\neq 0\). Let \(u\) be a root of \(g\) in a splitting field over \(\mathbb{F}_{K}\), and let \(\theta=\frac{\alpha u}{\beta}\). Clearly \(u\in\mathbb{F}_{K}\) if and only if \(\theta\in\mathbb{F}_{K}\), which is equivalent to \(\theta+\theta^{q}=0\). It is easy to see that
\[\operatorname{Tr}_{\mathbb{F}_{K}/\mathbb{F}_{2}}(\theta+\theta^{2})=\theta+ \theta^{q},\]
and also that
\[\theta+\theta^{2}=\frac{\alpha\gamma}{\beta^{2}}.\]
Therefore, \(u\in\mathbb{F}_{K}\) if and only if \(\operatorname{Tr}_{\mathbb{F}_{K}/\mathbb{F}_{2}}(\frac{\alpha\gamma}{\beta ^{2}})=0\), and the result follows.
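As an illustrative sanity check, which is not part of the argument, the root count in Lemma 2.11 can be verified by brute force over the small fields \(\mathbb{F}_{4}\) and \(\mathbb{F}_{8}\). In the Python sketch below, field elements are integers whose bits are the coefficients of polynomials over \(\mathbb{F}_{2}\), and the choice of irreducible polynomials is ours.

```python
IRRED = {2: 0b111, 3: 0b1011}  # x^2 + x + 1 and x^3 + x + 1, irreducible over F_2


def gf_mul(a, b, f):
    """Multiply two elements of GF(2^f), reducing modulo the fixed irreducible polynomial."""
    mod, r = IRRED[f], 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> f) & 1:
            a ^= mod
    return r


def gf_inv(a, f):
    """Brute-force multiplicative inverse of a nonzero element."""
    return next(y for y in range(1, 1 << f) if gf_mul(a, y, f) == 1)


def trace(x, f):
    """Tr(x) = x + x^2 + ... + x^(2^(f-1)), landing in {0, 1}."""
    t, y = 0, x
    for _ in range(f):
        t ^= y
        y = gf_mul(y, y, f)
    return t


def predicted(alpha, beta, gamma, f):
    """Number of roots predicted by Lemma 2.11."""
    if beta == 0:
        return 1
    t = trace(gf_mul(gf_mul(alpha, gamma, f), gf_inv(gf_mul(beta, beta, f), f), f), f)
    return 2 if t == 0 else 0


def actual(alpha, beta, gamma, f):
    """Number of roots of alpha*X^2 + beta*X + gamma, counted by exhaustion."""
    return sum(1 for x in range(1 << f)
               if gf_mul(alpha, gf_mul(x, x, f), f) ^ gf_mul(beta, x, f) ^ gamma == 0)


for f in (2, 3):
    q = 1 << f
    assert all(predicted(a, b, c, f) == actual(a, b, c, f)
               for a in range(1, q) for b in range(q) for c in range(q))
print("Lemma 2.11 verified by brute force over GF(4) and GF(8).")
```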
**Lemma 2.12**.: _Let \(n\geq 0\) be an integer and let \(\lambda,\mu\in\mathfrak{p}_{K}^{n}\), with \(\mu\not\in\mathfrak{p}_{K}^{n+1}\). Define the map_
\[\alpha:\mathcal{O}_{K}/\mathfrak{p}_{K}\to\mathcal{O}_{K}/\mathfrak{p}_{K}^{n +1},\quad c\mapsto\lambda c+\mu c^{3}.\]
_The following two statements are true:_
1. _For_ \(c\in(\mathcal{O}_{K}/\mathfrak{p}_{K})^{\times}\)_, we have_ \[\#\{c^{\prime}\in(\mathcal{O}_{K}/\mathfrak{p}_{K})^{\times}:\alpha(c^{\prime})= \alpha(c)\}=\begin{cases}1&\text{if }c^{2}=\lambda/\mu,\\ 1&\text{if }c^{2}\neq\lambda/\mu\text{ and }\operatorname{Tr}_{\mathbb{F}_{K}/ \mathbb{F}_{2}}\left(\frac{\lambda}{c^{2}\mu}\right)\not\equiv f_{K}\pmod{2},\\ 3&\text{if }c^{2}\neq\lambda/\mu\text{ and }\operatorname{Tr}_{\mathbb{F}_{K}/ \mathbb{F}_{2}}\left(\frac{\lambda}{c^{2}\mu}\right)\equiv f_{K}\pmod{2}.\end{cases}\]
2. _We have_ \[\#\operatorname{im}\alpha=\begin{cases}\frac{2q+(-1)^{f_{K}}}{3}&\text{if } \lambda\not\in\mathfrak{p}_{K}^{n+1},\\ \frac{q+1+(-1)^{f_{K}}}{2+(-1)^{f_{K}}}&\text{if }\lambda\in\mathfrak{p}_{K}^{n+1}. \end{cases}\]
Proof.: It is easy to see that for \(c,c^{\prime}\in(\mathcal{O}_{K}/\mathfrak{p}_{K})^{\times}\), we have \(\alpha(c)=\alpha(c^{\prime})\) if and only if
\[(c-c^{\prime})\Big{(}(c^{\prime})^{2}+cc^{\prime}+\frac{\lambda}{\mu}+c^{2} \Big{)}\equiv 0\pmod{\mathfrak{p}_{K}}.\]
The first item then follows from Lemma 2.11. For the second item, suppose first that \(\lambda\not\in\mathfrak{p}_{K}^{n+1}\). Then there is some \(c\in(\mathcal{O}_{K}/\mathfrak{p}_{K})^{\times}\) with \(\alpha(c)=0\), so
\[\#\operatorname{im}\alpha =\sum_{c\in(\mathcal{O}_{K}/\mathfrak{p}_{K})^{\times}}\frac{1} {\#\{c^{\prime}\in(\mathcal{O}_{K}/\mathfrak{p}_{K})^{\times}:\alpha(c^{\prime })=\alpha(c)\}}\] \[=1+(q-2-a)+\frac{a}{3},\]
where
\[a=\#\{c\in(\mathcal{O}_{K}/\mathfrak{p}_{K})^{\times}:c^{2}\neq\frac{\lambda} {\mu}\text{ and }\operatorname{Tr}_{\mathbb{F}_{K}/\mathbb{F}_{2}}\Big{(}\frac{\lambda}{c^{2} \mu}\Big{)}\equiv f_{K}\pmod{2}\}.\]
Since \(\lambda\not\in\mathfrak{p}_{K}^{n+1}\), the map
\[(\mathcal{O}_{K}/\mathfrak{p}_{K})^{\times}\to(\mathcal{O}_{K}/\mathfrak{p}_{K})^{\times},\quad c\mapsto\frac{\lambda}{c^{2}\mu}\]
is a bijection, so
\[a =\#\{u\in(\mathcal{O}_{K}/\mathfrak{p}_{K})^{\times}:\operatorname{ Tr}_{\mathbb{F}_{K}/\mathbb{F}_{2}}(u)\equiv f_{K}\pmod{2}\}\] \[=\frac{1}{2}(q-3-(-1)^{f_{K}}),\]
and the result follows. Now suppose that \(\lambda\in\mathfrak{p}_{K}^{n+1}\). Then \(\alpha(c)=0\) if and only if \(c=0\), so
\[\operatorname{im}\alpha=1+\sum_{c\in(\mathcal{O}_{K}/\mathfrak{p}_{K})^{\times }}\frac{1}{\#\{c^{\prime}\in(\mathcal{O}_{K}/\mathfrak{p}_{K})^{\times}: \alpha(c^{\prime})=\alpha(c)\}}.\]
For \(c\in(\mathcal{O}_{K}/\mathfrak{p}_{K})^{\times}\), we have \(\frac{\lambda}{c^{2}\mu}=0\), so
\[\#\{c^{\prime}\in(\mathcal{O}_{K}/\mathfrak{p}_{K})^{\times}:\alpha(c^{\prime })=\alpha(c)\}=2+(-1)^{f_{K}},\]
and the result follows.
**Lemma 2.13**.: _Let \(4\leq m\leq 6e_{K}+2\) be a multiple of \(6\). Let \(S\) be the set of triples \((x_{0},x_{1},x_{2})\in\mathcal{O}_{K}^{3}\) such that the following two conditions hold:_
1. \(v_{K}(x_{0})=1,\quad v_{K}(x_{1})=\lfloor\frac{m}{4}\rfloor,\quad v_{K}(x_{2} )\geq\frac{m}{6}\)
_._
2. _There is some_ \(u\in\mathcal{R}\) _such that_ \(x_{1}+ux_{2}x_{0}^{\lfloor\frac{m}{12}\rfloor}+u^{3}x_{0}^{\lfloor\frac{m}{4} \rfloor}\equiv 0\pmod{\mathfrak{p}_{K}^{\lfloor\frac{m}{4}\rfloor+1}}).\)__
_Then \(\mu(S)=\frac{1}{3}q^{-\lfloor\frac{m}{4}\rfloor-\frac{m}{6}-4}(q-1)^{2}(2q-1)\)._
Proof.: Suppose that, for \(x_{i}\) and \(x_{i}^{\prime}\) in \(\mathcal{O}_{K}\) satisfying the conditions of the lemma, we have \(x_{i}\equiv x_{i}^{\prime}\pmod{\mathfrak{p}_{K}^{\lfloor\frac{m}{4}\rfloor+ 1}}\) for \(i=1,2,3\). Then \((x_{0},x_{1},x_{2})\in S\) if and only if \((x_{0}^{\prime},x_{1}^{\prime},x_{2}^{\prime})\in S\), so
\[\mu(S)=\frac{\#\overline{S}}{q^{3\lfloor\frac{m}{4}\rfloor+3}},\]
where \(\overline{S}\) is the set of triples
\[(\bar{x}_{0},\bar{x}_{1},\bar{x}_{2})\in\left((\mathfrak{p}_{K}/\mathfrak{p} _{K}^{\lfloor\frac{m}{4}\rfloor+1})\setminus(\mathfrak{p}_{K}^{2}/\mathfrak{p }_{K}^{\lfloor\frac{m}{4}\rfloor+1})\right)\times\left((\mathfrak{p}_{K}^{ \lfloor\frac{m}{4}\rfloor}/\mathfrak{p}_{K}^{\lfloor\frac{m}{4}\rfloor+1}) \setminus\{0\}\right)\times(\mathfrak{p}_{K}^{\frac{m}{6}}/\mathfrak{p}_{K}^{ \lfloor\frac{m}{4}\rfloor+1})\]
such that there is some \(u\in\mathcal{R}\) with
\[\bar{x}_{1}+u\bar{x}_{2}\bar{x}_{0}^{\lfloor\frac{m}{12}\rfloor}+u^{3}\bar{x}_{0}^{\lfloor\frac{m}{4}\rfloor}=0.\]
For each \(\bar{x}_{0}\in(\mathfrak{p}_{K}/\mathfrak{p}_{K}^{\lfloor\frac{m}{4}\rfloor+ 1})\setminus(\mathfrak{p}_{K}^{2}/\mathfrak{p}_{K}^{\lfloor\frac{m}{4}\rfloor+ 1})\) and \(\bar{x}_{2}\in\mathfrak{p}_{K}^{\frac{m}{6}}/\mathfrak{p}_{K}^{\lfloor\frac{m} {4}\rfloor+1}\), define the map
\[\alpha_{\bar{x}_{0},\bar{x}_{2}}:\mathcal{O}_{K}/\mathfrak{p}_{K}\to \mathcal{O}_{K}/\mathfrak{p}_{K}^{\lfloor\frac{m}{4}\rfloor+1},\quad u\mapsto -u\bar{x}_{2}\bar{x}_{0}^{\lfloor\frac{m}{12}\rfloor}-u^{3}\bar{x}_{0}^{ \lfloor\frac{m}{4}\rfloor}.\]
Then
\[\overline{S}=\bigsqcup_{\begin{subarray}{c}\bar{x}_{0}\in(\mathfrak{p}_{K}/ \mathfrak{p}_{K}^{\lfloor\frac{m}{4}\rfloor+1})\setminus(\mathfrak{p}_{K}^{2} /\mathfrak{p}_{K}^{\lfloor\frac{m}{4}\rfloor+1})\\ \bar{x}_{2}\in\mathfrak{p}_{K}^{\frac{m}{6}}/\mathfrak{p}_{K}^{\lfloor\frac{ m}{4}\rfloor+1}\end{subarray}}\{\bar{x}_{0}\}\times\Big{(}\operatorname{im} \alpha_{\bar{x}_{0},\bar{x}_{2}}\setminus\{0\}\Big{)}\times\{\bar{x}_{2}\},\]
so
\[\#\overline{S}=\sum_{\begin{subarray}{c}\bar{x}_{0}\in(\mathfrak{p}_{K}/ \mathfrak{p}_{K}^{\lfloor\frac{m}{4}\rfloor+1})\setminus(\mathfrak{p}_{K}^{2} /\mathfrak{p}_{K}^{\lfloor\frac{m}{4}\rfloor+1})\\ \bar{x}_{2}\in\mathfrak{p}_{K}^{\frac{m}{6}}/\mathfrak{p}_{K}^{\lfloor\frac{ m}{4}\rfloor+1}\end{subarray}}(\#\operatorname{im}\alpha_{\bar{x}_{0},\bar{x}_{2}}-1),\]
since \(\alpha_{\bar{x}_{0},\bar{x}_{2}}(0)=0\) so we always have \(0\in\operatorname{im}\alpha_{\bar{x}_{0},\bar{x}_{2}}\). Lemma 2.12 tells us that
\[\#\operatorname{im}\alpha_{\bar{x}_{0},\bar{x}_{2}}=\begin{cases}\frac{2q+(- 1)^{f_{K}}}{3}&\text{if }\bar{x}_{2}\not\in\mathfrak{p}_{K}^{\frac{m}{6}+1}/ \mathfrak{p}_{K}^{\lfloor\frac{m}{4}\rfloor+1},\\ \frac{q+1+(-1)^{f_{K}}}{2+(-1)^{f_{K}}}&\text{if }\bar{x}_{2}\in\mathfrak{p}_{K}^{ \frac{m}{6}+1}/\mathfrak{p}_{K}^{\lfloor\frac{m}{4}\rfloor+1}.\end{cases}\]
It follows that
\[\#\overline{S}=\frac{1}{3}q^{2\lfloor\frac{m}{4}\rfloor-\frac{m}{6}-1}(q-1)^{ 2}(2q-1),\]
so
\[\mu(S)=\frac{1}{3}q^{-\lfloor\frac{m}{4}\rfloor-\frac{m}{6}-4}(q-1)^{2}(2q-1).\]
**Corollary 2.14**.: _Let \(4\leq m\leq 6e_{K}+2\) be a multiple of \(6\). Then_
\[\mu(T_{m}\setminus P_{m}^{\{1\}})=\frac{1}{3}q^{-\frac{m}{2}-\frac{m}{6}-4}(q-1 )^{2}(2q-1).\]
Proof.: Suppose first that \(4\mid m\). Setting \(x_{i}:=a_{i}\) for \(i=0,1,2\), Corollary 2.9 tells us that \(T_{m}\setminus P_{m}^{\{1\}}\) is the set \(S\) from Lemma 2.13, together with the added congruence condition that
\(v_{K}(a_{3})\geq\lceil\frac{m}{4}\rceil\), so
\[\mu(T_{m}\setminus P_{m}^{\{1\}})=\mu(S)\cdot q^{-\lceil\frac{m}{4}\rceil}=\frac{ 1}{3}q^{-\frac{m}{2}-\frac{m}{6}-4}(q-1)^{2}(2q-1).\]
If \(4\nmid m\), then set
\[(x_{0},x_{1},x_{2}):=(a_{0},a_{3},a_{2}),\]
and proceed similarly.
**Corollary 2.15**.: _Let \(4\leq m\leq 6e_{K}+2\) be an even integer. Then_
\[\mu(P_{m}^{\{1\}})=q^{-\frac{m}{2}-\lceil\frac{m}{6}\rceil-3}(q-1)^{2}\cdot \Big{(}1+\mathbb{1}_{6|m}\cdot\big{(}\frac{1-2q}{3q}\big{)}\Big{)}.\]
Proof.: This is immediate from Corollary 2.14 and Lemma 2.10.
Proof of Theorem 1.1.: This is immediate from Lemma 2.1 and Corollary 2.15.
**Corollary 2.16**.: _We have_
\[\widetilde{m}(\Sigma^{\{1\}})=\frac{1}{3}(q-1)\cdot\frac{q^{4e_{K}}-1}{q^{4}- 1}q^{-4e_{K}-3}\Big{(}3q^{3}+q^{2}+q+3\Big{)}.\]
Proof.: Theorem 1.1 tells us that
\[\widetilde{m}(\Sigma^{\{1\}})=\sum_{\begin{subarray}{c}4\leq m\leq 6e_{K}+2\\ m\text{ even}\end{subarray}}q^{-\frac{m}{2}-\lceil\frac{m}{6}\rceil-1}(q-1) \Big{(}1+\mathbb{1}_{6|m}\cdot\big{(}\frac{1-2q}{3q}\big{)}\Big{)}.\]
Setting \(m=2k\), this is the same as
\[\sum_{k=2}^{3e_{K}+1}q^{-k-\lceil\frac{k}{3}\rceil-1}(q-1)\Big{(}1+\mathbb{1} _{3|k}\big{(}\frac{1-2q}{3q}\big{)}\Big{)},\]
which is equal to
\[(q-1)\sum_{l=1}^{e_{K}}\Big{(}q^{-4l}+\frac{1}{3}q^{-4l-2}(q+1)+q^{-4l-3}\Big{)} =(q-1)\sum_{l=1}^{e_{K}}\Big{(}q^{-4l-3}+\frac{1}{3}q^{-4l-2}+\frac{1}{3}q^{-4 l-1}+q^{-4l}\Big{)}.\]
It is easy to see that
\[\sum_{l=1}^{e_{K}}q^{-4l}=q^{-4e_{K}}\cdot\frac{q^{4e_{K}}-1}{q^{4}-1},\]
so the quantity we are computing equals
\[(q-1)\cdot\frac{q^{4e_{K}}-1}{q^{4}-1}\cdot\Big{(}q^{-4e_{K}-3}+\frac{1}{3}q^ {-4e_{K}-2}+\frac{1}{3}q^{-4e_{K}-1}+q^{-4e_{K}}\Big{)},\]
and this is equal to
\[\frac{1}{3}(q-1)\cdot\frac{q^{4e_{K}}-1}{q^{4}-1}q^{-4e_{K}-3}\Big{(}3q^{3}+q ^{2}+q+3\Big{)}.\]
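As an illustrative numerical check, which is again not part of the proof, one can verify for small values of \(q\) and \(e_{K}\) that summing \(\#\Sigma_{m}^{\{1\}}/q^{m}\) over even \(m\) with \(4\leq m\leq 6e_{K}+2\), using Theorem 1.1, reproduces the closed form above. The following Python sketch does this with exact rational arithmetic.

```python
from fractions import Fraction


def count_trivial_aut(q, e_K, m):
    """#Sigma_m^{1} as in Theorem 1.1."""
    if m % 2 or not (4 <= m <= 6 * e_K + 2):
        return Fraction(0)
    correction = Fraction(1 - 2 * q, 3 * q) if m % 6 == 0 else Fraction(0)
    return Fraction(q) ** (m // 3 - 1) * (q - 1) * (1 + correction)


def premass_sum(q, e_K):
    """Sum of #Sigma_m^{1} / q^m over even m in [4, 6*e_K + 2]."""
    return sum(count_trivial_aut(q, e_K, m) / Fraction(q) ** m
               for m in range(4, 6 * e_K + 3, 2))


def premass_closed_form(q, e_K):
    """The closed form of Corollary 2.16."""
    return (Fraction(q - 1, 3) * Fraction(q ** (4 * e_K) - 1, q ** 4 - 1)
            * Fraction(1, q ** (4 * e_K + 3)) * (3 * q ** 3 + q ** 2 + q + 3))


for q in (2, 4, 8):                 # q is a power of 2 for a 2-adic base field
    for e_K in (1, 2, 3):
        assert premass_sum(q, e_K) == premass_closed_form(q, e_K)
print("Corollary 2.16 matches the sum over Theorem 1.1 for these parameters.")
```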
## 3. The case \(G=C_{2}\times C_{2}\)
**Lemma 3.1**.: _Suppose that \(\Sigma_{m}^{C_{2}\times C_{2}}\) is nonempty. Then \(m\) is an even integer and \(6\leq m\leq 6e_{K}+2\)._
Proof.: Let \(L\in\Sigma_{m}^{C_{2}\times C_{2}}\) and let
\[\chi_{i}:K^{\times}\to C_{2},\quad i=1,2,3\]
be the quadratic characters associated to the three quadratic subfields of \(L\). For each \(i\), write \(c_{i}\) for the conductor of \(\chi_{i}\). Without loss of generality, we may assume that \(c_{1}\leq c_{2}=c_{3}\). By the conductor-discriminant formula, we have
\[m=c_{1}+2c_{2}.\]
Suppose for contradiction that \(c_{1}\) is odd. Then [12, Lemma 4.3] tells us that \(c_{1}=2e_{K}+1\), so also \(c_{2}=2e_{K}+1\). By considering the short exact sequence
\[1\to\{\pm 1\}\to U_{K}^{(e_{K})}/U_{K}^{(e_{K}+1)}\stackrel{{[u] \to[u^{2}]}}{{\to}}U_{K}^{(2e_{K})}/U_{K}^{(2e_{K}+1)}\to 1,\]
it is easy to see that
\[K^{\times 2}U_{K}^{(2e_{K})}/K^{\times 2}\cong C_{2},\]
so \(c_{1}=c_{2}=2e_{K}+1\) implies that \(c_{3}\leq 2e_{K}\), which is a contradiction. Therefore \(c_{1}\) is even, so \(m\) is also even. By [12, Lemma 4.3], we have \(2\leq c_{1}\leq 2e_{K}\) and \(2\leq c_{i}\leq 2e_{K}+1\) for \(i=2,3\), so
\[6\leq m\leq 6e_{K}+2.\]
**Lemma 3.2**.: _[_12_, Lemma 4.7]_ _Let \(m\) be a positive even integer with \(2\leq m\leq 6e_{K}+2\). Then_
\[\#\Sigma_{m}^{C_{2}\times C_{2}}=2(q-1)q^{\frac{m-4}{2}}\Big{(}q^{-\lfloor\frac {m}{6}\rfloor}(1+\mathbb{1}_{3|m}\cdot\frac{q-2}{3})-\mathbb{1}_{m\leq 4e_{K}+2} \cdot q^{-\lfloor\frac{m-2}{4}\rfloor}\Big{)}\]
**Remark 3.3**.: Tunnell omits the statement that he only counts totally ramified extensions. This fact can be seen from the second paragraph of his proof, where he writes \(d>\frac{c-1}{3}\). The fact that this inequality is strict means that all intermediate quadratic fields have positive discriminant, so the \(C_{2}\times C_{2}\)-extension is totally ramified.
**Corollary 3.4**.: _We have_
\[\widetilde{m}(\Sigma^{C_{2}\times C_{2}})=\frac{q-1}{6}\cdot\Big{(}q^{-4e_{K} -3}\cdot\frac{q^{4e_{K}}-1}{q^{4}-1}\cdot(3q^{3}+q^{2}+q+3)-3q^{-3e_{K}-3}\cdot \frac{q^{3e_{K}}-1}{q^{3}-1}\cdot(q^{2}+1)\Big{)}.\]
Proof.: By Lemmas 3.1 and 3.2, we have
\[\widetilde{m}(\Sigma^{C_{2}\times C_{2}})=\sum_{\begin{subarray}{c}4\leq m\leq 6e_{K}+2\\ m\text{ even}\end{subarray}}\frac{\#\Sigma_{m}^{C_{2}\times C_{2}}}{4q^{m}}\] \[=\frac{1}{2}(q-1)\cdot\Big{(}\sum_{\begin{subarray}{c}4\leq m\leq 6e_{K}+2\\ m\text{ even}\end{subarray}}q^{-\frac{m+4}{2}-\lfloor\frac{m}{6}\rfloor}(1+\mathbb{1}_{3|m}\cdot\frac{q-2}{3})-\sum_{\begin{subarray}{c}4\leq m\leq 4e_{K}+2\\ m\text{ even}\end{subarray}}q^{-\frac{m+4}{2}-\lfloor\frac{m-2}{4}\rfloor}\Big{)}.\]
We have
\[\sum_{\begin{subarray}{c}4\leq m\leq 6e_{K}+2\\ m\text{ even}\end{subarray}}q^{-\frac{m+4}{2}-\lfloor\frac{m}{6}\rfloor}(1+ \mathbb{1}_{3|m}\cdot\frac{q-2}{3}) =\sum_{k=2}^{3e_{K}+1}q^{-k-2-\lfloor\frac{k}{3}\rfloor}(1+ \mathbb{1}_{3|k}\cdot\frac{q-2}{3})\] \[=\sum_{l=1}^{e_{K}}\Big{(}q^{-4l}+q^{-4l-2}\cdot\frac{q+1}{3}+q^{ -4l-3}\Big{)}.\]
It is easy to see that
\[\sum_{l=1}^{e_{K}}q^{-4l}=q^{-4e_{K}}\cdot\frac{q^{4e_{K}}-1}{q^{4}-1},\]
which means that
\[\sum_{\begin{subarray}{c}4\leq m\leq 6e_{K}+2\\ m\text{ even}\end{subarray}}q^{-\frac{m+4}{2}-\lfloor\frac{m}{6}\rfloor}(1+ \mathbb{1}_{3|m}\cdot\frac{q-2}{3})=\frac{1}{3}\cdot q^{-4e_{K}-3}\cdot\frac{ q^{4e_{K}}-1}{q^{4}-1}\cdot(3q^{3}+q^{2}+q+3)\]
Similarly, we obtain
\[\sum_{\begin{subarray}{c}4\leq m\leq 4e_{K}+2\\ m\text{ even}\end{subarray}}q^{-\frac{m+4}{2}-\lfloor\frac{m-2}{4}\rfloor}=q ^{-3e_{K}-3}\cdot\frac{q^{3e_{K}}-1}{q^{3}-1}\cdot(q^{2}+1),\]
and the result follows.
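Since \(\widetilde{m}(\Sigma^{C_{2}\times C_{2}})=\sum_{m}\#\Sigma_{m}^{C_{2}\times C_{2}}/(4q^{m})\), the closed form just obtained can also be tested directly against Lemma 3.2. The short Python check below (an illustrative sanity check, not part of the proof; the helper names are ad hoc) does this with exact rational arithmetic.

```python
from fractions import Fraction

def count_V4(q, eK, m):
    # Lemma 3.2: number of totally ramified C2 x C2 extensions with v_K(d) = m (m even).
    term = Fraction(1, q ** (m // 6))
    if m % 3 == 0:
        term *= 1 + Fraction(q - 2, 3)
    if m <= 4 * eK + 2:
        term -= Fraction(1, q ** ((m - 2) // 4))
    return 2 * (q - 1) * q ** ((m - 4) // 2) * term

def premass_V4_direct(q, eK):
    return sum(count_V4(q, eK, m) / (4 * q ** m) for m in range(4, 6 * eK + 3, 2))

def premass_V4_closed(q, eK):
    # Corollary 3.4.
    a = Fraction(q ** (4 * eK) - 1, (q ** 4 - 1) * q ** (4 * eK + 3)) * (3 * q ** 3 + q ** 2 + q + 3)
    b = 3 * Fraction(q ** (3 * eK) - 1, (q ** 3 - 1) * q ** (3 * eK + 3)) * (q ** 2 + 1)
    return Fraction(q - 1, 6) * (a - b)

for q in (2, 4, 8):
    for eK in (1, 2, 3, 4):
        assert premass_V4_direct(q, eK) == premass_V4_closed(q, eK)
```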
## 4. The case \(G=C_{4}\)
### Sketch of our approach
Let \(L/K\) be a \(C_{4}\)-extension and let \(E\) be its unique nontrivial intermediate field. For positive integers \(m_{1}\) and \(m_{2}\), write \(\Sigma_{m_{1},m_{2}}\) for the set of totally ramified \(C_{4}\)-extensions \(L/K\) such that \(v_{K}(d_{E/K})=m_{1}\) and \(v_{E}(d_{L/E})=m_{2}\). Write
\[N(m_{1},m_{2}):=\#\Sigma_{m_{1},m_{2}}.\]
Call a quadratic extension \(E/K\) _extendable_ if there is some extension \(L/E\) such that \(L/K\) is a \(C_{4}\)-extension. For any real number \(m_{1}\), write \(N_{\text{ext}}(m_{1})\) for the number of extendable extensions \(E/K\) with \(v_{K}(d_{E/K})=m_{1}\). Note that \(N_{\text{ext}}(m_{1})=0\) if \(m_{1}\) is not a nonnegative integer.
Let \(t_{0}\) be the smallest nonnegative integer \(t\) such that
\[N_{K/\mathbb{Q}_{2}}(U_{K}^{(2t)})\subseteq 1+4\mathbb{Z}_{2}.\]
This is an invariant of the field \(K\), and it appears in many of the results in this section. In the current subsection, we state the main results, whose proofs are postponed to the later subsections.
**Lemma 4.1**.: _If \(E/K\) is a totally ramified extendable extension, then \(2\leq v_{K}(d_{E/K})\leq 2e_{K}+1\) and \(v_{K}(d_{E/K})\) is either even or equal to \(2e_{K}+1\). For even \(m_{1}\) with \(2\leq m_{1}\leq 2e_{K}\), we have_
\[N_{\rm ext}(m_{1})=(1+\mathbb{1}_{m_{1}\leq 2e_{K}-2t_{0}})q^{\frac{m_{1}}{2}-1} (q-1-\mathbb{1}_{m_{1}=2e_{K}-2t_{0}+2}).\]
_For \(m_{1}=2e_{K}+1\), we have_
\[N_{\rm ext}(2e_{K}+1)=\begin{cases}2q^{e_{K}}&\text{if $-1\in K^{\times 2}$},\\ q^{e_{K}}&\text{if $K(\sqrt{-1})/K$ is quadratic and totally ramified},\\ 0&\text{if $K(\sqrt{-1})/K$ is quadratic and unramified}.\end{cases}\]
Let \(E/K\) be a totally ramified extendable extension with \(v_{K}(d_{E/K})=m_{1}\). For each positive integer \(m_{2}\), write \(N_{E}(m_{2})\) for the number of totally ramified quadratic extensions \(L/E\) such that \(v_{E}(d_{L/E})=m_{2}\) and \(L/K\) is a \(C_{4}\)-extension.
**Lemma 4.2**.: _Let \(E\) be a totally ramified extendable extension and let \(m_{1}=v_{K}(d_{E/K})\). If \(2\leq m_{1}\leq e_{K}\), then_
\[N_{E}(m_{2})=\begin{cases}q^{m_{1}-1}&\text{if $m_{2}=3m_{1}-2$},\\ q^{\lfloor\frac{m_{1}+m_{2}}{4}\rfloor}-q^{\lfloor\frac{m_{1}+m_{2}-2}{4} \rfloor}&\text{if $3m_{1}\leq m_{2}\leq 4e_{K}-m_{1}$ and $m_{2}$ is even},\\ q^{e_{K}}&\text{if $m_{2}=4e_{K}-m_{1}+2$},\\ 0&\text{otherwise}.\end{cases}\]
_If \(m_{1}>e_{K}\), then_
\[N_{E}(m_{2})=\begin{cases}2q^{e_{K}}&\text{if $m_{2}=m_{1}+2e_{K}$},\\ 0&\text{otherwise}.\end{cases}\]
**Corollary 4.3**.: _If \(\Sigma_{m}^{C_{4}}\) is nonempty, then either \(m=8e_{K}+3\) or \(m\) is an even integer with \(8\leq m\leq 8e_{K}\). For any even integer \(m\), the number \(\#\Sigma_{m}^{C_{4}}\) is the sum of the following four quantities:_
1. \(\mathbb{1}_{8\leq m\leq 5e_{K}-2}\cdot q^{\frac{m-3}{5}}N_{\rm ext}(\frac{m+2} {5})\)_._
2. \[\sum_{\begin{subarray}{c}\max\{2,m-4e_{K}\}\leq m_{1}\leq\min\{\frac{m}{5},e_{ K}\}\\ m_{1}\equiv m\pmod{4}\end{subarray}}q^{\frac{m-m_{1}}{4}-1}(q-1)N_{\rm ext}(m_{1}).\]
3. \(\mathbb{1}_{4e_{K}+4\leq m\leq 5e_{K}+2}\cdot q^{e_{K}}N_{\rm ext}(m-4e_{K}-2)\)_._
4. \(\mathbb{1}_{5e_{K}+3\leq m\leq 8e_{K}}\cdot 2q^{e_{K}}N_{\rm ext}(\frac{m-2e_{K}} {3})\)_._
_Moreover,_
\[\#\Sigma_{8e_{K}+3}^{C_{4}}=\begin{cases}4q^{2e_{K}}&\text{if $-1\in K^{\times 2}$},\\ 2q^{2e_{K}}&\text{if $K(\sqrt{-1})/K$ is quadratic and totally ramified},\\ 0&\text{if $K(\sqrt{-1})/K$ is quadratic and unramified}.\end{cases}\]
**Corollary 4.4**.: _The pre-mass \(\widetilde{m}(\Sigma^{C_{4}})\) is the sum of the following nine quantities:_
1. \[\frac{1}{2}\cdot\frac{(q-1)(1-q^{-7\lfloor\frac{e_{K}}{2}\rfloor})}{q^{7}-1}.\]
2. \[\frac{1}{2}\cdot q^{-3e_{K}-3}(1-q^{-\lfloor\frac{e_{K}}{2}\rfloor}).\]
3. \[\mathbbm{1}_{2t_{0}<e_{K}}\cdot\frac{(q-1)(q^{-5\lfloor\frac{e_{K}}{2} \rfloor-e_{K}-1}-q^{5t_{0}-6e_{K}-1})}{q^{5}-1}.\]
4. \[\frac{1}{2}\cdot\mathbbm{1}_{t_{0}\geq 1}\cdot q^{-6e_{K}+5t_{0}-6}(q-2).\]
5. \[\frac{1}{2}\cdot\mathbbm{1}_{t_{0}\geq 2}\cdot\frac{(q-1)(q^{5t_{0}-6e_{K}-6}-q ^{-6e_{K}-1})}{q^{5}-1}.\]
6. \[\mathbbm{1}_{e_{K}\geq 2}\cdot\frac{1}{2}(q-1)q^{-7\lfloor\frac{e_{K}}{2}\rfloor-1}\Big{(}\frac{q(q^{7\lfloor\frac{e_{K}}{2}\rfloor-7}-1)(q^{6}+q^{4}+q^{3}+q+1)}{q^{7}-1}+1+\mathbbm{1}_{2\nmid e_{K}}(q^{-2}+q^{-3})\Big{)}.\]
7. \[-\mathbbm{1}_{e_{K}\geq 2}\cdot\frac{1}{2}\cdot\frac{(q-1)(q+1)(q^{-7}-q^{-3e_ {K}-1})}{q^{3}-1}.\]
8. \[-\frac{1}{2}q^{-3e_{K}-2}(1-q^{-\lfloor\frac{e_{K}}{2}\rfloor}).\]
9. \[\begin{cases}q^{-6e_{K}-3}&\text{if $-1\in K^{\times 2}$},\\ \frac{1}{2}q^{-6e_{K}-3}&\text{if $K(\sqrt{-1})/K$ is quadratic and totally ramified},\\ 0&\text{otherwise}.\end{cases}\]
### Counting extendable extensions
The aim of this subsection is to prove Lemma 4.1. The paper [1] gives conditions on \(d\in K^{\times}\) for the extension \(K(\sqrt{d})/K\) to be extendable. We use these conditions and adapt the methods of [1] to parametrise and count extendable extensions.
**Lemma 4.5** (Hecke's Theorem).: _Let \(E\) be a \(2\)-adic field, let \(\alpha\in E^{\times}\setminus E^{\times 2}\), and let \(L=E(\sqrt{\alpha})\). If \(v_{E}(\alpha)=1\), then \(v_{E}(d_{L/E})=2v_{E}(2)+1\). If \(v_{E}(\alpha)=0\), then \(L/E\) is totally ramified
_if and only if \(\alpha\equiv x^{2}\pmod{\mathfrak{p}_{E}^{2v_{E}(2)}}\) has no solution \(x\in E\). In that case, we have_
\[v_{E}(d_{L/E})=2v_{E}(2)+1-\kappa_{E,\alpha},\]
_where_
\[\kappa_{E,\alpha}=\max\{0\leq l<2v_{E}(2):\alpha\equiv x^{2}\pmod{\mathfrak{ p}_{E}^{l}}\text{ has a solution in }E\}.\]
Proof.: This is the special case \(p=2\) of [1, Theorem 2.4].
**Corollary 4.6**.: _Let \(E,\alpha\), and \(L\) be as in Lemma 4.5, and assume that \(v_{E}(\alpha)=0\). Let \(t\) be an integer with \(0\leq t\leq v_{E}(2)\). Then_
\[v_{E}(d_{L/E})\leq 2v_{E}(2)-2t\]
_if and only if there is some \(x\in E^{\times}\) with \(\alpha\equiv x^{2}\pmod{\mathfrak{p}_{E}^{2t}}\)._
Proof.: This follows from Lemma 4.5, along with the fact that for \(0\leq t<v_{E}(2)\), if \(\alpha\) is square modulo \(\mathfrak{p}_{E}^{2t}\) then it is also square modulo \(\mathfrak{p}_{E}^{2t+1}\).
**Lemma 4.7**.: _Let \(E=K(\sqrt{d})\) for \(d\in K^{\times}\setminus K^{\times 2}\) and let \(L=E(\sqrt{\alpha})\) for \(\alpha\in E^{\times}\setminus E^{\times 2}\). Then \(L/K\) is a \(C_{4}\)-extension if and only if \(N_{E/K}(\alpha)\in dK^{\times 2}\)._
Proof.: Write \(\alpha=a+b\sqrt{d}\) for \(a,b\in K\) and let \(\theta=\sqrt{\alpha}\). Let \(m(X)\) be the minimal polynomial of \(\theta\) over \(K\). Let \(N\) be a splitting field of \(m(X)\) over \(L\). The polynomial \(m(X)\) has roots \(\pm\theta,\pm\varphi\) for some element \(\varphi\in N\). Let \(\lambda:=\frac{\theta}{\varphi}-\frac{\varphi}{\theta}\).
We claim that \(L/K\) is a \(C_{4}\)-extension if and only if \(\lambda\in K\). Suppose that \(L/K\) is a \(C_{4}\)-extension. Then \(\theta,\varphi\in L\), so there is a generator \(\sigma\in\operatorname{Gal}(L/K)\) such that \(\sigma(\theta)=\varphi\). It follows that \(\sigma(\lambda)=\lambda\), so \(\lambda\in K\). Suppose conversely that \(\lambda\in K\). There is some element \(\sigma\in\operatorname{Gal}(N/K)\) such that \(\sigma(\theta)=\varphi\). It is easy to see that \(\sigma^{2}(\theta)=\varepsilon\theta\) for some \(\varepsilon\in\{\pm 1\}\). Since \(\lambda\in K\), we have \(\varepsilon=-1\), so \(\sigma\) has order \(4\). Clearly \(\theta^{2}+\varphi^{2}=2a\), so
\[\lambda=\frac{2\theta^{2}-2a}{\theta\varphi},\]
which means that
\[\varphi=\frac{2\theta^{2}-2a}{\theta\lambda}\in L,\]
so \(L/K\) is Galois and hence \(C_{4}\) with Galois group \(\langle\sigma\rangle\). Finally,
\[\lambda^{2}=\frac{4b^{2}d}{N_{E/K}(\alpha)},\]
and the result follows.
**Corollary 4.8**.: _For \(d\in K^{\times}\setminus K^{\times 2}\), the following are equivalent:_
1. _The extension_ \(K(\sqrt{d})/K\) _is extendable._
2. _The element_ \(d\) _is a sum of two squares in_ \(K\)_._
3. _The element_ \(d\) _is in the norm group of the extension_ \(K(\sqrt{-1})/K\)
Proof.: The equivalence of (1) and (2) follows from Lemma 4.7. If \(-1\in K^{\times 2}\), then (2) and (3) are equivalent because every element of \(K\) can be written as a sum of two squares, due to the identity
\[d=\Big{(}\frac{d+1}{2}\Big{)}^{2}+\Big{(}\frac{d-1}{2\sqrt{-1}}\Big{)}^{2}.\]
If \(-1\not\in K^{\times 2}\), then the equivalence of (2) and (3) is trivial.
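Both algebraic identities used above — the expression for \(\lambda^{2}\) in the proof of Lemma 4.7 and the sum-of-two-squares identity in the proof of Corollary 4.8 — are easy to confirm symbolically. The following Python/SymPy sketch (a sanity check for the reader, not part of the argument) does so.

```python
import sympy as sp

a, b, d = sp.symbols('a b d', positive=True)

# Lemma 4.7: with theta^2 = a + b*sqrt(d) and varphi^2 = a - b*sqrt(d),
# lambda = (theta^2 - varphi^2)/(theta*varphi) satisfies lambda^2 = 4 b^2 d / N_{E/K}(alpha).
theta2 = a + b * sp.sqrt(d)
varphi2 = a - b * sp.sqrt(d)
lam2 = (theta2 - varphi2) ** 2 / (theta2 * varphi2)
norm = a ** 2 - b ** 2 * d
assert sp.simplify(lam2 - 4 * b ** 2 * d / norm) == 0

# Corollary 4.8: d = ((d+1)/2)^2 + ((d-1)/(2*sqrt(-1)))^2.
assert sp.simplify(((d + 1) / 2) ** 2 + ((d - 1) / (2 * sp.I)) ** 2 - d) == 0
```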
For a nonnegative integer \(m_{1}\), write \(\Sigma^{\mathrm{ext}}_{m_{1}}\) (respectively \(\Sigma^{\mathrm{ext}}_{\leq m_{1}}\)) for the set of extendable extensions \(E/K\) such that \(v_{K}(d_{E/K})=m_{1}\) (respectively \(v_{K}(d_{E/K})\leq m_{1}\)). For \(0\leq t\leq e_{K}\), write \(S^{q}_{t}\) for the subgroup consisting of \([u]\in\mathcal{O}^{\times}_{K}/\mathcal{O}^{\times 2}_{K}\) such that the following two conditions hold:
1. \(u\) is a sum of two squares in \(K\).
2. \(u\equiv x^{2}\pmod{\mathfrak{p}^{2t}_{K}}\) for some \(x\in\mathcal{O}^{\times}_{K}\).
**Lemma 4.9**.: _Let \(0\leq t\leq e_{K}\) and let \(d_{0}\in S^{q}_{t}\). The map_
\[S^{q}_{t}\to\Sigma^{\mathrm{ext}}_{\leq 2e_{K}-2t}\cup\{K\},\quad u\mapsto K( \sqrt{d_{0}u})\]
_is a bijection._
Proof.: Of course, the map \(u\mapsto K(\sqrt{d_{0}u})\) is a bijection between \(K^{\times}/K^{\times 2}\) and the set of extensions of \(K\) of degree at most \(2\). Corollary 4.8 tells us that the extension \(K(\sqrt{d_{0}u})/K\) is extendable for all \(u\in S^{q}_{t}\setminus\{d_{0}\}\). Moreover, for \(u\in S^{q}_{t}\setminus\{d_{0}\}\), Corollary 4.6 tells us that \(K(\sqrt{d_{0}u})\in\Sigma^{\mathrm{ext}}_{\leq 2e_{K}-2t}\) if and only if \(d_{0}u\in S^{q}_{t}\). Each \(S^{q}_{t}\) is a subgroup of \(K^{\times}/K^{\times 2}\), so \(d_{0}u\in S^{q}_{t}\) if and only if \(u\in S^{q}_{t}\). Therefore, for \(u\in K^{\times}/K^{\times 2}\), we have \(K(\sqrt{d_{0}u})\in\Sigma^{\mathrm{ext}}_{\leq 2e_{K}-2t}\) if and only if \(u\in S^{q}_{t}\), and the result follows.
For \(0\leq t\leq e_{K}\), define the sets
\[(\mathcal{O}_{K}/\mathfrak{p}^{2t}_{K})^{q}:=\{\overline{\gamma}\in(\mathcal{ O}_{K}/\mathfrak{p}^{2t}_{K})^{\times}:\text{ some lift }\gamma\in\mathcal{O}_{K}\text{ is a sum of two squares in }K\},\]
and
\[Z^{q}_{t}:=\frac{(\mathcal{O}_{K}/\mathfrak{p}^{2t}_{K})^{q}}{(\mathcal{O}_{K }/\mathfrak{p}^{2t}_{K})^{\times 2}}.\]
In the case \(t=0\), we adopt the convention that \((\mathcal{O}_{K}/\mathfrak{p}^{2t}_{K})^{\times 2}=(\mathcal{O}_{K}/\mathfrak{p}^{2t}_{K })^{q}=\{1\}\).
**Lemma 4.10**.: _For \(0\leq t\leq e_{K}\), there is an exact sequence_
\[1\to S^{q}_{t}\to S^{q}_{0}\to Z^{q}_{t}\to 1.\]
Proof.: This is immediate from the definitions.
**Corollary 4.11**.: _For \(0\leq t\leq e_{K}\), we have_
\[\#S^{q}_{t}=\frac{\#S^{q}_{0}\cdot\#(\mathcal{O}_{K}/\mathfrak{p}^{2t}_{K})^ {\times 2}}{\#(\mathcal{O}_{K}/\mathfrak{p}^{2t}_{K})^{q}}.\]
**Lemma 4.12**.: _For \(1\leq t\leq e_{K}\), we have_
\[\#(\mathcal{O}_{K}/\mathfrak{p}^{2t}_{K})^{\times 2}=q^{t-1}(q-1).\]
Proof.: Consider the squaring map
\[\varphi:(\mathcal{O}_{K}/\mathfrak{p}_{K}^{2t})^{\times}\to(\mathcal{O}_{K}/ \mathfrak{p}_{K}^{2t})^{\times 2},\]
which is a surjective group homomorphism. It is easy to see that \(\ker\varphi=(1+\mathfrak{p}_{K}^{t})/(1+\mathfrak{p}_{K}^{2t})\), which has size \(q^{t}\). The result follows since \(\#(\mathcal{O}_{K}/\mathfrak{p}_{K}^{2t})^{\times}=q^{2t-1}(q-1)\).
**Lemma 4.13**.: _We have_
\[\#S_{0}^{q}=\frac{2q^{e_{K}}}{e(K(\sqrt{-1})/K)}.\]
Proof.: By Corollary 4.8, we have
\[S_{0}^{q}=N_{K(\sqrt{-1})/K}\big{(}K(\sqrt{-1})^{\times}\big{)}\cap\big{(} \mathcal{O}_{K}^{\times}/\mathcal{O}_{K}^{\times 2}\big{)}.\]
The result then follows by basic class field theory, along with the well-known fact that \(\#(K^{\times}/K^{\times 2})=4q^{e_{K}}\).
**Corollary 4.14**.: _For \(1\leq t\leq e_{K}\), we have_
\[\#S_{t}^{q}=\frac{2q^{e_{K}+t-1}(q-1)}{e(K(\sqrt{-1})/K)\cdot\#(\mathcal{O}_{ K}/\mathfrak{p}_{K}^{2t})^{q}}.\]
Proof.: This is immediate from Corollary 4.11 and Lemmas 4.12 and 4.13.
**Lemma 4.15**.: _Let \(\alpha\in\mathcal{O}_{K}^{\times}\). Then \(\alpha\) is a sum of two squares in \(K\) if and only if \(N_{K/\mathbb{Q}_{2}}(\alpha)\equiv 1\pmod{4}\)._
Proof.: For \(a,b\in K^{\times}\), write \((a,b)_{K}\) for the quadratic Hilbert symbol defined over \(K\), so that \(u\in K^{\times}\) is a sum of two squares if and only if \((-1,u)_{K}=1\). This in turn is equivalent to
\[(-1,N_{K/\mathbb{Q}_{2}}(u))_{\mathbb{Q}_{2}}=1,\]
which is the case if and only if \(N_{K/\mathbb{Q}_{2}}(u)\equiv 1\pmod{4}\).
**Lemma 4.16**.: _Let \(1\leq t\leq e_{K}\). Then_
\[\#(\mathcal{O}_{K}/\mathfrak{p}_{K}^{2t})^{q}=\frac{1}{e(K(\sqrt{-1})/K)}(1+ \mathbb{1}_{t<t_{0}})q^{2t-1}(q-1).\]
Proof.: Suppose first that \(e(K(\sqrt{-1})/K)=1\). By Corollary 4.8, every element of \(\mathcal{O}_{K}^{\times}\) is a sum of two squares. It follows that
\[\#(\mathcal{O}_{K}/\mathfrak{p}_{K}^{2t})^{q}=q^{2t-1}(q-1),\]
and also that \(t_{0}=0\) by Lemma 4.15, so \(\mathbb{1}_{t<t_{0}}=0\) and we are done.
Now suppose that \(e(K(\sqrt{-1})/K)=2\). Write \(\mathcal{O}_{K}^{q}\) for the set of \(u\in\mathcal{O}_{K}^{\times}\) that can be written as a sum of two squares in \(K\). Suppose that \(t<t_{0}\). By Lemma 4.15, there is some \(u\in U_{K}^{(2t)}\) such that \(u\not\in\mathcal{O}_{K}^{q}\). Let \([x]\in(\mathcal{O}_{K}/\mathfrak{p}_{K}^{2t})^{\times}\), where \(x\in\mathcal{O}_{K}^{\times}\). The elements \(x\) and \(ux\) are both lifts of \([x]\), and by Lemma 4.15 exactly one of them is a sum of two squares in \(K\), so \([x]\in(\mathcal{O}_{K}/\mathfrak{p}_{K}^{2t})^{q}\).
Therefore
\[(\mathcal{O}_{K}/\mathfrak{p}_{K}^{2t})^{q}=(\mathcal{O}_{K}/\mathfrak{p}_{K}^{2t} )^{\times},\]
so
\[\#(\mathcal{O}_{K}/\mathfrak{p}_{K}^{2t})^{q}=q^{2t-1}(q-1).\]
Suppose that \(t\geq t_{0}\) and consider the natural surjection
\[\varphi:\mathcal{O}_{K}^{\times}\to\frac{(\mathcal{O}_{K}/\mathfrak{p}_{K}^{ 2t})^{\times}}{(\mathcal{O}_{K}/\mathfrak{p}_{K}^{2t})^{q}}.\]
Suppose that \(\varphi(x)=1\). By Lemma 4.15 there is some \(y\in\mathcal{O}_{K}^{\times}\) such that \(x\equiv y\pmod{\mathfrak{p}_{K}^{2t}}\) and \(N_{K/\mathbb{Q}_{2}}(y)\equiv 1\pmod{4}\). But then \(x/y\in U_{K}^{(2t)}\), so \(N_{K/\mathbb{Q}_{2}}(x/y)\equiv 1\pmod{4}\), and therefore \(N_{K/\mathbb{Q}_{2}}(x)\equiv 1\pmod{4}\), which means that \(x\in\mathcal{O}_{K}^{q}\). It follows that we have an exact sequence
\[1\to\mathcal{O}_{K}^{q}\to\mathcal{O}_{K}^{\times}\to\frac{(\mathcal{O}_{K}/ \mathfrak{p}_{K}^{2t})^{\times}}{(\mathcal{O}_{K}/\mathfrak{p}_{K}^{2t})^{q}} \to 1,\]
which means that
\[[(\mathcal{O}_{K}/\mathfrak{p}_{K}^{2t})^{\times}:(\mathcal{O}_{K}/\mathfrak{ p}_{K}^{2t})^{q}]=[\mathcal{O}_{K}^{\times}:\mathcal{O}_{K}^{q}]=2,\]
and the result follows.
**Lemma 4.17**.: _For \(0\leq t\leq e_{K}\), we have_
\[\#S_{t}^{q}=(1+\mathbb{1}_{t\geq t_{0}})q^{e_{K}-t}.\]
Proof.: For \(t\geq 1\), this is immediate from Corollary 4.14 and Lemma 4.16. For \(t=0\), the result follows from Lemma 4.13, together with the fact that \(t_{0}=0\) if and only if \(e(K(\sqrt{-1})/K)=1\).
Proof of Lemma 4.1.: The first claim follows from the classification of quadratic extensions in [11, Lemma 4.3]. Lemmas 4.9 and 4.17 tell us that
\[\#\Sigma_{\leq m_{1}}^{\mathrm{ext}}=(1+\mathbb{1}_{m_{1}\leq 2e_{K}-2t_{0}})q ^{\frac{m_{1}}{2}}-1,\]
for even integers \(m_{1}\) with \(0\leq m_{1}\leq 2e_{K}\). The result for \(2\leq m_{1}\leq 2e_{K}\) follows. By Lemma 4.5 and Corollary 4.6, for any quadratic extension \(E/K\), we have \(v_{K}(d_{E/K})=2e_{K}+1\) if and only if \(E=K(\sqrt{\alpha})\) for some \(\alpha\in K^{\times}\) with \(v_{K}(\alpha)=1\). Assume that this is the case. Then Corollary 4.8 tells us that \(E/K\) is extendable if and only if \(\alpha\) is in the norm group of \(K(\sqrt{-1})/K\), and the result follows by basic class field theory.
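The passage from \(\#\Sigma^{\mathrm{ext}}_{\leq m_{1}}\) to \(N_{\mathrm{ext}}(m_{1})\) is a telescoping argument, which can be checked mechanically. The Python snippet below (an ad hoc sanity check, assuming the displayed formula for \(\#\Sigma^{\mathrm{ext}}_{\leq m_{1}}\)) verifies the case distinction of Lemma 4.1 for small parameters, with \(t_{0}\) running over the range allowed by Lemma 4.18.

```python
def sigma_ext_le(q, eK, t0, m1):
    # #Sigma^ext_{<= m1} = (1 + 1_{m1 <= 2eK - 2t0}) * q^(m1/2) - 1, for even 0 <= m1 <= 2eK.
    return (1 + (m1 <= 2 * eK - 2 * t0)) * q ** (m1 // 2) - 1

def n_ext(q, eK, t0, m1):
    # Lemma 4.1, for even m1 with 2 <= m1 <= 2eK.
    return ((1 + (m1 <= 2 * eK - 2 * t0)) * q ** (m1 // 2 - 1)
            * (q - 1 - (m1 == 2 * eK - 2 * t0 + 2)))

for q in (2, 4, 8):
    for eK in (1, 2, 3, 5):
        for t0 in range(0, (eK + 1) // 2 + 1):      # Lemma 4.18: t0 <= ceil(eK/2)
            for m1 in range(2, 2 * eK + 1, 2):
                assert n_ext(q, eK, t0, m1) == (sigma_ext_le(q, eK, t0, m1)
                                                - sigma_ext_le(q, eK, t0, m1 - 2))
```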
**Lemma 4.18**.: _We have \(t_{0}\leq\lceil\frac{e_{K}}{2}\rceil\)._
Proof.: Suppose first that \(e_{K}\) is even. Then we need to show that
\[N_{K/\mathbb{Q}_{2}}(U_{K}^{(e_{K})})\subseteq 1+4\mathbb{Z}_{2}.\]
Let \(x\in U_{K}^{(e_{K})}\), so that \(x=1+2y\) for some \(y\in\mathcal{O}_{K}\). Then
\[N_{K/\mathbb{Q}_{2}}(x)\equiv 1+2\operatorname{Tr}_{K/\mathbb{Q}_{2}}(y) \pmod{4}.\]
Writing \(\bar{y}\) for the image of \(y\) in \(\mathbb{F}_{K}\), we have
\[\operatorname{Tr}_{K/\mathbb{Q}_{2}}(y)\equiv e_{K}\cdot\operatorname{Tr}_{ \mathbb{F}_{K}/\mathbb{F}_{2}}(\bar{y})\pmod{\mathfrak{p}_{K}},\]
and this is zero since \(e_{K}\) is even. Therefore \(2\mid\operatorname{Tr}_{K/\mathbb{Q}_{2}}(y)\), so \(N_{K/\mathbb{Q}_{2}}(x)\equiv 1\pmod{4}\).
Now suppose that \(e_{K}\) is odd. Then we need to show that
\[N_{K/\mathbb{Q}_{2}}(U_{K}^{(e_{K}+1)})\subseteq 1+4\mathbb{Z}_{2}.\]
For any \(x\in U_{K}^{(e_{K}+1)}\), we have \(x=1+2y\) for some \(y\in\mathfrak{p}_{K}\), so
\[N_{K/\mathbb{Q}_{2}}(x)\equiv 1+2\operatorname{Tr}_{K/\mathbb{Q}_{2}}(y)\pmod{4}\]
and the result follows since \(v_{\mathbb{Q}_{2}}(\operatorname{Tr}_{K/\mathbb{Q}_{2}}(y))>0\).
### Counting \(C_{4}\)-extensions with a given intermediate field
Given a totally ramified quadratic extension \(E/K\), write \(\Sigma_{E}\) for the set of quadratic extensions \(L/E\) such that \(L/K\) is a \(C_{4}\)-extension. Write \(\Sigma_{E,m_{2}}\) (respectively \(\Sigma_{E,\leq m_{2}}\)) for the set of \(L\in\Sigma_{E}\) such that \(v_{E}(d_{L/E})=m_{2}\) (respectively \(v_{E}(d_{L/E})\leq m_{2}\)).
**Lemma 4.19**.: _Let \(E=K(\sqrt{d})/K\) be a totally ramified extendable extension with \(m_{1}=v_{K}(d_{E/K})\), and let \(m_{2}\leq 4e_{K}\) be an even integer. The following are equivalent:_
1. _The set_ \(\Sigma_{E,\leq m_{2}}\) _is nonempty._
2. _There is some_ \(\beta\in\mathcal{O}_{E}\) _such that_ \(\beta\equiv 1\pmod{\mathfrak{p}_{E}^{4e_{K}-m_{2}}}\) _and_ \(N_{E/K}(\beta)\in dK^{\times 2}\)_._
3. _We have_ \(m_{2}\geq\min\{m_{1}+2e_{K},3m_{1}-2\}\)_._
Proof.: The first two points are equivalent by Corollary 4.6 and Lemma 4.7. The equivalence of (2) and (3) is essentially [1, Proposition 3.15]. At the start of the proof, the authors state that their "condition \((*)\)" is equivalent to (2), and the statement of their proposition is equivalent to (3), where \(t=2e_{K}-\frac{m_{2}}{2}\). Their result is stated for prime ideals of number fields lying over \(2\), but it is trivial to check that the proof works for \(2\)-adic fields.
Let \(E/K\) be a totally ramified quadratic extension. For an integer \(0\leq t\leq 2e_{K}\), define
\[S_{E,t}:=\{u\in\mathcal{O}_{K}^{\times}/\mathcal{O}_{K}^{\times 2}:u\equiv x^{2} \pmod{\mathfrak{p}_{E}^{2t}}\text{ for some }x\in E\}.\]
**Lemma 4.20**.: _Let \(0\leq m_{2}\leq 4e_{K}\) be an even integer such that \(\Sigma_{E,\leq m_{2}}\) is nonempty. Let \(\omega\in E\) such that \(E(\sqrt{\omega})\in\Sigma_{E,\leq m_{2}}\). Then the map_
\[K^{\times}/K^{\times 2}\to\Sigma_{E},\quad u\mapsto E(\sqrt{u\omega})\]
_is surjective and \(2\)-to-\(1\). Moreover, this map restricts to a surjective \(2\)-to-\(1\) map_
\[S_{E,2e_{K}-\frac{m_{2}}{2}}\to\Sigma_{E,\leq m_{2}}.\]
Proof.: The first claim is [1, Proposition 1.2], and the second claim follows from Corollary 4.6.
Fix a totally ramified quadratic extension \(E/K\) with \(m_{1}=v_{K}(d_{E/K})\), and assume that \(m_{1}\) is even. For \(0\leq t\leq 2e_{K}-\frac{m_{1}}{2}\), define \(\mathcal{Z}_{E,t}\) by the short exact sequence
\[1\to S_{E,t}\to K^{\times}/K^{\times 2}\to\mathcal{Z}_{E,t}\to 1.\]
**Lemma 4.21**.: _Let \(E\) be a totally ramified quadratic extension of \(K\) with even discriminant exponent \(m_{1}=v_{K}(d_{E/K})\). Let \(m_{2}\) be an even integer with \(m_{2}\geq m_{1}\). Then we have_
\[\#\mathcal{Z}_{E,2e_{K}-\frac{m_{2}}{2}}=\begin{cases}2q^{\lceil e_{K}-\frac{m _{1}+m_{2}}{4}\rceil}&\text{if $m_{2}\leq 4e_{K}-m_{1}$},\\ 1&\text{otherwise.}\end{cases}\]
Proof.: This is essentially [1, Corollary 3.13]. Under our notation, \(\mathcal{Z}_{E,t}\) corresponds to Cohen, Diaz y Diaz, and Olivier's \(\mathcal{Z}_{\mathfrak{Y}^{2t}}\), defined in [1, Page 486]. As with Lemma 4.19, the statement in [1] is for prime ideals of number fields, but the modifications to the proof are trivial.
**Corollary 4.22**.: _Let \(E/K\) be a totally ramified extendable extension with \(m_{1}=v_{K}(d_{E/K})\) even. Let \(m_{2}\) be an even integer and write \(n_{0}:=\min\{m_{1}+2e_{K},3m_{1}-2\}\). Then we have_
\[\#\Sigma_{E,\leq m_{2}}=\begin{cases}0&\text{if $m_{2}<n_{0}$},\\ q^{\lfloor\frac{m_{1}+m_{2}}{4}\rfloor}&\text{if $n_{0}\leq m_{2}\leq 4e_{K}-m_{1}$},\\ 2q^{e_{K}}&\text{if $m_{2}\geq\max\{4e_{K}-m_{1}+2,n_{0}\}$}.\end{cases}\]
Proof.: Lemma 4.19 deals with the case \(m_{2}<n_{0}\). Let \(n_{0}\leq m_{2}\leq 4e_{K}\). By Lemma 4.19, the set \(\Sigma_{E,\leq m_{2}}\) is nonempty, so Lemma 4.20 tells us that
\[\#\Sigma_{E,\leq m_{2}}=\frac{1}{2}\#S_{E,2e_{K}-\frac{m_{2}}{2}},\]
and the result follows from Lemma 4.21, along with the fact that \(\#(K^{\times}/K^{\times 2})=4q^{e_{K}}\). The result for \(m_{2}>4e_{K}\) follows since \(\#\Sigma_{E}=2q^{e_{K}}\) by Lemma 4.20.
Proof of Lemma 4.2.: By [16, Lemma 4.3], either \(m_{1}=2e_{K}+1\) or \(m_{1}\) is even with \(2\leq m_{1}\leq 2e_{K}\). The case where \(m_{1}\) is even follows easily from Corollary 4.22. For the case with \(m_{1}\) odd, suppose that \(m_{1}=2e_{K}+1\). Then by Lemma 4.5 we have \(E=K(\sqrt{d})\) for \(d\in K^{\times}\) with \(v_{K}(d)=1\). By Lemma 4.7, each \(C_{4}\)-extension \(L/K\) extending \(E\) has \(L=E(\sqrt{\alpha})\) for some \(\alpha\in E^{\times}\) with \(v_{K}(N_{E/K}(\alpha))\) odd. It follows that \(v_{E}(\alpha)\) is odd, so \(v_{E}(d_{L/E})=4e_{K}+1\) by Lemma 4.5.
Proof of Corollary 4.3.: Suppose that \(L/K\) is a \(C_{4}\)-extension with intermediate quadratic field \(E\). Then
\[v_{K}(d_{L/K})=2v_{K}(d_{E/K})+v_{E}(d_{L/E}).\]
So if \(L\in\Sigma_{m}^{C_{4}}\) with \(m_{1}=v_{K}(d_{E/K})\) and \(m_{2}=v_{E}(d_{L/E})\), then \(m=2m_{1}+m_{2}\), and Lemmas 4.1 and 4.2 tell us that either \((m_{1},m_{2})=(2e_{K}+1,4e_{K}+1)\) or \(m_{1}\) and \(m_{2}\) are both even with \(2\leq m_{1}\leq 2e_{K}\) and \(4\leq m_{2}\leq 4e_{K}\). It follows that either \(m\) is even with \(8\leq m\leq 8e_{K}\) or \(m=8e_{K}+3\). If \(m=8e_{K}+3\), then the result follows from Lemmas 4.1 and 4.2.
Now consider the case where \(8\leq m\leq 8e_{K}\) and \(m\) is even. By the discussion above, we have
\[\#\Sigma_{m}^{C_{4}}=\sum_{\begin{subarray}{c}2\leq m_{1}\leq 2e_{K}\\ m_{1}\text{ even}\end{subarray}}N(m_{1},m-2m_{1}).\]
Let \(2\leq m_{1}\leq 2e_{K}\) be even. By Lemma 4.2, whenever \(N_{\text{ext}}(m_{1})\neq 0\) we have
\[\frac{N(m_{1},m-2m_{1})}{N_{\text{ext}}(m_{1})} =\begin{cases}q^{m_{1}-1}\quad\text{if $m_{1}=\frac{m+2}{5}$ and $m_{1}\leq e_{K}$},\\ q^{\lfloor\frac{m-m_{1}}{4}\rfloor}-q^{\lfloor\frac{m-m_{1}-2}{4}\rfloor}\quad \text{if $5m_{1}\leq m\leq 4e_{K}+m_{1}$ and $m_{1}\leq e_{K}$},\\ q^{e_{K}}\quad\text{if $m_{1}=m-4e_{K}-2$ and $m_{1}\leq e_{K}$},\\ 2q^{e_{K}}\quad\text{if $e_{K}<m_{1}\leq 2e_{K}$ and $m_{1}=\frac{m-2e_{K}}{3}$},\\ 0\quad\text{otherwise}.\end{cases}\] \[=\begin{cases}q^{\frac{m-3}{5}}\quad\text{if $m_{1}=\frac{m+2}{5}$ and $8\leq m \leq 5e_{K}-2$},\\ q^{\lfloor\frac{m-m_{1}}{4}\rfloor}-q^{\lfloor\frac{m-m_{1}-2}{4}\rfloor}\quad \text{if $m-4e_{K}\leq m_{1}\leq\min\{\frac{m}{5},e_{K}\}$},\\ q^{e_{K}}\quad\text{if $m_{1}=m-4e_{K}-2$ and $4e_{K}+4\leq m\leq 5e_{K}+2$},\\ 2q^{e_{K}}\quad\text{if $m_{1}=\frac{m-2e_{K}}{3}$ and $5e_{K}<m\leq 8e_{K}$},\\ 0\quad\text{otherwise}.\end{cases}\]
To finish the proof, we just need to observe that
\[q^{\lfloor\frac{m-m_{1}}{4}\rfloor}-q^{\lfloor\frac{m-m_{1}-2}{4}\rfloor}=\begin{cases}q^{\frac{m-m_{1}}{4}-1}(q-1)\quad\text{if $m_{1}\equiv m$}\pmod{4},\\ 0\quad\text{if $m_{1}\not\equiv m$}\pmod{4}.\end{cases}\]
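The displayed identity is elementary; for the record, here is a brute-force Python check over a range of parameters (purely illustrative, with both \(m\) and \(m_{1}\) even).

```python
for q in (2, 3, 4, 5):
    for diff in range(2, 60, 2):                    # diff = m - m1
        lhs = q ** (diff // 4) - q ** ((diff - 2) // 4)
        rhs = q ** (diff // 4 - 1) * (q - 1) if diff % 4 == 0 else 0
        assert lhs == rhs
```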
Proof of Theorem 1.3.: The possible values of \(m\) come from Lemmas 4.1 and 4.2. The same two lemmas give us the result for \(m=8e_{K}+3\). Now consider the case where \(m\) is even and \(8\leq m\leq 8e_{K}\). Lemma 4.1 tells us that the first, third, and fourth items of Corollary 4.3 become respectively
1. \(\mathbbm{1}_{8\leq m\leq 5e_{K}-2}\cdot\mathbbm{1}_{m\equiv 3\pmod{5}} \cdot q^{\frac{3m-14}{10}}(1+\mathbbm{1}_{m\leq 10(e_{K}-t_{0})-2})(q-1- \mathbbm{1}_{m=10(e_{K}-t_{0})+8})\).
2. \(\mathbbm{1}_{4e_{K}+4\leq m\leq 5e_{K}+2}\cdot q^{\frac{m}{2}-e_{K}-2}(1+ \mathbbm{1}_{m\leq 6e_{K}-2t_{0}+2})(q-1-\mathbbm{1}_{m=6e_{K}-2t_{0}+4})\).
3. \(\mathbbm{1}_{5e_{K}<m\leq 8e_{K}}\cdot\mathbbm{1}_{m\equiv 2e_{K}\pmod{3}} \cdot 2q^{\frac{4e_{K}+m}{6}-1}(1+\mathbbm{1}_{m\leq 8e_{K}-6t_{0}})(q-1- \mathbbm{1}_{m=8e_{K}-6t_{0}+6})\).
Lemma 4.18 turns these into the first three points of Theorem 1.3. It remains to compute the value of
\[\sum_{\begin{subarray}{c}\max\{2,m-4e_{K}\}\leq m_{1}\leq\min\{\frac{m}{5},e_{K}\}\\ m_{1}\equiv m\pmod{4}\end{subarray}}q^{\frac{m-m_{1}}{4}-1}(q-1)N_{\text{ext}}(m_{1}).\]
Lemma 4.1 tells us that
\[N_{\text{ext}}(m_{1})=\begin{cases}2q^{\frac{m_{1}}{2}-1}(q-1)&\text{if $m_{1} \leq 2e_{K}-2t_{0}$},\\ q^{\frac{m_{1}}{2}-1}(q-2)&\text{if $m_{1}=2e_{K}-2t_{0}+2$},\\ q^{\frac{m_{1}}{2}-1}(q-1)&\text{if $m_{1}\geq 2e_{K}-2t_{0}+4$}.\end{cases}\]
Lemma 4.18 tells us that \(2e_{K}-2t_{0}+2>e_{K}\), so the sum is actually
\[\sum_{\begin{subarray}{c}\max\{2,m-4e_{K}\}\leq m_{1}\leq\min\{\frac{m}{5},e_{K} \}\\ m_{1}\equiv m\pmod{4}\end{subarray}}2q^{\frac{m+m_{1}}{4}-2}(q-1)^{2}.\]
For integers \(l\) and \(u\), the substitution \(m_{1}=-m+4k\) makes it easy to see that
\[\sum_{\begin{subarray}{c}l\leq m_{1}\leq u\\ m_{1}\equiv m\pmod{4}\end{subarray}}q^{\frac{m+m_{1}}{4}}=\mathbb{1}_{l\leq u }\frac{q^{b+1}-q^{a}}{q-1},\]
where \(a=\lceil\frac{m+l}{4}\rceil\) and \(b=\lfloor\frac{m+u}{4}\rfloor\). In this case, we have \(l=\max\{2,m-4e_{K}\}\) and \(u=\min\{e_{K},\frac{m}{5}\}\), which gives
\[a=\lceil\max\{\frac{m+2}{4},\frac{m-2e_{K}}{2}\}\rceil,\quad b=\lfloor\min\{ \frac{m+e_{K}}{4},\frac{3m}{10}\}\rfloor.\]
Finally, it is easy to see that \(l\leq u\) if and only if \(e_{K}\geq 2\) and \(10\leq m\leq 5e_{K}\), so we obtain
\[\sum_{\begin{subarray}{c}\max\{2,m-4e_{K}\}\leq m_{1}\leq\min\{e_{K},\frac{m} {5}\}\\ m_{1}\equiv m\pmod{4}\end{subarray}}q^{\frac{m+m_{1}}{4}}=\mathbb{1}_{10\leq m \leq 5e_{K}}\cdot\frac{q^{\lfloor\min\{\frac{m+e_{K}}{4},\frac{3m}{10}\}\rfloor+1 }-q^{\lceil\max\{\frac{m+2}{4},\frac{m-2e_{K}}{2}\}\rceil}}{q-1},\]
and the result follows.
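The summation identity used in the last step can likewise be confirmed by brute force. The Python snippet below (a sanity check, not part of the proof) compares the direct sum over \(m_{1}\equiv m\pmod{4}\) with the closed form \(\mathbb{1}_{l\leq u}\,(q^{b+1}-q^{a})/(q-1)\).

```python
def closed(q, m, l, u):
    if l > u:
        return 0
    a = -((-(m + l)) // 4)          # ceil((m + l)/4)
    b = (m + u) // 4                # floor((m + u)/4)
    return (q ** (b + 1) - q ** a) // (q - 1)

for q in (2, 3, 5):
    for m in range(8, 41, 2):
        for l in range(2, 13, 2):
            for u in range(0, 15, 2):
                direct = sum(q ** ((m + m1) // 4)
                             for m1 in range(l, u + 1, 2) if (m - m1) % 4 == 0)
                assert direct == closed(q, m, l, u)
```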
Proof of Corollary 4.4.: Theorem 1.3 tells us that the pre-mass is the sum of the following quantities:
1. \[\frac{1}{2}\cdot\sum_{\begin{subarray}{c}8\leq m\leq 5e_{K}-2\\ m\equiv 8\pmod{10}\end{subarray}}q^{-\frac{7m+14}{10}}(q-1).\]
2. \[\frac{1}{2}\cdot\sum_{\begin{subarray}{c}4e_{K}+4\leq m\leq 5e_{K}+2\\ m\text{ even}\end{subarray}}q^{-\frac{m}{2}-e_{K}-2}(q-1).\]
3. 1. \[\sum_{\begin{subarray}{c}5e_{K}<m\leq 8e_{K}-6t_{0}\\ m\equiv 2e_{K}\pmod{6}\end{subarray}}q^{\frac{4e_{K}-5m}{6}-1}(q-1).\]
4. 2. \[\mathbb{1}_{t_{0}\geq 1}\cdot\frac{1}{2}\cdot q^{-6e_{K}+5t_{0}-6}(q-2).\]
5. \[\frac{1}{2}\cdot\sum_{\begin{subarray}{c}8e_{K}-6t_{0}+12\leq m\leq 8e_{K}\\ m\equiv 2e_{K}\pmod{6}\end{subarray}}q^{\frac{4e_{K}-5m}{6}-1}(q-1).\]
4. 1. 1. \[\frac{1}{2}(q-1)q^{-1}\sum_{\begin{subarray}{c}10\leq m\leq 5e_{K}\\ m\text{ even}\end{subarray}}q^{\lfloor-\frac{7m}{10}\rfloor}.\] 2. \[-\frac{1}{2}(q-1)q^{-2}\sum_{\begin{subarray}{c}10\leq m\leq 5e_{K}\\ m\text{ even}\end{subarray}}q^{\max\{\lceil\frac{-3m+2}{4}\rceil,-\frac{m}{2}- e_{K}\}}.\] 3. \[\begin{cases}q^{-6e_{K}-3}&\text{if $-1\in K^{\times 2}$,}\\ \frac{1}{2}q^{-6e_{K}-3}&\text{if $K(\sqrt{-1})/K$ is quadratic and totally ramified,}\\ 0&\text{otherwise.}\end{cases}\]
We address these one by one.
1. Making the substitution \(m=10k+8\), we have \[\sum_{\begin{subarray}{c}8\leq m\leq 5e_{K}-2\\ m\equiv 8\pmod{10}\end{subarray}}q^{-\frac{7m+14}{10}}=\sum_{k=0}^{\lfloor\frac{e_{K}}{2}\rfloor-1}q^{-7k-7}\] \[=\mathbb{1}_{e_{K}\geq 2}\cdot\frac{1-q^{-7\lfloor\frac{e_{K}}{2}\rfloor}}{q^{7}-1},\] so the contribution to the pre-mass is \[\frac{1}{2}\cdot\mathbb{1}_{e_{K}\geq 2}\cdot\frac{(q-1)(1-q^{-7\lfloor\frac{e_{K}}{2}\rfloor})}{q^{7}-1},\] and we can omit the indicator function since \(e_{K}=1\) gives \(1-q^{-7\lfloor\frac{e_{K}}{2}\rfloor}=0\).
2. Making the substitution \(m=2k\), it is easy to see that \[\sum_{\begin{subarray}{c}4e_{K}+4\leq m\leq 5e_{K}+2\\ m\text{ even}\end{subarray}}q^{-\frac{m}{2}}=\mathbb{1}_{e_{K}\geq 2} \cdot\frac{q^{-2e_{K}-1}-q^{-\lfloor\frac{5e_{K}+2}{2}\rfloor}}{q-1},\] so the contribution is \[\frac{1}{2}\cdot(q^{-3e_{K}-3}-q^{-\lfloor\frac{7e_{K}+6}{2}\rfloor})=\frac{ 1}{2}\cdot q^{-3e_{K}-3}(1-q^{-\lfloor\frac{e_{K}}{2}\rfloor}),\] where we omit the indicator function since \(e_{K}=1\) gives \(q^{-2e_{K}-1}-q^{-\lfloor\frac{5e_{K}+2}{2}\rfloor}=0\).
3. 1. The substitution \(m=2e_{K}+6k\) gives \[\sum_{\begin{subarray}{c}5e_{K}<m\leq 8e_{K}-6t_{0}\\ m\equiv 2e_{K}\pmod{6}\end{subarray}}q^{\frac{4e_{K}-5m}{6}} =\sum_{k=\lfloor\frac{e_{K}}{2}\rfloor+1}^{e_{K}-t_{0}}q^{-e_{K} -5k}\] \[=\mathbb{1}_{2t_{0}<e_{K}}\frac{q^{-5\lfloor\frac{e_{K}}{2} \rfloor-e_{K}}-q^{5t_{0}-6e_{K}}}{q^{5}-1},\]
so the contribution is \[\mathbbm{1}_{2t_{0}<e_{K}}\cdot\frac{(q-1)(q^{-5\lfloor\frac{e_{K}}{2}\rfloor-e_{K }-1}-q^{5t_{0}-6e_{K}-1})}{q^{5}-1}.\]
2. This is already in closed form.
3. The substitution \(m=2e_{K}+6k\) gives \[\sum_{\begin{subarray}{c}8e_{K}-6t_{0}+12\leq m\leq 8e_{K}\\ m\equiv 2e_{K}\pmod{6}\end{subarray}}q^{\frac{4e_{K}-5m}{6}} =\sum_{k=e_{K}-t_{0}+2}^{e_{K}}q^{-e_{K}-5k}\] \[=\mathbbm{1}_{t_{0}\geq 2}\cdot\frac{q^{5t_{0}-6e_{K}-5}-q^{-6e_{K }}}{q^{5}-1}.\] Therefore, the contribution is \[\frac{1}{2}\cdot\mathbbm{1}_{t_{0}\geq 2}\cdot\frac{(q-1)(q^{5t_{0}-6e_{K}-6}- q^{-6e_{K}-1})}{q^{5}-1}.\]
4. 1. We need to compute \[\sum_{\begin{subarray}{c}10\leq m\leq 5e_{K}\\ m\text{ even}\end{subarray}}q^{\lfloor\frac{-7m}{10}\rfloor}=\sum_{k=5}^{ \lfloor\frac{5e_{K}}{2}\rfloor}q^{-\lceil\frac{7k}{5}\rceil}.\] Let \(b\geq 1\) be an integer, and consider the sum \[\sum_{k=5}^{5b}q^{-\lceil\frac{7k}{5}\rceil} =\sum_{a=1}^{b-1}\Big{(}\sum_{k=5a}^{5a+4}q^{-\lceil\frac{7k}{5} \rceil}\Big{)}+q^{-7b}\] \[=\sum_{a=1}^{b-1}q^{-7a}\Big{(}\sum_{l=0}^{4}q^{-\lceil\frac{7l}{ 5}\rceil}\Big{)}+q^{-7b}\] \[=q^{-6}(q^{6}+q^{4}+q^{3}+q+1)\cdot\frac{(1-q^{7-7b})}{q^{7}-1}+q ^{-7b}\] \[=\frac{(q^{-6}-q^{1-7b})(q^{6}+q^{4}+q^{3}+q+1)}{q^{7}-1}+q^{-7b}.\] Suppose that \(e_{K}\) is even. Then we have \[\sum_{k=5}^{\lfloor\frac{5e_{K}}{2}\rfloor}q^{-\lceil\frac{7k}{5} \rceil} =\sum_{k=5}^{\frac{5\cdot\frac{e_{K}}{2}}{2}}q^{-\lceil\frac{7k}{5}\rceil}\] \[=\mathbbm{1}_{e_{K}\geq 2}\cdot\Big{(}\frac{(q^{-6}-q^{1-\frac{7e _{K}}{2}})(q^{6}+q^{4}+q^{3}+q+1)}{q^{7}-1}+q^{-\frac{7e_{K}}{2}}\Big{)}.\]
Suppose instead that \(e_{K}\) is odd. Then we have
\[\sum_{k=5}^{\lfloor\frac{5e_{K}}{2}\rfloor}q^{-\lceil\frac{7k}{5}\rceil} =\sum_{k=5}^{\frac{5e_{K}-1}{2}}q^{-\lceil\frac{7k}{5}\rceil}\] \[=\sum_{k=5}^{5\cdot\frac{e_{K}-1}{2}}q^{-\lceil\frac{7k}{5}\rceil}+q^{-\lceil\frac{7}{5}\cdot\frac{5e_{K}-3}{2}\rceil}+q^{-\lceil\frac{7}{5}\cdot\frac{5e_{K}-1}{2}\rceil}\] \[=\mathbbm{1}_{e_{K}\geq 2}\cdot\Big{(}\frac{(q^{-6}-q^{1-7\cdot\frac{e_{K}-1}{2}})(q^{6}+q^{4}+q^{3}+q+1)}{q^{7}-1}+q^{-7\cdot\frac{e_{K}-1}{2}}\] \[\qquad\qquad+q^{-7\cdot\frac{e_{K}-1}{2}-2}+q^{-7\cdot\frac{e_{K}-1}{2}-3}\Big{)}.\] In other words, the sum \(\sum_{k=5}^{\lfloor\frac{5e_{K}}{2}\rfloor}q^{-\lceil\frac{7k}{5}\rceil}\) is equal to \[\mathbbm{1}_{e_{K}\geq 2}\cdot\Big{(}\frac{(q^{-6}-q^{1-7\lfloor\frac{e_{K}}{2}\rfloor})(q^{6}+q^{4}+q^{3}+q+1)}{q^{7}-1}+q^{-7\lfloor\frac{e_{K}}{2}\rfloor}(1+\mathbbm{1}_{2\nmid e_{K}}(q^{-2}+q^{-3}))\Big{)}.\] Therefore we have a contribution of \[\mathbbm{1}_{e_{K}\geq 2}\cdot\frac{1}{2}(q-1)\Big{(}\frac{(q^{-7}-q^{-7\lfloor\frac{e_{K}}{2}\rfloor})(q^{6}+q^{4}+q^{3}+q+1)}{q^{7}-1}+q^{-7\lfloor\frac{e_{K}}{2}\rfloor-1}(1+\mathbbm{1}_{2\nmid e_{K}}(q^{-2}+q^{-3}))\Big{)}.\]
2. We need to evaluate \[\sum_{k=5}^{\lfloor\frac{5e_{K}}{2}\rfloor}q^{\max\{\lceil\frac{-3k+1}{2} \rceil,-k-e_{K}\}}=\sum_{k=5}^{2e_{K}}q^{\lceil\frac{-3k+1}{2}\rceil}+\sum_{k =2e_{K}+1}^{\lfloor\frac{5e_{K}}{2}\rfloor}q^{-k-e_{K}}.\] We have \[\sum_{k=5}^{2e_{K}}q^{\lceil\frac{-3k+1}{2}\rceil} =\sum_{a=3}^{e_{K}}\sum_{k=2a-1}^{2a}q^{\lceil\frac{-3k+1}{2}\rceil}\] \[=\sum_{a=3}^{e_{K}}(q^{-3a+2}+q^{-3a+1})\] \[=(q^{2}+q)\sum_{a=3}^{e_{K}}q^{-3a}\] \[=\mathbbm{1}_{e_{K}\geq 3}\cdot\frac{(q^{2}+q)(q^{-6}-q^{-3e_{K}} )}{q^{3}-1}.\] So the first half of the sum gives a contribution of \[-\mathbbm{1}_{e_{K}\geq 2}\cdot\frac{1}{2}\cdot\frac{(q-1)(q+1)(q^{-7}-q^{-3e_{K} -1})}{q^{3}-1}.\] We also have \[\sum_{k=2e_{K}+1}^{\lfloor\frac{5e_{K}}{2}\rfloor}q^{-k-e_{K}}=\mathbbm{1}_{e_ {K}\geq 2}\cdot\frac{q^{-3e_{K}}-q^{-\lfloor\frac{5e_{K}}{2}\rfloor-e_{K}}}{q -1},\]
so we also get a contribution of
\[-\frac{1}{2}(q^{-3e_{K}-2}-q^{-\lfloor\frac{7e_{K}}{2}\rfloor-2}).\]
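As a final sanity check (added for the reader, not part of the proof, and with ad hoc helper names), the closed form of Corollary 4.4 can be compared with a direct evaluation of \(\sum_{m}\#\Sigma_{m}^{C_{4}}/(4q^{m})\) over even \(m\), using Corollary 4.3 together with Lemma 4.1; the term \(m=8e_{K}+3\) corresponds to quantity (9) on both sides and is omitted. The check runs over small \(q\), \(e_{K}\) and all \(t_{0}\leq\lceil e_{K}/2\rceil\).

```python
from fractions import Fraction

def n_ext(q, eK, t0, m1):
    # Lemma 4.1, for even m1 with 2 <= m1 <= 2eK.
    return ((1 + (m1 <= 2 * eK - 2 * t0)) * q ** (m1 // 2 - 1)
            * (q - 1 - (m1 == 2 * eK - 2 * t0 + 2)))

def count_C4(q, eK, t0, m):
    # Corollary 4.3 for even m with 8 <= m <= 8*eK.
    total = 0
    if 8 <= m <= 5 * eK - 2 and (m + 2) % 5 == 0 and ((m + 2) // 5) % 2 == 0:
        total += q ** ((m - 3) // 5) * n_ext(q, eK, t0, (m + 2) // 5)
    for m1 in range(max(2, m - 4 * eK), min(m // 5, eK) + 1):
        if m1 % 2 == 0 and (m - m1) % 4 == 0:
            total += q ** ((m - m1) // 4 - 1) * (q - 1) * n_ext(q, eK, t0, m1)
    if 4 * eK + 4 <= m <= 5 * eK + 2:
        total += q ** eK * n_ext(q, eK, t0, m - 4 * eK - 2)
    if 5 * eK + 3 <= m <= 8 * eK and (m - 2 * eK) % 3 == 0 and ((m - 2 * eK) // 3) % 2 == 0:
        total += 2 * q ** eK * n_ext(q, eK, t0, (m - 2 * eK) // 3)
    return total

def premass_even_direct(q, eK, t0):
    return sum(Fraction(count_C4(q, eK, t0, m), 4 * q ** m)
               for m in range(8, 8 * eK + 1, 2))

def premass_even_closed(q, eK, t0):
    # Quantities (1)-(8) of Corollary 4.4.
    F, fl = Fraction, eK // 2
    s = F(q - 1, 2) * F(q ** (7 * fl) - 1, (q ** 7 - 1) * q ** (7 * fl))
    s += F(1, 2 * q ** (3 * eK + 3)) * (1 - F(1, q ** fl))
    if 2 * t0 < eK:
        s += (q - 1) * (F(1, q ** (5 * fl + eK + 1)) - F(1, q ** (6 * eK + 1 - 5 * t0))) / (q ** 5 - 1)
    if t0 >= 1:
        s += F(q - 2, 2 * q ** (6 * eK + 6 - 5 * t0))
    if t0 >= 2:
        s += (q - 1) * (F(1, q ** (6 * eK + 6 - 5 * t0)) - F(1, q ** (6 * eK + 1))) / (2 * (q ** 5 - 1))
    if eK >= 2:
        inner = F(q * (q ** (7 * fl - 7) - 1) * (q ** 6 + q ** 4 + q ** 3 + q + 1), q ** 7 - 1) + 1
        if eK % 2 == 1:
            inner += F(1, q ** 2) + F(1, q ** 3)
        s += F(q - 1, 2 * q ** (7 * fl + 1)) * inner
        s -= F((q - 1) * (q + 1), 2 * (q ** 3 - 1)) * (F(1, q ** 7) - F(1, q ** (3 * eK + 1)))
    s -= F(1, 2 * q ** (3 * eK + 2)) * (1 - F(1, q ** fl))
    return s

for q in (2, 4):
    for eK in (1, 2, 3, 4):
        for t0 in range(0, (eK + 1) // 2 + 1):
            assert premass_even_direct(q, eK, t0) == premass_even_closed(q, eK, t0)
```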
## 5. The case \(G=C_{2}\)
Proof of Theorem 1.4.: The conditions on possible \(m\) come from [11, Equation 18]. Moreover, [11, Equation 19] tells us that
\[4\#\Sigma_{m}^{\{1\}}+\#\Sigma_{m}^{C_{2}\times C_{2}}+\#\Sigma_{m}^{C_{4}}+2\#\Sigma_{m}^{C_{2}}=4(1+\mathbb{1}_{m=8e_{K}+3}(q-2))q^{\lceil\frac{m}{2}\rceil-2-\mathbb{1}_{m>4e_{K}}(\lfloor\frac{m-1}{4}\rfloor-e_{K})},\]
and the result follows.
**Corollary 5.1**.: _We have_
\[\widetilde{m}(\Sigma^{C_{2}})=q^{-3}-\widetilde{m}(\Sigma^{\{1\}})-\widetilde{ m}(\Sigma^{C_{4}})-\widetilde{m}(\Sigma^{C_{2}\times C_{2}}).\]
Proof.: We have
\[\widetilde{m}(\Sigma^{C_{2}})=\widetilde{m}(\Sigma)-\widetilde{m}(\Sigma^{\{ 1\}})-\widetilde{m}(\Sigma^{C_{4}})-\widetilde{m}(\Sigma^{C_{2}\times C_{2}}),\]
and
\[\widetilde{m}(\Sigma)=q^{-3}\]
by [11, Theorem 2].
|
2310.09280 | A specific model of Hilbert geometry on the unit disc | A new metric on the open 2-dimensional unit disk is defined making it a
geodesically complete metric space whose geodesic lines are precisely the
Euclidean straight lines. Moreover, it is shown that the unit disk with this
new metric is not isometric to any hyperbolic model of constant negative
curvature, nor to any convex domain in R2 equipped with its Hilbert metric. | Charalampos Charitos, Ioannis Papadoperakis, Georgios Tsapogas | 2023-10-13T17:43:40Z | http://arxiv.org/abs/2310.09280v1 | # A specific model of Hilbert geometry on the unit disc
###### Abstract
A new metric on the open 2-dimensional unit disk is defined making it a geodesically complete metric space whose geodesic lines are precisely the Euclidean straight lines. Moreover, it is shown that the unit disk with this new metric is not isometric to any hyperbolic model of constant negative curvature, nor to any convex domain in \(\mathbb{R}^{2}\) equipped with its Hilbert metric.
_2020 Mathematics Subject Classification: 51F99_
## 1 Introduction
On the 8th of August 1900, at the Second International Congress of Mathematicians held in Paris, David Hilbert delivered a lecture entitled "The future problems of mathematics", in which he presented a collection of open problems. The fourth problem of the list can be stated as follows: if \(\Omega\) is a convex subset of a Euclidean space, find a characterization of all metrics on \(\Omega\) for which the Euclidean lines are geodesics. Additional conditions can be imposed on these metrics on \(\Omega\) by requiring geodesic completeness and that the Euclidean lines be the unique geodesics. The geometries satisfying these extra requirements are of particular interest and have been studied extensively.
Before Hilbert, Beltrami in [1] had already shown that the unit disc in the plane, with the Euclidean chords taken as geodesics of infinite length, is a model of hyperbolic geometry. However, Beltrami did not give a formula for this distance, and this led Klein in [4] to express the distance in the unit disc in terms of the cross ratio. Hilbert's fourth problem became a very active research area
and it was gradually realized that a complete description of all metrics satisfying Hilbert's problem was not feasible. Consequently, each metric resolving Hilbert's problem defines a new geometry worth studying. A very important class of such metrics, defined by means of the cross ratio, is referred to as _Hilbert metrics_; these metrics play a central role in this research area.
Among the prominent mathematicians who worked on Hilbert's fourth problem, it is worth mentioning Busemann and Pogorelov; see for instance [2], [7], [8]. The ideas of the latter for solving Hilbert's fourth problem came from Busemann, who introduced integral geometry techniques to approach the problem. Busemann's idea was to consider, for every two points \(x\) and \(y\) in a convex subset \(\Omega\) of the real projective space \(RP^{n}\), the unique geodesic segment \(\left[x,y\right]\) joining these points, and the set of hyperplanes of \(RP^{n}\) intersecting \(\left[x,y\right]\), equipped with a nonnegative measure having specific properties. In dimension 2, Pogorelov's solution consisted in showing that every distance between \(x\) and \(y\) satisfying Hilbert's problem is given by a metric \(d(x,y)\) constructed with the help of such a Busemann measure on the set of hyperplanes mentioned above. There are generalizations of Pogorelov's theorem in higher dimensions and one may see in [6] a detailed discussion of Hilbert's fourth problem.
However, Pogorelov's approach is very general and does not allow for further study of the geometry of these metrics. By contrast, in the present work a concrete new metric satisfying Hilbert's problem is defined without the use of the cross ratio, and its geometry is studied. More precisely, it is shown that this metric makes the open unit disk a geodesically complete metric space whose geodesics are of infinite length and are precisely the Euclidean lines. Moreover, it is shown that this metric space is not isometric to any hyperbolic model of constant negative curvature, nor to any convex domain in the plane equipped with its Hilbert metric. Finally, the natural Euclidean boundary of the unit disk is shown to coincide with the visual boundary with respect to the new metric, namely, with the set of equivalence classes of asymptotic geodesic rays.
## 2 Definitions
Let \(f:\left(-1,1\right)\longrightarrow\mathbb{R}\) be the function
\[f(t)=\frac{t}{1-\left|t\right|}\]
and define a metric \(d_{I}\) on the interval \(\left(-1,1\right)\) by
\[d_{I}\left(s,t\right)=\left|f\left(s\right)-f\left(t\right)\right|.\]
Clearly,
\[d_{I}\left(t,0\right)=d_{I}\left(0,t\right)=\left|\frac{t}{1-\left|t\right|} \right|=\frac{1}{1-\left|t\right|}-1,\ \ \forall t\in\left(-1,1\right)\]
and
\[d_{I}\left(s,t\right)=f\left(\left|s\right|\right)+f\left(\left|t\right|\right) \text{ if }st<0.\]
Moreover, this metric has the following two properties:
1. For \(s,t\in\left(-1,1\right)\), \(d_{I}\left(s,t\right)\longrightarrow\infty\) as \(t\longrightarrow-1\) or \(1\).
2. For \(q,s,t\in\left(-1,1\right)\) with \(q<s<t\) we have \(d_{I}\left(q,s\right)+d_{I}\left(s,t\right)=d_{I}\left(q,t\right).\)
We now define a metric \(D\) on the open \(2\)-dimensional unit disk \(\mathbb{D}^{2}\) as follows: consider the unit disk in the \(xy-\)plane and identify each ray with an angle \(\theta\in\left[0,2\pi\right].\) For each such ray (i.e. angle \(\theta\)) denote by \(\Delta_{\theta}\) the diameter determined by that ray. Observe that the Euclidean length of \(\Delta_{\theta}\) is \(2.\) For any point \(z\) in the disk, denote by \(z_{\theta}\) its projection to the diameter \(\Delta_{\theta}.\) Let \(x,y\) be two points in the disk.
For each \(\theta\in\left[0,2\pi\right],\) set \(d_{\theta}\left(x,y\right):=d_{I}\left(x_{\theta},y_{\theta}\right)\) where the projection points \(x_{\theta},y_{\theta}\) in \(\Delta_{\theta}\) are identified with the corresponding points in the interval \(\left(-1,1\right)\). Define
\[D\left(x,y\right):=\frac{1}{2}\int_{0}^{2\pi}d_{\theta}\left(x,y\right)d\theta. \tag{1}\]
Since, for every \(\theta\in\left[0,\pi\right]\) the diameters \(\Delta_{\theta}\) and \(\Delta_{\theta+\pi}\) coincide, we have \(d_{\theta}\left(x,y\right)=d_{\pi+\theta}\left(x,y\right)\) for any two points \(x,y.\) It follows that
\[D\left(x,y\right)=\frac{1}{2}\int_{0}^{2\pi}d_{\theta}\left(x,y\right)d\theta =\int_{0}^{\pi}d_{\theta}\left(x,y\right)d\theta \tag{2}\]
**Lemma 1** (Triangle Inequality).: _Let \(x,y,z\) be three points in the disk. (a) If the Euclidean segment \(\left[x,z\right]\) in the unit disk contains the point \(y\) then_
\[D(x,z)=D(x,y)+D(y,z).\]
_(b) If \(y\) is a point not contained in \(\left[x,z\right]\) then_
\[D(x,z)<D(x,y)+D(y,z).\]
Proof.: (a) The projection \(y_{\theta}\) will be in the interior of the segment \(\left[x_{\theta},z_{\theta}\right]\subset\Delta_{\theta}\) for all directions \(\theta\) except in the case the direction \(\theta\) is perpendicular to the segment \(\left[x,z\right]\) (in which case \(x_{\theta}\equiv z_{\theta}\)). By Property P2
\[d_{\theta}\left(x,y\right)+d_{\theta}\left(y,z\right)=d_{\theta}\left(x,z\right)\]
which shows the desired equality.
(b) If \(y\) lies on the Euclidean line through \(x\) and \(z\) but outside the segment \(\left[x,z\right]\), then part (a) applied to the collinear triple gives \(D(x,y)=D(x,z)+D(z,y)\) or \(D(y,z)=D(y,x)+D(x,z)\), and in either case \(D(x,y)+D(y,z)>D(x,z)\) since \(D(z,y)>0\) and \(D(y,x)>0\). We may therefore assume that \(x,y,z\) are not collinear. Set \(\theta_{yx}\) (resp. \(\theta_{yz}\)) to be the angle which, viewed as a ray, is perpendicular to the Euclidean segment \([y,x]\) (resp. \([y,z]\)). We may assume that \(\theta_{yx}<\theta_{yz}.\) Then for every \(\theta\notin[\theta_{yx},\theta_{yz}]\) the point \(y_{\theta}\) is contained in the segment \([x_{\theta},z_{\theta}]\) so that by Property P2 we have \[d_{\theta}(x,z)=d_{\theta}(x,y)+d_{\theta}(y,z).\]
For \(\theta\in[\theta_{yx},\theta_{yz}]\) the point \(y_{\theta}\) is not contained in the segment \([x_{\theta},z_{\theta}]\) which implies that
\[d_{\theta}(y,z)=d_{\theta}(y,x)+d_{\theta}(x,z)\ \ \text{or,}\ \ d_{\theta}(y,x)=d_{ \theta}(y,z)+d_{\theta}(z,x)\]
depending on whether \(x_{\theta}\in[y_{\theta},z_{\theta}]\) or, \(z_{\theta}\in[y_{\theta},x_{\theta}].\) In the first case we have
\[d_{\theta}(x,z)<d_{\theta}(y,z)\Longrightarrow d_{\theta}(x,z)\lneqq d_{ \theta}(y,z)+d_{\theta}(x,y)\]
and the same strict inequality follows in the second case. This completes the proof of part (b).
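The integral defining \(D\) is easy to approximate numerically, which gives a concrete way to illustrate Lemma 1. The following Python sketch (purely illustrative; the midpoint-rule integrator and the sample points are ad hoc choices) checks the equality of part (a) for a collinear triple and the strict inequality of part (b) for a non-collinear one.

```python
import math

def d_I(s, t):
    f = lambda u: u / (1 - abs(u))
    return abs(f(s) - f(t))

def D(x, y, n=4000):
    # D(x, y) = (1/2) * integral over [0, 2*pi) of d_theta(x, y), midpoint rule.
    h = 2 * math.pi / n
    total = 0.0
    for k in range(n):
        c, s = math.cos((k + 0.5) * h), math.sin((k + 0.5) * h)
        total += d_I(x[0] * c + x[1] * s, y[0] * c + y[1] * s)
    return 0.5 * total * h

x, z = (-0.3, -0.1), (0.5, 0.3)
y_on = (0.1, 0.1)                      # the Euclidean midpoint of [x, z]
y_off = (0.1, 0.4)                     # a point off the segment [x, z]
assert abs(D(x, y_on) + D(y_on, z) - D(x, z)) < 1e-9      # part (a)
assert D(x, y_off) + D(y_off, z) > D(x, z) + 1e-3          # part (b)
```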
**Lemma 2**.: _The function \(D\left(x,y\right)=\int_{0}^{\pi}d_{\theta}\left(x,y\right)d\theta\) defines a metric on the open unit disk._
Proof.: To complete the proof that \(D\) is a metric we only need to show that the integral
\[\int_{0}^{\pi}d_{\theta}\left(x,y\right)d\theta\]
is finite. By Lemma 1, it suffices to show that for any \(x\) in the unit disk the integral \(\int_{0}^{\pi}d_{\theta}\left(O,x\right)d\theta\) is finite. Recall that the point \(O\) in the diameter \(\Delta_{\theta}\) is identified with \(0\in(-1,1)\) and observe that by definition of the function \(f\) the orientation of the diameter \(\Delta_{\theta}\) is irrelevant. Let \(\|\cdot\|\) denote Euclidean length. Clearly,
\[d_{I}(0,x_{\theta})\leq d_{I}(0,\|Ox\|)\]
We then have
\[D(O,x)=\int_{0}^{\pi}d_{\theta}(O,x)d\theta=\int_{0}^{\pi}d_{I}(0,x_{\theta})d \theta\leq\int_{0}^{\pi}d_{I}(0,\|Ox\|)d\theta=\frac{\|Ox\|}{1-\|Ox\|}\pi.\]
It is well known that a curve with endpoints \(x,z\) is a geodesic segment with respect to a metric \(d\) if and only if for every \(y\) in the curve we have \(d(x,y)+d(y,z)=d(x,z).\) It follows, by part (a) of Lemma 1 above, that Euclidean lines in the unit disk are geodesics with respect to the metric \(D\) and part (b) shows that only the Euclidean lines are geodesics with respect to \(D.\) Hence, we have the following
**Proposition 3**.: _The metric space \(\left(\mathbb{D}^{2},D\right)\) is a geodesic metric space whose geodesics are precisely the Euclidean lines in \(\mathbb{D}^{2}.\)_
The following properties follow from the definition of the metric \(D\).
**Proposition 4**.: _(a) Every Euclidean rotation \(R_{\phi}:\mathbb{D}^{2}\longrightarrow\mathbb{D}^{2},\phi\in\left[0,2\pi\right],\) centered at the origin is an isometry of the metric space \(\left(\mathbb{D}^{2},D\right).\) (b) Every Euclidean reflection \(Q_{\phi}:\mathbb{D}^{2}\longrightarrow\mathbb{D}^{2}\) with respect to a line forming an angle \(\phi\in\left[0,2\pi\right]\) with the \(x-\)axis is an isometry of the metric space \(\left(\mathbb{D}^{2},D\right).\)_
Proof.: Since \(d_{\theta}\left(R_{\phi}(x),R_{\phi}(y)\right)=d_{\theta-\phi}\left(x,y\right)\) for every \(\theta\) and \(\int_{0}^{2\pi}d_{\theta}\left(x,y\right)d\theta=\int_{\phi}^{2\pi+\phi}d_{\theta}\left(x,y\right)d\theta\) by periodicity, part (a) follows.
For (b), it suffices, by (a), to show that the Euclidean reflection \(R_{0}\) with respect to the \(x-\) axis is an isometry. Clearly, for arbitrary \(x,y\in\mathbb{D}^{2}\) and for every \(\theta\in\left[0,2\pi\right]\) we have
\[d_{\theta}\left(x,y\right)=d_{2\pi-\theta}\left(R_{0}(x),R_{0}(y)\right)\]
which implies that \(D(x,y)=D\left(R_{0}(x),R_{0}(y)\right).\)
We now proceed to show that the metric space \(\left(\mathbb{D}^{2},D\right)\) is geodesically complete, that is, every geodesic segment extends uniquely to a geodesic line of infinite length.
If \(\xi\) is a point on the boundary, \(d_{\theta}\left(O,\xi\right)\) can be defined via projections as before, and it is finite for all \(\theta,\) except for a single value in \(\left[0,\pi\right).\) Thus, the integral \(\int_{0}^{\pi}d_{\theta}\left(O,\xi\right)d\theta\) makes sense and we have
**Lemma 5**.: _If \(O\) is the center of the unit disk and \(\xi\) a point on the boundary, the integral \(\int_{0}^{\pi}d_{\theta}\left(O,\xi\right)d\theta\) is not bounded._
Proof.: As above, the point \(O\) in the diameter \(\Delta_{\theta}\) is identified with \(0\in\left(-1,1\right)\) and, if \(\theta_{\xi}\) is the angle determined by \(\xi,\) for all \(\theta\neq\theta_{\xi}\) we have
\[d_{\theta}\left(O,\xi\right) =d_{I}\left(0,\left\|O\xi\right\|\cos\left(\theta-\theta_{\xi} \right)\right)=\left|f\left(\cos\left(\theta-\theta_{\xi}\right)\right)\right|\] \[=\left|\frac{\cos\left(\theta-\theta_{\xi}\right)}{1-\left|\cos \left(\theta-\theta_{\xi}\right)\right|}\right|=\frac{1}{1-\left|\cos\left( \theta-\theta_{\xi}\right)\right|}-1.\]
Then the integral \(\int_{0}^{\pi}d_{\theta}\left(O,\xi\right)d\theta\) equals
\[\frac{1}{2}\int_{0}^{2\pi}\left[\frac{1}{1-\left|\cos\left( \theta-\theta_{\xi}\right)\right|}-1\right]d\theta=\frac{1}{2}\int_{0}^{2\pi} \left[\frac{1}{1-\left|\cos\theta\right|}-1\right]d\theta=\] \[=\frac{1}{2}4\int_{0}^{\pi/2}\left[\frac{1}{1-\cos\theta}-1 \right]d\theta>2\int_{0}^{\pi/2}\left[\frac{1}{\theta}-1\right]d\theta=\infty.\]
The above Lemma permits us to say that the \(D-\)length of a Euclidean ray (or, diameter) is infinite. In a similar manner, if \(\xi,\eta\) are two points on the boundary, \(d_{\theta}\left(\xi,\eta\right)\) is finite for all \(\theta,\) except for at most two values of \(\theta\) in \(\left[0,\pi\right).\) Thus, the integral \(\int_{0}^{\pi}d_{\theta}\left(\xi,\eta\right)d\theta\) makes sense and we have
**Lemma 6**.: _If \(\xi,\eta\) are two points on the boundary, the integral \(\int_{0}^{\pi}d_{\theta}\left(\xi,\eta\right)d\theta\) is not bounded._
Proof.: By Proposition 4, we may assume that the geodesic line determined by \(\xi,\eta\) is perpendicular to the \(x-\)axis with intersection point, say, \(A.\) It suffices to show that \(\int_{0}^{\pi}d_{\theta}\left(A,\xi\right)d\theta\) is not bounded.
Let \(\theta_{\xi}\) be the angle formed by the \(x-\)axis and the geodesic ray joining \(O\) with \(\xi.\) For all \(\theta\in\left[0,\pi/2\right]\setminus\left\{\theta_{\xi}\right\}\) we have
\[d_{\theta}\left(O,\xi\right)=d_{\theta}\left(O,A\right)+d_{\theta}\left(A,\xi\right)\]
and for all \(\theta\in\left[\pi/2,\pi\right]\) the triangle inequality
\[d_{\theta}\left(O,\xi\right)\leq d_{\theta}\left(O,A\right)+d_{\theta}\left(A,\xi\right)\]
holds as well, since \(d_{\theta}\) is induced by the metric \(d_{I}\) on the diameter \(\Delta_{\theta}\).
Therefore, the triangle inequality
\[d_{\theta}\left(O,\xi\right)\leq d_{\theta}\left(O,A\right)+d_{\theta}\left(A,\xi\right)\]
holds for all \(\theta\in\left[0,\pi\right]\setminus\left\{\theta_{\xi}\right\}.\) By Lemma 5, \(\int_{0}^{\pi}d_{\theta}\left(O,\xi\right)d\theta\) is not bounded and \(\int_{0}^{\pi}d_{\theta}\left(O,A\right)d\theta=D(O,A)\) is a positive real, thus,
\[\int_{0}^{\pi}d_{\theta}\left(A,\xi\right)d\theta\]
cannot be bounded.
**Remark 7**.: _There exists a large family of metrics making the open unit disk a geodesic metric space satisfying Propositions 3, 4 and Lemmata 5, 6. In fact, for every strictly increasing function \(g:\left(-1,1\right)\rightarrow\mathbb{R}\) which satisfies_
\[\lim_{t\to 1}\int_{0}^{2\pi}\left|g\left(t\cos\theta\right)\right|d \theta=\infty\]
_we may apply the above construction using \(g\) instead of \(f\) and obtain a metric on the open unit disk with the above mentioned properties._
## 3 Further properties of \(\left(\mathbb{D}^{2},D\right)\)
For any \(\kappa>0,\) let \(\mathbb{H}_{-\kappa^{2}}^{2}\) denote the standard hyperbolic model of constant negative curvature on the open unit disk with distance function \(d_{\kappa}.\) For a convex domain \(U\) in \(\mathbb{R}^{2}\) denote by \(d_{\mathcal{H}}\) the Hilbert metric for which we refer the reader to [5, Ch.5, Section 6].
**Theorem 8**.: _For all \(\kappa>0,\) the metric spaces \(\left(\mathbb{D}^{2},D\right)\) and \(\left(\mathbb{H}_{-\kappa^{2}}^{2},d_{\kappa}\right)\) are not isometric. Moreover, for any convex domain \(U\) in \(\mathbb{R}^{2}\) equipped with the Hilbert metric \(d_{\mathcal{H}}\) the metric spaces \(\left(\mathbb{D}^{2},D\right)\) and \(\left(U,d_{\mathcal{H}}\right)\) are not isometric._
Proof.: Assume \(F_{\kappa}:\mathbb{D}^{2}\longrightarrow\mathbb{H}_{-\kappa^{2}}^{2}\) is such an isometry which, by homogeneity of \(\mathbb{H}_{-\kappa^{2}}^{2},\) we may assume that \(F_{\kappa}\) preserves the center \(O\) of the disk. Moreover, as the image of the \(x-\)axis under \(F_{\kappa}\) is a line in \(\mathbb{H}_{-\kappa^{2}}^{2}\) containing the origin, by composing \(F_{\kappa}\) with a rotation in \(\mathbb{H}_{-\kappa^{2}}^{2},\) we may assume that \(F_{\kappa}\) preserves the \(x-\)axis. We next show that \(F_{\kappa}\) necessarily preserves the \(y-\)axis as well, by showing that the image of the \(y-\)axis under \(F_{\kappa}\) is a line perpendicular to the \(x-\)axis. To see this, for any \(x\in\left(0,1\right)\) consider the quadrilateral \(XYZW\) where \(X=\left(x,0\right)\), \(Y=\left(0,x\right)\), \(Z=\left(-x,0\right)\) and \(W=\left(0,-x\right).\) Clearly, \(XYZW\) is a square with respect to the metric \(D\) and, hence, so must be its image under \(F_{\kappa}.\) The geodesic segments \(\left[F_{\kappa}(X),F_{\kappa}\left(Z\right)\right]\) and \(\left[F_{\kappa}(Y),F_{\kappa}\left(W\right)\right]\) intersect at \(O=F_{\kappa}(O)\) which is the midpoint for both segments. Moreover, these segments must form a right angle at \(O,\) otherwise the quadrilateral \(F_{\kappa}(X)F_{\kappa}\left(Y\right)F_{\kappa}(Z)F_{\kappa}\left(W\right)\) cannot be a square.
Define the function \(h:\left[0,\infty\right]\longrightarrow\mathbb{R}\) where \(h(b),\) for \(b\in\left[0,\infty\right),\) is the \(d_{1}-\)length of the height of the right angle hyperbolic triangle in \(\mathbb{H}_{-1}^{2}\) with side lengths equal to \(b.\) For \(b=\infty,\)\(h\left(\infty\right)\) is the \(d_{1}-\)length of the height of the right angle ideal hyperbolic triangle in \(\mathbb{H}_{-1}^{2}\) with vertices \(O,\left(1,0\right)\) and \(\left(0,1\right).\) By elementary calculations, \(h\left(\infty\right)\) is the \(d_{1}-\)length of the segment with endpoints \(O\) and \(\left(1-\sqrt{2}/2,1-\sqrt{2}/2\right).\) Thus its Euclidean length is \(\sqrt{2}-1\) and
\[h\left(\infty\right)=\log\frac{1+\left(\sqrt{2}-1\right)}{1-\left(\sqrt{2}-1 \right)}=\log\left(1+\sqrt{2}\right)\approx 0.8813735870 \tag{3}\]
For \(b\in\left[0,\infty\right)\) let \(B\) be the point on the positive \(x-\)axis so that \(OB\) has hyperbolic length \(b\) and denote by \(C\) the foot of the height drawn from \(O\) in the right angle hyperbolic triangle in \(\mathbb{H}_{-1}^{2}\) with side lengths equal to \(b.\) Then the triangle \(\triangle(OCB)\) has a right angle at \(C\) and \(\widehat{COB}=\pi/4.\) Using the formula
\[\cos\frac{\pi}{4}=\frac{\tanh\left(h\left(b\right)\right)}{\tanh\left(b\right)}\]
we find
\[h\left(b\right)=\frac{1}{2}\log\frac{1+\left(\sqrt{2}/2\right)\tanh b}{1-\left( \sqrt{2}/2\right)\tanh b} \tag{4}\]
We next define an analogous function \(h^{D}\) for the metric space \(\left(\mathbb{D}^{2},D\right)\) with a different domain
\[h^{D}:\left[0,1\right]\longrightarrow\mathbb{R}\]
where for \(x\in\left[0,1\right),\)\(h^{D}(x)\) is the \(D-\)length of the height of the right angle geodesic triangle \(T_{x}\) with vertices \(O,\left(x,0\right)\) and \(\left(0,x\right).\) For \(x=1,\)\(h^{D}\left(1\right)\) is the length of the height of the corresponding ideal geodesic triangle. As all rotations of the unit disk are isometries of \(\left(\mathbb{D}^{2},D\right)\) we have
\[h^{D}\left(x\right)=D\left(O,\frac{\sqrt{2}}{2}x\right)\text{ \ for all }x\in\left[0,1\right]. \tag{5}\]
We will explicitly compute the \(D-\)lengths of the heights of the triangles \(T_{\sqrt{2}/2}\) and \(T_{1}\) and compare them with the corresponding \(d_{\kappa}-\)lengths of the heights of the triangles \(F_{\kappa}\left(T_{\sqrt{2}/2}\right)\) and \(F_{\kappa}\left(T_{1}\right).\) The comparison of the lengths of the heights of \(T_{1}\) and \(F_{\kappa}\left(T_{1}\right)\) will suffice to reach a contradiction for the case \(\kappa=1.\) The triangles \(T_{\sqrt{2}/2}\) and \(F_{\kappa}\left(T_{\sqrt{2}/2}\right)\) are then used to reach a contradiction for arbitrary \(\kappa.\)
We first compute the \(D-\)lengths of the heights of the triangles \(T_{\sqrt{2}/2}\) and \(T_{1}.\) For the triangle \(T_{\sqrt{2}/2},\) by (5) and using the easily verified fact that the derivative with respect to \(\theta\) of the function \(\frac{4\tan^{-1}\left(\sqrt{3}\tan\left(\frac{\theta}{2}\right)\right)}{\sqrt{3}}\) is \(\frac{2}{2-\cos\theta},\) we have
\[h^{D}\left(\frac{\sqrt{2}}{2}\right) =D\left(O,\frac{\sqrt{2}}{2}\text{ }\frac{\sqrt{2}}{2}\right)=D\left(O,\frac{1}{2}\right)=\int_{0}^{\pi} \left[\frac{1}{1-\frac{1}{2}\left|\cos\theta\right|}-1\right]d\theta\] \[=\int_{0}^{\pi/2}\left[\frac{1}{1-\frac{1}{2}\cos\theta}-1\right] d\theta+\int_{\pi/2}^{\pi}\left[\frac{1}{1+\frac{1}{2}\cos\theta}-1\right]d\theta\] \[=2\left[\frac{4\tan^{-1}\left(\sqrt{3}\tan\left(\frac{\theta}{2 }\right)\right)}{\sqrt{3}}-\theta\right]_{\theta=0}^{\theta=\pi/2}\] \[=2\left[\frac{8\sqrt{3}-9}{18}\pi\right]=\frac{8\sqrt{3}-9}{9} \pi\text{ \ }\approx 1.695205651 \tag{6}\]
A similar computation, using again the easily verified fact that the derivative with respect to \(\theta\) of the function \(2\sqrt{2}\tan^{-1}\left(\frac{\left(\sqrt{2}+2\right)\tan\left(\frac{\theta}{ 2}\right)}{\sqrt{2}}\right)\) is \(\frac{2+\sqrt{2}}{-\left(1+\sqrt{2}\right)\cos\theta+\sqrt{2}+2},\) shows
\[h^{D}\left(1\right)=D\left(O,\frac{\sqrt{2}}{2}\right)=\frac{3\sqrt{2}-2}{2} \pi\approx 3.522731754 \tag{7}\]
The above calculation along with (3) shows that
\[h^{D}\left(1\right)=\frac{3\sqrt{2}-2}{2}\pi\neq\log\left(1+\sqrt{2}\right)=h \left(\infty\right) \tag{8}\]
and, thus, the triangles \(T_{1}\) and \(F_{1}\left(T_{1}\right)\) cannot be isometric. It follows that \(F_{\kappa}\) cannot be an isometry in the case \(\kappa=1.\)
Before proceeding with the general case, we compute the \(d_{1}-\)length of the height of the triangle \(F\left(T_{\sqrt{2}/2}\right).\) This triangle, being isometric to \(T_{\sqrt{2}/2},\) has side lengths \(D\left(O,\frac{\sqrt{2}}{2}\right)\) and, using (4), its height has \(d_{1}-\)length
\[h\left(D\left(O,\frac{\sqrt{2}}{2}\right)\right) =h\left(\frac{3\sqrt{2}-2}{2}\pi\right)\] \[=\frac{1}{2}\log\frac{1+\left(\sqrt{2}/2\right)\tanh\left(\frac{3 \sqrt{2}-2}{2}\pi\right)}{1-\left(\sqrt{2}/2\right)\tanh\left(\frac{3\sqrt{2}- 2}{2}\pi\right)}\approx 0.8789154496 \tag{9}\]
We proceed now with the general case. Recall that geodesic lines in \(\mathbb{H}_{-\kappa^{2}}^{2}\) and \(\mathbb{H}_{-1}^{2}\) coincide as subsets of \(\mathbb{D}^{2}\) and lengths are multiplied by \(\kappa.\) Therefore, \(F_{\kappa}\left(T_{1}\right)=F_{1}\left(T_{1}\right)\) and the \(d_{\kappa}-\)length of the height of the triangle \(F_{\kappa}\left(T_{1}\right)\) is equal to \(\kappa\,h\left(\infty\right).\) By (8), it follows that \(F_{\kappa}\) can be an isometry only for the model \(\mathbb{H}_{-\kappa_{0}^{2}}^{2}\) where
\[\kappa_{0}=\frac{h^{D}\left(1\right)}{h\left(\infty\right)}=\frac{3\sqrt{2}-2 }{2\log\left(1+\sqrt{2}\right)}\pi>3. \tag{10}\]
To rule out this last case we compare the \(D-\)length of the height of the triangle \(T_{\sqrt{2}/2}\) (computed in (6) above) with the \(d_{\kappa_{0}}-\)length of the height of the triangle \(F_{\kappa_{0}}\left(T_{\sqrt{2}/2}\right).\) As before, the triangles \(F_{1}\left(T_{\sqrt{2}/2}\right)\) and \(F_{\kappa_{0}}\left(T_{\sqrt{2}/2}\right)\) coincide as sets and the \(d_{\kappa_{0}}-\)length of its height is its \(d_{1}-\)length (computed in (9) above) multiplied by \(\kappa_{0}.\) Therefore, if the triangles \(T_{\sqrt{2}/2}\) and \(F_{\kappa_{0}}\left(T_{\sqrt{2}/2}\right)\) were isometric, \(\kappa_{0}\) would have to satisfy
\[h^{D}\left(\frac{\sqrt{2}}{2}\right)=\kappa_{0}\ h\left(D\left(O,\frac{\sqrt{ 2}}{2}\right)\right)\]
which is impossible because \(\kappa_{0}>3\) (see (10)) and the ratio of \(h^{D}\left(\frac{\sqrt{2}}{2}\right)\) and \(h\left(D\left(O,\frac{\sqrt{2}}{2}\right)\right)\) is, by (6) and (9), equal to
\[\frac{h^{D}\left(\frac{\sqrt{2}}{2}\right)}{h\left(D\left(O,\frac{\sqrt{2}}{2 }\right)\right)}=\frac{\frac{8\sqrt{3}-9}{9}\pi}{h\left(\frac{3\sqrt{2}-2}{2} \pi\right)}\approx\frac{1.695205651}{0.8789154496}<3.\]
This completes the proof for arbitrary curvature.
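The numerical constants used in the argument above can be checked independently. The following sketch (an illustration only, assuming NumPy/SciPy and using the integral expression for \(D(O,\cdot)\) employed in (6) and (7)) reproduces the values in (3), (6), (7), (9) and (10):

```python
# Numerical sanity check of the constants appearing in the proof above.
import numpy as np
from scipy.integrate import quad

def D_from_origin(r):
    """D(O, P) for a point P at Euclidean distance r < 1 from the origin,
    computed as int_0^pi [1/(1 - r|cos t|) - 1] dt, cf. (6) and (7)."""
    val, _ = quad(lambda t: 1.0 / (1.0 - r * abs(np.cos(t))) - 1.0, 0.0, np.pi)
    return val

def h(b):
    """Eq. (4): d_1-length of the height of the isosceles right-angled
    hyperbolic triangle with legs of length b."""
    s = (np.sqrt(2) / 2) * np.tanh(b)
    return 0.5 * np.log((1 + s) / (1 - s))

h_inf  = np.log(1 + np.sqrt(2))          # eq. (3),  ~ 0.88137
hD_mid = D_from_origin(0.5)              # eq. (6),  ~ 1.69521 = (8*sqrt(3)-9)*pi/9
hD_one = D_from_origin(np.sqrt(2) / 2)   # eq. (7),  ~ 3.52273 = (3*sqrt(2)-2)*pi/2
h_img  = h(hD_one)                       # eq. (9),  ~ 0.87892

print(hD_one / h_inf)                    # kappa_0 of eq. (10), ~ 3.997 > 3
print(hD_mid / h_img)                    # ratio used in the final step, ~ 1.929 < 3
```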
To see that \(\left(\mathbb{D}^{2},D\right)\) is not isometric to any convex domain \(U\) equipped with its Hilbert metric \(d_{\mathcal{H}},\) we will use a result of Busemann and Kelly (see [3, §29.2])
which states the following: let \(U\) be a bounded open convex domain in \(\mathbb{R}^{2}.\) Reflections with respect to all lines in \(U\) through one fixed point exist if and only if \(U\) is the interior of an ellipse.
The reflection assumption holds for the metric space \(\left(\mathbb{D}^{2},D\right),\) see Proposition 4. If \(\left(\mathbb{D}^{2},D\right)\) were isometric to some \(\left(U,d_{\mathcal{H}}\right)\) then the same reflection assumption would hold for \(U\) and, hence, \(U\) would have to be the interior of an ellipse making \(d_{\mathcal{H}}\) the hyperbolic metric. As shown above this cannot be the case.
We next restrict our attention to geodesic rays, that is, isometric maps \(\left[0,\infty\right)\rightarrow\mathbb{D}^{2}.\)
**Definition 9**.: _Two geodesic rays \(r_{1},r_{2}:\left[0,\infty\right)\rightarrow\mathbb{D}^{2}\) in the geodesic metric space \(\left(\mathbb{D}^{2},D\right)\) are called asymptotic if the distance function \(t\to D(r_{1}(t),r_{2}(t))\) is bounded._
**Remark 10**.: _Asymptoticity of geodesic rays may be seen as a generalization to arbitrary metric spaces of parallelism of geodesic rays in Euclidean space. Moreover, equivalence classes of asymptotic geodesic rays are the tool to define the visual boundary of a geodesic metric space. It is well known, see for example [5, Prop. 10.1.4], that two geodesic rays in a geodesic metric space are asymptotic if and only if their images are at finite Hausdorff distance, a notion defined below._
**Definition 11**.: _For two geodesic rays \(r_{1}\) and \(r_{2}\) in a geodesic metric space \(\left(X,d\right)\), define their Hausdorff distance by_
\[d_{H}\left(r_{1},r_{2}\right)=\max\left\{\sup_{x\in\operatorname{Im}r_{1}}d \left(x,\operatorname{Im}r_{2}\right),\sup_{x\in\operatorname{Im}r_{2}}d\left( x,\operatorname{Im}r_{1}\right)\right\} \tag{11}\]
_where the distance of a point \(\alpha\) from a set \(B\) is \(d\left(\alpha,B\right)=\inf_{\beta\in B}d\left(\alpha,\beta\right).\)_
As geodesics in \(\left(\mathbb{D}^{2},D\right)\) coincide with Euclidean lines, it is natural to examine whether the natural Euclidean boundary \(\mathbb{S}^{1}\) of \(\mathbb{D}^{2}\) coincides with the set of equivalence classes of asymptotic geodesic rays in \(\left(\mathbb{D}^{2},D\right).\)
We say that two geodesic rays \(r_{1}\) and \(r_{2}\) coincide at infinity if, regarded as Euclidean lines, they intersect the same point of \(\mathbb{S}^{1}\equiv\partial\mathbb{D}^{2}.\)
**Theorem 12**.: _Let \(r_{1}\) and \(r_{2}\) be two geodesic rays in \(\left(\mathbb{D}^{2},D\right).\) Then \(r_{1}\) and \(r_{2}\) coincide at infinity if and only if they are asymptotic._
Proof.: The if direction follows from Lemma 6. In view of the above Remark, we will show that if \(r_{1}\) and \(r_{2}\) coincide at infinity then their images are at finite Hausdorff distance.
Since rotations around the origin \(O\) are isometries (see Proposition 4) we may assume that the common point at infinity is \(\left(1,0\right)\in\partial\mathbb{D}^{2}.\) We will first examine the case where one of the geodesic rays is the positive \(x-\)axis and the other one is contained in the upper half disk forming an angle \(\omega\in\left[0,\pi/2\right)\) with the \(x-\)axis
at \(\left(1,0\right)\). Clearly, the Hausdorff distance is increasing with respect to \(\omega\). Thus, we may consider a geodesic ray contained entirely in the first quadrant, forming an angle \(\omega\in\left[\pi/4,\pi/2\right)\) with the \(x-\)axis at \(\left(1,0\right)\). The general case will then follow easily.
Set \(\lambda=\tan\omega\) and for any \(x\) satisfying \(0<x<1\) consider the point \(B=\left(x,0\right)\) on the (geodesic) \(x-\)axis and the point \(A=\left(x,\lambda(1-x)\right)\) on the other geodesic ray. Set \(\theta_{x}\) to be the angle formed by the segment \(OA\) and the \(x-\)axis. Clearly the distance \(D\left(A,B\right)\) depends on \(x\) and it suffices to show that
\[D\left(A,B\right)\text{ remains bounded as }x\to 1. \tag{12}\]
Recall that for a direction \(\Delta_{\theta}\), \(\theta\in\left[0,\pi\right]\) and a point \(X\) in the interior of the unit disk we denote by \(X_{\theta}\) its projection on the diameter \(\Delta_{\theta}\). For the points \(A\) and \(B\) we have
\[\left\|OA_{\theta}\right\|=\left\|OA\right\|\cos\left(\theta-\theta_{x}\right) =x\frac{\cos\left(\theta-\theta_{x}\right)}{\cos\theta_{x}}\text{ \ and \ }\left\|OB_{\theta}\right\|=x\cos\theta.\]
As \(A\) is contained in the first quadrant, for all \(\theta\in\left[0,\frac{\pi}{2}\right]\) we obtain
\[d_{\theta}\left(A,B\right)=d_{I}\left(A_{\theta},B_{\theta}\right)=\frac{1}{1- \left\|OA_{\theta}\right\|}-\frac{1}{1-\left\|OB_{\theta}\right\|}=\frac{1}{1- x\frac{\cos\left(\theta-\theta_{x}\right)}{\cos\theta_{x}}}-\frac{1}{1-x\cos \theta}. \tag{13}\]
In a similar manner we find
\[d_{\theta}\left(A,B\right)=\begin{cases}\frac{1}{1-x\frac{\cos\left(\theta- \theta_{x}\right)}{\cos\theta_{x}}}+\frac{1}{1+x\cos\theta},&\text{if }\theta\in\left[\frac{\pi}{2},\frac{\pi}{2}+ \theta_{x}\right]\\ -\frac{1}{1+x\frac{\cos\left(\theta-\theta_{x}\right)}{\cos\theta_{x}}}+ \frac{1}{1+x\cos\theta},&\text{if }\theta\in\left[\frac{\pi}{2}+ \theta_{x},\pi\right]\end{cases}\]
In view of (12) we will only examine the limit as \(x\to 1\) of the integral
\[\int_{0}^{\theta_{0}}d_{\theta}\left(A,B\right)d\theta=\int_{0}^{\theta_{0}} \left[\frac{1}{1-x\frac{\cos\left(\theta-\theta_{x}\right)}{\cos\theta_{x}}}- \frac{1}{1-x\cos\theta}\right]d\theta\]
for sufficiently small \(\theta_{0}\) (to be chosen later) because all the above expressions for \(d_{\theta}\left(A,B\right)\) are continuous and bounded on \(\left[\theta_{0},\pi-\theta_{0}\right]\) and the integral \(\int_{\pi-\theta_{0}}^{\pi}d_{\theta}\left(A,B\right)d\theta\) is treated similarly. By substituting
\[\sin\theta_{x}=\frac{\lambda\left(1-x\right)}{\sqrt{x^{2}+\lambda^{2}\left(1 -x\right)^{2}}}\text{ \ and \ }\cos\theta_{x}=\frac{x}{\sqrt{x^{2}+\lambda^{2}\left(1-x\right)^{2}}}\]
in (13) we obtain
\[d_{\theta}\left(A,B\right) =\frac{x\sin\theta_{x}\sin\theta}{\cos\theta_{x}-x\cos\left(\theta- \theta_{x}\right)-x\cos\theta_{x}\cos\theta+x^{2}\cos\left(\theta-\theta_{x} \right)\cos\theta}\] \[=\frac{\lambda\left(1-x\right)\sin\theta}{1-x\cos\theta-\lambda \left(1-x\right)\sin\theta-x\cos\theta+x^{2}\cos^{2}\theta+\lambda x\left(1-x \right)\cos\theta\sin\theta}\] \[=\frac{\lambda\left(1-x\right)\sin\theta}{\left(1-x\cos\theta \right)\left(1-x\cos\theta-\lambda\left(1-x\right)\sin\theta\right)} \tag{14}\]
Define
\[\Phi\left(\theta\right)=\frac{\sin\theta}{1-x\cos\theta-\lambda\left(1-x \right)\sin\theta}\]
and a straightforward calculation shows that
\[\Phi^{\prime}\left(\theta\right)=\frac{-x+\cos\theta}{\left(1-x\cos\theta- \lambda\left(1-x\right)\sin\theta\right)^{2}}.\]
Therefore, there exists a unique angle \(\omega_{x}\in\left(\theta_{x},\frac{\pi}{2}\right)\) such that
\[\cos\omega_{x}=x,\]
so that \(\Phi^{\prime}\left(\theta\right)=0\) exactly at \(\theta=\omega_{x}\).
Using the equalities \(\cos\omega_{x}=x\) and \(\sin\omega_{x}=\sqrt{1-x^{2}}\) it is easily shown that
\[\sqrt{1-x}\Phi\left(\omega_{x}\right) =\frac{\sqrt{1-x}\sin\omega_{x}}{1-x\cos\omega_{x}-\lambda\left(1 -x\right)\sin\omega_{x}}=\frac{\sqrt{1-x}\sqrt{1-x^{2}}}{1-x^{2}-\lambda\left( 1-x\right)\sqrt{1-x^{2}}}\] \[=\frac{\sqrt{1+x}\left(1-x\right)}{\left(1-x\right)\left(1+x- \lambda\sqrt{1-x^{2}}\right)}\longrightarrow\frac{\sqrt{2}}{2}\ \ \text{ as }\ x\to 1. \tag{15}\]
The choice of \(x\) determines both \(A\) and \(B\) as well as \(\theta_{x}\); moreover, by (15), for \(x\) sufficiently close to \(1\) we have
\[\sqrt{1-x}\,\Phi\left(\omega_{x}\right)\leq 2. \tag{16}\]
As the quantity \(\Phi\left(\theta\right)\) attains its maximum at \(\theta=\omega_{x}\) we have, using (14),
\[\int_{0}^{\theta_{0}}d_{\theta}\left(A,B\right)d\theta =\int_{0}^{\theta_{0}}\frac{\lambda\left(1-x\right)}{\left(1-x \cos\theta\right)}\Phi\left(\theta\right)d\theta\leq\int_{0}^{\theta_{0}} \frac{\lambda\left(1-x\right)}{\left(1-x\cos\theta\right)}\Phi\left(\omega_{x }\right)d\theta\] \[=\int_{0}^{\theta_{0}}\frac{\lambda\sqrt{1-x}}{\left(1-x\cos \theta\right)}\sqrt{1-x}\Phi\left(\omega_{x}\right)d\theta\leq 2\int_{0}^{\theta_{0}} \frac{\lambda\sqrt{1-x}}{\left(1-x\cos\theta\right)}d\theta \tag{17}\]
where the latter inequality follows from (16). It suffices to show that \(\int_{0}^{\theta_{0}}\frac{\sqrt{1-x}}{\left(1-x\cos\theta\right)}d\theta\) is bounded which follows from the following identity
\[\int\frac{1}{1-x\cos\theta}d\theta=\frac{2}{\sqrt{1-x^{2}}}\tan^{-1}\left( \sqrt{\frac{1+x}{1-x}}\tan\frac{\theta}{2}\right)\]
and the observation that the range of the inverse tangent function is a bounded interval:
\[\int_{0}^{\theta_{0}}\frac{\sqrt{1-x}}{\left(1-x\cos\theta\right)}d\theta =\frac{2\sqrt{1-x}}{\sqrt{1-x^{2}}}\left[\tan^{-1}\left(\sqrt{\frac {1+x}{1-x}}\tan\frac{\theta}{2}\right)\right]_{\theta=0}^{\theta=\theta_{0}}\] \[=\frac{2}{\sqrt{1+x}}\tan^{-1}\left(\sqrt{\frac{1+x}{1-x}}\tan \frac{\theta_{0}}{2}\right).\]
We now discuss the case of two arbitrary geodesic rays \(r_{1}\) and \(r_{2}\) which coincide at infinity. As mentioned at the beginning of the proof, we may assume that the common boundary point is \(\left(1,0\right)\in\partial\mathbb{D}^{2}.\) Let \(\omega_{i}\in\left(-\pi/2,\pi/2\right),\)\(i=1,2\) be the angle formed by \(r_{i}\) and the \(x-\)axis. Denote by \(r_{x}\) the geodesic ray whose image is the positive \(x-\)axis in the unit disk. If both \(\omega_{1},\omega_{2}\) are positive and, say, \(\omega_{1}<\omega_{2}\) then \(d_{H}\left(r_{1},r_{2}\right)<d_{H}\left(r_{x},r_{2}\right)\). If \(\omega_{1}\omega_{2}<0\) then a triangle inequality argument asserts that \(d_{H}\left(r_{1},r_{2}\right)\leq d_{H}\left(r_{x},r_{1}\right)+d_{H}\left(r_{x},r_{2}\right).\) This completes the proof of the theorem.
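As a numerical illustration of the boundedness just established, the contribution of the directions \(\theta\in[0,\theta_{0}]\) to \(D(A,B)\) can be evaluated from (14) for a sample ray; the sketch below (NumPy/SciPy assumed, with the illustrative choices \(\omega=60^{\circ}\) and \(\theta_{0}=0.3\)) shows that this contribution approaches a finite limit as \(x\to 1\) rather than diverging.

```python
# Illustration of estimate (17): the integral of d_theta(A, B) over [0, theta_0]
# stays bounded as x -> 1.  Sample parameters only.
import numpy as np
from scipy.integrate import quad

lam, theta0 = np.tan(np.radians(60.0)), 0.3   # lambda = tan(omega), sample theta_0

def d_theta(theta, x):
    # equation (14)
    num = lam * (1.0 - x) * np.sin(theta)
    den = (1.0 - x * np.cos(theta)) * (1.0 - x * np.cos(theta) - lam * (1.0 - x) * np.sin(theta))
    return num / den

for x in [0.9, 0.99, 0.999, 0.9999]:
    val, _ = quad(d_theta, 0.0, theta0, args=(x,), limit=200)
    print(x, val)   # the values settle to a finite limit instead of blowing up
```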
**Acknowledgments.** The authors would like to thank the anonymous referee for very helpful comments and suggestions which, among other things, improved the exposition of the proof of Theorem 8.
Conflict of Interest statement: On behalf of all authors, the corresponding author states that there is no conflict of interest.
Data Availability Statement: Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
|
2302.09091 | Neutrino-Assisted Early Dark Energy is a Natural Resolution of the
Hubble Tension | It has very recently been claimed that the neutrino-assisted early dark
energy model -- a promising resolution of the Hubble tension that can
ameliorate the theoretical fine-tuning and coincidence problems that plague
other theories -- does not provide natural or cosmologically interesting
results. In this short paper, we show that these conclusions are incorrect for
three reasons. First, we identify errors in the calculations. Second, we
dispute the definition therein of what constitutes an 'interesting' and 'natural'
model. Finally, we demonstrate that the conclusions of that work were arrived at without
fully exploring the full parameter space of the model. Neutrino-assisted early
dark energy remains a natural and interesting potential resolution of the
Hubble tension that merits further study. | Mariana Carrillo González, Qiuyue Liang, Jeremy Sakstein, Mark Trodden | 2023-02-17T19:00:05Z | http://arxiv.org/abs/2302.09091v1 | # Neutrino-Assisted Early Dark Energy is a Natural Resolution of the Hubble Tension
###### Abstract
It has very recently been claimed [1] that the neutrino-assisted early dark energy model -- a promising resolution of the Hubble tension that can ameliorate the theoretical fine-tuning and coincidence problems that plague other theories -- does not provide natural or cosmologically interesting results. In this short paper, we show that these conclusions are incorrect for three reasons. First, we identify errors in the calculations. Second, we dispute the definition in [1] of what constitutes an "interesting" and "natural" model. Finally, we demonstrate that the conclusions of [1] were arrived at without fully exploring the full parameter space of the model. Neutrino-assisted early dark energy remains a natural and interesting potential resolution of the Hubble tension that merits further study.
## I Introduction
The Hubble tension [2; 3; 4; 5] is one of the biggest mysteries confounding modern cosmologists. Despite a continued updating and interrogation of various cosmological datasets over the last decade, the disagreement between early- and late-time measurements of the Hubble constant \(H_{0}\) has persisted and worsened to the point where the discrepancy has surpassed \(5\sigma\)[2; 3; 5; 6; 7]. The inability of the \(\Lambda\)CDM cosmological standard model to account for all of the astrophysical and cosmological observations has motivated theorists to consider the tantalizing possibility that the Hubble tension is a signal of new physics beyond the cosmological standard model.
A plethora of theoretical models that can resolve the tension have been proposed that have met with varying degrees of success [3; 4; 8]. Among the various proposals, Early Dark Energy (EDE) [9] has emerged as a promising candidate [4; 8]. In this scenario, a minimally-coupled scalar field \(\phi\) is frozen at some initial condition \(\phi_{i}\) at early times, but begins to roll around the epoch of matter-radiation equality (MRE). During this phase of rolling, the scalar accounts for \(\sim 10\%\) of the energy budget of the universe and increases the Hubble parameter compared to \(\Lambda\)CDM. This has the effect of decreasing the sound horizon, which inversely increases the early-time measurements of \(H_{0}\) so that they are consistent with the (larger) late-time measurements (see [3; 5; 8] for a detailed explanation of this mechanism).
The minimal EDE scenario suffers from theoretical fine-tunings and a coincidence problem. The scalar field mass must be fine-tuned to \(m_{\phi}\sim 10^{-29}\)eV (the Hubble scale at MRE) in order for it to transit from the over- to the under-damped regime at this epoch. Such small scalar masses present a technical-naturalness challenge for EDE models, since quantum corrections will drive the mass towards the cut-off of the effective field theory (EFT). Additionally, the physics of MRE is completely disconnected from the physics of the scalar, presenting a coincidence (or _why now_) problem for EDE models. Why should the onset of EDE occur at MRE and not some other time?
Two of us have proposed a framework that ameliorates a number of the theoretical issues with EDE -- neutrino-assisted early dark energy (\(\nu\)EDE) [10]. Here, EDE has a Yukawa coupling to neutrinos, which are relativistic in the early universe but become non-relativistic when the temperature of the universe is of order their mass. When this happens, the neutrinos inject energy into the scalar, giving it a "kick". It is a cosmic coincidence that the sum of the neutrino masses is of order the temperature at MRE. Thus, if the neutrino mass spectrum is dominated by one species, the kick naturally happens at approximately the correct time that EDE needs to become active to resolve the \(H_{0}\) tension without the need to fine-tune the scalar field mass. Similarly, there is no need to fine-tune the initial conditions because the scalar can begin at its minimum -- the natural initial condition -- and be displaced by the kick. In subsequent work [11], the four of us calculated the quantum corrections to the scalar field mass in this setup and found that a light mass is technically natural, thereby resolving the final EDE fine-tuning. We also constructed numerical solutions and explored how to generalize the model (including the form of the conformal coupling) in order to ensure that the EFT remains well behaved at high redshifts. \(\nu\)EDE is therefore a theoretically appealing potential resolution of the Hubble tension.
In a recent paper [1], it has been claimed that there are no regions of the \(\nu\)EDE parameter space where the scenario is "natural" or "cosmologically interesting". The purpose of this paper is to demonstrate that these claims are incorrect. The authors of [1] make three specific
claims:
1. That there is a maximum magnitude of the kick that prevents the success of the mechanism;
2. That there are no models with the initial condition \(\phi_{i}=0\) (which they claim is the "natural" one) that can inject the correct amount of EDE at MRE to resolve the tension; and
3. That there are no "interesting" models with \(\phi_{i}\neq 0\) because the authors consider these initial conditions to be unnatural, and they find uncoupled models with the same initial conditions that can also provide sufficient EDE to resolve the tension.
In what follows, we will demonstrate why we strongly disagree with these claims.
We will first show that the equations in [1] used to arrive at claim (1) are incorrect, arising from the use of an unphysical neutrino energy density and pressure. We next explain why claims (2) and (3) are incorrect. In brief, the natural initial condition is not \(\phi_{i}=0\) because, as we discussed at length in [11], the coupling to neutrinos shifts the minimum of the effective potential governing the scalar field dynamics away from zero and towards large values. Large initial values of the field, which are dismissed as unnatural in [1], are, in fact, natural, and the ensuing phenomenology remains interesting.
## II Neutrino-assisted early dark energy is natural and interesting
In this section we take each of the claims above in turn and explain why we disagree with the results in [1].
### Derivation of the \(\nu\)EDE Equations of Motion
The action for \(\nu\)EDE is1
Footnote 1: We will not specify the potential \(V(\phi)\) in what follows, since our conclusions are general and the \(\nu\)EDE framework works for a generic potential. The models studied in [10; 11] were chosen to be of the form \(V(\phi)\sim\lambda\phi^{4}\), so that the potential is simple and renormalizable.
\[S = \int{\rm d}^{4}x\sqrt{-g}\left[-\frac{1}{2}\partial_{\mu}\phi \partial^{\mu}\phi-V(\phi)\right] \tag{1}\] \[+ S_{m}[g_{\mu\nu};\Psi_{m}]+S_{\nu}[\bar{g}_{\mu\nu};\Psi_{\nu}],\]
where \(S_{m}\) is the action for all matter fields \(\Psi_{m}\), and \(S_{\nu}\) is the action for neutrino fields \(\Psi_{\nu}\). This implies that all matter fields except neutrinos move on geodesics of the Einstein frame metric, \(g_{\mu\nu}\), and neutrinos move on geodesics of the Jordan frame metric, \(\bar{g}_{\mu\nu}\), to which they couple minimally. The two metrics are related via
\[\bar{g}_{\mu\nu}=A^{2}(\phi)g_{\mu\nu}\,\quad{\rm with}\quad A(\phi)=\exp( \beta\phi/M_{\rm Pl}). \tag{2}\]
The cosmological equation of motion for the scalar resulting from the action (1) is (assuming a flat FLRW metric)
\[\ddot{\phi}+3H\dot{\phi}+V^{\prime}(\phi)=\frac{\beta}{M_{\rm Pl}}\Theta(\nu) \tag{3}\]
where \(\Theta(\nu)=g_{\alpha\beta}\Theta^{\alpha\beta}(\nu)\) is the trace of the Einstein frame energy momentum tensor \(\Theta^{\alpha\beta}(\nu)=2/\sqrt{-g}\delta S_{\nu}/\delta g_{\alpha\beta}\). This is not covariantly conserved (\(\nabla_{\alpha}\Theta^{\alpha\beta}\neq 0\)) due to the non-minimal coupling between the scalar and neutrinos. In contrast, the Jordan frame energy-momentum tensor \(\bar{\Theta}^{\alpha\beta}(\nu)=2/\sqrt{-\bar{g}}\delta S_{\nu}/\delta\bar{g}_ {\alpha\beta}\) is covariantly conserved with respect to the Jordan frame connection i.e., \(\bar{\nabla}_{\alpha}\bar{\Theta}^{\alpha\beta}(\nu)=0\). This implies that we should apply thermodynamics to derive the neutrinos' pressure and density in the Jordan frame and translate all quantities into the Einstein frame. The two energy-momentum tensors are related by the following formulae [12]:
\[\Theta^{\alpha\beta}(\nu) = A^{6}\bar{\Theta}^{\alpha\beta}(\nu),\quad\Theta^{\alpha}_{\ \beta}(\nu)=A^{4}\bar{\Theta}^{\alpha}_{\ \beta}(\nu),\] \[\Theta_{\alpha\beta}(\nu) = A^{2}\bar{\Theta}_{\alpha\beta}(\nu),\quad\quad\Theta(\nu)=A^{ 4}\bar{\Theta}(\nu). \tag{4}\]
This implies that the Jordan and Einstein frame pressure and density are related via
\[P_{\nu}=A^{4}\bar{P}_{\nu},\quad\rho_{\nu}=A^{4}\bar{\rho}_{\nu}. \tag{5}\]
It is \(\bar{P}_{\nu}\) and \(\bar{\rho}_{\nu}\) that must be calculated using the Fermi-Dirac distribution. Doing so, one finds
\[\bar{\Theta}(\nu) = 3\bar{P}_{\nu}-\bar{\rho}=-\frac{g_{\nu}}{2\pi^{2}}\bar{T}_{\nu} ^{4}\tau\left(\frac{m_{\nu}}{\bar{T}_{\nu}}\right);\] \[\tau(x) = x^{2}\int_{x}^{\infty}\frac{(u^{2}-x^{2})^{\frac{1}{2}}}{e^{u} +1}{\rm d}u, \tag{6}\]
where, since neutrinos decouple while being relativistic, the Jordan frame temperature is \(\bar{T}_{\nu}=\bar{T}_{0}/\bar{a}\) with \(\bar{a}\) the Jordan frame scale factor. The relation between the Jordan and Einstein frame temperature \(T_{\nu}\) is given by2 \(\bar{T}_{\nu}=T_{\nu}/A(\phi)\). Returning to equation (6), one has
\[\bar{\Theta}(\nu)=-\frac{g_{\nu}}{2\pi^{2}}\widetilde{T}_{\nu}^{4}\,\tau\left(\frac{m_{\nu}}{\widetilde{T}_{\nu}}\right),\quad\text{with}\quad\widetilde{T}_{\nu}\equiv\frac{T_{\nu}}{A(\phi)}. \tag{7}\]
Footnote 2: We can relate the Jordan and Einstein frame temperatures as follows. First, we need to relate the two scale factors. The line-element in the Jordan frame is
\[{\rm d}\bar{s}^{2} = \bar{g}_{\mu\nu}{\rm d}x^{\mu}{\rm d}x^{\nu}=A^{2}(\phi){\rm d}s^{2}\] \[= -A^{2}(t){\rm d}t^{2}+A^{2}(t)a^{2}(t)\delta_{ij}{\rm d}x^{i}{\rm d }x^{j},\]
where \({\rm d}s^{2}=-{\rm d}t^{2}+a(t)^{2}\delta_{ij}{\rm d}x^{i}{\rm d}x^{j}\) with \(a(t)\) the Einstein frame scale factor and \(A(t)=A(\phi(t))\). Defining the Jordan frame coordinate time \(\bar{t}(t)\) by \({\rm d}\bar{t}=A(t){\rm d}t\) the Jordan frame line-element can be brought into the standard coordinate time form: \[{\rm d}\bar{s}^{2} = -{\rm d}\bar{t}^{2}+A^{2}(\bar{t})a^{2}(\bar{t})\delta_{ij}{\rm d }x^{i}{\rm d}x^{j}=-{\rm d}\bar{t}^{2}+\bar{a}^{2}(\bar{t})\delta_{ij}{\rm d}x ^{i}{\rm d}x^{j}.\]
From this, one can see that \(\bar{a}=Aa\) and \(\bar{T}_{\nu}=T_{\nu}/A\).
Using equation (4), equation (3) becomes
\[\ddot{\phi}+3H\dot{\phi}+V^{\prime}(\phi)=-\frac{g_{\nu}\beta}{2\pi^{2}M_{\rm Pl}}T_{\nu}^{4}\,\tau\left(\frac{m_{\nu}}{\widetilde{T}_{\nu}}\right). \tag{8}\]
This is identical to the formula derived in [10], except for the argument of \(\tau\), which involves \(\widetilde{T}_{\nu}=T_{\nu}/A(\phi)\) instead of \(T_{\nu}\), the latter being the approximation used in [10; 11], valid in the limit \(\beta\phi/M_{\rm Pl}\ll 1\), in order to ensure that the model constituted a healthy EFT. In this limit, the temperatures in both frames are equivalent, whereas away from this limit, there is a small shift in the time at which the energy is injected. Importantly, the factor of \(A(\phi)^{4}\) that arises when transforming the energy-momentum tensor from the Jordan to the Einstein frame is cancelled when the temperature is similarly transformed. Independent of the potential or the time of the injection, this equation of motion predicts a kick of order \(\Delta\phi\approx-0.03\beta M_{\rm Pl}\)[10], as long as we remain within the regime of validity of the EFT. Note that over this range, this quantity increases linearly with \(\beta\).
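For orientation, the behavior of the source term in Eq. (8) is controlled by the kick function \(\tau\) defined in Eq. (6). A minimal numerical sketch (NumPy/SciPy assumed; illustration only) shows that \(\tau\), and hence the energy injection relative to the radiation background, is only sizeable when \(T_{\nu}\) is of order \(m_{\nu}\):

```python
# The kick function tau(x) of Eq. (6)/(8), with x = m_nu / T_nu.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def tau(x):
    # tau(x) = x^2 * int_x^inf sqrt(u^2 - x^2) / (e^u + 1) du
    # (the upper limit is cut off where the integrand is exponentially negligible)
    val, _ = quad(lambda u: np.sqrt(u * u - x * x) / (np.exp(u) + 1.0), x, x + 60.0)
    return x * x * val

# tau -> 0 as x -> 0 (relativistic neutrinos, traceless limit) and is
# Boltzmann-suppressed for x >> 1, so it peaks when T_nu is of order m_nu:
res = minimize_scalar(lambda x: -tau(x), bounds=(0.1, 20.0), method="bounded")
print("tau(x) peaks near x = m_nu/T_nu =", round(res.x, 2), ", tau_max =", round(tau(res.x), 2))
```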
### Comparison with the Derivation in [1]
In [1], the derivation begins from the scalar field equation of motion (our equation (3) and equation (A1) in the appendix of [1]):
\[\ddot{\phi}+3H\dot{\phi}+V^{\prime}(\phi)=\frac{\beta}{M_{\rm Pl}}\Theta(\nu) =\frac{\beta}{M_{\rm Pl}}(3P-\rho). \tag{9}\]
Next, an equivalent expression is used
\[\ddot{\phi}+3H\dot{\phi}+V^{\prime}(\phi)=\frac{\beta}{M_{\rm Pl}}\widetilde{ \Theta}(\nu)e^{\beta\frac{\phi}{M_{\rm Pl}}}. \tag{10}\]
with \(\widetilde{P}=P\exp(-\beta\phi/M_{\rm Pl})\), \(\widetilde{\rho}=\rho\), and \(\widetilde{\Theta}(\nu)=(3\widetilde{P}-\widetilde{\rho})\). This can be thought of as a simple redefinition of variables, and so is certainly allowed. However, crucially, the authors of [1] then set
\[\widetilde{\Theta}(\nu)=-\frac{g_{\nu}}{2\pi^{2}}T_{\nu}^{4}\tau\left(\frac{ m_{\nu}}{T_{\nu}}\right) \tag{11}\]
(equation (A3) of the appendix). In our view this step is flawed because \(\widetilde{P}\) and \(\widetilde{\rho}\) are not a physical pressure and density.3 The Jordan frame is the unique frame in which the neutrinos obey Fermi-Dirac statistics. Although one can make a redefinition of variables to make the computation easier, the physics should not depend on the choice of frame or thermodynamic variables. Because of this, the subsequent equations in [1] differ from the correct equations, and are only approximately in agreement with them in the limit \(\beta\phi/M_{\rm Pl}\ll 1\). Ultimately, [1] contains a modified form of equation (8) that differs from the correct equation by a spurious exponential factor
Footnote 3: Some authors refer to these quantities as the _conserved_ density and pressure [12; 13; 14]. This is because these quantities satisfy the FLRW continuity equation in the Einstein frame, and so redshift as they would if the scalar were uncoupled. This does not however imply that they are physical. They are quantities that are sometimes useful for calculation purposes.
\[\ddot{\phi}+3H\dot{\phi}+V^{\prime}(\phi)=-\frac{g_{\nu}\beta}{2\pi^{2}M_{\rm Pl }}T_{\nu}^{4}\tau\left(\frac{m_{\nu}}{T_{\nu}}\right)e^{\beta\frac{\phi}{M_{ \rm Pl}}}. \tag{12}\]
The presence of this extra factor, coupled with extrapolating the model beyond the regime of validity of the EFT, results in an expression for the kick \(\Delta\phi\) that has a maximum. While this is a critical problem with the results in [1], it is not the only issue since, as we will show in the next section, the kick magnitude cannot determine whether a model can or cannot solve the Hubble tension problem.
### Naturalness of \(\nu\)EDE
We now turn to claims regarding the naturalness and cosmological relevance of \(\nu\)EDE, beginning with the concept of naturalness. The argument in [1] is that, taking \(V(\phi)=\lambda\phi^{4}/4\), the natural initial condition for \(\phi\) is \(\phi_{i}=0\), since this corresponds to the minimum of the potential. However, this fails to account for the fact that the dynamics of the field are governed by an effective potential [10; 11]
\[V_{\rm eff}(\phi)=V(\phi)-\beta\Theta(\nu)\frac{\phi}{M_{\rm Pl}}, \tag{13}\]
which is minimized at
\[\phi_{\rm min}=\left(\frac{\beta\Theta(\nu)}{\lambda M_{\rm Pl}}\right)^{ \frac{1}{3}}. \tag{14}\]
The natural initial condition is thus \(\phi_{i}=\phi_{\rm min}\)4. Crucially, \(\phi_{i}\) differs significantly from zero and, in fact, a simple estimate shows
Footnote 4: In the original \(\nu\)EDE paper [10] we set \(\Theta(\nu)\approx 0\) before the kick since the neutrino is ultra-relativistic in the early universe. The natural initial condition was therefore taken to be \(\phi_{i}=0\), which minimizes \(V(\phi)=\lambda\phi^{4}/4\). It was later appreciated [11] that the small but non-zero \(\Theta(\nu)\) would drive the minimum far from \(\phi=0\), making \(|\phi_{i}|=|\phi_{\rm min}|\gg 0\) the natural initial condition.
\[\frac{|\phi_{\rm min}(z)|}{M_{\rm Pl}}\approx 10^{-6}\left(\frac{\beta}{800} \right)^{\frac{1}{3}}\left(\frac{10^{-98}}{\lambda}\right)^{\frac{1}{3}} \left(\frac{m_{\nu}}{0.3{\rm eV}}\right)^{\frac{2}{3}}(1+z)^{\frac{2}{3}}. \tag{15}\]
This can also be seen in Fig. 1.
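As a rough numerical cross-check of this estimate, Eq. (14) can be evaluated with \(\Theta(\nu)\) taken from Eq. (6). The sketch below (illustration only) assumes the standard relic-neutrino temperature \(T_{\nu,0}\approx 1.95\,\mathrm{K}\approx 1.7\times 10^{-4}\,\mathrm{eV}\), \(g_{\nu}=2\) for a single species, and the reduced Planck mass; none of these inputs are fixed explicitly in the text.

```python
# Order-of-magnitude check of estimate (15) for |phi_min|/M_Pl (a sketch;
# the assumed inputs are flagged below and are not taken from the paper).
import numpy as np
from scipy.integrate import quad

M_PL  = 2.435e27    # reduced Planck mass in eV (assumed)
T_NU0 = 1.68e-4     # relic neutrino temperature today in eV (assumed)
G_NU  = 2.0         # neutrino degrees of freedom for one species (assumed)

def tau(x):
    val, _ = quad(lambda u: np.sqrt(u * u - x * x) / (np.exp(u) + 1.0), x, x + 60.0)
    return x * x * val

def phi_min_over_mpl(z, beta=800.0, lam=1e-98, m_nu=0.3):
    """|phi_min|/M_Pl from Eqs. (6) and (14) at redshift z (m_nu in eV)."""
    T = T_NU0 * (1.0 + z)
    theta_nu = G_NU / (2.0 * np.pi ** 2) * T ** 4 * tau(m_nu / T)   # |Theta(nu)|
    return (beta * theta_nu / (lam * M_PL)) ** (1.0 / 3.0) / M_PL

z = 3000.0
print(phi_min_over_mpl(z))              # ~ 1.5e-4
print(1e-6 * (1.0 + z) ** (2.0 / 3.0))  # estimate (15), ~ 2e-4: same order of magnitude
```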
We now turn to the concept of a "cosmologically interesting" phenomenology for \(\nu\)EDE. In [1], cosmologically interesting models are defined as those with a kick
occurring at redshift \(z\in[1585,6309]\) that injects a fractional energy density \(f_{\rm EDE}\in[7\%,13\%]\). It is reasonable to use this as a phenomenological criterion for the original EDE model [9], since in that case the kick is sharp. However, this is not how the \(\nu\)EDE model resolves the Hubble tension. As was pointed out in [11], in \(\nu\)EDE the kick has a smaller magnitude but lasts longer, and therefore it is necessary to consider the integrated effect rather than solely the maximum magnitude. Without a full comparison with cosmological datasets, one therefore cannot falsify the model simply by looking at the kick.
Furthermore, the conclusion in [1] that \(\nu\)EDE is not "interesting" is drawn because the analysis fails to find any \(\phi_{i}=0\) models that inject the correct amount of EDE at the redshift of MRE. The same analysis reveals some parameter values that accomplish this with \(\phi_{i}\neq 0\) but dismisses these because \(\phi_{i}\neq 0\) is not "natural". As we have shown above, \(\phi_{i}\neq 0\) is in fact the correct initial condition, and there is a large range of initial conditions that are natural. Thus, by this definition, \(\nu\)EDE would be cosmologically interesting. A further claim in [1] is that those \(\nu\)EDE models that do inject \(\mathcal{O}(10\%)\) EDE around MRE are still uninteresting because it is possible to find uncoupled models with the same \(\lambda\) and initial conditions that accomplish a similar timely injection. The motivation for \(\nu\)EDE is that it has theoretically-attractive features addressing both the fine-tuning and coincidence problems that its uncoupled counterparts lack. In this sense, the existence of such counterparts has no bearing on the appeal of \(\nu\)EDE.
Finally, before concluding, we remark that the analysis in [1] does not fully explore the theory parameter space. First, the values of \(\beta\) and \(\lambda\) investigated constitute only a small region of the viable parameter space uncovered in [11]. Second, while the sum of the neutrino masses is tightly constrained in the base \(\Lambda\)CDM scenario, it is too restrictive to a priori fix a particular value (as was done in [1]) in the \(\nu\)EDE framework, since the constraints can weaken substantially once new physics is introduced [15; 16]. It is therefore reasonable to consider values away from the Planck best-fit, especially since the EDE-neutrino coupling will induce modifications of the Boltzmann hierarchy for neutrinos [17]. Any numerical exploration of the \(\nu\)EDE parameter space should account for this.
## III Conclusions
In this brief paper, we have addressed the claims made in [1] and have demonstrated that the conclusions therein do not hold. One problem is that the equations used to derive a maximum for the kick are not the correct equations relevant for \(\nu\)EDE. Another problem is that, regardless of the equations used, once the correct initial conditions are used, claims regarding whether the model is natural or cosmologically interesting are dramatically altered. Indeed, for this reason an analysis such as that in [1] cannot exclude the model, no matter what parameter range is explored. Certainly, neutrino-assisted early dark energy will be constrained by detailed comparisons with cosmological datasets. This work is underway, and at present \(\nu\)EDE remains a natural and interesting potential resolution of the Hubble tension.
## Acknowledgements
The work of QL and MT is supported in part by US Department of Energy (HEP) Award DE-SC0013528.
|
2305.02163 | From Early Theories of Dzyaloshinskii-Moriya Interactions in Metallic
Systems to Today's Novel Roads | Since the early 1960's, the discovery of Dzyaloshinskii-Moriya interaction
(DMI) helped to explain the physical mechanisms behind certain magnetic
phenomena, such as net moment in antiferromagnets, or enhanced anisotropy field
from heavy metals impurity in dilute Cu:Mn alloy. Since the researchers unveil
the key role that DMI plays in stabilizing chiral Neel type magnetic domain
wall and magnetic skyrmions, the studies on DMI have received growing interest.
Governed by spin-orbit coupling (SOC) and various types of inversion symmetry
breaking (ISB) in magnetic systems, DMI drives the forming of distinct
morphologies of magnetic skyrmions. Our aim is to briefly introduce the
research history of DMI and its significance in the field of modern
spintronics. | Albert Fert, Mairbek Chshiev, André Thiaville, Hongxin Yang | 2023-05-03T14:55:24Z | http://arxiv.org/abs/2305.02163v1 | # From Early Theories of Dzyaloshinskii-Moriya Interactions
###### Abstract
Since the early 1960s, the discovery of the Dzyaloshinskii-Moriya interaction (DMI) has helped to explain the physical mechanisms behind certain magnetic phenomena, such as the net moment in antiferromagnets or the enhanced anisotropy field produced by heavy-metal impurities in dilute Cu:Mn alloys. Since researchers unveiled the key role that DMI plays in stabilizing chiral Neel-type magnetic domain walls and magnetic skyrmions, studies of DMI have received growing interest. Governed by spin-orbit coupling (SOC) and various types of inversion symmetry breaking (ISB) in magnetic systems, DMI drives the formation of distinct morphologies of magnetic skyrmions. Our aim is to briefly introduce the research history of DMI and its significance in the field of modern spintronics.
## 1 Introduction
The Dzyaloshinskii-Moriya interaction (DMI) is an anti-symmetric interaction that favors a perpendicular alignment of the spins on neighboring atomic sites. The Heisenberg interaction between two spins favors parallel (ferromagnetic) or anti-parallel (antiferromagnetic) states, whereas DMI induces a clockwise or counterclockwise rotation between the spins. The presence of DMI requires broken inversion symmetry and the existence of sizable spin-orbit coupling (SOC). It acts as a key ingredient for noncollinear
magnetism and chiral magnetism, leading to chiral domain walls and magnetic skyrmions. Such peculiar spin textures are of great interest in both fundamental and application aspects. Racetrack memory and logic devices based on skyrmions and chiral domain walls are very promising spintronic candidates. From the non-centrosymmetric bulk magnets to the metallic multilayer systems, the DMI effect has been intensively studied both theoretically and experimentally.
Here, we recall the history of the early models of DMI, the density functional theory (DFT) approaches of calculating DMI, the discovery of magnetic skyrmions and chiral domain walls and their potential applications. Furthermore, we briefly discuss some prospects of interlayer DMI and possibilities of its ferroelectric control, including in systems comprising two-dimensional magnets.
## 2 Early models of the Dzyaloshinskii-Moriya interaction
Early developments of DMI were first aimed to explain the origin of "weak" ferromagnetism in some antiferromagnetic materials. Previously, it had been noted that some materials considered to be antiferromagnetic, such as \(\alpha\)-Fe\({}_{2}\)O\({}_{3}\), or the MnCO\({}_{3}\) and CoCO\({}_{3}\) carbonate compounds, exhibited spontaneous magnetization behavior, with very small magnetic moment compared to that of respective magnetic atoms. Neel attributed the net moment in these antiferromagnets to an impurity effect.[1] Thus, the purity and uniformity of these crystals would strongly affect the ferromagnetic properties, and in an ideal antiferromagnetic crystal, such spontaneous magnetization would vanish. However, later reports showed that ferromagnetism could persist in very pure crystals.[2] Meanwhile, Li proposed that the net moment in these crystals could originate from canted spins in the antiferromagnetic domain walls, as the spin canting could possibly give rise to the net moment in the crystals.[3] However, the formation of such domain walls is not energetically favorable. In 1957, Dzyaloshinskii used the Landau second-order phase transition theory in order to demonstrate that "weak" ferromagnetism in \(\alpha\)-Fe\({}_{2}\)O\({}_{3}\) could be due to the spin canting state of the material.[4; 5] Specifically, the symmetry of a magnetic crystal is determined by the space group of atom and spin distributions, leading to different classes of magnetic states. As shown in Fig. 1(a), three magnetic states can be identified in \(\alpha\)-Fe\({}_{2}\)O\({}_{3}\), namely, state I with spins directed along the crystal axis, state II with spins lying in one of the planes of symmetry, and state III with some spins along second-order
axes. The state I is the commensurate antiferromagnetic state without net moment, while states II and III can exhibit spontaneous magnetic moment. By investigating the thermodynamic potentials of \(\alpha\)-Fe\({}_{2}\)O\({}_{3}\), Dzyaloshinskii proved that the transition between states I and II (or III) will occur at a given temperature and pressure. He also suggested that the presence of a different type of spin interaction was responsible for the aforementioned transition that causes a tilting between the spins of neighboring atomic sites.
At the beginning of 1960, Moriya pointed out that the spin interaction Dzyaloshinskii suggested should be an anti-symmetric interaction of the form:
\[\overrightarrow{D}\cdot\left[\overrightarrow{S_{i}}\times\overrightarrow{S_ {j}}\right] \tag{1}\]
with \(\overrightarrow{D}\) named the Dzyaloshinskii-Moriya interaction (DMI) vector, \(\overrightarrow{S_{i}}\) and \(\overrightarrow{S_{j}}\) indicate the spins of two atomic sites \(i\) and \(j\).[6] By including the effect of spin-orbit coupling to Anderson's superexchange theory,[7] Moriya deduced that the presence of \(\overrightarrow{D}\) requires spin-orbit coupling and inversion symmetry breaking in magnetic crystals.
Several months later, Moriya developed a general theory to describe the microscopic mechanism of DMI.[8] In Moriya's model, for two magnetic atoms with only \(3d\) orbitals at atomic sites \(i\) and \(j\), the two-site Hubbard-type Hamiltonian reads:
\[H=H_{0}^{i}+H_{0}^{j}+T^{ij}+H_{SO}^{i}+H_{SO}^{j}, \tag{2}\]
where \(H_{0}^{i}\) denotes the localized \(3d\) electrons on site \(i\):
\[H_{0}^{i}=\sum_{m,\sigma}\varepsilon_{i,m}c_{im\sigma}^{\dagger}\,c_{im\sigma }+U\sum_{m\sigma\neq m^{\prime}\sigma^{\prime}}n_{im\sigma}n_{im\sigma^{\prime }}, \tag{3}\]
in which \(\varepsilon_{i,m}\) is the orbital energy of \(3d\) electron, and \(U\) indicates the Coulomb repulsion.[8, 9] The hopping between sites \(i\) and \(j\) reads:
\[T^{ij}=\sum_{n,m,\sigma}t_{in,jm}(c_{in\sigma}^{\dagger}c_{jm\sigma}+c_{jm \sigma}^{\dagger}c_{in\sigma}), \tag{4}\]
where \(t_{ij}\) denotes the hopping integral between sites \(i\) and \(j\), and \(t_{in,jm}\) is the contribution between orbitals \(n\) and \(m\) for \(t_{ij}\), respectively. \(H_{SO}^{i}\) in Eq. 2 represents the spin-orbit coupling (SOC) term at site \(i\):
\[H_{SO}^{i}=\xi\mathbf{L_{i}}\cdot\mathbf{S_{i}}\, \tag{5}\]
where \(\mathbf{L_{i}}\) and \(\mathbf{S_{i}}\) denote the angular momentum and spin momentum at atomic site \(i\). Under
the limit of large \(U\left(U>>t_{ij}\right)\), the last three terms of Hamiltonian in Eq. (2) can be treated as perturbation to \(H_{0}=H_{0}^{l}+H_{0}^{j}\), the effective interaction between two atomic sites spins \(\mathbf{S}_{i}\) and \(\mathbf{S}_{j}\) can be derived as:
\[H_{eff}=-J_{ij}\mathbf{S}_{i}\cdot\mathbf{S}_{j}+\mathbf{D}_{ij}\cdot\left(\mathbf{S}_{i}\times \mathbf{S}_{j}\right)+\mathbf{S}_{i}\cdot\mathbf{\Gamma}_{ij}\cdot\mathbf{S}_{j}, \tag{6}\]
Here, the scalar \(J_{ij}\) in the first term is the Heisenberg exchange interaction obtained at second order of perturbation theory about \(H_{0}\); it is a symmetric interaction with \(J_{ij}=J_{ji}\) and is of order \((t_{ij})^{2}/U\). The second term is the DMI vector \(\mathbf{D}_{ij}\), which is obtained once \(H_{SO}\) is taken into account. \(\mathbf{D}_{ij}\) is antisymmetric, with \(\mathbf{D}_{ij}=-\mathbf{D}_{ji}\). The strength of \(\mathbf{D}_{ij}\) is proportional to \(\tilde{\xi}(t_{ij})^{2}/U\). When the fourth-order perturbation is included, the symmetric tensorial interaction \(\mathbf{\Gamma}_{ij}\) can be derived. \(\mathbf{\Gamma}_{ij}\) has the smallest energy scale, of order \(\tilde{\xi}^{2}(t_{ij})^{2}/U\), and can be neglected. A physical picture of this result can be described as follows: the electron hopping between nearest-neighbor magnetic atoms does not occur with spin flipping in the absence of SOC, and the neighboring spins prefer a collinear configuration due to superexchange interactions. The spin-flipping hopping process of electrons between nearest-neighbor magnetic atoms only occurs when the SOC effect is considered. Such a two-site spin-flipping hopping process defines the microscopic origin of DMI.
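The structure of Eq. (6) can be made concrete numerically: any bilinear coupling between two classical spins, written as a \(3\times 3\) matrix, splits into an isotropic Heisenberg part, an antisymmetric part encoded by a DMI vector, and a traceless symmetric tensor. A minimal sketch of this decomposition is given below; the matrix entries are invented purely for illustration, and sign conventions may differ from those of Eq. (6).

```python
# Decomposition of a generic bilinear exchange matrix J_ij (coupling S_i . J_ij . S_j)
# into the three terms of Eq. (6): isotropic exchange, DMI vector, symmetric anisotropy.
import numpy as np

J_ij = np.array([[ 5.0,  0.4, -0.1],     # made-up numbers, for illustration only
                 [-0.2,  5.2,  0.3],
                 [ 0.3, -0.5,  4.9]])

J_iso  = np.trace(J_ij) / 3.0                       # isotropic (Heisenberg) constant
J_anti = 0.5 * (J_ij - J_ij.T)                      # antisymmetric part
# DMI vector encoded by the antisymmetric part: D = (A_23, A_31, A_12)
D_vec  = np.array([J_anti[1, 2], J_anti[2, 0], J_anti[0, 1]])
Gamma  = 0.5 * (J_ij + J_ij.T) - J_iso * np.eye(3)  # traceless symmetric tensor

print(J_iso)   # Heisenberg part
print(D_vec)   # DMI vector
print(Gamma)   # symmetric anisotropic exchange
```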
As a blueprint for the effect of crystal symmetry on DMI, Moriya proposed five criteria, later known as the Moriya rules. [8] If two magnetic ions located at the points \(A\) and \(B\), respectively, and the center at \(AB\) is denoted by \(C\), then:
1. When an inversion center located at \(C\), \(\overrightarrow{D}=0\)
2. When a mirror plane perpendicular to \(AB\) passes through \(C\), \(\overrightarrow{D}\parallel\) mirror plane or \(\overrightarrow{D}\perp AB\)
3. When there is a mirror plane including \(A\) and \(B\), \(\overrightarrow{D}\perp\) mirror plane
4. When a two-fold rotation axis perpendicular to \(AB\) passes through \(C\), \(\overrightarrow{D}\perp\) two-fold rotation axis
5. When there is an n-fold axis (n \(\geq\) 2) along \(AB\),
\(\overline{D}\) || _AB_
The schematic representations of Moriya rules are shown in Fig. 1(b). For a ferromagnetic state, the adjacent spins are parallel to each other. As shown in the left panel of Fig. 1(c), DMI vectors with opposite signs result in clockwise and anticlockwise rotation between ferromagnetic aligned spins.
In 1976, Smith predicted that for ferromagnetic metals, spin-orbit scattering of the conduction electrons by nonmagnetic impurities could give rise to an additional DMI term arising from the Ruderman-Kittel-Kasuya-Yosida (RKKY) mechanism [10, 11]. Fert and Levy extended this theory and, for the non-centrosymmetric situation shown in the right panel of Fig. 1(c), calculated the DMI arising from electron exchange scattering on the two magnetic atoms and SOC scattering on a non-magnetic atom with strong SOC [12, 13]. They successfully explained the drastically enhanced anisotropy field induced by heavy \(d\)-metal (Au, Pt) impurities in dilute Cu:Mn alloys hosting spin glass states. The DMI vector of the model proposed by Fert and Levy can be written as:
\[\vec{D}_{ij}\left(\vec{R}_{li},\vec{R}_{lj},\vec{R}_{ij}\right)=-V_{1}\frac{\sin\left[k_{F}\left(|\vec{R}_{li}|+|\vec{R}_{lj}|+|\vec{R}_{ij}|\right)+\left(\pi/10\right)Z_{d}\right]\left(\vec{R}_{li}\cdot\vec{R}_{lj}\right)\left(\vec{R}_{li}\times\vec{R}_{lj}\right)}{|\vec{R}_{li}|^{3}\,|\vec{R}_{lj}|^{3}\,|\vec{R}_{ij}|}, \tag{7}\]
where \(\vec{R}_{li},\ \vec{R}_{lj}\) and \(\vec{R}_{ij}\) are the distance vectors of the three sides of the triangle formed by the magnetic ions at site \(i,j\), and the spin-orbit center \(l\). The parameter \(V_{1}=\)
\([135\pi\lambda_{d}\Gamma^{2}(\sin(Z_{d}\pi/10))/(32k_{F}^{3}E_{F}^{2})]\) refers to parameters of the electron gas (\(k_{F},\ E_{F}\)), their exchange interaction with the magnetic atoms (\(\Gamma\)), and parameters of the \(d\) electrons of the heavy metal impurity (\(\lambda_{d},\) and \(Z_{d}\)).
The DMI mechanism of Fig. 1(c) has been extended by Fert to the non-centrosymmetric situation at an interface between a magnetic metal and a nonmagnetic metal (NM) of large SOC [13], which leads to interfacial DMIs with, in most cases, DMI vectors in the plane of the interface.
## 3 Spin spirals from interfacial Dzyaloshinskii-Moriya interactions
In the early 2000s, researchers unveiled non-collinear magnetic states in 3d metal monolayer/NM heterostructures [14; 15; 16; 17; 18]. Magnetic frustration, i.e., competing antiferromagnetic Heisenberg exchange from further neighbors, can give rise to a non-collinear magnetic ground state in a magnet [18; 19]. The energy spectrum of the spin spirals can serve as a descriptor of
non-collinear magnetism, in which the spin moment at site \(\mathbf{r}_{i}\) can be generally described as \(\mathbf{\hat{S}}_{i}=[\cos(\mathbf{q}\cdot\mathbf{r}_{i})\sin\theta,\sin(\mathbf{q}\cdot\mathbf{r}_{i})\sin\theta,\cos\theta]\), where \(\mathbf{q}\) is the spiral wave vector, and \(\theta\) denotes the cone angle. Fig. 2(a) plots four types of homogeneous spin spirals, namely, cone Neel type, plane Neel type, cone Bloch type and plane Bloch type spirals.[17] Such a collective rotation of spins can be considered as a generalized translation action from the point of view of the generalized Bloch theorem (gBT).[20; 21]
Specifically, in a magnetic crystal, the eigenfunction \(\Psi_{\mathbf{k}}(\mathbf{r})\) for one-electron Hamiltonian \(H\) takes the form of a Bloch function:
\[\Psi_{\mathbf{k}}(\mathbf{r})\ \ =\ \ \ \ \ \mathrm{e}^{-\mathrm{i}\mathbf{k}\cdot\mathbf{r}}\ \ \mathrm{u}_{\mathbf{k}}(\mathbf{r}), \tag{8}\]
where \(\mathbf{k}\) and \(\mathbf{r}\) represent the momentum and position vectors, respectively, \(\mathrm{u}_{\mathbf{k}}(\mathbf{r})\) is a periodic function with the same periodicity as the magnetic crystal, \(\mathrm{e}\) is the Euler number and \(\mathrm{i}\) denotes the imaginary unit. For a non-collinear magnetic periodic system, Sandratskii adopted the concept of the spin space group (SSG) to depict the collective rotation actions of atomic spins.[20; 21] The group elements of the SSG are the rotation actions, denoted \(s_{R}\). Due to the group isomorphism between the SSG and the generalized translation group, \(s_{R}\) can be represented by a generalized translation action \(\mathbf{t}_{\mathbf{n}}\), the latter being an element of the generalized translation group. Because of the similarity between generalized translation actions and ordinary translation actions in periodic atomic systems, one can associate a generalized translation with a momentum vector \(\mathbf{k}\) of the Brillouin zone. The generalized translation operator \(\mathbf{R}_{n}\) is defined as:
\[\mathbf{R}_{\mathbf{n}}\ \ =\ \ e^{-\mathrm{i}\mathbf{q}\cdot\mathbf{t}_{\mathbf{n}}}, \tag{9}\]
where \(\mathbf{q}\) is the aforementioned spiral wave vector. The rotation angles of the atomic spins are given by \(\mathbf{q}\cdot\mathbf{t}_{\mathbf{n}}\). In the generalized form of the Bloch function, \(\Psi_{\mathbf{k}}^{\mathrm{q}}(\mathbf{r})=\mathrm{e}^{-\mathrm{i}\mathbf{k}\cdot\mathbf{r}}\,\mathrm{u}_{\mathrm{k}}^{\mathrm{q}}(\mathbf{r})\), the spinor function \(\mathrm{u}_{\mathrm{k}}^{\mathrm{q}}(\mathbf{r})\) has the generalized periodicity of the Hamiltonian, with \(\mathbf{R}_{n}\,\mathrm{u}_{\mathrm{k}}^{\mathrm{q}}(\mathbf{r})=\mathrm{u}_{\mathrm{k}}^{q}(\mathbf{r})\).
The gBT here offers the possibility of calculating the energy dispersion \(E[q]\) associated with the spin spiral wave vector length \(q\), which can be applied within density functional theory (DFT), the Korringa-Kohn-Rostoker (KKR) framework and tight-binding methods.[20] For the magnetic frustration-induced spin spiral ground states, \(E[q]=E[-q]\), i.e., spin spirals of opposite chirality are degenerate in the absence of SOC. The
interfacial DMI (iDMI) will inevitably lift the degeneracy of two spin spirals with opposite chirality. From 2007 on, the Hamburg and Julich groups discovered long-period spin spirals with a unique chirality in Mn monolayers on W(110) and W(001) substrates, which were the first experimental evidence of iDMI. [22; 23]
The computational derivation of iDMI parameters needs to deal with SOC and non-collinear magnetism simultaneously. However, for a given angular momentum \(\mathbf{l}\) and spin momentum \(\mathbf{s}\) of a magnetic crystal, the SOC operator \(\mathbf{l\cdot s}\) does not commute with the generalized translation operator \(\mathbf{R}_{n}\), thus SOC and the gBT are mutually exclusive. As a consequence, the SOC effect cannot be included directly when calculating the spin spiral energy.
The Julich group suggested a method to calculate the SOC-affected spin spiral energy dispersion by treating the SOC effect as a first-order perturbation.[24] For the ferromagnetic (FM)/heavy metal interfaces, one can introduce a plane Neel type spiral with \(\mathbf{\widehat{S}}_{i}=[\cos(\mathbf{q}\cdot\mathbf{r}_{i}),\sin(\mathbf{q}\cdot\mathbf{r}_{i}),0]\) to investigate the interfacial DMI. The Hamiltonian with SOC term \(H_{SOC}\) included for the spin spiral reads:
\[H_{tot}=H_{0}+H_{SOC}\text{,} \tag{10}\]
where \(H_{0}\) denotes the unperturbed spin spiral Hamiltonian. The Kohn-Sham equation of \(H_{0}\) and \(H_{tot}\) can be described as
\[H_{0}\varphi_{0,\mathbf{v}}(\mathbf{q})=\epsilon_{\mathbf{0,v}}(\mathbf{q})\varphi_{0,\mathbf{v}} (\mathbf{q}), \tag{11}\]
\[H_{tot}\varphi_{ft,\mathbf{v}}(\mathbf{q,k})=(H_{0}+H_{soc})\varphi_{ft,\mathbf{v}}(\mathbf{ q,k})=\epsilon_{ft,\mathbf{v}}(\mathbf{q,k})\varphi_{ft,\mathbf{v}}(\mathbf{q,k}), \tag{12}\]
where \(\varphi_{0,\mathbf{v}}(\mathbf{q})\) and \(\epsilon_{\mathbf{0,v}}(\mathbf{q})\) are the unperturbed eigenstates and energy spectrum, respectively. \(\epsilon_{ft,\mathbf{v}}(\mathbf{q})\) is the spectrum of the total Hamiltonian \(H_{tot}\) with eigenstates \(\varphi_{ft,\mathbf{v}}(\mathbf{q})\). By applying the magnetic force theorem,[25; 26; 27] the energy shift resulting from SOC effect \(E_{DM}(\mathbf{q})\) can be obtained by summation over all occupied states:
\[E_{DM}(\mathbf{q})=\sum_{\mathbf{v}}^{o.c.}\epsilon_{ft,\mathbf{v}}-\sum_{\mathbf{v}}^{o.c.} \epsilon_{0,\mathbf{v}}\approx\sum_{\mathbf{v}}n_{\mathbf{v}}(\mathbf{q})\delta\epsilon_{\mathbf{ v}}(\mathbf{q}), \tag{13}\]
where \(\delta\epsilon_{\mathbf{v}}(\mathbf{q})=\left\langle\varphi_{0,\mathbf{v}}(\mathbf{q})|H_{soc }|\varphi_{0,\mathbf{v}}(\mathbf{q})\right\rangle\), and \(n_{\mathbf{v}}\) is the occupation number of the unperturbed states.
Here, as representative examples, Figs. 2(c)-(d) plot the homogeneous plane spin spirals in ferromagnetic Ir(111)/Fe/Pd and antiferromagnetic Rh(001)/Ir/Fe films.[28; 29] When SOC is neglected, the energy minimum of the spin spiral energy \(E[q]\) is located at \(q=0\) (ferromagnetic
ground state, Fig. 2(c)) or \(q=\sqrt{2}/2\) (antiferromagnetic ground state, see Fig. 2(d)). Once SOC is included, the energy dispersions \(E[q]\) for both cases in Figs. 2(c)-(d) show an asymmetric behavior due to the presence of DMI. The DMI energy, defined as \(\Delta E_{DMI}=(E[q]-E[-q])/2\), shows a linear dependence on \(q\), and thus one can determine the effective DMI parameter using \(D=\frac{\mathrm{d}E[q]}{\mathrm{d}q}\).
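As an illustration of how this extraction works in practice, the odd part of a dispersion can be isolated and fitted numerically. The sketch below uses synthetic numbers (an assumed toy dispersion, not DFT data):

```python
# Extracting an effective DMI parameter from a spin-spiral dispersion E(q):
# antisymmetrize and fit the slope of the odd part at small q.
import numpy as np

q = np.linspace(-0.2, 0.2, 21)            # spiral wave vector (arbitrary units)
A_ex, D_dmi = 80.0, -3.5                  # assumed spin stiffness and DMI (toy values)
E = A_ex * q**2 + D_dmi * q               # toy dispersion once SOC is included

dE_dmi = 0.5 * (E - E[::-1])              # Delta E_DMI(q) = (E[q] - E[-q]) / 2
slope = np.polyfit(q, dE_dmi, 1)[0]       # linear fit of the odd part
print(slope)                              # recovers D_dmi = -3.5
```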
In 2017, Sandratskii proved that if the spin-orbit operator \(\mathbf{l}\cdot\mathbf{s}\) is restricted to the direction of the rotation axis \(\mathbf{\hat{n}}\), the form \((\mathbf{l}\cdot\mathbf{\hat{n}})\cdot(\mathbf{s}\cdot\mathbf{\hat{n}})\) commutes with the generalized translation operator \(\mathbf{R}_{n}\).[30] In other words, for a given spin spiral, if the SOC Hamiltonian \(H_{SOC}\) in Eq. (10) is constrained to a single component along the direction of the rotational axis, the SOC-included spin spiral energy spectrum of \(H_{tot}\) can be obtained using self-consistent calculations. This approach is the so-called qSO method, which is an extension of the first-order perturbation theory. With the qSO method, the first-order perturbation treatment of the gBT is no longer limited to full-potential DFT software.[31, 32, 33]
## 4 Interface-induced Neel-type domain walls
Magnetic domains arise to minimize the sample's magnetostatic energy, given its shape. In between these domains, domain walls (DW) appear, whose structure and energy are the result of a trade-off between various energy terms.[34, 35] The DWs can be classified into two types: the Bloch-type DWs and the Neel-type DWs. Within the former, magnetization rotates in a plane parallel to the wall plane, so that no magnetostatic volume charges appear, while within the latter the magnetization rotates in a plane perpendicular to the wall plane. The long-range dipole-dipole interaction dominates in bulk magnets and thicker magnetic films, thus the DW in such samples are usually of the Bloch-type, as shown in Fig. 3(a). In nanoscale samples such as ultrathin films and nanowires, the magneto-static energy is weakened, whereas the SOC effects like perpendicular magnetic anisotropy (PMA) and interface-induced DMI (iDMI) are enhanced due to the inversion symmetry breaking (ISB). Thus, the Neel-type DWs can be stabilized in thin films with PMA and sizeable iDMI. To distinguish the Neel DWs found in in-plane anisotropy thin films, the Neel-type DWs induced by iDMI are called Dzyaloshinskii DWs or chiral Neel DWs, as plotted in Fig. 3(a).[36]
The presence of chiral Neel DWs was confirmed by measuring the current-driven DW motion in Co/Pt thin films, in which the DW velocity shows an asymmetric behavior depending on the chirality of the iDMI [37, 38]. The direct experimental observation of chiral Neel DWs was reported by Chen _et al_. in Fe/Ni bilayers epitaxially grown on a Cu(100) substrate [39, 40], as shown in Fig. 3(b). They confirmed that the growth order of the Fe/Ni thin films allows determining the chirality of the chiral Neel DWs caused by the iDMI at the Fe/Ni interface. Although in some cases, such as tetragonal ferrimagnetic layers, Neel-type DWs may be induced by the bulk DMI, the presence of iDMI is the crucial ingredient for Neel-type DWs in perpendicularly magnetized ultrathin films [41; 42; 43; 44].
The interface-induced chiral Neel DWs are more advantageous than Bloch-type DWs for current-driven dynamics. Both Bloch-type DWs and chiral Neel DWs can be driven by spin-transfer torque (STT), while only the chiral Neel DWs can be driven by the spin-orbit torque (SOT), with higher efficiency (see Fig. 3(c)) [45, 46, 47, 48, 49, 50, 51]. Parkin _et al._ moreover demonstrated that in synthetic antiferromagnets (SAF), the velocity of chiral Neel DWs can be enhanced to several hundred m/s, which shows the potential for designing high-speed spintronic devices [52, 53].
## 5 Chirality-dependent total energy difference calculation of iDMI
Due to interfacial inversion symmetry breaking, the iDMI is ubiquitous at ferromagnetic/nonmagnetic (FM/NM) interfaces. The chirality and magnitude of the iDMI depend strongly on the material combination and film thickness. Thus, optimizing the construction of magnetic multilayers is crucial for obtaining a large iDMI. However, most theoretical studies using the first-order perturbation theorem of gBT focus on the properties of FM monolayers on HM substrates. In 2015, Yang _et al_. developed the chirality-dependent total energy difference approach to determine the iDMI parameters [54]. As a representative example of the Co/Pt systems in Fig. 3(d), the microscopic DMI parameter \(d_{tot}\) can be obtained by calculating the difference of the DFT energies \(E_{cw}\) and \(E_{\text{acw}}\) of clockwise and anticlockwise spin configurations, respectively, which reads:
\[d_{tot}=\frac{E_{cw}-E_{acw}}{m}\, \tag{14}\]
where \(m\) depends on the spin spiral period, with the additional possibility of evaluating the DMI strength \(d^{k}\) concentrated in a single atomic layer \(k\). With this approach, the size, chirality and energy sources of the iDMI at the interfaces of various FM/HM heterostructures, FM/graphene and FM/oxide interfaces have been determined.[54; 55; 56] In particular, unlike the Fert-Levy mechanism that is more suitable for FM/HM heterostructures, the iDMI at FM/2D and FM/oxide interfaces can be attributed to a different mechanism: the interface-induced breaking of the twofold spin degeneracy of the energy spectrum, known as the Rashba effect.[55; 56; 57; 58; 59] The Rashba Hamiltonian is described as
\[H_{R}=\alpha_{R}(\mathbf{\sigma}\times\mathbf{k})\cdot\mathbf{\hat{z}}, \tag{15}\]
where \(\alpha_{R}\) is the Rashba coefficient, \(\mathbf{\sigma}\) is the vector of Pauli matrices for the atomic spins and \(\mathbf{k}\) is the momentum of the atomic orbitals. From theoretical models, several groups suggested that DMI could be induced by the Rashba effect.[60; 61; 62] The relation between the DMI strength \(d\) and the Rashba coefficient is described as:
\[d=2k_{R}A, \tag{16}\]
where \(k_{R}=\frac{2\alpha_{R}m_{e}}{\hbar^{2}}\) is a constant determined by the Rashba coefficient \(\alpha_{R}\), the effective electron mass \(m_{e}\) and the reduced Planck constant \(\hbar\), and \(A\) is the spin stiffness parameter. The energy source of the Rashba-type DMI comes mainly from the interfacial magnetic atoms rather than the adjacent non-magnetic atoms.[54; 55; 31] Moreover, for FM heterostructures with multiple interfaces, the chirality-dependent total energy difference approach allows extracting the iDMI contribution from each monolayer and interface, which can provide guidelines to maximize the iDMI in FM multilayers.[56]
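The magnitude implied by Eqs. (15)-(16) can be estimated with a short sketch; the values of \(\alpha_{R}\) and \(A\) below are illustrative placeholders, not results from the cited references.

```python
# Rough numerical sketch of Eqs. (15)-(16): Rashba-induced DMI strength d = 2*k_R*A.
# alpha_R and A_stiff are assumed illustrative inputs.
from scipy.constants import hbar, m_e, elementary_charge as e_ch

alpha_R = 0.1 * e_ch * 1e-10   # Rashba coefficient, assumed 0.1 eV*Angstrom
A_stiff = 1.0e-11              # spin stiffness, assumed ~10 pJ/m

k_R = 2.0 * alpha_R * m_e / hbar**2   # Rashba wavevector [1/m]
d = 2.0 * k_R * A_stiff               # DMI strength [J/m^2]
print(f"k_R = {k_R*1e-9:.3f} 1/nm,  d = {d*1e3:.2f} mJ/m^2")
```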
## 6 Magnetic Skyrmions
Skyrmions are particle-like swirling spin configurations. The concept of skyrmions was first proposed by Skyrme in 1962 when he tried to explain how subatomic particles can exist as discrete entities surrounded by a continuous nuclear field.[63] In 1975, Belavin and Polyakov proved that such metastable quasi-particles could exist in 2D ferromagnets.[64] From the 1990s, Bogdanov _et al._ theoretically predicted that magnetic skyrmions could be induced and stabilized by DMI.[65; 66; 67]
In 2009, magnetic skyrmions were discovered in MnSi crystals by Pfleiderer, Boni and colleagues, using small-angle neutron scattering.[68; 69] Shortly after, Yu _et al_. obtained the first real-space images of skyrmions in Fe\({}_{0.5}\)Co\({}_{0.5}\)Si films by Lorentz transmission electron microscopy, as shown in Fig. 4(a).[70] For these cubic compounds of the B20 crystallographic type, the skyrmions are of the Bloch type due to the bulk DMI. The first instance of a skyrmion lattice in an ultrathin film was found in an Fe monolayer on an Ir(111) substrate by Heinze _et al._ (see Fig. 4(b)).[71] In such a monolayer, skyrmions are stabilized by iDMI and four-spin interactions. Later reports showed that isolated skyrmions could be found in Ir(111)/Fe/Pd ultrathin films due to iDMI[72]. To date, bulk materials hosting skyrmions comprise a variety of acentric magnetic crystals.[73; 74; 75; 76; 77; 78] In these magnets, the skyrmion type varies among Bloch-type, Neel-type and antiskyrmions (see Fig. 4(c)), depending on the respective DMI vectors.
In the past decades, FM multilayer thin films hosting skyrmions have received increasing attention due to their compatibility with contemporary magnetic storage media and technologies. Moreover, by adjusting the film thickness and material combinations in the FM-based multilayers, one can finely control the iDMI, perpendicular magnetic anisotropy and exchange stiffness, thereby tuning the size, temperature stability and dynamics of skyrmions. In 2015, Chen _et al._ carefully tuned the interlayer interaction in ultrathin Cu(001)/Ni/Fe multilayers and achieved a field-free skyrmion phase at room temperature (see Fig. 4(d)).[79] The FM/HM multilayer systems are of high research interest as skyrmion-hosting materials, with the common strategies being to use two FM/HM interfaces with opposite iDMI chirality, or to use both FM/oxide and FM/HM interfaces (see Figs. 4(e)-(g)).[80; 81; 82; 83; 84; 85]
Due to the non-trivial topology of skyrmions, charge carriers feel an extra force as they pass through skyrmions.[86; 87; 88] It has also been remarked that, compared to current-driven DW motion, the critical current required for SOT- and STT-driven skyrmion motion can be much lower.[89; 90; 91; 92; 93] However, skyrmions are deflected sideways when moving, as a manifestation of their non-trivial topology; this is called the gyrotropic force, or skyrmion Hall effect (SkHE).[94; 95; 96] This gives rise to a transverse velocity during current-driven skyrmion dynamics (see Fig. 4(h)). With synthetic antiferromagnetic (SAF) structures (see Fig. 4(i)), the gyrotropic forces in the upper and lower magnetic layers compensate each other, so that skyrmions can be
driven in a straight racetrack[97].
## 7 Applications
The DMI is one of the important spin-orbit properties at the basis of spin-orbitronics and its applications. For example, DMI is essential to the concept of chiral Neel DWs, which are involved in the current developments of racetrack memories, and it plays an important role in the switching of SOT-MRAMs for logic and memory functions[36]. The concept of racetrack memory introduced by Parkin in 2008 was based on the motion of DWs driven by STT in magnetic films with in-plane magnetization, see Fig. 5(a)[98]. The situation changed with the demonstration of the stabilization of chiral Neel DWs in perpendicularly magnetized films and the prediction of their fast motion by SOT[36]. As the SOT-induced fast motion of such Neel DWs was rapidly confirmed by Emori _et al._ and Kwang-Su Ryu _et al._, the most recent efforts for the development of DW-based racetrack memory have been performed in this direction[50, 51]. In 2015, Yang and Parkin proposed a racetrack memory based on chiral Neel DWs in SAF structures, which can further increase the memory speed and minimize the size of devices[99]. In recent years, the research on magnetic devices has considered exploiting the motion of skyrmions, as described below.
With skyrmions, a variety of devices have been proposed, including storage, logic and neuromorphic devices. Skyrmion racetrack memories based on HM/FM films and current-induced motion of skyrmions were first proposed by Fert _et al._[90], in which the "0" and "1" states are associated with the absence or presence of one skyrmion, see Fig. 5(b). As the spacing between neighboring skyrmions can be of the order of their diameter or, approximately, of the order of a DW width, one can expect a higher memory density with skyrmions than with DWs. By placing two magnetic tunnel junctions (MTJ) on a racetrack of HM/FM film to generate and detect skyrmion states, Zhang _et al_. proposed a magnetic skyrmion transistor device based on voltage-gate control[100, 101]. For FM/oxide interfaces, the DMI can be modulated by ion gating [102]. With the ion-gating technique, Fillion _et al_. realized reversible control of the skyrmion chirality in FeCoB/TaO\({}_{\rm x}\) multilayers[103]. Based on such designs, spin-logic devices using skyrmions have been intensively studied[104, 105, 106, 107]. As skyrmions are encodable particle-like structures, multiple skyrmions could also be used as a multi-valued memory[108, 109, 110]. This type of concept is also suitable for neural-network-related applications[111].
Apart from electric currents, skyrmions can also be driven by electric fields, temperature gradients, and spin waves, which could enable further potential applications.[112, 113, 114, 115]
Finally, DMI plays an important role in several other applications of spin-orbitronics. For example, DMI-induced spin tilting at the sample edges and the motion of DMI-induced Neel DWs are involved in the switching of perpendicular magnetization by SOT in devices of the SOT-MRAM type.[116, 117] Recently, Yu _et al_. showed that perpendicular magnetization switching can be realized by the DMI torque.[118]
## 8 Perspective
Until now, most of our discussion was focused on FM-based multilayer films, in which the inversion symmetry along the axis perpendicular to the films is broken by an interface, thus making the presence of iDMI inevitable. Some FM/nonmagnetic (NM)/FM stacks can also break the in-plane inversion symmetry, and accordingly lead to an interlayer DMI coupling the spins in successive layers.[119, 120, 121, 122] In the simplest situation, the interlayer DMI can be expressed as a coupling between the magnetizations \(\mathbf{m_{1}}\) and \(\mathbf{m_{2}}\) of the top and bottom layers, \(E_{DMI}=-\mathbf{D_{1,2}}\cdot\left(\mathbf{m_{1}}\times\mathbf{m_{2}}\right)\). This coupling leads to a small canting of the magnetizations \(\mathbf{m_{1}}\) and \(\mathbf{m_{2}}\) from the perpendicular direction, as represented in Fig. 6(e), and to the possibility of field-free switching by SOT.[123]
Moreover, composition gradients and oblique growth of ultrathin films can also lead to additional symmetry breaking in multilayers, resulting in a gradient-induced DMI (g-DMI) that inevitably comprises bulk DMI components.[124, 125, 126, 127, 128, 129] The presence of g-DMI can facilitate perpendicular magnetization switching in SOT devices.[129, 130]
Recent breakthroughs in realizing two-dimensional (2D) intrinsic ferromagnetic films offer other candidates for future spintronics.[131, 132, 133, 134] However, except for some rare cases, most of the 2D magnets obtained by exfoliation are centrosymmetric, resulting in a vanishing total DMI.[33] Experimental and theoretical reports demonstrate that fabricating van der Waals heterostructures (see Fig. 6(a)) and using chemical adsorption (see Fig. 6(b)) can be effective ways to introduce ISB in 2D magnets, and thereby generate a sizable DMI to stabilize skyrmionic spin textures.[135, 136, 137] Another strategy to introduce ISB in 2D magnets is to artificially build "Janus" magnets such as MnSeTe, CrGe(Si,Te)\({}_{3}\), CrSeTe, etc.[138, 139, 140, 141, 142, 143, 144, 145] As shown in Fig. 6(c),
the DMI values in the Janus magnets MnSeTe and MnSTe are strong enough to stabilize skyrmions. More recently, studies showed that antiskyrmions can be found in a group of 2D magnets with the P-4m2 space group (see Fig. 6(d)) [146; 147].
A fascinating class of 2D magnets belongs to the type-I 2D multiferroics, in which the magnetoelectric (ME) coupling between magnetism and electric polarization could provide a convenient way of achieving electric-field control of magnetism. Theoretical models predicted that the DMI chirality and strength in such magnets depend strongly on the electric polarization [148; 149]. Since 2D multiferroics such as the VOI\({}_{2}\) monolayer and the Ca\({}_{3}\)FeOsO\({}_{7}\) bilayer harbor intrinsic ISB, DMI and topological magnetic textures tuned by an electric field can be realized [150; 151]. Furthermore, in the CrN monolayer (see Fig. 6(f)) and the Co(MoS\({}_{2}\))\({}_{2}\) monolayer, the transformation between four states of skyrmions can be tailored by an out-of-plane electric field [152; 31].
In addition, ripples are inevitably induced in a 2D material, either freestanding or on a substrate, as soon as the size of the 2D material is large enough. If such a curved system is magnetic, a DMI can very likely be achieved in these low-dimensional magnets [153]. The presence of DMI in curved one-dimensional CrBr\({}_{2}\) and 2D MnSe\({}_{2}\)[154], as well as in CrI\({}_{3}\) nanotubes, has been theoretically confirmed [155]. Lastly, the twisting technique can also introduce ISB in moire-lattice 2D materials [156; 157]. Therefore, DMI and skyrmions could be induced in moire-lattice 2D magnets [158; 157].
###### Acknowledgements.
This work was supported by the National Key Research and Development Program of China (MOST) (Grants Nos. 2022YFA1405100 and 2022YFA1403601), the National Natural Science Foundation of China (Grant No. 12174405), and the European Union's Horizon 2020 research and innovation Program under grant agreement 881603 (Graphene Flagship).
|
2310.19093 | Extending the Cooperative Dual-Task Space in Conformal Geometric Algebra | In this work, we are presenting an extension of the cooperative dual-task
space (CDTS) in conformal geometric algebra. The CDTS was first defined using
dual quaternion algebra and is a well established framework for the simplified
definition of tasks using two manipulators. By integrating conformal geometric
algebra, we aim to further enhance the geometric expressiveness and thus
simplify the modeling of various tasks. We show this formulation by first
presenting the CDTS and then its extension that is based around a cooperative
pointpair. This extension keeps all the benefits of the original formulation
that is based on dual quaternions, but adds more tools for geometric modeling
of the dual-arm tasks. We also present how this CGA-CDTS can be seamlessly
integrated with an optimal control framework in geometric algebra that was
derived in previous work. In the experiments, we demonstrate how to model
different objectives and constraints using the CGA-CDTS. Using a setup of two
Franka Emika robots we then show the effectiveness of our approach using model
predictive control in real world experiments. | Tobias Löw, Sylvain Calinon | 2023-10-29T17:47:45Z | http://arxiv.org/abs/2310.19093v1 | # Extending the Cooperative Dual-Task Space
###### Abstract
In this work, we are presenting an extension of the cooperative dual-task space (CDTS) in conformal geometric algebra. The CDTS was first defined using dual quaternion algebra and is a well established framework for the simplified definition of tasks using two manipulators. By integrating conformal geometric algebra, we aim to further enhance the geometric expressiveness and thus simplify the modeling of various tasks. We show this formulation by first presenting the CDTS and then its extension that is based around a cooperative pointpair. This extension keeps all the benefits of the original formulation that is based on dual quaternions, but adds more tools for geometric modeling of the dual-arm tasks. We also present how this CGA-CDTS can be seamlessly integrated with an optimal control framework in geometric algebra that was derived in previous work. In the experiments, we demonstrate how to model different objectives and constraints using the CGA-CDTS. Using a setup of two Franka Emika robots we then show the effectiveness of our approach using model predictive control in real world experiments.
Geometric Algebra, Dual-Arm Manipulation, Optimal Control
## I Introduction
With the increasing desire to deploy robots in human environments, the need for robots to have human-like manipulation capabilities arises. One inherent ability that humans have is to manipulate objects using both their hands and arms, which is needed, for example, when objects that are either too large or too heavy for a single arm need to be manipulated. In order to match these capabilities and to be able to mimic them, robotic systems also need to be able to cooperatively control two arms in order to perform tasks in human environments.
Apart from dual-arm systems being more human-like in terms of form factor, they also have some technical advantages. One can have the stiffness and strength of parallel manipulators combined with the flexibility and dexterity of serial manipulators [1]. Furthermore, since they increase the redundancy in the task-space due to their high number of degrees of freedom, they are better suited for intricate tasks that require a high manipulability such as screw assembly [2] and dishwashing [3]. Other applications of bimanual systems are manipulating articulated objects [4] or cables [5]. More advantages and examples are listed in [6].
Since many problems in robotics boil down to optimization problems that can be solved efficiently with various state-of-the-art solvers, it is of great interest to facilitate the modeling and increase the expressiveness of the formulations. Choosing the correct representation can make a huge difference in terms of how much prior knowledge we can embed into the formulation of those optimization problems. These are in robotics often very geometric, hence it is very beneficial to choose representations that intuitively allow to incorporate the geometry of the problem. For the case of dual-arm manipulation, the cooperative dual-task space (CDTS) [7] was proposed. This approach uses dual quaternion algebra (DQA), which not only unifies the treatment of position and orientation, it also allows the representation of various geometric primitives [8]. These primitives can then be used to simplify the modeling of the tasks.
Dual quaternions have a strong connection to geometric algebra, especially the variants known as projective (PGA) and conformal (CGA) geometric algebra [9], since dual quaternions are isomorphically embedded in their sub-algebras [10]. The geometric algebras, however, are richer algebras that offer more geometric primitives and, more importantly, they offer the geometric construction of primitives based on operations such as intersections [10]. This leads to new possibilities when formulating objectives and constraints based on the geometric primitives in the CGA-CDTS compared to the DQ-CDTS, which we will show in the experiments.
In this article, we formulate the CDTS in conformal geometric algebra and show how this formulation naturally extends the DQ-CDTS. The resulting CGA-CDTS retains the same properties of a compact representation of a two-arm system as the DQ-CDTS, while adding a useful geometric primitive, the cooperative pointpair, that represents both end-effector positions simultaneously. Furthermore, we demonstrate how the CGA-CDTS can be used in the optimal control formulation with geometric primitives for manipulators that we presented in [11]. Hence, this article aims to explain the basic mathematical formulations of various optimization problems using the CGA-CDTS.
Fig. 1: Cooperative dual-task space using conformal geometric algebra. The figure shows the two manipulators as well as their individual, relative and absolute motors. Additionally, it shows the cooperative pointpair.
## II Background
### _Cooperative Dual-Task Space_
We will introduce the cooperative dual-task space here conceptually, and not mathematically, since its original definition uses dual quaternion algebra. In contrast to that, we will be defining and extending it using conformal geometric algebra. So, for the sake of conciseness, we are keeping this introduction on a high level.
The cooperative dual-task space was proposed as a compact and singularity-free representation of a two-arm system [7]. It is defined using two poses that depend on the end-effector poses of the two manipulators, one being the relative pose and the other the absolute pose. All poses are represented using unit dual quaternions, which have advantages compared to other representations, such as coupled position and orientation, singularity-free representation and efficient computation [12].
Based on the compact representation of the CDTS, various control strategies have been proposed. In [13], a coupled task-space admittance controller was presented that allowed for a geometrically consistent stiffness term. A reactive control strategy was developed in [14] that leveraged geometric primitives for task relaxations and priorities. Task priorities for control in the CDTS were also proposed in [15]. In order to exploit human demonstrations that allow the teaching of cooperative motions, motion primitives for bimanual systems were presented based on the CDTS [16].
Note that the results of the research that is based on the DQ-CDTS can also be used with the CGA-CDTS, albeit with mathematical changes due to the different algebra. Furthermore, the mentioned advantages of DQA also apply to CGA, since dual quaternions and the corresponding subalgebra in CGA, i.e. the motors, are isomorphic [17].
### _Geometric Algebra_
Geometric algebra is a single algebra for geometric reasoning, alleviating the need of utilizing multiple algebras to express geometric relations. In this article, we are using the variant known as conformal geometric algebra (CGA). We use the following notation: \(x\) to denote scalars, \(\boldsymbol{x}\) for vectors, \(\boldsymbol{X}\) for matrices, \(X\) for multivectors and \(\boldsymbol{\mathcal{X}}\) for matrices of multivectors.
A general element for computation in geometric algebra is called a multivector. There are three main products that can be used with multivectors: the geometric product \(XY\), the inner product \(X\cdot Y\) and the outer product \(X\wedge Y\). The trivial vector case shows that the geometric product combines the inner \(\cdot\) and the outer \(\wedge\) product
\[\boldsymbol{ab}=\boldsymbol{a}\cdot\boldsymbol{b}+\boldsymbol{a}\wedge \boldsymbol{b}. \tag{1}\]
The outer product is a spanning operation that allows the creation of geometric primitives from points \(P_{i}\). For example, two points \(P_{1}\wedge P_{2}\) yield a point pair, three points \(P_{1}\wedge P_{2}\wedge P_{3}\) a circle and four points \(P_{1}\wedge P_{2}\wedge P_{3}\wedge P_{4}\) a sphere. There are more primitives, that we will introduce when we use them in the experiments.
Rigid body transformations are represented by motors \(M\). They can be applied to multivectors by a sandwiching operation, similar to how quaternions rotate vectors
\[Y=MX\widetilde{M}, \tag{2}\]
where \(\widetilde{M}\) stands for the reverse of a motor. Motors are exponential mappings of so-called bivector (i.e. the subspaces spanned by the outer product of two vectors), the inverse operation is the logarithmic map
\[M=\exp(B)\quad\Longleftrightarrow\quad B=\log(M). \tag{3}\]
The motor \(M(\boldsymbol{q})\) corresponding to the forward kinematics of a kinematic chain can be computed as the product of the individual joint motors
\[M(\boldsymbol{q})=\prod_{i=1}^{N}M_{i}(q_{i}). \tag{4}\]
The analytic Jacobian can then be found as the derivative of the forward kinematics motor defined in Equation (4), i.e.
\[\boldsymbol{\mathcal{J}}^{A}(\boldsymbol{q})=\frac{\partial M(\boldsymbol{q}) }{\partial\boldsymbol{q}}=\left[\frac{\partial M(\boldsymbol{q})}{\partial q _{1}}\dots\frac{\partial M(\boldsymbol{q})}{\partial q_{N}}\right]. \tag{5}\]
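As an illustration of Equations (4) and (5), and not of the library used in this work, the following sketch composes joint motors represented as dual quaternions (which are isomorphic to the motors used here) for a hypothetical three-joint chain and approximates the columns \(\partial M/\partial q_{i}\) of the analytic Jacobian by finite differences; the chain geometry and all numerical values are illustrative assumptions.

```python
# Illustrative sketch of Eqs. (4)-(5) using dual quaternions (isomorphic to CGA motors).
# The kinematic chain is a hypothetical 3-joint arm, not one of the robots in the paper.
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def dqmul(A, B):
    """Product of dual quaternions given as (real, dual) pairs."""
    return (qmul(A[0], B[0]), qmul(A[0], B[1]) + qmul(A[1], B[0]))

def joint_motor(offset, q_i):
    """Fixed link translation `offset` followed by a rotation q_i about the local z-axis."""
    T = (np.array([1.0, 0, 0, 0]), 0.5 * np.array([0.0, *offset]))
    R = (np.array([np.cos(q_i/2), 0, 0, np.sin(q_i/2)]), np.zeros(4))
    return dqmul(T, R)

def forward_kinematics(q, offsets):
    """Eq. (4): product of the individual joint motors along the chain."""
    M = (np.array([1.0, 0, 0, 0]), np.zeros(4))
    for q_i, off in zip(q, offsets):
        M = dqmul(M, joint_motor(off, q_i))
    return M

def as_vector(M):
    return np.concatenate(M)  # the 8 coefficients of the motor

def analytic_jacobian(q, offsets, eps=1e-6):
    """Eq. (5): columns are dM/dq_i, here approximated by finite differences."""
    M0 = as_vector(forward_kinematics(q, offsets))
    J = np.zeros((8, len(q)))
    for i in range(len(q)):
        dq = np.array(q, dtype=float)
        dq[i] += eps
        J[:, i] = (as_vector(forward_kinematics(dq, offsets)) - M0) / eps
    return J

offsets = [np.zeros(3), np.array([0.3, 0.0, 0.0]), np.array([0.3, 0.0, 0.0])]
q = np.array([0.3, -0.5, 0.8])
M = forward_kinematics(q, offsets)
# Translation of the end-effector, extracted as 2 * M_dual * conj(M_real).
t = 2.0 * qmul(M[1], M[0] * np.array([1, -1, -1, -1]))[1:]
print("end-effector position:", t)
print("Jacobian shape:", analytic_jacobian(q, offsets).shape)
```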
At this point, we only introduced the most important concepts for understanding the proposed method. For a complete introduction we refer interested readers to [18] and [19].
## III Method
The CDTS in conformal geometric algebra is an extension of the CDTS in dual quaternion algebra. We first present the basic reformulation of the CDTS in CGA and then its extension by using the additional geometric primitives. Lastly, we present how the CGA-CDTS can be used within an optimal control framework using geometric algebra that we previously proposed for manipulation tasks.
### _Conformal Geometric Algebra Cooperative Dual-Task Space_
The geometric algebra equivalents of the relative and absolute dual quaternions of the DQ-CDTS are defined using motors. Given the joint configurations of the two manipulators, \(\boldsymbol{q}_{1}\) and \(\boldsymbol{q}_{2}\), respectively, we can easily find their end-effector motors \(M_{1}(\boldsymbol{q}_{1})\) and \(M_{2}(\boldsymbol{q}_{2})\) using the forward kinematics in CGA. From this it is straightforward to formulate the relative motor as
\[M_{r}(\boldsymbol{q}_{1},\boldsymbol{q}_{2})=\widetilde{M}_{2}(\boldsymbol{q} _{2})M_{1}(\boldsymbol{q}_{1}), \tag{6}\]
while its Jacobian, i.e. the relative analytic Jacobian, can be found as
\[\boldsymbol{\mathcal{J}}_{r}^{A}(\boldsymbol{q}_{1},\boldsymbol{q}_{2})=\left[ \widetilde{M}_{2}(\boldsymbol{q}_{2})\boldsymbol{\mathcal{J}}_{1}^{A}( \boldsymbol{q}_{1})\quad\widetilde{\boldsymbol{\mathcal{J}}}_{2}^{A}( \boldsymbol{q}_{2})M_{1}(\boldsymbol{q}_{1})\right]. \tag{7}\]
Similarly, the absolute motor can be found as
\[\begin{split} M_{a}(\boldsymbol{q}_{1},\boldsymbol{q}_{2})& =M_{2}(\boldsymbol{q}_{2})M_{r/2}(\boldsymbol{q}_{1},\boldsymbol{q}_{2})\\ &=M_{2}(\boldsymbol{q}_{2})\exp\left(\frac{1}{2}\log\left(M_{r}( \boldsymbol{q}_{1},\boldsymbol{q}_{2})\right)\right),\end{split} \tag{8}\]
with its corresponding absolute analytic Jacobian
\[\begin{split}\boldsymbol{\mathcal{J}}_{a}^{A}(\boldsymbol{q}_{1}, \boldsymbol{q}_{2})=& M_{2}(\boldsymbol{q}_{2})\boldsymbol{ \mathcal{J}}_{M_{r/2}}^{A}(\boldsymbol{q}_{1},\boldsymbol{q}_{2})\\ &+\left[\boldsymbol{0}\quad\boldsymbol{\mathcal{J}}_{2}^{A}( \boldsymbol{q}_{2})M_{r/2}(\boldsymbol{q}_{1},\boldsymbol{q}_{2})\right], \end{split} \tag{9}\]
where \(\boldsymbol{\mathcal{J}}_{M_{r/2}}^{A}(\boldsymbol{q}_{1},\boldsymbol{q}_{2})\) is the analytic Jacobian of the motor \(M_{r/2}(\boldsymbol{q}_{1},\boldsymbol{q}_{2})\). It can be found as
\[\boldsymbol{\mathcal{J}}_{M_{r/2}}^{A}(\boldsymbol{q}_{1},\boldsymbol{q}_{2})= \boldsymbol{J}_{\mathbb{B}\rightarrow\mathcal{M}}(B_{r/2})\boldsymbol{J}_{ \mathcal{M}\rightarrow\mathbb{B}}(M_{r})\boldsymbol{J}_{r}^{A}(\boldsymbol{q }_{1},\boldsymbol{q}_{2}). \tag{10}\]
Here, \(B_{r/2}\) is the logarithm of the motor \(M_{r/2}\). The matrices \(\boldsymbol{J}_{\mathbb{B}\rightarrow\mathcal{M}}\) and \(\boldsymbol{J}_{\mathcal{M}\rightarrow\mathbb{B}}\) are the Jacobians of the exponential and logarithmic mapping respectively. We already showed the derivation of the Jacobian of the logarithmic mapping in the appendix of [11]. The Jacobian of the exponential mapping can be found in Appendix A.
Both the relative and the absolute analytic Jacobians are \(1\times 2N\) multivector matrices that contain motors as their elements. Hence, when expanded to ordinary matrix algebra, they become \(8\times 2N\) matrices.
Since motors in CGA can be used to transform any geometric primitive that is part of the algebra in a uniform way, it is easy to find cooperative geometric primitives. Their definition can be trivially found using Equation (2), where \(M\) is either the relative \(M_{r}(\boldsymbol{q}_{1},\boldsymbol{q}_{2})\) or absolute \(M_{a}(\boldsymbol{q}_{1},\boldsymbol{q}_{2})\) motor and \(X\) can be any geometric primitive. The corresponding Jacobians are then found using the respective Jacobians \(\boldsymbol{\mathcal{J}}_{r}^{A}(\boldsymbol{q}_{1},\boldsymbol{q}_{2})\) and \(\boldsymbol{\mathcal{J}}_{a}^{A}(\boldsymbol{q}_{1},\boldsymbol{q}_{2})\).
### _Cooperative Pointpair_
In extension to the CDTS that was defined using dual quaternion algebra, the CDTS presented here using CGA also allows a geometric primitive that corresponds to both end-effector positions simultaneously. This cooperative pointpair is defined as the outer product of the two end-effector points, i.e.
\[P_{cdts}=M_{1}(\boldsymbol{q}_{1})\boldsymbol{e}_{0}\widetilde{M}_{1}( \boldsymbol{q}_{1})\wedge M_{2}(\boldsymbol{q}_{2})\boldsymbol{e}_{0} \widetilde{M}_{2}(\boldsymbol{q}_{2}). \tag{11}\]
The Jacobian of the cooperative pointpair can be found as
\[\boldsymbol{\mathcal{J}}_{P_{cdts}}=\left[\boldsymbol{\mathcal{J}}_{P_{cdts}, 1}\quad\boldsymbol{\mathcal{J}}_{P_{cdts},2}\right], \tag{12}\]
where
\[\begin{split}\boldsymbol{\mathcal{J}}_{P_{cdts},1}=& \boldsymbol{\mathcal{J}}_{1}^{A}(\boldsymbol{q}_{1})\boldsymbol{e}_{0} \widetilde{M}_{1}(\boldsymbol{q}_{1})\wedge M_{2}(\boldsymbol{q}_{2}) \boldsymbol{e}_{0}\widetilde{M}_{2}(\boldsymbol{q}_{2})\\ &+M_{1}(\boldsymbol{q}_{1})\boldsymbol{e}_{0}\widetilde{ \boldsymbol{\mathcal{J}}}_{1}^{A}(\boldsymbol{q}_{1})\wedge M_{2}(\boldsymbol{q }_{2})\boldsymbol{e}_{0}\widetilde{M}_{2}(\boldsymbol{q}_{2}),\end{split} \tag{13}\]
and
\[\begin{split}\boldsymbol{\mathcal{J}}_{P_{cdts},2}=& M_{1}( \boldsymbol{q}_{1})\boldsymbol{e}_{0}\widetilde{M}_{1}(\boldsymbol{q}_{1}) \wedge\boldsymbol{\mathcal{J}}_{2}^{A}(\boldsymbol{q}_{2})\boldsymbol{e}_{0} \widetilde{M}_{2}(\boldsymbol{q}_{2})\\ &+M_{1}(\boldsymbol{q}_{1})\boldsymbol{e}_{0}\widetilde{M}_{1}( \boldsymbol{q}_{1})\wedge M_{2}(\boldsymbol{q}_{2})\boldsymbol{e}_{0} \widetilde{\boldsymbol{\mathcal{J}}}_{2}^{A}(\boldsymbol{q}_{2}).\end{split} \tag{14}\]
Note that the cooperative pointpair is a direct representation of both points and is not the same as stacking the two points. Therefore the Jacobian matrix is also different, which will lead to different solutions of optimization problems. An example of this is shown in Figure 2, where two Franka Emika robots are tasked to reach a plane, once individually (i.e. by stacking their end-effector points) and once cooperatively (i.e. by using the cooperative pointpair that is presented here). It can be seen that the corresponding solution configurations are not the same, which shows that the cooperative pointpair representation lets the two robots influence each other.
Evidently, this cooperative pointpair Jacobian has a singularity when both end-effector positions coincide, since the outer product of two identical points is, by definition, zero. In practice, this can be avoided by posing a constraint on the distance between the end-effectors, which is usually required anyway.
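This degeneracy can be illustrated with the small coordinate-level sketch below, which embeds two Euclidean points into the conformal model and shows the coefficients of their outer product collapsing as the points approach each other; the basis ordering and the coefficient norm used here are conventions of this sketch only, not of the library used in the experiments.

```python
# Numerical illustration: the outer product P1 ^ P2 of two conformal points
# vanishes as the points coincide. Conformal points are written as coordinate
# vectors in the basis (e1, e2, e3, e0, einf) for this sketch.
import numpy as np

def up(p):
    """Conformal embedding P = p + e0 + 0.5*|p|^2 * einf."""
    return np.concatenate([p, [1.0, 0.5 * np.dot(p, p)]])

def wedge(P1, P2):
    """Coefficient matrix of the bivector P1 ^ P2 (antisymmetrized outer product)."""
    return np.outer(P1, P2) - np.outer(P2, P1)

p1 = np.array([0.4, 0.0, 0.3])
for delta in [0.2, 0.02, 0.002, 0.0]:
    p2 = p1 + np.array([delta, 0.0, 0.0])
    B = wedge(up(p1), up(p2))
    # Frobenius norm of the coefficients (a coordinate-level measure, not the GA norm).
    print(f"|p1 - p2| = {delta:6.3f}   coefficient norm of P1^P2 = {np.linalg.norm(B):.4f}")
```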
### _Using the CGA-CDTS for Optimal Control_
Integrating the CGA-CDTS into the optimal control framework that we presented in [11] can be achieved by replacing the single end-effector motor with the relative and absolute motors, respectively. The objective of reaching a target motor is then formulated as
\[E_{M}(\boldsymbol{q}_{1},\boldsymbol{q}_{2})=\log\left(\widetilde{M}_{target}M( \boldsymbol{q}_{1},\boldsymbol{q}_{2})\right), \tag{15}\]
where \(M(\boldsymbol{q}_{1},\boldsymbol{q}_{2})\) can be either \(M_{r}(\boldsymbol{q}_{1},\boldsymbol{q}_{2})\) or \(M_{a}(\boldsymbol{q}_{1},\boldsymbol{q}_{2})\).
Alternatively, we can define a residual multivector for reaching a geometric primitive \(X_{d}\) as
\[E_{X_{d}}(\boldsymbol{q}_{1},\boldsymbol{q}_{2})=X_{d}\wedge M(\boldsymbol{q}_ {1},\boldsymbol{q}_{2})X\widetilde{M}(\boldsymbol{q}_{1},\boldsymbol{q}_{2}), \tag{16}\]
again using either the relative or absolute motor. Deriving the respective Jacobians is straightforward using the definitions of the relative and absolute analytic Jacobians in Equations (7) and (9). With this, these residual multivectors of the CGA-CDTS can then directly be used to define objectives or constraints for optimal control problems, which would mean for example that a relative or absolute geometric primitive should be reached.
Fig. 2: Dual-Arm manipulator reaching a plane. The target plane is shown in red. The white Franka Emika robots show the initial configurations, the green ones are the result of individually reaching the plane, and the blue ones cooperatively.
The cooperative pointpair can also be used in order to define tasks as optimization problems. The first way to do so is using the outer product in a similar way to the above relative and absolute residual multivectors, i.e.
\[E_{cdts}(\boldsymbol{q}_{1},\boldsymbol{q}_{2})=P_{target}\wedge P_{cdts}( \boldsymbol{q}_{1},\boldsymbol{q}_{2}), \tag{17}\]
with the Jacobian
\[\boldsymbol{\mathcal{J}}_{E_{cdts}}(\boldsymbol{q}_{1},\boldsymbol{q}_{2})=P_{target}\wedge\boldsymbol{\mathcal{J}}_{P_{cdts}}(\boldsymbol{q}_{1},\boldsymbol{q}_{2}), \tag{18}\]
This objective means that the two manipulators should cooperatively reach a single point. The implications and results of this objective are further detailed in the experiment section.
Another common use-case are containment relationships for the cooperative pointpair with respect to other geometric primitives. These can then be used to define tasks where the dual-arm system should cooperatively reach a target. The mathematical formulation is using a product called the commutator product \(\times\), i.e.
\[E_{cdts}(\boldsymbol{q}_{1},\boldsymbol{q}_{2}) =X_{d}\times P_{cdts}(\boldsymbol{q}_{1},\boldsymbol{q}_{2})\] \[=\frac{1}{2}(X_{d}P_{cdts}(\boldsymbol{q}_{1},\boldsymbol{q}_{2} )-P_{cdts}(\boldsymbol{q}_{1},\boldsymbol{q}_{2})X_{d}). \tag{19}\]
The Jacobian of this containment relationship can be found as
\[\boldsymbol{\mathcal{J}}_{E_{cdts}}(\boldsymbol{q}_{1},\boldsymbol{q}_{2})=X_{d}\times\boldsymbol{\mathcal{J}}_{P_{cdts}}(\boldsymbol{q}_{1},\boldsymbol{q}_{2}). \tag{20}\]
The interpretation of the residual multivector \(E_{cdts}(\boldsymbol{q}_{1},\boldsymbol{q}_{2})\) is that it should be reduced to zero if the cooperative pointpair is contained within the desired geometric primitive \(X_{d}\), e.g. if both end-effector points lie on a circle.
The distance between the two end-effectors can be constrained using the inner product between the two end-effector points, i.e.
\[E_{d}(\boldsymbol{q}_{1},\boldsymbol{q}_{2})=-2M_{1}(\boldsymbol{q}_{1}) \boldsymbol{e}_{0}\widetilde{M}_{1}(\boldsymbol{q}_{1})\cdot M_{2}( \boldsymbol{q}_{2})\boldsymbol{e}_{0}\widetilde{M}_{2}(\boldsymbol{q}_{2})-d^ {2}, \tag{21}\]
where \(d\) is the desired distance.
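The identity behind Eq. (21) can be checked with the following short sketch, which uses an explicit coordinate representation of conformal points with a basis convention chosen only for this example: minus twice the inner product of two conformal points equals the squared Euclidean distance between them.

```python
# Numerical check of the distance constraint in Eq. (21): for conformal points,
# -2 * (P1 . P2) equals the squared Euclidean distance.
# Basis ordering (e1, e2, e3, e0, einf) and the metric below are sketch conventions.
import numpy as np

# Inner-product metric: Euclidean part plus the null basis with e0 . einf = -1.
G = np.zeros((5, 5))
G[:3, :3] = np.eye(3)
G[3, 4] = G[4, 3] = -1.0

def up(p):
    """Conformal embedding P = p + e0 + 0.5*|p|^2 * einf."""
    return np.concatenate([p, [1.0, 0.5 * np.dot(p, p)]])

p1 = np.array([0.5, 0.2, 0.8])
p2 = np.array([-0.1, 0.4, 0.3])
inner = up(p1) @ G @ up(p2)

print("-2 * P1.P2   =", -2.0 * inner)
print("|p1 - p2|^2  =", np.sum((p1 - p2)**2))
# The residual E_d of Eq. (21) is then simply (-2 * P1.P2) - d**2.
```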
## IV Experiments
In this section, we are presenting various cooperative tasks that are defined in the CGA-CDTS. For each task we provide the mathematical definition of the optimization problem. We then solve those optimization problems using standard solvers such as Gauss-Newton. For simplicity, the problems in simulation are formulated essentially as inverse kinematics problems; the same objectives and constraints can, however, also be used in optimal control problems, which are then used, for example, for model predictive control. We demonstrate this in the real-world experiments, where the problems are solved using a variant of the iterative linear quadratic regulator (iLQR) [20]. Both the simulation and the real-world experiments use the same setup of two table-top mounted Franka Emika robots. Additional material for the experiments as well as the videos of the real-world experiments can be found on the accompanying website 1. All geometric algebra computations are done using our open-source library _gafro_2.
Footnote 1: [https://geometric-algebra.tobiloew.ch/cdts/](https://geometric-algebra.tobiloew.ch/cdts/)
Footnote 2: [https://github.com/idiap/gafo](https://github.com/idiap/gafo)
### _Cooperatively Reaching a Point_
As mentioned before, the cooperative pointpair models both end-effector positions simultaneously. Hence, when formulating a simple optimization problem such as reaching a single point, the system automatically chooses which manipulator should perform the task. Since the information of both manipulators is encoded in one geometric primitive, it alleviates the need to construct the problem using conditional formulations. The problem can be expressed compactly using the outer product, i.e.
\[\boldsymbol{q}^{*}=\min_{\boldsymbol{q}}\Bigl{\|}P_{target}\wedge P_{cdts}( \boldsymbol{q}_{1},\boldsymbol{q}_{2})\Bigr{\|}_{2}^{2}. \tag{22}\]
An example of this task is shown in Figure 3. Notice how in both cases the manipulator that is closer to the point performs the reaching task while the other remains in its initial configuration.
Fig. 3: Two Franka Emika robots reaching for a single point in the cooperative dual-task space. The initial configurations are shown in white and the final ones in gray. The target point is shown in red.
We also show this experiment using the real-world setup, and the accompanying video can be found on our website.
### _Cooperatively Reaching a Circle_
One of the geometric primitives that is available in CGA but not DQA is a circle. Hence, we can use the CGA-CDTS to define a task where a dual arm system should cooperatively reach a circle. An example application of reaching a circle would be holding a filled bucket with two manipulators. Mathematically, a circle is obtained by the outer product of three points.
The problem of cooperatively reaching a circle is formulated as a constrained optimization problem using the cooperative pointpair
\[\begin{split}\mathbf{q}^{*}=\min_{\mathbf{q}}&\Big{\|}\log\left(\widetilde{M}_{r}(\mathbf{q}_{0})M_{r}(\mathbf{q})\right)\Big{\|}_{2}^{2}\\ &\text{s.t. }C\times P_{cdts}(\mathbf{q}_{1},\mathbf{q}_{2})=0.\end{split} \tag{23}\]
This optimization problem formulates the task of reaching a circle, while trying to maintain the initial relative motor. The result of this problem can be seen in Figure 4.
Formulating the task in this manner circumvents the mentioned singularity issue in the Jacobian of the cooperative pointpair, since the system will reach the circle while the two arms keep as close as possible to their initial relative pose.
### _Aligning Orientation Axis_
When two manipulators are cooperatively manipulating an object, it is often necessary to partially constrain their relative orientation, as is the case in nearly all dual-arm grasping scenarios involving rigid objects. One way to do so is to enforce collinear lines in the desired direction at the end-effector level. This is again achieved by using the commutator product, i.e.
\[\begin{split} E_{L_{1}L_{2}}&(\mathbf{q}_{1},\mathbf{q}_{2 })\\ &=M_{1}(\mathbf{q}_{1})L_{1}\widetilde{M}_{1}(\mathbf{q}_{1})\times M_{2} (\mathbf{q}_{2})L_{2}\widetilde{M}_{2}(\mathbf{q}_{2}).\end{split} \tag{24}\]
In Figure 5, we show the results of minimizing \(E_{L_{1}L_{2}}(\mathbf{q}_{1},\mathbf{q}_{2})\), where \(L_{1}=L_{2}=\mathbf{e}_{0}\wedge(\mathbf{e}_{0}+\mathbf{e}_{1}+\frac{1}{2}\mathbf{e}_{\infty}) \wedge\mathbf{e}_{\infty}\), which corresponds to aligning two lines that pass through the x-axes of the frames at the end-effectors of the two manipulators.
There are, of course, other ways to achieve this behavior, e.g. by constraining the orientation part of the relative motor. We chose to demonstrate this method of aligning orientation, however, because it provides a lot of flexibility, since the two lines \(L_{1}\) and \(L_{2}\) can be chosen arbitrarily.
A similar instance of this problem can be defined using the absolute motor of the CGA-CDTS. By using the absolute translator \(T_{a}(\mathbf{q}_{1},\mathbf{q}_{2})\) we can move a desired line to the absolute position and require it to be identical to the line moved by the absolute motor. This defines a desired orientation with respect to the arbitrary axis that is defined by the line. The formulation hence is
\[\begin{split} E_{T_{L}}&(\mathbf{q}_{1},\mathbf{q}_{2})\\ &=T_{a}(\mathbf{q}_{1},\mathbf{q}_{2})L\widetilde{T}_{a}(\mathbf{q}_{1},\mathbf{ q}_{2})\times M_{a}(\mathbf{q}_{1},\mathbf{q}_{2})L\widetilde{M}_{a}(\mathbf{q}_{1},\mathbf{q}_{2}). \end{split} \tag{25}\]
### _Balancing a Plate_
In this real-world experiment we are combining several of the previous constraints in order to implement the task of balancing a plate. First, the robots lift the plate to a height of 20cm above the table, then they try to keep it in that position. This is formulated as a constrained optimization problem, where the objective is to stay close to the initial configuration and the constraint is to keep the \(z\)-axis of the absolute motor perpendicular to the \(xy\)-plane.
We formulate this task as an optimal control problem
\[\begin{split}\mathbf{u}^{*}=&\min_{\mathbf{u}}\Big{\|}E_{N} (\mathbf{x}_{N})\Big{\|}_{2}^{2}+\sum_{k=0}^{N-1}\Big{\|}E_{k}(\mathbf{x}_{k})\Big{\|}_ {2}^{2}+\|\mathbf{u}_{k}\|_{R}^{2}\\ &\text{s.t. }\mathbf{x}_{k+1}=f(\mathbf{x}_{k},\mathbf{u}_{k})\end{split} \tag{26}\]
where \(\mathbf{x}=\begin{bmatrix}\mathbf{q}_{1}^{\top},\mathbf{q}_{2}^{\top},\mathbf{\tilde{q}}_{1 }^{\top},\mathbf{\tilde{q}}_{2}^{\top}\end{bmatrix}^{\top}\) and \(\mathbf{u}=\begin{bmatrix}\mathbf{\tilde{q}}_{1}^{\top},\mathbf{\tilde{q}}_{2}^{\top} \end{bmatrix}^{\top}\). As the state dependent cost we choose the residual multivectors of Equations (24) and (25), where the line \(L\) is chosen to be the \(z\)-axis, i.e. \(L=\mathbf{e}_{0}\wedge(\mathbf{e}_{0}+\mathbf{e}_{3}+\frac{1}{2}\mathbf{e}_{\infty})\wedge\bm {e}_{\infty}\). This means that the two manipulators should simultaneously keep their relative grasping positions on the plate and to keep the plate horizontal.
This formulation is given to a model predictive controller that uses second order system dynamics to compute desired accelerations. Using inverse dynamics we then compute torque commands for the control of the two manipulators. We chose a short horizon of 10 timesteps with \(\Delta t=0.01\), in order to achieve a very reactive behavior of the controller. The model
Fig. 4: Two Franka Emika robots reaching a circle in the cooperative dual-task space while trying to maintain the same relative motor.
Fig. 5: Aligning the x-Axis.
predictive controller is then run at 100Hz and the inverse dynamics controller at 1000Hz.
Since there are only relative constraints in this task, the robots have a compliant behavior when one of them or the plate is moved by hand. The other one then adapts its configuration accordingly. We show several different configurations in Figure 6. If no external forces are applied, the robots stay in their current configuration. This experiment can also be seen in the accompanying video on our website.
## V Conclusion
In this article we presented an extension of the cooperative dual-task space in conformal geometric algebra, namely the CGA-CDTS. This extension keeps all the benefits of the original formulation that is based on dual quaternions, but adds more tools for geometric modeling of dual-arm tasks. After reformulating the CDTS, we showed how the cooperative pointpair can be used to simultaneously represent both end-effector positions, how this can be exploited for cooperative reaching tasks, and how the additional geometric primitives facilitate the modeling of dual-arm tasks. We then demonstrated the integration of the CGA-CDTS into an existing framework for optimal control using geometric algebra. In future work, the ideas of the CGA-CDTS could be used to facilitate the modeling of the task spaces of robotic hands. For a 3-finger hand, for example, the cooperative pointpair could become a cooperative circle.
|
2306.04042 | A Comprehensive Model of Snow Crystal Faceting | Crystal faceting can emerge via two broad physical mechanisms: anisotropic
attachment kinetics on growing crystals and anisotropic surface energies on
near-equilibrium crystals. For the case of the ice/vapor system, anisotropic
attachment kinetics is the dominant faceting mechanism, while the possible
occurrence of equilibrium faceting has been debated for many decades. In this
investigation we examine ice/vapor faceting at low supersaturations over the
temperature range -15C<T<0C, where evidence of a roughening transition has been
previously reported. Our findings indicate that a comprehensive attachment
kinetics model can explain all the experimental data to date, while assuming an
essentially isotropic surface energy (which is supported by other
considerations). Specifically, our kinetic model naturally explains the
observed disappearance of prism faceting on slowly growing ice crystals in
vacuum at T>-2C, thus suggesting that snow crystal faceting is caused by
anisotropic attachment kinetics even at extremely slow growth rates. | Kenneth G. Libbrecht, James Walkling | 2023-06-06T22:14:57Z | http://arxiv.org/abs/2306.04042v1 | # A Comprehensive Model of Snow Crystal Faceting
###### Abstract
Crystal faceting can emerge via two broad physical mechanisms: anisotropic attachment kinetics on growing crystals and anisotropic surface energies on near-equilibrium crystals. For the case of the ice/vapor system, anisotropic attachment kinetics is the dominant faceting mechanism, while the possible occurrence of equilibrium faceting has been debated for many decades. In this investigation we examine ice/vapor faceting at low supersaturations over the temperature range \(-15C<T<0C\), where evidence of a roughening transition has been previously reported. Our findings indicate that a comprehensive attachment kinetics model can explain all the experimental data to date, while assuming an essentially isotropic surface energy (which is supported by other considerations). Specifically, our kinetic model naturally explains the observed disappearance of prism faceting on slowly growing ice crystals in vacuum at \(T>-2C\), thus suggesting that snow crystal faceting is caused by anisotropic attachment kinetics even at extremely slow growth rates.
## 1 Introduction
Natural crystal facets are observed on many mineral crystals, with ice and quartz being two of the most common examples. In most mineral systems, faceted surfaces emerge during crystal growth involving anisotropic attachment kinetics, which is an intrinsically non-equilibrium process. Specifically, facet surfaces (having low Miller indices) accumulate material more slowly than other surfaces, with growth often being limited by terrace nucleation on the molecularly smooth facets. In this situation, the slowest-growing surfaces typically define the overall growth morphology, yielding faceted growth forms. For the specific case of ice growing from water vapor (snow crystals), hexagonal prisms are the simplest and most common fully faceted morphology, although pyramidal facets also sometimes appear at low temperatures [2006Tap, 2021Lib].
In generally rarer circumstances, crystal facets can also appear in the absence of growth, where the Equilibrium Crystal Shape (ECS) is determined by minimizing the total surface energy of an isolated test crystal. If the surface energies on faceted surfaces are substantially lower than on non-faceted (rough) surfaces, then the ECS will be faceted [1980Hey, 1987Hey]. There is much discussion of faceted ECSs in the scientific literature, and it is often thought that the ECS for the ice/vapor system is a faceted hexagonal prism [1997Pru]. However, the available evidence to date suggests that the ice/vapor surface-energy anisotropy is quite small at temperatures above -15 C, so the snow-crystal ECS is very nearly spherical in this temperature range [2012Lib2].
Faceting observed in snow crystals typically arises from nucleation barriers that greatly
suppress the growth of faceted basal and prism surfaces [1982Kur, 1984Kur, 1984Kur1, 1987Kob, 1998Nel, 2021Lib]. As shown below, the terrace nucleation mechanism yields exceedingly slow growth rates at low supersaturations, especially at low temperatures, often producing highly faceted growth forms. When small ice crystals are grown from water vapor in near-vacuum conditions, the growth forms are typically simple hexagonal prisms.
The experimental situation becomes a bit confusing at temperatures above -2 C, however, as we describe in detail below. Basal faceting remains pronounced at all temperatures, but prism faceting is present in some circumstances while remaining absent in others. For example, we have observed strong prism faceting in air at temperatures as high as -0.2 C [2021Lib2], while prism faceting at -2 C is sometimes (but not always) substantially reduced for crystals grown in near-vacuum conditions. Similar observations by other researchers have been interpreted as evidence for changes in the ECS with temperature [1985Col], and perhaps a roughening transition on prism surfaces [1991Elb]. Overall, the experimental observations have not painted a clear picture of ice faceting behavior, and the ice/vapor ECS remains a topic of scientific debate.
Our overarching goal in this paper is to develop a comprehensive model of faceting in the ice/vapor system, focusing especially on simple faceted prisms that appear at low growth rates. From the outset we assume that the surface-energy anisotropy is negligibly small, so the ice/vapor ECS is essentially spherical. The available evidence suggests that the real ECS likely exhibits only minute facets on an otherwise spherical form [2012Lib2], supporting our spherical approximation. We also assume a terrace-nucleation model to describe growth of the basal and prism facets using model parameters determined from experimental ice-growth measurements [2013Lib, 2021Lib].
With these model assumptions, we find that we can explain essentially all the available experimental observations to a reasonable degree, with the caveat that there remain substantial uncertainties in both the experimental observations and our model calculations. Our model shows that faceting from anisotropic attachment kinetics is important in all but the most extreme conditions, and that an anisotropic surface energy is not a necessary requirement to explain the existing data.
Importantly, our model establishes a theoretical framework for further investigations of snow crystal faceting, and for further consideration of the ice/vapor ECS and how it could be definitively observed. The model therefore makes important progress in the continuing exploration of crystal growth dynamics in the ice/vapor system, particularly under physical conditions approaching the triple point.
## 2 A Basic Analytic Model of Snow Crystal Faceting
During snow crystal formation, a variety of physical processes influence the growth dynamics, including attachment kinetics, particle and heat diffusion, and surface energy effects [2021Lib]. The formation of snow crystals in air is mainly governed by the interplay of particle diffusion and attachment kinetics, typically yielding complex morphologies that are both branched and faceted. In this paper, we focus our attention on slow growth that yields simple faceted prisms, especially in low-pressure experiments, where particle diffusion plays a relatively small role. We begin our model development by defining a suitable parameterization of the growth dynamics and attachment kinetics.
The basic tenets of molecular attachment kinetics have been generally understood for about a century [1882Her, 1915Knu, 1990Yok] and are explained in numerous textbooks describing the physics of crystal growth
[1990, 1999, 2004]. For the icc/vapor system we write the Hertz-Knudsen relation [2021]
\[v_{n}=\alpha v_{kin}\sigma_{surf} \tag{1}\]
for the growth of a flat surface, where \(v_{n}\) is the crystal growth velocity perpendicular to the growing surface, \(\alpha\) is a dimensionless _attachment coefficient_, \(\sigma_{surf}=(c_{surf}-c_{sat})/c_{sat}\) is the dimensionless water vapor supersaturation at the surface, \(c_{surf}\) is the water-vapor number density just above the surface, \(c_{sat}=c_{sat}(T)\) is the saturated number density of a surface in equilibrium at temperature \(T\), and
\[v_{kin}=\frac{c_{sat}}{c_{ice}}\sqrt{\frac{kT}{2\pi m_{mol}}} \tag{2}\]
is the _kinetic velocity_, in which \(m_{mol}\) is the mass of a water molecule, \(c_{ice}=\rho_{ice}/m_{mol}\) is the number density of ice, and \(\rho_{ice}\) is the mass density of ice.
For a non-faceted (a.k.a. rough) ice surface, measurements indicate \(\alpha_{rough}\approx 1\) under most conditions [2021], while \(\alpha_{facet}\ll 1\) when \(\sigma_{surf}\) is low, yielding strong basal and prism faceting over a broad range of environmental conditions.
### Attachment kinetics
Measurements of \(\alpha_{facet}\) have shown that the attachment kinetics are primarily limited by terrace nucleation under typical growth conditions [2013], prompting us to write \(\alpha_{facet}\) as [1996]
\[\alpha_{facet}(\sigma_{surf})=Ae^{-\sigma_{0}/\sigma_{surf}} \tag{3}\]
where \(A(T)\) and \(\sigma_{0}(T)\) are dimensionless parameters that are generally different for the basal and prism facets, with
\[\sigma_{0}(T)=\frac{S\beta^{2}a^{2}}{k^{2}T^{2}} \tag{4}\]
where \(a\) is the molecular size, \(k\) is the Boltzmann factor, \(T\) is the surface temperature, and \(\beta\) is the step energy of a terrace edge. This expression includes a dimensionless geometrical factor \(S\approx 1\) to absorb several small theoretical factors [2021].
The validity of this functional form for terrace nucleation in the ice/vapor system has been verified by experiments over a broad range of temperatures and supersaturations [201], and the current state-of-the-art for measurements of \(A(T)\) and \(\sigma_{0}(T)\) on both the basal and prism facets is presented in considerable detail in [2021] and references therein.
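To make the strength of this nucleation barrier concrete, the short sketch below evaluates Equation (3) over a range of surface supersaturations; the values of \(A\) and \(\sigma_{0}\) are illustrative placeholders only, as the measured values for each facet are tabulated in the references cited above.

```python
# Illustrative evaluation of the terrace-nucleation form of Eq. (3).
# A and sigma_0 are placeholder values for this sketch only.
import numpy as np

A = 1.0            # assumed dimensionless prefactor
sigma_0 = 0.02     # assumed nucleation parameter (~2 percent)

sigma_surf = np.array([0.002, 0.005, 0.01, 0.02, 0.05])
alpha_facet = A * np.exp(-sigma_0 / sigma_surf)

for s, a in zip(sigma_surf, alpha_facet):
    print(f"sigma_surf = {s:6.3f}   alpha_facet = {a:10.3e}")
# At sigma_surf << sigma_0 the facet growth is essentially shut off, which is
# the origin of strong faceting at low supersaturation.
```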
### Surface Energy Effects
Snow crystal growth rates are influenced by the ice/vapor interfacial energy mainly via the Gibbs-Thomson effect, which describes how the equilibrium vapor pressure above a curved surface is higher than that above a flat surface. Again this is a well-known result in statistical mechanics and crystal-growth theory, yielding a modified Hertz-Knudsen relation [1996]
\[v_{n}=\alpha v_{kin}(\sigma_{surf}-d_{sv}\kappa) \tag{5}\]
where
\[d_{sv}=\frac{\gamma_{sv}}{c_{ice}kT}\approx 1\;nm \tag{6}\]
is the Gibbs-Thomson length, \(\gamma_{sv}\) is the surface energy of the solid/vapor interface, and \(\kappa\) is the local surface curvature. For a spherical surface we have \(\kappa=2/R\) where \(R\) is the radius of the sphere. Note that \(\kappa=0\) only for flat surfaces of infinite size, and we must assume \(\kappa>0\) for any facets on finite-size test crystals. For an approximately isometric hexagonal prism (a common experimental case) with an overall effective radius approximately equal to \(R\), we take \(\kappa\approx 2/R\) for the facet surfaces.
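The quoted \(d_{sv}\approx 1\) nm can be reproduced with a quick order-of-magnitude estimate; the surface-energy value used in the sketch below is an assumed input of roughly 0.11 J/m\({}^{2}\), not a value derived in this paper.

```python
# Order-of-magnitude check of the Gibbs-Thomson length in Eq. (6).
# gamma_sv is an assumed approximate ice/vapor surface energy.
from scipy.constants import k as k_B, atomic_mass as u

gamma_sv = 0.109            # J/m^2, assumed ice/vapor surface energy
rho_ice = 917.0             # kg/m^3
m_mol = 18.015 * u          # mass of a water molecule
T = 263.0                   # K (-10 C)

c_ice = rho_ice / m_mol                 # number density of ice
d_sv = gamma_sv / (c_ice * k_B * T)     # Gibbs-Thomson length
print(f"d_sv = {d_sv*1e9:.2f} nm")      # ~1 nm, as quoted in Eq. (6)
```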
We assume that \(\gamma_{sv}\) has a constant value independent of surface orientation in this paper, implying a spherical ECS for the
ice/vapor system. The evidence suggests that this is an excellent approximation at temperatures above -15 C, although there have been no definitive measurements of the ice ECS at any temperature to date (in our opinion). Questions relating to the ice ECS and the anisotropy of the ice/vapor surface energy remain a topic of scientific debate.
The nucleation dynamics on faceted surfaces of finite size will be affected by the Gibbs-Thomson effect, as there can be no nucleation if the effective radius of a facet is smaller than the critical terrace radius in nucleation theory. The theoretical questions that arise in such circumstances are beyond the scope of this paper, but our investigation suggests that this effect is rather small, being significant only for exceptionally small crystals held at very low surface supersaturations. Nevertheless, we approximate the resulting changes by modifying the attachment kinetics on faceted surfaces to be
\[\alpha_{facet}(\sigma_{surf})=Ae^{-\sigma_{0}/(\sigma_{surf}-d_{sv}\kappa_{facet})} \tag{7}\]
### Particle diffusion
Snow crystal growth is often strongly limited by the slow diffusion of water vapor molecules through the surrounding medium (usually air), and a full 3D solution to the problem of diffusion-limited growth remains a challenging computational task [2021Lib]. However, the spherical case has a simple analytic solution that can be useful for approximating the supersaturation field around a nearly isometric hexagonal prism. In this one-dimensional diffusion problem, the supersaturation at all points can be written [2021Lib]
\[\sigma(r)=\sigma_{\infty}-\frac{R}{r}\big{(}\sigma_{\infty}-\sigma_{surf}\big{)} \tag{8}\]
where
\[\sigma_{surf}=\frac{\alpha_{diff}}{\alpha+\alpha_{diff}}\sigma_{\infty} \tag{9}\]
with
\[\alpha_{diff}=\Big{(}\frac{c_{sat}}{c_{ice}}\frac{D}{v_{kin}}\Big{)}\frac{1}{R}=\frac{X_{0}}{R} \tag{10}\]
and \(D\) is the diffusion constant, giving the crystal growth velocity
\[v_{n}=\left(\frac{\alpha\alpha_{diff}}{\alpha+\alpha_{diff}}\right)v_{kin} \sigma_{\infty} \tag{11}\]
Rearranging these expressions gives
\[\sigma(r)=\sigma_{\infty}-\frac{1}{4\pi\rho_{ice}X_{0}v_{kin}}\frac{dM}{dt} \frac{1}{r} \tag{12}\]
where \(dM/dt\) is the rate of change of the mass of the crystal.
At very slow growth rates Equation (12) becomes \(\sigma(r)\rightarrow\sigma_{\infty}\) at all \(r\), as one would expect, and this expression can provide a reasonable estimate of the supersaturation field around a growing prism provided \(\alpha\ll\alpha_{diff}\). The approximation becomes less useful as this inequality weakens, and it becomes essentially useless when \(\alpha_{diff}<\alpha\).
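The spherical solution in Equations 8-11 is simple enough to evaluate directly, as in the sketch below; the values used for \(X_{0}\), \(\alpha\), and \(\sigma_{\infty}\) are assumed, order-of-magnitude inputs rather than measured quantities.

```python
def spherical_growth(alpha, sigma_inf, R, X0, v_kin):
    """Diffusion-limited growth of a sphere of radius R (Equations 9-11).

    X0 = (c_sat / c_ice) * (D / v_kin) sets the scale of alpha_diff = X0 / R.
    Returns the surface supersaturation and the growth velocity.
    """
    alpha_diff = X0 / R
    sigma_surf = alpha_diff / (alpha + alpha_diff) * sigma_inf
    v_n = (alpha * alpha_diff / (alpha + alpha_diff)) * v_kin * sigma_inf
    return sigma_surf, v_n

# Assumed inputs of roughly the right order for growth in air at one bar:
sigma_surf, v_n = spherical_growth(alpha=0.1, sigma_inf=0.01,
                                   R=20e-6, X0=0.15e-6, v_kin=208e-6)
print(sigma_surf, v_n)   # sigma_surf << sigma_inf here because alpha >> alpha_diff
```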
Looking at our central question of faceting in this paper, we see that particle diffusion tends to enhance the formation of faceted forms with sharp corners and edges via the Mullins-Sekerka instability [1964Mul, 2021Lib]. With a hexagonal prism, for example, the hexagonal tips stick out farther into the supersaturated surroundings, so the increased supersaturation associated with particle diffusion tends to promote the tip growth. At high growth rates and with large crystals, this effect leads to branching and complex growth morphologies. At low growth rates and with small crystals, particle diffusion may do little more than slightly encourage the formation of faceted forms.
## 3 Heat diffusion
In a low-pressure experimental environment, the particle diffusion constant becomes quite large, and under these conditions the effects of particle diffusion often become negligible. In these same circumstances, however, thermal diffusion often becomes an important factor limiting crystal growth. With a particle resting on a substrate, for example, latent heat released during growth often dissipates by being conducted through the ice to the supporting substrate. The resulting heat flow produces a temperature gradient within the crystal, with the top surface being warmer than the part of the crystal contacting the substrate. The morphological effects from this heating are often seen when ice crystals are grown in a near-vacuum environment [1972Lam].
As with particle diffusion, the full 3D heat diffusion problem can be quite challenging to solve. Fortunately, 1D analytic solutions again provide useful insights into the overall scale of the problem, and they can be used to reasonably estimate how latent heating affects crystal growth rates in experiments. For the case of a uniform sheet of ice growing on a substrate [2021Lib], the growth rate can be written
\[v_{n}=\left(\frac{\alpha\alpha_{therm}}{\alpha+\alpha_{therm}}\right)v_{kin}\sigma_{\infty} \tag{13}\]
with
\[\alpha_{therm}=\frac{\kappa_{ice}}{\eta L_{sv}\rho_{ice}v_{kin}}\frac{G}{H} \tag{14}\]
where \(H\) is the thickness of the sheet, \(G=1\), and
\[\eta=\frac{1}{c_{sat}}\frac{dc_{sat}}{dT} \tag{15}\]
For the case of a small hexagonal prism growing on a substrate, this same expression can approximate the growth when \(G\) is replaced by a geometrical constant of order unity. Thus we can use this simple analytic expression to approximately model the full effects of latent heating in practical experimental situations.
In contrast to particle diffusion, thermal diffusion tends to hinder the formation of faceted forms with sharp corners and edges. For the case of a simple prism growing on a substrate, the combination of perpendicular and lateral growth yields the highest crystal temperature increases at the tips and edges farthest from the substrate, so these sharp structures will tend to round from latent heating effects [1972Lam].
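To give a feeling for the size of \(\alpha_{therm}\) in Equation 14, the snippet below evaluates it with standard material properties of ice and an assumed \(\eta\approx 0.08\) K\(^{-1}\); the result is of the same order as the \(\alpha_{therm}\approx(10\ \mu\mathrm{m})/R\) scaling adopted in the model predictions below, although the geometrical factor \(G\) and the exact inputs carry appreciable uncertainty.

```python
def alpha_therm(kappa_ice, eta, L_sv, rho_ice, v_kin, H, G=1.0):
    """Thermal-impedance attachment coefficient for a sheet of thickness H (Equation 14)."""
    return (kappa_ice / (eta * L_sv * rho_ice * v_kin)) * (G / H)

# Assumed, representative inputs near -2 C (v_kin taken from Table 1 below):
print(alpha_therm(kappa_ice=2.3,    # W/(m K), thermal conductivity of ice
                  eta=0.08,         # 1/K, (1/c_sat) dc_sat/dT near 0 C (assumed)
                  L_sv=2.8e6,       # J/kg, latent heat of sublimation
                  rho_ice=917.0,    # kg/m^3
                  v_kin=635e-6,     # m/s
                  H=20e-6))         # m, crystal thickness
```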
## 3 Slow Growth of Simple Ice Prisms in Vacuum
With the model underpinnings described above, let us now consider the slow growth of a simple ice prism defined by the geometrical parameters in Figure 1. If the crystal rests on an inert heat-conducting substrate in a low-pressure environment, then we can ignore particle diffusion and assume that \(\sigma_{surf}\) has a uniform value over the entire surface of the crystal, with \(\sigma_{surf}>0\) usually being provided by a far-away ice reservoir with \(T_{reservoir}>T_{crystal}\).
Figure 1: The geometry of a simple hexagonal prism with edges and corners rounded by the Gibbs-Thomson effect.
Numerous experimental observations have shown that basal growth is generally slower than prism growth at low \(\sigma_{surf}\) for temperatures above -3 C, owing to the large basal nucleation barrier that exists even at high temperatures close to 0 C [2013Lib, 2021Lib]. For this reason, we typically assume \(dR_{thick}/dt\approx 0\) in the discussion that follows. Removing this assumption would not change our overall conclusions appreciably, as the interesting faceting behaviors are generally restricted to the prism facets at high temperatures.
To simplify our model further, we assume that the growth morphology is roughly stable in time, by which we mean that \(r_{corner}\) does not change appreciably as \(R_{tip}\) and \(R_{facet}\) increase. This stability assumption cannot be absolutely accurate for a developing crystal over long periods of time, but approximately stable growth of this nature is frequently observed with slowly growing ice prisms, which is the primary focus of this paper.
Our assumption of stable growth allows us to write
\[\frac{dR_{tip}}{dt}\approx G_{0}\frac{dR_{facet}}{dt} \tag{16}\]
where \(G_{0}\) is a geometrical constant that depends on the prism morphology. For a perfect hexagon, \(G_{0}=2/\sqrt{3}=1.155\), while \(G_{0}=1\) for a circular shape. We have found that the exact value of \(G_{0}\) between these limits has a negligible effect on our overall model predictions.
Ignoring crystal heating for the moment, Equation 5 becomes
\[v_{tip}\approx\alpha_{tip}v_{kin}\big{(}\sigma_{surf}-d_{sv}\kappa_{tip}\big{)} \tag{17}\]
describing the tip growth with \(v_{tip}=dR_{tip}/dt\), where typically \(\alpha_{tip}\approx 1\) for the rounded tip surface. Likewise the prism facet growth is given by
\[v_{facet}\approx\alpha_{facet}v_{kin}\big{(}\sigma_{surf}-d_{sv}\kappa_{facet}\big{)} \tag{18}\]
For the special case of a nearly isometric ice prism, we have \(R_{thick}\approx R_{facet}\), giving
\[\kappa_{tip}\approx\frac{1}{R}+\frac{1}{r_{corner}} \tag{19}\]
\[\kappa_{facet}\approx\frac{2}{R} \tag{20}\]
where \(R\approx R_{thick}\approx R_{facet}\) is the effective radius of the isometric prism. Note that \(\kappa_{facet}\) refers to the center of a prism facet (bordered by two prism/prism edges and two prism/basal edges), while \(\kappa_{tip}\) refers to the center of a prism/prism edge (midway between the two basal surfaces). With these assumptions, our stability condition in Equation 16 becomes
\[\alpha_{tip}\big{(}\sigma_{surf}-d_{sv}\kappa_{tip}\big{)}\approx G_{0}\alpha_{facet}\big{(}\sigma_{surf}-d_{sv}\kappa_{facet}\big{)} \tag{21}\]
This expression is the primary result from our basic analytic model, and it shows that the degree of prism faceting is driven largely by the difference between \(\kappa_{tip}\) and \(\kappa_{facet}\) (for this special case of a nearly isometric hexagonal prism growing in a low-pressure environment). Because \(\alpha_{tip}\approx 1\gg\alpha_{facet}\) in most slow-growth conditions, the left side of Equation 21 would nearly always be larger than the right side if not for the difference between \(\kappa_{tip}\) and \(\kappa_{facet}\).
For example, starting with a perfectly sharp hexagonal prism, \(r_{corner}\to 0\) and \(\kappa_{tip}\to\infty\), producing a strong suppression of \(v_{tip}\) via the Gibbs-Thomson effect, causing the prism corner to round. As \(r_{corner}\) increases during this process, the Gibbs-Thomson effect lessens until the two sides of Equation 21 balance. Similarly, starting with a spherical crystal having \(r_{corner}=R\) usually gives \(v_{tip}>v_{facet}\), which causes the prism corner to sharpen. Regardless of the starting point, we see that Equation 21 describes the condition for stable growth with a value of \(R/r_{corner}\) that is roughly independent of time.
Assuming a stable morphology can be achieved, Equation 21 can be rewritten to yield the model prediction
\[\frac{R}{r_{corner}}\approx\frac{R}{d_{sv}}\bigg{(}\sigma_{surf}-\frac{G_{0}v_{ facet}}{\alpha_{tip}v_{kin}}\bigg{)}-1 \tag{22}\]
We recognize that the above expressions for \(\kappa_{tip}\) and \(\kappa_{facet}\) are only approximate, but we believe that they capture the underlying physics reasonably well. The values of these two parameters have the right order of magnitude, and their difference yields stable growth driven by sensible assumptions. Thus, while this basic analytic model is not absolutely accurate, we believe that it is useful for examining the physics underlying the faceting process over a broad range of conditions.
### Heat diffusion
When evaluating these equations in realistic experimental situations, one result that quickly appears is that latent heating must be incorporated into any model of the growth of simple ice prisms on a substrate in a near-vacuum environment [1972Lamb, 2021Lib]. Rather than resorting to full 3D finite-element diffusion calculations, we approximately incorporated latent-heating effects into our model by replacing Equations 17 and 18 with
\[v_{tip}\approx\alpha_{tip,tot}v_{kin}\big{(}\sigma_{surf}-d_{sv}\kappa_{tip} \big{)} \tag{23}\]
and
\[v_{facet}\approx\alpha_{facet,tot}v_{kin}\big{(}\sigma_{surf}-d_{sv}\kappa_{ facet}\big{)} \tag{24}\]
where
\[\alpha_{tip,tot}=\frac{\alpha_{tip}\alpha_{therm}}{\alpha_{tip}+\alpha_{therm}} \tag{25}\]
and
\[\alpha_{facet,tot}=\frac{\alpha_{facet}\alpha_{therm}}{\alpha_{facet}+\alpha _{therm}} \tag{26}\]
which then yields the modified stability condition
\[\frac{R}{r_{corner}}\approx\frac{R}{d_{sv}}\bigg{(}\sigma_{surf}-\frac{G_{0}v_ {facet}}{\alpha_{tip,tot}v_{kin}}\bigg{)}-1 \tag{27}\]
Because \(\alpha_{tip,tot}<\alpha_{tip}\) in most circumstances, this expression immediately shows that latent heating tends to decrease \(R/r_{corner}\) at any given value of \(v_{facet}\).
Once again, Equation 27 is not meant to be an exact expression, as we made several rather crude approximations regarding the Gibbs-Thomson effect and latent heating. Despite its shortcomings, however, we have found that this basic analytic model is quite useful for approximating the essential physics in various scenarios. Below we examine what this model says about ice crystal faceting as a function of temperature and other experimental parameters.
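A compact way to work with Equations 25-27 is sketched below; the helper combines an attachment coefficient with \(\alpha_{therm}\) in series, and the example numbers in the final lines are assumed values chosen only to illustrate the size of the latent-heating correction. Note that in the full model \(v_{facet}\) is itself computed from \(\sigma_{surf}\) (as in the evaluation described later), whereas here the two are set independently for illustration.

```python
def alpha_tot(alpha, alpha_therm):
    """Series combination of surface kinetics and thermal impedance (Equations 25-26)."""
    return alpha * alpha_therm / (alpha + alpha_therm)

def faceting_ratio(R, sigma_surf, v_facet, alpha_tip_tot, v_kin,
                   d_sv=1e-9, G0=1.155):
    """Stable-growth faceting metric R/r_corner from Equation 27."""
    return (R / d_sv) * (sigma_surf - G0 * v_facet / (alpha_tip_tot * v_kin)) - 1.0

# Illustrative (assumed) inputs: R = 20 um, alpha_therm = 0.5, v_kin near -2 C
a_tip_tot = alpha_tot(1.0, 0.5)
print(faceting_ratio(R=20e-6, sigma_surf=2e-3, v_facet=50e-9,
                     alpha_tip_tot=a_tip_tot, v_kin=635e-6))
```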
### Particle diffusion
We also considered the effects of particle diffusion on our model, using Equation 12 to make a rough estimate of the supersaturation field around a growing crystal. For a nearly isometric prism growing in conditions with \(\alpha<\alpha_{diff}\), we found that the main difference between \(\sigma_{surf}\) and \(\sigma_{\infty}\) arose from the general trend in \(\sigma(\mathbf{r})\) surrounding the crystal. Because an isometric hexagonal prism is not too different from a spherical shape, particle diffusion yielded only a modest difference between \(\sigma_{surf,facet}\) and \(\sigma_{surf,tip}\). Moreover, this difference was roughly proportional to pressure, while our main interest was comparing with crystal growth experiments done at low pressure. For these reasons, we found that the attachment kinetics and heat diffusion were the main drivers of faceting behavior (at low pressures), to the point that particle diffusion effects could be neglected without changing our main scientific conclusions.
### Thin plates in air
Our model is not much changed if we abandon the assumption of nearly isometric ice prisms, and it is especially useful to consider the growth of thin hexagonal ice plates in air, as such structures are commonly observed in experiments at temperatures above -3 C.
For the thin-plate case with \(R_{thick}\ll R_{facet}\), our surface-curvature terms become
\[\kappa_{tip}\approx\frac{1}{R_{thick}}+\frac{1}{r_{corner}} \tag{28}\]
\[\kappa_{facet}\approx\frac{1}{R_{thick}}+\frac{1}{R_{facet}} \tag{29}\]
while the stability condition in Equation 21 remains unchanged, yielding
\[\frac{R_{facet}}{r_{corner}}\approx\frac{R_{facet}}{d_{sv}}\bigg{(}\sigma_{surf}-\frac{G_{0}v_{facet}}{\alpha_{tip}v_{kin}}\bigg{)}-\frac{R_{facet}}{R_{thick}} \tag{30}\]
which reduces to Equation 22 for an isometric prism.
For thin plates growing in air, however, we often have \(d_{sv}/R_{facet}\ll\sigma_{surf}\) and \(\alpha_{facet}\ll\alpha_{tip}\), where we define the effective supersaturation \(\tilde{\sigma}_{surf}=\big{(}\sigma_{surf}-d_{sv}/R_{thick}\big{)}\). And in these limits the stability condition becomes simply
\[r_{corner}\approx\frac{d_{sv}}{\tilde{\sigma}_{surf}} \tag{31}\]
independent of \(R_{facet}\) and \(R_{thick}\), while \(v_{facet}\approx\alpha_{facet}v_{kin}\tilde{\sigma}_{surf}\).
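As a quick sense of scale for Equation 31, with \(d_{sv}\approx 1\) nm the predicted corner radius is set entirely by the effective supersaturation; the values of \(\tilde{\sigma}_{surf}\) used below are assumed examples.

```python
d_sv = 1e-9   # m, Gibbs-Thomson length
for sigma_eff in (1e-4, 4e-4, 1e-3):          # assumed effective supersaturations
    print(sigma_eff, d_sv / sigma_eff, "m")   # r_corner of roughly 10, 2.5, and 1 um
```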
### Higher-order effects from heat and particle diffusion
The above discussion focuses mainly on diffusion effects that arise from the lowest-order spatial changes in the temperature profile in a growing crystal (at low background gas pressure on a substrate) and in the supersaturation field around a growing crystal (at higher pressures, typically one atmosphere).
Extending this to higher-order effects, we find that heat diffusion tends to suppress corner growth, thus further reducing \(R/r_{corner}\) relative to that calculated using the pseudo-1D model above. This follows because the corners of a nearly isometric faceted crystal growing on a substrate will experience the most latent heating on the crystal, as their thermal path to the substrate is the longest. The higher temperature rise at the corners suppresses their growth relative to other surfaces, thus increasing \(r_{corner}\) and reducing \(R/r_{corner}\).
In contrast, higher-order particle diffusion effects produce the opposite tendency, increasing \(R/r_{corner}\) relative to that calculated using a 1D model. This comes about because of the Mullins-Sekerka instability [1964Mul], which tends to sharpen corners in particle-diffusion-limited growth.
The main takeaway from these paragraphs is that our basic model of faceting described above will likely overestimate \(R/r_{corner}\) when significant heating is present, while underestimating \(R/r_{corner}\) when particle diffusion is important. However, full 3D diffusion calculations are needed to fully quantify these statements in both cases.
## 4 Model predictions
We now examine some predictions from this analytic model by evaluating Equation 27 as a function of various experimental parameters. Beginning with the case of small isometric ice prisms growing in a near-vacuum environment, we choose the parameters:
\[\begin{array}{c}R_{facet}=R_{thick}=R=20\ \mu\mathrm{m}\\ \alpha_{therm}=(10\ \mu\mathrm{m})/R=0.5\\ d_{sv}=1\ \mathrm{nm}\\ G_{0}=1.155\end{array}\]
Our model evaluation begins by defining a table of \(\sigma_{surf}\) values and calculating \(v_{facet}\) using
\[\alpha_{facet}=A_{1}e^{-\sigma_{0,1}/(\sigma_{surf}-d_{sv}\kappa_{facet})}+A_{2}e^{-\sigma_{0,2}/(\sigma_{surf}-d_{sv}\kappa_{facet})} \tag{32}\]
with \(\kappa_{facet}=2/R\), giving
\[v_{facet}=\frac{\alpha_{facet}\alpha_{therm}}{\alpha_{facet}+\alpha_{therm}}\,v_{kin}\big{(}\sigma_{surf}-d_{sv}\kappa_{facet}\big{)} \tag{33}\]
Along with
\[\alpha_{tip,tot}=\frac{\alpha_{therm}}{1+\alpha_{therm}} \tag{34}\]
this gives us all we need to calculate \(R/r_{corner}\) as a function of \(v_{facet}\).
Table 1 shows the parameters we used for the prism attachment kinetics, which were chosen from experimental measurements of ice growth rates as a function of temperature and supersaturation [2021Lib]. We believe that these parameters are fairly accurate at the lowest temperatures but become more uncertain as the temperature increases. Using the sum of two nucleation processes is a convenient parameterization to include what we have called the "SDAK-2" phenomenon at the higher temperatures [2021Lib]. This phenomenon is speculative at present, and more work is needed to sort out the prism attachment kinetics at high temperatures. However, the parameters in Table 1 yield reasonable representations of the available measurements, and we believe that the remaining uncertainties in the details do not greatly affect the model results.
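The evaluation procedure just described can be collected into a short routine, sketched below using the parameter choices listed above and the -2 C row of Table 1; this is only an illustrative reimplementation of Equations 27 and 32-34, not the code used to generate Figures 2 and 3.

```python
import math

def faceting_curve(sigma_surf_values, v_kin, A1, s01, A2, s02,
                   R=20e-6, d_sv=1e-9, G0=1.155, alpha_therm=0.5):
    """Return (v_facet, R/r_corner) pairs from Equations 27 and 32-34."""
    kappa_facet = 2.0 / R
    results = []
    for s in sigma_surf_values:
        s_eff = s - d_sv * kappa_facet
        if s_eff <= 0.0:
            continue                     # below the Gibbs-Thomson threshold
        # Equation 32: two-process terrace nucleation on the prism facet
        a_facet = A1 * math.exp(-s01 / s_eff) + A2 * math.exp(-s02 / s_eff)
        # Equation 33: facet velocity including the thermal impedance
        v_facet = alpha_therm * a_facet / (a_facet + alpha_therm) * v_kin * s_eff
        # Equation 34: tip kinetics with alpha_tip ~ 1
        a_tip_tot = alpha_therm / (1.0 + alpha_therm)
        # Equation 27: stable-growth faceting metric
        ratio = (R / d_sv) * (s - G0 * v_facet / (a_tip_tot * v_kin)) - 1.0
        results.append((v_facet, ratio))
    return results

# -2 C row of Table 1 (v_kin in m/s):
curve = faceting_curve([i * 1e-4 for i in range(2, 101)],
                       v_kin=635e-6, A1=0.25, s01=3e-4, A2=0.75, s02=1.5e-3)
```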
Applying these various inputs yields the curves shown in Figures 2 and 3, which demonstrate the overall trends seen with our model. In both graphs we have calculated \(R/r_{corner}\) as a function of \(v_{facet}\) for several ice growth temperatures. The quantity \(R/r_{corner}\) serves as a reasonable proxy for the overall degree of faceting, as \(R/r_{corner}\to 1\) for an unfaceted (round) crystal exhibiting no prism faceting, and \(R/r_{corner}\gg 1\) for a crystal exhibiting pronounced prism faceting. As mentioned above, strong basal faceting was assumed in our model from the outset, based on experimental observations.
Figure 2 shows first that the degree of prism faceting depends strongly on growth temperature, with more pronounced faceting at lower temperatures. This phenomenon clearly derives from the temperature dependence of the attachment kinetics on the prism facets, as measurements indicate that \(\sigma_{0,prism}\) increases strongly with decreasing temperature, as indicated in Table 1 [2021Lib]. The overall trend in \(\sigma_{0,prism}\) is well supported by experiments at temperatures below -2 C, while measurements at higher temperatures are more uncertain.
Figure 2 also shows that \(R/r_{corner}\) increases with increasing \(v_{facet}\), and this can be understood from the Gibbs-Thomson effect applied to the prism tips. The tip radius decreases as \(\sigma_{surf}\) increases, resulting from a change in the balance between tip and facet growth needed to sustain a stable growth morphology, as described above.
We also see that latent heating is essentially negligible at the lowest growth rates, as one would expect, becoming important as the growth rate increases (for the assumed near-vacuum growth conditions). Latent heating is also generally more important at higher temperatures, which can be understood by examining how \(\alpha_{therm}\) changes with temperature [2021Lib].
Finally, Figure 2 shows that prism faceting at low temperatures persists down to quite low growth rates, even though our model assumes a spherical ECS. This happens because a finite nucleation barrier yields a facet growth rate that decreases exponentially with the applied supersaturation, which does not happen on the
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline T & \(v_{kin}\) & \(A_{1}\) & \(\sigma_{0,1}\) & \(A_{2}\) & \(\sigma_{0,2}\) \\ (C) & (\(\mu\)m/sec) & & & & \\ \hline -1 & 690 & 0.3 & 3e-5 & 0.7 & 1e-3 \\ \hline -2 & 635 & 0.25 & 3e-4 & 0.75 & 1.5e-3 \\ \hline -3 & 585 & 0.2 & 1e-3 & 0.8 & 3e-3 \\ \hline -5 & 496 & 0.2 & 2e-3 & 0.8 & 5.5e-3 \\ \hline -7 & 419 & 0.5 & 8e-3 & 0.5 & 1e-2 \\ \hline -15 & 208 & 1 & 3e-2 & - & - \\ \hline \end{tabular}
\end{table}
Table 1: The parameters used to describe the prism attachment kinetics at different temperatures using Equation 32.
rough tip surface. This phenomenon suggests that it will be difficult, from an experimental perspective, to "grow" an ECS in an environment with \(\sigma_{surf}>0\).
Moving our attention to Figure 3, we see that the values of \(R/r_{corner}\) are higher with larger crystals, provided one ignores latent heating. This follows simply because \(r_{corner}\) is roughly constant at a given growth rate, so the ratio \(R/r_{corner}\) goes up with larger \(R\).
Figure 3 also shows that the effects of latent heating are much more pronounced with larger crystals, as one would expect because \(\alpha_{therm}\) decreases as \(1/R\).
We adjusted the various parameters in our model to examine how this affects the plots shown in Figures 2 and 3. We found that the red curves were sensitive to our choice of \(\alpha_{therm}\), meaning that our model of latent heating confirms our expectations that 1) thermal effects can be quite significant, and 2) our model only gives a rough estimate for how heating affects the degree of faceting. We also found that these results are only as good as our estimated parameters for the attachment kinetics, as one would expect. Even with these model uncertainties, however, the overall trends seen in Figures 2 and 3 appear to be quite robust with respect to modest changes in input parameters.
Note that we must have \(R/r_{corner}\to 1\) as \(v_{facet}\to 0\) in this model because we assumed a spherical ECS at all temperatures. This behavior is indeed seen, but remarkably low values of \(v_{facet}\) must be obtained before \(R/r_{corner}\to 1\) at low growth temperatures. This makes sense, as described above, because the nucleation barrier yields extremely low growth rates when \(\sigma_{surf}\ll\sigma_{0}\).
## 4 Time needed to realize the ECS and stable growth forms
It is instructive to examine how long it takes to achieve a stable growth morphology, as supplying the requisite time is not always practical in ice-growth experiments. A lower limit can be estimated from how long it takes a circular seed crystal to "fill out" into a faceted
Figure 3: This shows the same model calculations as in Figure 2, but this time examining larger crystals with \(R=500\) μm. Here we see that faceting is more pronounced than in Figure 2 (higher \(R/r_{corner}\) values) if latent heating is ignored, but the overall effects of latent heating are much greater with the larger crystals.
Figure 2: This graph shows the degree of prism faceting (as quantified by \(R/r_{corner}\)) as a function of the prism growth velocity, showing crystals with \(R=20\) μm growing at several different temperatures. The black curves ignore latent heating, while the red curves include this effect in the model. These curves show that: 1) prism faceting is most pronounced at lower temperatures, and 2) prism faceting persists even at quite low crystal growth rates, even though the model assumed a spherical ECS.
hexagonal prism, assuming that the final state has a large value of \(R/r_{corner}\).
In this case the faceting time is mainly determined by the initial growth rate of the non-faceted corner before it becomes sharp, giving roughly
\[\tau_{facet}\approx\frac{G_{1}R}{v_{kin}\big{(}\sigma_{surf}-d_{sv}\kappa_{facet }\big{)}} \tag{35}\]
where \(G_{1}\) is a geometrical factor of order unity and we assumed \(\alpha_{tip}\gg\alpha_{facet}\) and \(\alpha_{tip}\approx 1\). Because \(\tau_{facet}\) is suitably small for the experimental observations described below, it is reasonable to expect that these ice crystals all had time to reach, or nearly reach, a stable growth morphology.
Note that the "fill out" time in Equation 35 is generally much shorter than the time needed to reach the ECS, which is approximately [2012Lib2]
\[\tau_{equilibrate}\approx\frac{R^{2}}{2\alpha v_{kin}d_{sv}} \tag{36}\]
For the case of a faceted ice prism relaxing to a spherical ECS, we see that \(\tau_{equilibrate}\) is inversely proportional to \(\alpha_{facet}\), which can result in extremely long equilibration times in practical ice-growth experiments.
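The contrast between these two timescales is easy to quantify; using the parameter values adopted above (R = 20 \(\mu\)m, \(d_{sv}\) = 1 nm, \(v_{kin}\) near -2 C) and an assumed \(\alpha_{facet}\sim 10^{-3}\) at low supersaturation, Equations 35 and 36 can be compared directly:

```python
R, v_kin, d_sv = 20e-6, 635e-6, 1e-9       # m, m/s, m
sigma_surf, kappa_facet = 2e-3, 2.0 / R

tau_facet = R / (v_kin * (sigma_surf - d_sv * kappa_facet))   # Equation 35 with G_1 ~ 1
tau_equil = R**2 / (2 * 1e-3 * v_kin * d_sv)                  # Equation 36 with assumed alpha ~ 1e-3
print(f"tau_facet ~ {tau_facet:.0f} s,  tau_equilibrate ~ {tau_equil / 86400:.0f} days")
```

With these inputs a prism "fills out" in well under a minute, while relaxing to a spherical ECS would take days even under fairly generous assumptions for \(\alpha_{facet}\).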
## 3 Numerical Modeling
While the stable-growth model described above is quite useful for examining faceting in slowly growing snow crystals over a broad range of conditions, it is not ideal for making detailed comparisons with targeted experimental investigations. If an experiment carefully defines the initial conditions, seed crystal morphology, growth conditions, and then measures the faceting behavior as a function of time, then a general dynamical model would provide a better way to compare experimental measurements with the more basic elements of our faceting model, including our initial assumption of a spherical ECS.
Creating such a dynamical model using the approximations and formalism described above is straightforward, as we simply need to evaluate the various growth velocities and then propagate the crystal forward in time, taking advantage of the relatively simple dynamics of the hexagonal prism morphology.
For the prism geometry in Figure 1, the relation
\[R_{tip}=\frac{2}{\sqrt{3}}R_{facet}+\Big{(}1-\frac{2}{\sqrt{3}}\Big{)}r_{corner} \tag{37}\]
ties the parameters together, giving \(dr_{corner}/dt\) from the calculated \(v_{tip}\) and \(v_{facet}\). The model could easily be extended to include nonzero basal growth as well. With such a numerical model, one could drop the static-growth assumption we made above to examine a variety of time-dependent aspects of prism growth dynamics in detail.
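A minimal sketch of such a time-stepping scheme is given below, neglecting latent heating and particle diffusion; the function and variable names are ours, and the alpha_facet_fn argument stands in for whatever parameterization of the prism attachment kinetics one adopts (for example Equation 32).

```python
def step_prism(R_facet, r_corner, sigma_surf, dt, alpha_facet_fn,
               alpha_tip=1.0, v_kin=635e-6, d_sv=1e-9):
    """Advance R_facet and r_corner by one explicit time step dt.

    Velocities follow Equations 17-18 with the isometric curvature estimates
    (Equations 19-20), and dr_corner/dt follows from differentiating Equation 37.
    Basal growth is neglected, as assumed in the text.
    """
    G0 = 2.0 / 3**0.5
    kappa_tip = 1.0 / R_facet + 1.0 / r_corner
    kappa_facet = 2.0 / R_facet
    v_tip = alpha_tip * v_kin * (sigma_surf - d_sv * kappa_tip)
    s_eff = sigma_surf - d_sv * kappa_facet
    v_facet = alpha_facet_fn(s_eff) * v_kin * s_eff
    dr_corner = (v_tip - G0 * v_facet) / (1.0 - G0) * dt
    return R_facet + v_facet * dt, r_corner + dr_corner

# One small illustrative step with an assumed constant alpha_facet = 0.01:
print(step_prism(R_facet=20e-6, r_corner=2e-6, sigma_surf=2e-3, dt=0.01,
                 alpha_facet_fn=lambda s: 0.01))
```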
Unfortunately, this model is limited by our approximate treatment of thermal and particle diffusion, which relied on analytic solutions to the spherical growth problem. Using a full 3D finite-element diffusion model is feasible for the relatively simple hexagonal-prism morphology [2001Woo], but doing so is beyond the scope of this paper. Our main objective here is the generally simpler task of quantifying prism faceting behaviors over a broad range of growth conditions.
## 4 Comparison with Ice-growth Experiments
Looking through our own ice-growth data archives, we have several prior experiments that have observed approximately stable simple prism growth over a range of conditions, which can be compared directly with the stable-growth model presented above. Moreover, there are earlier results in the literature that also lend themselves to possible reinterpretation using this model. As mentioned at the outset of this paper, our overarching goal is to develop a comprehensive picture of how facets develop in snow crystal growth as a function of temperature and other growth conditions, while better understanding the connections between slowly growing crystals and the equilibrium crystal shape.
### Faceting below -5 C
Numerous researchers have documented the formation of sharply faceted ice prisms in near-vacuum conditions at temperatures below -5 C [1982], and Figure 4 shows some representative examples. While the image resolution here is not sufficient to measure \(R_{\textit{facet}}/r_{\textit{corner}}\) accurately, suffice it to say that these crystals exhibit a simple hexagonal-prism morphology with pronounced faceting and little rounding of the edges and corners. This growth behavior is easily explained from the model curves in Figure 2.
### Low-Pressure Observations with T \(\geq\) -2 C
At temperatures of -2 C and above, the model curves in Figure 2 predict smaller values of \(R_{\textit{facet}}/r_{\textit{corner}}\) compared to crystals grown at lower temperatures, mainly resulting from the reduced nucleation barrier on prism facets at the higher temperatures. At -2 C, for example, Figure 5 shows clear prism faceting, as we expect from Figure 2, but now the edges and corners exhibit significant rounding brought about from the Gibbs-Thomson effect.
Figure 4: A representative sample of ice crystals growing at -7 C from water vapor in near vacuum conditions with an air pressure of 50 Torr. Each square image box is 50 \(\upmu\)m on a side, and the crystals grew on a temperature-controlled sapphire substrate. Growth times were about 60 seconds with growth rates of about 150 nm/sec. Robust faceting appears under these conditions, exhibiting sharp edges and corners. The VPG apparatus used to grow these crystals is described in [2021].
Figure 5: These images show a pair of ice crystals growing at -2 C in a near-vacuum environment at 20 Torr on a sapphire substrate. (a) The overall size of this crystal is \((R,H)=(R_{facet},R_{thick})=(20\ \mu\mathrm{m},\,37\ \mu\mathrm{m})\) with a prism growth velocity of about 50 nm/sec. (b) This crystal has (R,H) = (19 \(\mu\)m, 18 \(\mu\)m) with a prism growth velocity of about 100 nm/sec. Both crystals grew from initially columnar seed crystals, and the red illumination is from a laser used to interferometrically measure the prism growth rates. Both crystals exhibit clear faceting at -2 C, but with some rounding of the corners. The VIG apparatus used to grow these crystals is described in [2021].
Figure 6: These images show a pair of example crystals growing at -1C in a near-vacuum environment on a sapphire substrate. The crystal in (a) has (R,H) = (29 \(\upmu\)m, 21 \(\upmu\)m) with a prism growth velocity of about 200 nm/sec, while (b) has (R,H) = (25 \(\upmu\)m, 24 \(\upmu\)m) with a prism growth velocity of about 100 nm/sec. Enhanced growth along the substrate in (b) yielded a form somewhat flatter than isometric, probably reducing the effects of latent heating. Both these crystals exhibit clear faceting at -1 C, but with some rounding of the corners. The VIG apparatus used to grow these crystals is described in [2021].
Figure 6 shows similar growth morphologies at -1 C, showing quite clearly that prism faceting can be quite prevalent at this temperature. The minor differences between the crystals in Figures 5 and 6 should not be taken too seriously at this point because the growth conditions and other parameters varied somewhat from crystal to crystal. As described in the figure captions, the crystals have different sizes and growth velocities, plus the initial conditions were not carefully noted at the time. We believe that these crystals provide representative examples of nearly stable growth forms, but better experiments are needed to fully document the growth and faceting behaviors as a function of time.
Figure 7 shows additional examples from a separate ice-growth experiment, and again we see that better targeted experiments will be needed to fully understand the subtle changes in faceting behaviors. Our model suggests stronger faceting for the crystals in Figures 7a and 7b, but it may be that rounding of the basal/prism edges is obscuring the faceting somewhat in the images, owing to the smaller values of \(R_{thick}\). In Figures 7c and 7d, our model suggests that the much higher growth rates for these crystals produced thermal effects that greatly diminished \(R_{facet}/r_{corner}\) (as seen in Figure 2) and yielded crystal growth with essentially no observed prism faceting. In all cases, however, basal faceting is clearly seen, as expected.
### Growth in Air with T \(\geq\)-2c
Figure 8 shows a nice example of a thin plate growing in air at -2C, and the observed faceting in this crystal can be roughly explained by our model. In the first few images of the growth series, a small circular seed crystal initially develops a roughly circular plate-on-pedestal
Figure 8: The four face-on crystals in the above composite image show four stages in the growth of a single plate-on-pedestal crystal in air at -2 C and a pressure of one bar in the VPG apparatus [2021Lib]. The total growth time for this crystal was about 14 minutes, the effective radius of the final hexagonal plate was about 50 \(\mu\)m, and the final growth velocity was about 100 nm/sec. The other crystal image in the composite shows a side view from the same set that appears to roughly correspond to the third image in the face-on set. The final face-on image shows a thin hexagonal plate with sharp prism facets growing out from a stout pedestal, with \(R_{facet}/r_{corner}\) agreeing roughly with model predictions. The VPG apparatus used to grow these crystals is described in [2021Lib].
Figure 7: These images show additional examples of ice crystals growing at -2C in a near-vacuum environment on a sapphire substrate. Crystal (a) has (R,H) = (37 \(\upmu\)m, 10 \(\upmu\)m) with a prism growth velocity of about 100 nm/sec, while (b) has (R,H) = (23 \(\upmu\)m, 7 \(\upmu\)m) with a prism growth velocity of about 75 nm/sec. Rounding of the basal/prism edges obscures the prism facets somewhat in (a) and (b), because of the small H values. Crystal (c) has (R,H) = (28 \(\upmu\)m,18 \(\upmu\)m) with a prism growth rate of 500 nm/sec, while (d) has (R,H) = (27 \(\upmu\)m,18 \(\upmu\)m) with a prism growth rate of 650 nm/sec. At these faster growth rates, latent heating produces little faceting in (c) and (d), while basal faceting remains strong in all four crystals. The VPG apparatus used to grow these crystals is described in [2021Lib].
structure, and the plate appears to have rounded corners simply because it takes time for the corners to grow out as the plate edges become thinner.
After reaching its stable growth morphology, our model predicts \(R_{facet}\)/\(r_{corner}\approx 20\), which is somewhat lower than seen in the final image. As described above, however, our model likely underestimates the value of \(R_{facet}\)/\(r_{corner}\) when the growth is strongly limited by particle diffusion, as is certainly the case here showing growth in air at a pressure of one atmosphere.
Quantitative targeted experiments examining this kind of faceting transition in more detail could yield additional insights into the prism faceting process in air at these higher temperatures. Examining time-series observations at different temperatures, supersaturations, and air pressures would likely improve our understanding of the attachment kinetics at high temperatures, provided computational growth models could adequately deal with the particle-diffusion problem.
Figure 9a presents another example illustrating that pronounced prism faceting can develop even at growth temperatures as high as -0.15 C. Our model cannot make detailed predictions of \(R_{facet}\)/\(r_{corner}\) for this plate, as we have essentially no measurements of the attachment kinetics at such high temperatures. Turning this around, however, quantitative measurements like this could place interesting limits on \(\alpha_{facet}\) for prism facets in this hard-to-observe growth regime.
Figure 9b illustrates another example of plate-on-needle growth in air at -1C, this time yielding a thicker plate. Observations like this reveal a remarkably rich diversity of growth morphologies as the temperature and supersaturation are varied [2021Lib2]. The biggest challenge in interpreting the observations lies in producing computational models that are capable of accurately handling diffusion-limited growth in the presence of highly anisotropic attachment kinetics [2021Lib].
In contrast to the thin snow-crystal plates growing from water vapor seen in Figures 8 and 9, Figure 10 shows a nice example of a circular disk growing from slightly supercooled liquid water. This well-known phenomenon [2003Shi, 2005Shi] indicates that the prism attachment kinetics at the ice/water interface is highly isotropic. In contrast, the prism attachment kinetics at the ice/vapor interface clearly retains some anisotropy as one approaches 0C.
Additional insight into the attachment kinetics will likely be needed to explain prism faceting in these two systems near the triple point.
## Surface Roughening?
The investigation described here was substantially motivated by Elbaum's paper [1991Elb] describing a surface roughening transition on the prism facet of ice, so we next examine this result in some detail. The crystals described in that paper were quite large, growing in near vacuum conditions, so we consider a specific example of an isometric crystal with \(R_{\textit{facet}}=500\)\(\upmu\)m and \(v_{\textit{facet}}=5\) nm/sec, as these parameters approximately correspond to the primary example described in Figure 3 in [1991Elb].
Applying our stable-growth model to crystals of this size gives the results in Figure 3 above, where we see that latent heating is expected to have a large effect on prism faceting. For crystals growing at a fixed velocity of 5 nm/sec, our model predicts a rather abrupt transition in faceting behavior at a temperature around -2 C. From the red curves in Figure 3, we see that \(R_{\textit{facet}}/r_{\textit{corner}}\) transitions from large values at -7 C to \(R_{\textit{facet}}/r_{\textit{corner}}\approx 1\) at temperatures above -2 C. In contrast to [1991Elb], however, we find that this transitional behavior can be explained by latent heating along with the surface attachment kinetics on prism surfaces, as these factors both change substantially with temperature.
Carrying this further, we suggest that latent heating may also explain the "domed" structure of the prism facets described in [1991Elb]. This slight deviation in flatness could arise from the same thermal gradients causing the overall rounding of the crystal morphology, although modeling such a result would require more information about the growing crystals and their environment. Our main conclusion here is that latent heating is an important factor affecting growth and faceting, and this factor was not carefully considered in [1991Elb].
Note also that the nearly flat (domed) facets described in [1991Elb] present a dynamical quandary at a more fundamental level. Because the facet surface was growing upward uniformly at 5 nm/sec, we must have \(v_{facet}=v_{vicinal}\), where \(v_{facet}\) describes the growth of the top prism terrace and \(v_{vicinal}\) describes the surrounding vicinal surfaces. Applying Equation 1 then yields \(\alpha_{facet}\sigma_{facet}=\alpha_{vicinal}\sigma_{vicinal}\), and any sensible model of the attachment kinetics gives \(\alpha_{facet}<\alpha_{vicinal}\), thus giving \(\sigma_{facet}>\sigma_{vicinal}\). This latter inequality is hard to avoid in any dynamical analysis of a growing "domed" surface, but it is easily explained by latent heating effects (which heat the corners more than the facet centers).
All these considerations cast doubt on Elbaum's scientific conclusion of a surface roughening transition. Our dynamical growth model provides a natural explanation of the observations, with reasonable model inputs, even while assuming an isotropic surface energy.
Looking at the bigger picture, we note that our growth model incorporates a decreasing
Figure 10: This photo shows a 2-mm-diameter disk of ice growing outward on the surface of a thin film of slightly supercooled water covering a glass plate. The c-axis of the oriented ice crystal is aligned perpendicular to the glass surface. The large dark regions are copper support arms glued to the glass, while dark specks are dust particles in the water film.
step energy on the prism facets with increasing temperature, which could be interpreted as a gradual roughening transition (because a rough surface is equivalent to a surface with vanishing step energy). There is an important distinction to be made, however, in that our model assumes from the outset that the ECS is spherical at all temperatures. Put another way, our model assumes that the surface energy anisotropy is negligible, so it cannot be responsible for producing faceted forms. Instead, the changing step energy affects the _dynamics_ of crystal growth via terrace nucleation, and this brings about faceted growth forms. A roughening transition usually refers to the _equilibrium_ structure of the crystal surface.
## Observing the equilibrium crystal shape
Part of this discussion must deal with the problem of how difficult it is to observe the ice ECS in practical experiments. Based on measurements of the terrace step energies on basal and prism surfaces as a function of temperature [2013Lib, 2021Lib], we have argued that the available evidence suggests that the ice ECS is nearly spherical at temperatures above -15 C [2012Lib2, 2021Lib]. If one therefore assumes that the ECS is spherical, it quickly becomes apparent that observing this morphology in equilibrium is a challenging experimental task.
If one begins with a faceted growth form, then relaxing to the ECS would require that ice sublimate from the faceted corners and deposit on the facet surfaces until the spherical ECS is obtained. This process is greatly suppressed, however, by the extremely slow attachment kinetics on faceted surfaces at low supersaturations, as modeled in Equation 3. As quantified in Equation 36, the time needed to complete this equilibration to the ECS can be far longer than any experiment performed to date.
In both [1985Col] and [1991Elb], the authors described measurements of the ice ECS based on slowly growing crystals, assuming that the experimental wait times were sufficient to achieve the ECS. Our new model suggests, however, that achieving the ECS using slowly growing ice crystals may be nearly impossible if the true ECS is spherical. Referring to Figures 2 and 3, we see that growth forms remain faceted even at extremely low growth velocities, simply because \(\alpha_{facet}\) goes to zero rapidly when \(\sigma_{surf}\ll\sigma_{0}\). Given the experimental uncertainties in [1985Col, 1991Elb], we believe that the observations could easily be explained from our dynamical model with a spherical ECS. Moreover, we feel that no experiment to date has definitively observed the ice ECS.
## An ECS instability
Even at a fundamental theoretical level, it would not have been possible to observe the true ice ECS in any experiment performed to date. In all prior experiments, test crystals were grown in an environment with some \(\sigma_{\infty}\) specified as a far-away boundary condition, and no ECS can stably exist in such conditions.
To see this, consider a spherical crystal with some radius \(R\) within such a growth chamber. The crystal would be in equilibrium (neither growing nor sublimating) provided \(\sigma_{\infty}=d_{\text{s}\nu}\kappa\), as indicated in Equation 5. But this equilibrium state is not a stable state. If one perturbs the crystal to slightly increase \(R\), then the equilibrium condition would not be met, and the crystal would begin growing. And it would continue growing indefinitely thereafter. Alternatively, perturbing the crystal to slightly decrease \(R\) would cause sublimation that would continue until the crystal sublimated away completely.
What this shows is that no ECS can stably exist when a fixed outer boundary of \(\sigma_{\infty}\) is maintained. The only way to produce a truly stable ECS is to isolate a single crystal in an otherwise empty environment, as then the background supersaturation will adjust to come into equilibrium with the ECS.
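The unstable equilibrium point is easy to locate explicitly: setting \(\sigma_{\infty}=d_{sv}\kappa=2d_{sv}/R\) gives an equilibrium radius \(R_{eq}=2d_{sv}/\sigma_{\infty}\), and the short calculation below (with assumed values of \(\sigma_{\infty}\)) shows that even very small background supersaturations place this radius at quite ordinary crystal sizes.

```python
d_sv = 1e-9   # m, Gibbs-Thomson length
for sigma_inf in (1e-5, 1e-4, 1e-3):          # assumed background supersaturations
    R_eq = 2 * d_sv / sigma_inf                # radius where sigma_inf = d_sv * (2 / R)
    print(f"sigma_inf = {sigma_inf:.0e}  ->  unstable equilibrium radius = {R_eq * 1e6:.0f} um")
```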
Reflecting on this discussion suggests that creating an isolated void in a single-crystal ice block would likely be the best approach to observing the ice ECS in the lab. A vacuum pump attached to a capillary needle could create a small void, and an applied temperature gradient could be used to move the void away from the capillary tip. Once isolated, a uniform temperature environment could be applied to allow the crystal to reach the ECS.
If the ECS were spherical, or nearly so, then an initially faceted void (the growth form of the void) [1965Kni, 1993Fur] would quickly evolve toward the ECS, as this evolution would not be hindered by any nucleation barriers. Moreover, applying a quadrupolar temperature profile would distort the shape of the void, thus allowing a measurement of the ice surface energy as a function of temperature. Realizing such an experiment is a task left for another day, but clearly there is substantial opportunity for improving our understanding of the ice surface energy, surface energy anisotropy, and the ice ECS.
## 8 Conclusions
In summary, we have developed a comprehensive dynamical model describing the growth of faceted prisms with rounded edges and corners. Our input model assumptions were guided by recent ice-growth measurements, including: 1) we assumed an isotropic surface energy and therefore a spherical ECS, 2) we assumed strong basal faceting and negligible basal growth rates for slowly growing crystals in a near-vacuum environment, and 3) we assumed prism faceting governed by a terrace-nucleation model, with nucleation parameters derived from growth measurements.
Our model uses approximate calculations for particle and heat diffusion to yield analytic expressions for growth morphologies in a stable-growth limit, as this approach allows reasonable estimates of faceting behaviors over a broad range of growth conditions. Dropping the stable-growth assumption, numerical modeling could be used to examine time-dependent morphological changes for comparison with targeted ice-growth experiments. A full 3D computational model describing particle and heat diffusion in the presence of strongly anisotropic attachment kinetics remains a challenging problem, not addressed in this paper.
Our scientific conclusions based on model calculations include:
\(\bullet\) For ice crystals grown on a substrate in a near-vacuum environment, our model shows that latent heat diffusion can strongly affect growth rates and faceting behavior. These effects are especially strong with large crystals, at high temperature, and at high growth rates, as shown in Figures 2 and 3.
\(\bullet\) Our relatively simple analytic model likely overestimates the value of \(R_{\textit{facet}}/r_{\textit{corner}}\) when heat diffusion plays a major role, while it underestimates the value of \(R_{\textit{facet}}/r_{\textit{corner}}\) when particle diffusion limits growth. Heat diffusion (for a faceted prism growing on a substrate in a near-vacuum environment) tends to result in the highest crystal temperatures at positions farthest from the substrate, yielding rounded corners and lower values of \(R_{\textit{facet}}/r_{\textit{corner}}\). Particle diffusion tends to sharpen corners via the Mullins-Sekerka instability [1964Mul], thus yielding higher \(R_{\textit{facet}}/r_{\textit{corner}}\) values. Incorporating these higher-order diffusion effects would require full 3D diffusion modeling.
\(\bullet\) For large prisms (\(R_{\textit{facet}}\approx 500\)\(\mu\)m) growing at roughly 1-10 nm/sec, our model predicts an abrupt transition from sharply faceted prisms (\(R_{\textit{facet}}/r_{\textit{corner}}\gg 1\)) at temperatures below about -2 C to rounded forms (\(R_{\textit{facet}}/r_{\textit{corner}}\approx 1\)) at higher temperatures. Elbaum interpreted this faceting behavior as a roughening transition of the prism surface near -2 C [1991Elb], but we believe that our dynamical model provides a better explanation. In our picture, there is no
roughening transition, and the ice ECS is essentially spherical at all temperatures above -15 C.
\(\bullet\) Our model indicates that strong faceting (defined by large \(R_{facet}/r_{corner}\) values) persists down to remarkably low growth rates, especially at low temperatures, as seen in Figures 2 and 3. This result suggests that the faceting behaviors described in [1985Col] could be explained reasonably well as a dynamical growth phenomenon. The result also suggests that it can be exceedingly difficult to observe the ECS using growing crystals, casting doubt on the conclusions described in [1985Col].
\(\bullet\) Our model suggests that a strong anisotropy in the ice surface energy is not required to explain observations of faceted ice growth. In nearly all cases, the formation of ice-crystal facets appears to result from the strong anisotropy in the surface attachment kinetics.
\(\bullet\) Using data from different ice-growth experiments, we find that all our existing observations of simple faceted forms are generally consistent with the growth model described above, which incorporates the comprehensive basal and prism attachment kinetics model described in [2021Lib]. From this we continue to build a self-consistent picture of the attachment kinetics and of snow crystal growth that can reasonably explain the most reliable experimental data. This evolving paradigm also serves to suggest targeted experimental investigations that can further influence and refine our broader understanding of the structure and molecular dynamics of the ice surface.
\(\bullet\) There is much potential for making additional progress in understanding the dynamics of ice crystal growth using precision experiments measuring ice growth rates and morphological behaviors in different environments. Unfortunately, such investigations are substantially hampered at present by the lack of adequate computational techniques that can model crystal growth in the presence of strongly anisotropic attachment kinetics in combination with particle and/or latent-heat diffusion. As these computational tools become available, they will enable much improved comparisons between theory and experiment that will undoubtedly yield further insights into the physical processes underlying ice crystal growth dynamics.
We gratefully acknowledge support from the Cambridge-Caltech Exchange Program and the Summer Undergraduate Research Fellowship program at Caltech.
|
2303.10831 | Bridging Deliberative Democracy and Deployment of Societal-Scale
Technology | This position paper encourages the Human-Computer Interaction (HCI) community
to focus on designing deliberative processes to inform and coordinate
technology and policy design for large language models (LLMs) -- a
`societal-scale technology'. First, I propose a definition for societal-scale
technology and locate LLMs within this definition. Next, I argue that existing
processes to ensure the safety of LLMs are insufficient and do not give the
systems democratic legitimacy. Instead, we require processes of deliberation
amongst users and other stakeholders on questions about the safety of outputs
and deployment contexts. This shift in AI safety research and practice will
require the design of corporate and public policies that determine how to enact
deliberation and the design of interfaces and technical features to translate
the outcomes of deliberation into technical development processes. To conclude,
I propose roles for the HCI community to ensure deliberative processes inform
technology and policy design for LLMs and other societal-scale technology. | Ned Cooper | 2023-03-20T02:27:52Z | http://arxiv.org/abs/2303.10831v2 | # Bridging Deliberative Democracy and Deployment of Societal-Scale Technology
###### Abstract.
This position paper encourages the Human-Computer Interaction (HCI) community to focus on designing deliberative processes to inform and coordinate technology and policy design for large language models (LLMs)--a'societal-scale technology'. First, I propose a definition for societal-scale technology and locate LLMs within this definition. Next, I argue that existing processes to ensure the safety of LLMs are insufficient and do not give the systems democratic legitimacy. Instead, we require processes of deliberation amongst users and other stakeholders on questions about the safety of outputs and deployment contexts. This shift in AI safety research and practice will require the design of corporate and public policies that determine how to enact deliberation and the design of interfaces and technical features to translate the outcomes of deliberation into technical development processes. To conclude, I propose roles for the HCI community to ensure deliberative processes inform technology and policy design for LLMs and other societal-scale technology.
## 1. Societal-Scale Technology
In this position paper, I define'societal-scale technology' as an artefact or system that is:
* developed based on interactions with society, or
* impacts society once deployed.
I define society in this position paper as groupings of people across political, economic, geographical, and cultural boundaries. For example, I consider LLMs to be societal-scale technology, as LLMs interact with training data that represent groupings of people across such boundaries, and, at least in recent deployment cases (_e.g._, the ChatGPT research preview), LLMs have impacted groups of people across such boundaries. Within such societies, ChatGPT has impacted direct stakeholders (those who interact directly with ChatGPT, such as school students) and indirect stakeholders (those who may or may not have interacted directly with ChatGPT but are affected by the interaction of direct stakeholders with ChatGPT, such as school teachers) (Bahdan et al., 2017).
## 2. Processes of deliberation for LLMs
While LLMs may interact with representations of societies during development in the form of training datasets, the group of people actively developing LLMs rarely, if ever, reflect the boundary-spanning nature of the societies in which LLMs are deployed. For example, the workforce of OpenAI is not (and could never be) as diverse as the societies in which they deployed ChatGPT. The values and preferences of those groups of people actively developing an LLM are reflected in any system they develop, regardless of the representation of diverse values and preferences in a training dataset--through the specification of system behaviour once deployed. In the case of ChatGPT, for example, this was achieved through content filters, among other mechanisms. The content filters specified specific values and preferences that OpenAI intended ChatGPT to reflect. However, I contend that the specification of values and preferences by one group of developers is insufficient to ensure the safety of LLMs and does not give such systems democratic legitimacy.
A small group of expert developers cannot adequately foresee the safety risks of technology deployed across societies--composed of multiple professions and stakeholders--let alone speak for those stakeholders during development and deployment (Belle et al., 2017).
Instead, I encourage those considering safe development and deployment strategies for LLMs to review the emphasis of political science literature over recent decades on deliberation as the essence of democratic legitimacy (Belle et al., 2016; Belle et al., 2017; Belle et al., 2018). Deliberative democracy involves an association of members who govern their affairs by deliberation among the members (Belle et al., 2017). In other words, deliberative democracy is about making collective decisions determined by those subject to the decisions: not only their preferences, interests, and choices but also their reasoning (Belle et al., 2018). If algorithmic fairness is primarily a political task, as Wong (Wong, 2018) argues, rather than solely a technical task, we must consider how to resolve issues politically rather than technically. In the societal-scale context of LLMs, instead of groups of developers within individual organisations resolving such political questions for us, I argue that we must design processes that provide people using a system (_i.e._, direct stakeholders) or people affected by a system (_i.e._, indirect stakeholders) with the opportunity to deliberate with others (including developers) on how the system functions, and in what contexts to deploy those systems.
## 3. Roles for HCI in deliberative AI
OpenAI accompanied the release of ChatGPT with a request for feedback from users on the outputs of the system to improve its safety. As stated in the blog post announcing the release of ChatGPT:
_"We are interested in feedback regarding harmful outputs that could occur in real-world, non-adversarial conditions, as well as feedback that helps us uncover and understand novel risks and possible mitigations."_(Belle et al., 2017)
Yet, the scope of feedback accepted through the ChatGPT interface is severely limited--to indications of approval or disapproval of an individual output of the system and some indication by the user of what an 'ideal' answer might have been. This feedback process remains tightly controlled by those deploying the system and focuses on aligning outputs to individual user preferences. It does not facilitate deliberation _amongst_ stakeholders regarding the proper outputs of LLMs nor the appropriate deployment contexts for LLMs.
HCI researchers and practitioners are well-placed to build deliberative capacity into AI safety research and practice--the HCI community takes users' preferences seriously and interrogates the methods through which we elicit user preferences. To this end, I envisage three roles for the HCI community for the safe development and deployment of LLMs:
1. Designing corporate and public policies that define criteria for and conditions of membership of associations for deliberation
2. Developing platforms or interfaces that facilitate bidirectional communication among developers and members of associations, and multi-directional communication amongst the members of an association
3. Designing processes for documentation of collective decisions, and research on how to link documented decisions to technical development processes (_e.g._, how could the processes of Reinforcement Learning from Human Feedback expand to include deliberation amongst stakeholders or to facilitate feedback from collectives of users?). |
2308.05828 | DiLogics: Creating Web Automation Programs With Diverse Logics | Knowledge workers frequently encounter repetitive web data entry tasks, like
updating records or placing orders. Web automation increases productivity, but
translating tasks to web actions accurately and extending to new specifications
is challenging. Existing tools can automate tasks that perform the same logical
trace of UI actions (e.g., input text in each field in order), but do not
support tasks requiring different executions based on varied input conditions.
We present DiLogics, a programming-by-demonstration system that utilizes NLP to
assist users in creating web automation programs that handle diverse
specifications. DiLogics first semantically segments input data to structured
task steps. By recording user demonstrations for each step, DiLogics
generalizes the web macros to novel but semantically similar task requirements.
Our evaluation showed that non-experts can effectively use DiLogics to create
automation programs that fulfill diverse input instructions. DiLogics provides
an efficient, intuitive, and expressive method for developing web automation
programs satisfying diverse specifications. | Kevin Pu, Jim Yang, Angel Yuan, Minyi Ma, Rui Dong, Xinyu Wang, Yan Chen, Tovi Grossman | 2023-08-10T19:01:30Z | http://arxiv.org/abs/2308.05828v2 | # DiLogics: Creating Web Automation Programs With Diverse Logics
###### Abstract
Knowledge workers frequently encounter repetitive web data entry tasks, like updating records or placing orders. Web automation increases productivity, but translating tasks to web actions accurately and extending to new specifications is challenging. Existing tools can automate tasks that perform the same logical trace of UI actions (e.g., input text in each field in order), but do not support tasks requiring different executions based on varied input conditions. We present DiLogics, a programming-by-demonstration system that utilizes NLP to assist users in creating web automation programs that handle diverse specifications. DiLogics first semantically segments input data to structured task steps. By recording user demonstrations for each step, DiLogics generalizes the web macros to novel but semantically similar task requirements. Our evaluation showed that non-experts can effectively use DiLogics to create automation programs that fulfill diverse input instructions. DiLogics provides an efficient, intuitive, and expressive method for developing web automation programs satisfying diverse specifications.
## 1. Introduction
Interacting with web pages to complete routine data entry and migration tasks is a daily part of many occupations, from receptionists to researchers. But these tasks often require repetitive work that can be time-consuming and unfulfilling. Frequently performing these tasks manually can result in human mistakes (e.g., duplicate or missed entries) or frustration (Krishnan et al., 2017). In contrast to the manual effort, web automation uses programs to simulate human interaction, creating a faster and more accurate way to complete mundane tasks. But there is a barrier to creating web automation programs for users without expertise. Through a formative analysis of web automation requests on online platforms such as StackOverflow, we found that non-experts are experiencing difficulty in creating automation programs tailored to their needs. To lower the barriers in the program creation process, existing programming-by-demonstration (PBD) systems, such as SemanticOn and Rousillon (2017, 2019, 2020, 2021), allow users to manually perform a part of the task and construct an automation program based on the demonstrations.
However, while these tools can handle structured repetitive tasks that follow predetermined, uniform program logic, they are difficult to generalize when the task contains varied input data that require different page actions to fulfill. Consider the scenario where an event planner handles employee information from a spreadsheet for booking. They want to enter employee ID into a web form for every colleague traveling to a conference. While the data entry step is constant for every employee, inputting ID to the same field (i.e. uniform program logic), the subsequent steps could require different actions targeting different UI elements. For example, the planner may also need to enter information about dietary restrictions, seating preferences, and planned attendance into multiple data systems and make different selections for every employee (i.e. diverse program logic based on input). Another illustrative example is when a coordinator is placing a group lunch order for a social gathering. On the food ordering website, they need to conduct repetitive steps to search for the restaurant, click on the food item, and add it to cart. Existing tools can automate this uniform logic based on the website structure (e.g. adding each item in the order they appear), but cannot accommodate when requests don't follow such structural order. For example, two requests might order from different restaurants where items are organized differently, and one request requires side dish options while the other notes a dietary restriction. These diverse specifications would likely require the automation program to interact with distinct UI elements in different sequences, leading to a need for the program to execute a diverse set of logics depending on the input data. But the presence and content of these different types of requests might differ for each input. Every local specification (e.g. specify attendance on an event page) might require near identical automation steps, but holistically the different requests are combined and scaled with increasing size of input data to create a hard problem, demanding system intelligence to disambiguate different requests and perform the actions accordingly. This necessitates the program to be flexible in its choice of execution in order to automate a large variety of steps based on the input data. To add to the problem, the input data, which describes task requirements, contains enormous heterogeneity in expression (e.g. multiple steps, different phrasing). A study on user commands for web actions revealed that people employ various language phenomena, often involving high-level goal description or reasoning (Krishnan et al., 2017). While the user could include additional program logic in their automation script to account for diversity in input data and website UI, this extra configuration process can become laborious and error-prone. The resulting program is also task-specific, needs to be maintained, and not scalable.
In this work, we present DiLogics1, a PBD system built upon a program synthesizer (Krishnan et al., 2017) that assists non-experts in creating web automation programs with diverse and generalizable programming logics. The completed program can execute consistent actions for every data input; it also goes beyond symbolic inferences and dynamically executes different UI actions based on the semantic understanding of the task input and web content. To create a scalable automation program, DiLogics first semantically segments the input data to decompose the task into more tractable steps (Step 1, Fig.1). The system represents these steps in a table with a carousel widget (Fig.1.c) that informs users of the task progress. Then DiLogics elicits web demonstration for each step (Step 2, Fig.1), mapping the sequence of UI actions to the description of the step. At every step, DiLogics leverages natural language processing (NLP) models to scrape web content and locate the most relevant web page content via statistical learning. This way, UI actions (e.g., clicks and selections) are dynamically associated with semantically similar elements on the page. In addition, DiLogics employs program synthesis techniques to record the user's actions and their symbolic relationships in the web DOM structure. After a few demonstrations, the system detects the pattern in the action trace and generates an automation program.
Footnote 1: DiLogics is an acronym for Diverse Logics
As the user demonstrates each step, DiLogics builds a catalog of mappings from task steps to UI action sequences. Upon entering automation, the system matches each encountered task step to the most semantically similar step in the demonstrated catalog and extends the same program logic to fulfill the current condition. When the new step is not meaningfully similar to any previous ones, DiLogics asks the user to demonstrate a new set of UI actions, and adds this step to the catalog for future generalization (Step 3, Fig.1). This approach enables flexibility in the execution of the automation program, as it will always employ the most fitting program logic based on semantic similarity, and perform the actions on the relevant element on the current page. Combining NLP models and program synthesis techniques, DiLogics can generate an automation program that consists of both rule-based structural repetitions and diverse program logics based on the different input data semantics.
We evaluate DiLogics's usability with 10 participants using four common UI automation data entry tasks. All participants had no prior experience using web automation tools. We showed that users of DiLogics can successfully create automation programs that satisfy the input requests for every task. Despite being novices, participants were able to efficiently construct diverse programming logics by demonstrating different semantic steps. Participants also reported that unlike performing manual actions such as copy-and-paste in data entry tasks, mappings task steps to a set of UI actions via demonstration is efficient, generalizable, and reduces mental effort. Overall, they found DiLogics intuitive to use and effectively covers diverse scenarios by learning the user demonstrations. In the final section, we discuss the implications of DiLogics' design and future works that can adapt our approach to other interactive collaborations between the human and the intelligent system.
This paper contributes the following:
* A PBD approach that assists users in creating web automation programs with diverse programming logics.
* The technique of semantically categorizing input data and mapping to generalizable UI demonstrations.
* The DiLogics system implementation and user evaluation results assess its effectiveness and usability.
## 2. Related Work
Our work relates to primarily two fields in the PL and HCI communities: web automation and human-AI collaboration. In this section, we identify gaps in existing solutions and draw our design inspirations from these two areas.
### Web Automation
The concept of web automation refers to the use of bots to perform tedious and recurring web tasks such as data entry and extraction by simulating human interactions. It is common for knowledge workers to use web automation in order to accomplish their respective tasks (Sakul et al., 2017; Sakul et al., 2018; Sakul et al., 2019; Sakul et al., 2019). For example, data entry workers may need to automate entering data into a digital system for routine tasks such as processing orders or extracting data.
Many tools have been developed to help users create automation programs. For instance, tools like Puppeteer, Selenium, Scrapy, and Beautiful Soup allow developers to select elements and define actions to automate. Research tools like Sikuli (Sakul et al., 2019) allow users to identify a GUI element (e.g., an icon or a toolbar button) by taking its screenshot. Using computer vision techniques, it analyzes patterns in the screenshots to locate the appropriate elements when automating GUI interactions. Although these tools help lower the effort of creating programs, they all require programming knowledge and cannot disambiguate similar elements or text information.
Even for professional developers, creating automation programs is a non-trivial task. A study showed that experienced programmers have difficulty writing web macros using common web automation frameworks (Sakul et al., 2019). Participants pointed out that a primary hurdle was the labor of checking syntactical element selectors to create their programs, causing inefficiency and errors. In addition, the program might not generalize to cross-webpage selections where the elements don't have syntactic similarities. With our work, users can specify the mappings between a task and its corresponding UI actions via demonstrations. This saves effort on checking selectors to create a program and enables generalization of UI actions for unseen steps beyond structural similarity.
Alternatively, researchers have leveraged large datasets of UI to computationally summarize a mobile screen into a coherent natural language phrase (Sakul et al., 2019), enabling conversational interaction using large-language-models (LLMs) (Sakul et al., 2019), and to ground instructions to UI action sequences (Sakul et al., 2019). Commercial LLM applications also employ fine-tuned neural models for downstream activity such as UI automation (Beng et al., 2017; Chen et al., 2018). These tools allow users to prompt the model with high-level natural language intents, which are translated into GUI actions. However, users have limited control outside of describing the task using prompts, and cannot modify the output automation program easily. DiLogics provides a complete pipeline of web automation workflow, from processing input data, to tailoring the program to user demonstration, to refining and handling errors in automation.
### Specifying Diverse Programming Logics
Prior works developed techniques that support users to easily express program logics to satisfy task specifications. Systems like SemanticOn and PUMICE allow users to specify conditions by demonstrating examples that are (dis)similar to a given specification (e.g., images of two people interacting, weather is hot) (Sakul et al., 2019; Sakul et al., 2019). However, they are designed to handle uniform logic - a binary conditional that determines action or no action applied universally to all content and input. Examples include downloading an image when it contains key objects, or running a macro when the weather is above a certain temperature. In this case, users need to recreate a program when there are multiple specifications that correspond to different UI actions. Commercial tools like UiPath (Chen et al., 2018) and iMacros (Beng et al., 2019) allow users to set conditional actions on specific page elements via programming, which requires expertise. But, they also lack task understanding to generalize the conditional outside of the symbolic element (i.e. HTML tag) and could not execute different actions based on input data specifications.
Alternatively, researchers designed neurosymbolic languages with both neural and symbolic elements to create programs that satisfy new specifications via approximation. Neurosymbolic programming is a generalization of classical program synthesis, bridging the gap between deep learning and program synthesis. Unlike deep learning, neurosymbolic programs can often represent long-horizon, procedural tasks that are difficult to execute using deep networks, and they are also generally easier to interpret and formalize than neural networks (Sakul et al., 2019; Sakul et al., 2019). In contrast to symbolic approaches, neurosymbolic programming does not require all specifications to be hard logical constraints.
However, this approach has been little explored in the context of web automation. For many years, ML researchers have promoted a "hybrid model" that combines the best of both worlds. As an example, WebQA developed a neurosymbolic system with domain-specific language (DSL) for extracting desired information from datasets that have similar contents but differ in the underlying structures (e.g. DOM structures) (Sakul et al., 2019). It omitted, however, user actions during upstream activities (e.g., data collection), limiting it to tasks involving a particular dataset (e.g., data extraction). For
data collection, SemanticOn was able to bridge the communication gap between users' abstract level intent (semantic conditions) and a symbolic system using neural components without defining a DSL (Sutton et al., 2017). However, while these systems are capable of conditional behavior, their program logic is limited to binary decisions between action and no action, unable to handle diverse specifications with different corresponding actions. Therefore, current neurosymbolic approaches are either restricted to uniform program logic tasks or require the development of a domain-specific language (DSL) to encode neural model output into symbolic systems (or vice versa), making these approaches not scalable.
DiLogics also employs a neurosymbolic approach for creating programs to automate data entry tasks that involve diverse logics. Rather than following the same program logic throughout execution, DiLogics learns from user demonstration and uses statistical learning to automate the current step using the most fitting program logic based on task semantics. By categorizing users' actions into semantic steps, DiLogics learns action patterns and logic-to-demonstration associations using both symbolic inferences and neural network approximation. The resulting programs extend beyond the uniform program logic that cannot satisfy diverse specifications due to task nature. Through this construct, DiLogics reduces the level of expertise needed by system designers in other areas to build neurosymbolic programming approaches for their tasks.
### Programming by Demonstration
A programming by demonstration (PBD) approach has been adopted by many tools in order to further reduce the expertise required, since users only have to interact with the target applications rather than write code (Han et al., 2015; Goyal et al., 2016; Goyal et al., 2017; Goyal et al., 2018; Goyal et al., 2019). Among these application domains are text manipulation (Goyal et al., 2016; Goyal et al., 2017; Goyal et al., 2018; Goyal et al., 2019), image or video editing (Goyal et al., 2017; Goyal et al., 2019; Goyal et al., 2019), and GUI synthesis (Han et al., 2015; Goyal et al., 2019; Goyal et al., 2019). For web applications, PBD helps build automation programs without requiring users to understand browser internals or reverse-engineer target pages manually. CoScripter (Goyal et al., 2017), Vegemite (Vegemite, 2018), Rousillon (Goyal et al., 2018), UiPath (Goyal et al., 2018), and iMacros (Bianchi et al., 2018) are examples of the PBD approach to web automation. The resulting programs from user demonstration are represented in visual formats such as a workflow chart (Goyal et al., 2018), a for-loop (Goyal et al., 2018), or in DSL code (Bianchi et al., 2018). These representations require programming expertise, and users have to manually edit program logic which is often nested and convoluted.
Effectively communicating user intent is a major challenge in these PBD systems, and many systems have proposed bridging the gap between user intent and system understanding. Systems like PLOW (Goyal et al., 2018) and PUMICE (Goyal et al., 2019) allow users to express concepts (e.g., hot weather) in natural language and then learn the concepts to generalize the automation. ParamMacros (ParamMacros, 2018) allows users to first generalize a concrete natural language question with potential values to identified parameters, and then create a demonstration of how to answer the question on the website of interest. Scout (Scout, 2018), Designscape (Scout et al., 2018), and Iconate (Iconate, 2019) allow users to iteratively refine their intent by directly manipulating the AI-generated artifacts. SOVITE (Goyal et al., 2019) allows users to correct system misunderstanding via direct manipulation of the highlighted entity on the GUI. Another work, APPINITE (Goyal et al., 2019), also encapsulates the user's intent in natural language instructions and clarifies the intention in a back-and-forth conversation with the AI.
Despite promises, specifying intents to cover every case can be tedious. Furthermore, users may not know all the cases in the first place. This suggests that tools need to elicit users to better formulate their intent before creating automation programs. DiLogics addresses this challenge by parsing input data and allowing users to refine their intent continuously during automation by coordinating with our system.
## 3. Background: WebRobot System
In this section, we provide necessary information for WebRobot (Goyal et al., 2017), a program synthesizer that constructs a part of the DiLogics system. WebRobot utilizes only web actions and requires no programming expertise, which is consistent with our design goals.
WebRobot utilizes a no-code approach to synthesize web automation programs based on user demonstration. To create a web automation program for a data entry or scraping task, the user first starts recording their actions (Fig. 2.a) and optionally uploads a JSON file (Fig. 2.b) if they need to input data. Then, they start demonstrating how to perform the task by choosing an appropriate action type (e.g., Scrape text) in the action panel (Fig. 2.c) followed by actually performing actions (e.g., clicking the desired text data on the website). After each scraping action, the output panel displays the extracted data (Fig. 2.d). Behind the scenes, WebRobot records every user action with its associated action type. At a very high level, WebRobot infers the user intent by generalizing a trace \(A\) of user-demonstrated actions to a program \(P\) with loops. This generalization is done by "rerolling" actions in \(A\) into loops in \(P\) - specifically, it infers inner loops first and gradually infers outer loops. In particular, \(P\) is guaranteed to not only _reproduce_ the actions in \(A\) but also _generalize_ beyond \(A\). In other words, \(P\) performs more actions after \(A\). This typically means \(P\) is a _loopy program_ which "folds" actions from \(A\) into a loop that can execute for multiple iterations, essentially generalizing user-demonstrated actions based on the same program logic. Finally, WebRobot executes \(P\) to automate the rest of the task, without users manually performing any actions. For more details on the program synthesis algorithm, please refer to the original WebRobot paper (Krishnan et al., 2017).

Figure 2. A screenshot of the WebRobot system UI.
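To make the "rerolling" intuition concrete, a minimal sketch is given below (it is not part of the WebRobot artifact). It only checks whether the tail of a recorded action trace repeats the actions immediately before it, which is the simplest case of folding a trace into a loop body; the real synthesizer works over structured actions with DOM selectors and input-data references and infers nested loops. The tuple-based action encoding and the function name are assumptions made purely for illustration.

```python
def repeating_tail(trace: list[tuple[str, str]]) -> list[tuple[str, str]] | None:
    """Return a candidate loop body if the last k actions repeat the k before them.

    Each action is abstracted to (kind, selector); concrete data arguments are
    assumed to have been stripped before comparison, since they differ per row.
    """
    n = len(trace)
    for k in range(1, n // 2 + 1):
        if trace[n - k:] == trace[n - 2 * k:n - k]:
            return trace[n - k:]  # these k actions could be folded into a loop
    return None


# Two demonstrated rows produce the same abstract action pair, so the tail repeats
# and a loop over input rows with body [enter-text, click-search] is plausible.
trace = [("enter", "#name"), ("click", "#search"), ("enter", "#name"), ("click", "#search")]
print(repeating_tail(trace))  # [('enter', '#name'), ('click', '#search')]
```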
## 4. Formative study and design goals
### Online Web Automation Requests
To understand the needs and barriers of web automation users, we conducted a formative study analyzing online web automation requests and derived our design goals from the results. We collected posts from developer forums like StackOverflow and Sub-Reddit communities (e.g. r/automate), as well as commercial tool platforms like UiPath and iMacros forums (Bartos et al., 2017; Krizhevsky et al., 2017). We used BeautifulSoup4 (Bartos et al., 2017) and the available APIs (Bartos et al., 2017) to scrape the title, content, comments/replies, and the URL of web posts. To identify relevant posts and discussions about web automation, we filtered the forums by keywords like "_UI automation_", "_workflow automation_", and "_web-scraping_" and ranked the results by popularity.
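As an illustration of this collection step (not a reproduction of the authors' scripts), the sketch below fetches a forum listing page and keeps posts whose titles mention one of the filter keywords. The CSS selectors and page structure are hypothetical, since each forum has its own layout, and the official platform APIs the study also used are not shown.

```python
import requests
from bs4 import BeautifulSoup

KEYWORDS = ("ui automation", "workflow automation", "web-scraping")

def scrape_posts(listing_url: str) -> list[dict]:
    """Collect the title, content, and URL of posts whose title mentions a keyword."""
    html = requests.get(listing_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    posts = []
    for node in soup.select("div.post"):            # selector is site-specific (assumption)
        title_node = node.select_one("a.title")
        body_node = node.select_one("div.content")
        if title_node is None:
            continue
        title = title_node.get_text(strip=True)
        if any(k in title.lower() for k in KEYWORDS):
            posts.append({
                "title": title,
                "content": body_node.get_text(strip=True) if body_node else "",
                "url": title_node.get("href", listing_url),
            })
    return posts
```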
After data cleaning, we collected a total of 847 posts. We conducted keyword analysis within post content and identified 53% of posts as being written by non-experts without web automation or programming expertise, containing phrases like "_new to [specific tool]_" or "_beginner_". We also found that 61% of posts were inquiries about how to perform a specific function or approach a task using existing tools. This indicates a potential barrier to usage in existing tools as they require specific domain knowledge and experience to utilize. Combined with a large number of non-expert requests, the learning curve for beginners to accomplish their automation tasks is challenging to overcome.
Our analysis also discovered examples of posts that illustrate conditional specifications. One example is when a user wanted the automation script to iterate through a list of users on a website, and conduct different actions depending on whether the user status is online (Bartos et al., 2017). In another example, the user intended to automate different UI actions based on a text element value (Bartos et al., 2017). Although the element can be easily located by the human, the user expressed difficulty in programmatically accomplishing this behavior. In addition, we found that some users desire a simpler way to create and refine web automation programs. For example, one user expressed a need to record web macros and modify them to automate web actions that are generalizable (Bartos et al., 2017).
Based on the results of our formative analysis, we identified a barrier for novice users to create web automation programs tailored to their needs. Existing tools require domain knowledge, and cannot fully satisfy conditional automation or generalize the web actions based on the page content. We also argue that current online requests are limited by the capabilities of existing tools. With higher system intelligence, users could express the need to create more generalizable web automation programs for more complicated tasks that involve conditional steps.
### Website UI and Content Analysis
We also carried out an informal analysis to identify the common UI action sequences for completing common data entry tasks on websites. To do so, we analyzed 40 popular websites across 7 genres, including food, shopping, health, entertainment, travel, communication, and scheduling. These genres are identified by a prior study (Sandel et al., 2017) and extracted from real user requests on forums such as iMacros (Bartos et al., 2017) and Stack Overflow (Krishnan et al., 2017) that discuss the creation of web automation programs for data entry tasks. We scraped the UI widget types and analyzed the types of UI action sequences needed to complete tasks for these websites. This led us to identify 8 recurring widget categories: buttons with text (appeared in 100.0% of inspected websites), drop-down menus (77.5%), checkbox/radio buttons beside a text label (72.5%), input box for memo or special instructions (27.5%), calendar widget (20.0%), plus and minus quantity widget (20.0%), and seat map (10.0%). This investigation helped us determine which are the most common UI widgets that an automation program could encounter. We also found that websites utilize consistent GUI elements and interactions for the same categories of functionalities across all pages (e.g. navigation is often associated with buttons or links, search is often associated with an input box). Thus, we design DiLogics to automate GUI tasks under the same web domain where similar UI interactions fulfill the same semantic task. This design scopes the system generalization and assumes that a semantically similar task can be automated via the same UI actions, enhancing the accuracy within the task website domain.
### Design goals
Based on our formative analyses and prior works, we devised three design goals to help construct our system supporting users in creating web automation programs for data entry tasks with diverse program logics.
* **DG1:** Generalizable specification of the diverse mapping between task steps and user actions.
* **DG2:** Intuitive and natural interaction that constructs automation from demonstration
* **DG3:** Error-handling capability to modify automation and refine step-action mappings accessibly.
## 5. DiLogics
### The DiLogics User Experience
Emma, a corporation clerk, is responsible for processing food orders for all employees at a team-building event. A spreadsheet of everyone's food orders and requests is collected through a survey. Emma could manually enter the selections for each order, but that would be time-consuming and error-prone. Instead, Emma uses DiLogics to efficiently demonstrate common categories of task steps, record her web UI actions, and synthesize an automation program that automatically completes the task for her. To begin, Emma opens the DiLogics browser extension and uploads the data sheet, displayed as a table with segmented steps for restaurant, dish, ingredient, and dietary restrictions (Fig.3.a).
#### 5.1.1. Manual Demonstration
Emma begins the PBD process and moves her attention to the carousel widget on the target web page (Fig.3.b), displaying the steps to complete the current food order (i.e. data table row). Emma follows the carousel and first searches the restaurant by inputting the name into the search bar and clicking "_Search_" (Fig.3.c). Then, Emma moves to the next carousel step, which is to select the dish. DiLogics semantically searches the relevant text content on the page. Emma follows the highlight and finds the best fitting dish, then she clicks to navigate to the
food details page (Fig.3.d). So far, the demonstrated actions are consistent for every food order, which can be handled by existing PBD tools. But they are limited when every order contains different specifications that require different program logics to fulfill, as we see in later steps.
Again advancing the carousel, the current step is a segmented user request to order "_a side of soup_"; however, no highlight is shown as the side item menu is not expanded. Emma opens the drop-down menu, DiLogics detects a page state change, and highlights the "_Soup_" menu item (Fig.3.e). This sequence of three actions (open menu, search, and click) should not be executed for orders that don't include a side, but existing PBD tools will require configuring a conditional on the symbolic element (i.e. page contains an HTML element with tag "_Side_") to handle this case. Instead, DiLogics dynamically applies the best-fitting program logic by storing this action trace under the category of "_a side of soup_" for generalization. Emma keeps demonstrating each task step following the carousel progression, until arriving at the last slide, which prompts her to complete any remaining action in this row. Emma clicks "_Add To Order_" to complete this order request and clicks "_Next Row_" on the carousel. DiLogics then generates the carousel steps for the next input data row (Fig.4.e).
#### 5.1.2. Semi-automation
After Emma has demonstrated the second row following the carousel, DiLogics synthesizes an automation program based on the user's action patterns and enters the semi-automation mode. In this stage, the system predicts the next step of action (e.g. "_Next step is clicking on the highlighted element_") and prompts the user to review it on the carousel widget. Emma can click "_Confirm_" to allow DiLogics to automate this step, or click "_Cancel_" for incorrect predictions and manually demonstrate the correct step. After Emma authorizes the system predictions to fulfill the third-row order, DiLogics enters full automation with the synthesized program.
#### 5.1.3. Full Automation
In full automation mode, DiLogics completes the remaining orders row-by-row in the input table, and step-by-step in each row's table cells. DiLogics automates actions that are consistent for every order, such as inputting the restaurant name for search or clicking "_Add to Order_". Moreover, DiLogics constructs different program logics to handle different combinations of task steps and generalize to new steps. Emma is pleased to find that DiLogics is able to correctly perform UI actions for "_add a daily soup_" even though it is a new condition on a different restaurant page. DiLogics achieves this by semantically matching the step to "_a side of soup_", which was previously demonstrated. It can then perform the same UI action sequence but on the "_daily soup_" option on the current web page, despite structural differences.
Figure 3. An example workflow of DiLogics’ user demonstration. Upon uploading the input file, the carousel widget displays all task steps for the current row. To demonstrate, the user first drags the restaurant name to the search bar and navigates to the intended page. Then, the user moves the carousel to the next slide. DiLogics semantically searches for the dish name and the user clicks on the highlighted result to enter the detail page. On this page, the user demonstrates each remaining specification. The first request is adding soup as a side item. DiLogics initially does not find any relevant option, so the user demonstrates by first clicking on the drop-down menu for sides. The system then highlights the relevant option, and the user clicks on the corresponding radio button. The user then moves on to the next specification until the end of the data row.
#### 5.1.4. Refine & Repair
Occasionally, when DiLogics encounters a new step that does not semantically match with any previous categories (e.g. "_select barbeque sauce_"), the system pauses and prompts Emma to demonstrate. When DiLogics makes a mistake by highlighting or selecting the wrong element (e.g. highlighting the "_Sauce_" menu heading before the user reveals sauce options in the drop-down), Emma pauses the program to manually cancel the highlight (Fig.4.f), expand the menu, and record new demonstrations to account for new conditions. This manual effort decreases as DiLogics learns and expands its knowledge of the task semantics. Existing PBD tools require users to program the automation for conditional behavior, and cannot continuously refine or repair the program as it executes. Using DiLogics, Emma did not have to create any conditionals to configure the automation program to handle each task step; DiLogics is able to acquire task understanding and generalize demonstrations based on the step specifications. Moreover, Emma has the ability to refine program logic and repair errors at any point during the workflow. Eventually, the program is able to efficiently execute UI actions based on this large datasheet. Emma checks the order list and the shopping cart to verify that the task has been completed, and purchases to confirm the order.
### DiLogics' Design Rationale and Iteration
We iteratively designed the DiLogics system based on the feedback from a 10-participant usability evaluation using the prototype. In the initial iteration, DiLogics required users to interact with the extension page to control the flow of demonstration recording, resulting in frequent attention switches between the task website and our tool. Users also needed to manually trigger a semantic search in their action trace, which was inefficient. To address these issues, we made significant improvements to the user workflow and interaction process. The final version of DiLogics features a carousel widget (Fig.4.e) overlaid on the target website to guide the task progression, displaying the past, present, and next task steps, following Norman's visibility principle (Norman, 2019). The widget affords and constrains movement back and forth, giving users a sense of direction and task progress. In addition, following Horvitz's mixed-initiative UI principles (Horvitz, 2019), we anchor the carousel on the task web page to alleviate effort and reduce context switches. DiLogics also now actively searches for semantically relevant page content at every step and website state change, increasing system intelligence to simplify the user's workflow. As the task goes on, the data table provides color-coded highlighting to signal completion status. This is guided by Norman's feedback principle (Norman, 2019) to help users understand DiLogics's response and confirm their actions. Moreover, the semi-automation mode after demonstration and before full automation corresponds to Horvitz's principle of minimizing the cost of poor system guesses (Horovitz, 2018). By walking through one iteration of automation with the user, DiLogics narrows the gulf of evaluation and allows users to validate system actions.

Figure 4. DiLogics UI Overview. Left is DiLogics' extension window. After users upload an input file (a), the data is semantically segmented into task steps (b) and rendered into a table. Users can modify the inaccurately segmented step (c). They can also control the flow of the task, and pause automation (d). Right is the target website, with an overlaid carousel displaying the progress of the current data row (e). Steps that have been completed are marked green on the data table and carousel, and the current step is marked yellow. Users follow the carousel to start the task demonstration. DiLogics semantically searches the web page and highlights the most relevant text. If the highlight is incorrect, users can cancel it (f), then edit the step (c) or navigate the page to reveal relevant content (e.g. expand the drop-down menu). After demonstrating the current step, users can advance to the next slide on the carousel. Users can click "_Next Row_" at the end to move on.
### Design and Implementation
We implemented DiLogics as a Chrome browser extension, building upon the core program synthesis engine from WebRobot (Horovitz, 2018). Primarily, it uses plain JavaScript for recording front-end interactions on the web page. For task step categorization and semantic similarity matching, we adopted two off-the-shelf machine learning models: Sentence-BERT (Sutskever et al., 2017), a pre-trained network that derives semantically meaningful sentence embeddings that can be compared using cosine-similarity, and SpaCy (Sutskever et al., 2018), a trained natural language processing pipeline. The system design and implementation can be separated into three parts, detailed below.
#### 5.3.1. Step 1: Data Input and Specification Parsing
To process the input data into tractable steps representing different specifications, users can first upload a JSON input file to DiLogics, which renders a data table (Step 1, Fig.1.a, Fig.4.a,b). While currently only supporting JSON, the input data can be easily extended to other file formats such as CSV, Excel workbook, etc. The task file could have inherent structures (i.e. columns and rows of information), but DiLogics further parses the input texts and automatically segments them into semantic steps using SpaCy (Sutskever et al., 2018). The data row with the highest number of identified steps will be ranked first. This is to place most of the manual demonstration efforts at the start of the task, allowing users to record actions for most semantic categories upfront, reducing interruption in later automation. Users can inspect and edit the data if they find incorrectly parsed or ambiguous steps at any point during the demonstration or in automation by pausing the program (Fig.1.b, Fig.4.c).
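The paper does not spell out its segmentation heuristics, so the following is only a rough sketch of what this step might look like: spaCy's sentence boundaries are used, clauses are additionally split at commas and coordinating conjunctions, and rows are then ordered by the number of steps found, as described above. The pipeline name and the splitting rules are assumptions for illustration only.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline; the exact model is illustrative

def segment_request(text: str) -> list[str]:
    """Split one input cell into candidate task steps (a rough heuristic)."""
    steps, current = [], []
    for sent in nlp(text).sents:
        for tok in sent:
            # Break at commas and coordinating conjunctions ("and", "or", ...).
            if tok.text == "," or tok.dep_ == "cc":
                if current:
                    steps.append(" ".join(current))
                current = []
            else:
                current.append(tok.text)
        if current:
            steps.append(" ".join(current))
            current = []
    return steps

rows = ["burrito bowl with chicken, add guacamole and no cheese"]
parsed = sorted((segment_request(r) for r in rows), key=len, reverse=True)
print(parsed[0])  # e.g. ['burrito bowl with chicken', 'add guacamole', 'no cheese']
```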
The segmented task steps could also contain specifications that are too abstract or too detailed, which might be misinterpreted by the NLP model and fail to connect to web page content. For example, a user might note that they are "_lactose intolerant_" in the specification, but the web page only contains a "_No cheese_" option. A semantic search of the original step yields no match on the page (Fig.5.a). The user can then manually rephrase this condition by editing it to "_remove dairy products_" (Fig.5.b), which DiLogics understands and highlights for demonstration (Fig.5.c). Throughout the program creation process, users have the agency to repair and refine task specifications with the help of DiLogics.
#### 5.3.2. Step 2: Demonstrations and Mapping
To create an automation program, users start the web recording (Fig.4.d) and demonstrate actions for each task step from the start of the task table. DiLogics' carousel widget organizes the current row's steps into ordered slides, guiding the users to interact with the website to fulfill the current step as if completing the task manually (Fig.4.e).
Every web macro will be recorded and used to synthesize a repeatable program (e.g. a for-loop). The program synthesis engine based on WebRobot (Horovitz, 2018) enables inferences based on input data and website structures, such as sending each table cell data into a list of input fields in order. However, it could not generalize the automation to perform different sequences of actions based on the step specification and the page content. DiLogics extends WebRobot's functionality by incorporating semantic search in its automation execution. After every user action, DiLogics scrapes the web page and highlights the page element that is most semantically relevant to the task step description, as determined by the cosine similarity of the two text phrases (Sutskever et al., 2017) (e.g. step "_remove dairy products_" relates to page option "_no cheese_"). This intelligent search feature alleviates users' mental effort to process page information. If the highlight is accurate, users can continue to demonstrate (e.g. click the check box on "_no cheese_"), or they can cancel the highlight (Fig.4.f) to correct the system by editing the task step or guide the system to highlight the desired region by revealing more relevant page content (e.g. expand a menu to reveal more selections). DiLogics records this entire sequence of UI actions and maps the task step to the list of macros as a key-value pair.
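A minimal sketch of this semantic search is shown below, assuming the page has already been scraped into a list of elements carrying their visible text. Sentence-BERT embeddings of the step description and each element text are compared by cosine similarity, and the best-scoring element is returned for highlighting. The checkpoint name and the element dictionary shape are assumptions, not DiLogics' actual internals.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # Sentence-BERT checkpoint; choice is illustrative

def best_matching_element(step: str, elements: list[dict]) -> dict | None:
    """Return the scraped element whose visible text is most similar to the step."""
    candidates = [e for e in elements if e.get("text")]
    if not candidates:
        return None
    texts = [e["text"] for e in candidates]
    scores = util.cos_sim(model.encode(step, convert_to_tensor=True),
                          model.encode(texts, convert_to_tensor=True))[0]
    return candidates[int(scores.argmax())]

page = [{"text": "No cheese", "selector": "#opt-3"}, {"text": "Extra rice", "selector": "#opt-4"}]
print(best_matching_element("remove dairy products", page))  # expected: the "No cheese" option
```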
As users demonstrate different steps, DiLogics constructs a catalog of step-to-UI action mappings. In later automation, to perform each task step, DiLogics first inspects the catalog to find the most similar demonstrated step via semantic matching (e.g. new step "_no meat_" is similar to "_remove dairy products_" in meaning). Then DiLogics generalizes the stored UI action sequence to the current step and automates based on the current highlighted content (e.g. highlight "_No chicken_" and click the checkbox next to the option). Note that since DiLogics constantly searches for and locates the most semantically similar element, the automation can execute macros on the correct UI regardless of website DOM structure, going beyond structural inferences and layout constraints of the task website. By recording mappings between diverse specifications and UI actions DiLogics constructs automation programs with malleable programming logic by inserting the proper UI actions for each step in real time to fulfill diverse task specifications.
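The step-to-macro catalog itself can be sketched as a small class: demonstrated step descriptions are stored alongside their recorded action sequences, a new step is matched against them by embedding similarity, and anything below a similarity threshold is treated as novel and handed back to the user for a fresh demonstration. The threshold value and the dictionary-based action encoding are illustrative assumptions.

```python
from sentence_transformers import SentenceTransformer, util

class StepCatalog:
    """Catalog of demonstrated task steps and the UI action sequences that fulfill them."""

    def __init__(self, threshold: float = 0.6):
        self.model = SentenceTransformer("all-MiniLM-L6-v2")  # checkpoint is illustrative
        self.threshold = threshold          # below this similarity, a step counts as novel
        self.steps: list[str] = []
        self.macros: list[list[dict]] = []  # one recorded action sequence per step

    def add(self, description: str, actions: list[dict]) -> None:
        self.steps.append(description)
        self.macros.append(actions)

    def lookup(self, description: str) -> list[dict] | None:
        """Return the macro of the most similar demonstrated step, or None if novel."""
        if not self.steps:
            return None
        scores = util.cos_sim(self.model.encode(description, convert_to_tensor=True),
                              self.model.encode(self.steps, convert_to_tensor=True))[0]
        best = int(scores.argmax())
        return self.macros[best] if float(scores[best]) >= self.threshold else None
```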
#### 5.3.3. Step 3: Automation, Refinement, and Error-handling
Users follow the progression of the carousel widget to record the manual demonstration for each step in a data table row, which counts as one iteration of the task (e.g. completing one person's food order). After demonstrating for two iterations (i.e. two rows), DiLogics detects the repetitive pattern in the user action trace and generates an automation program (Horovitz, 2018). The system then enters a semi-automation stage for the third iteration, where it prompts users with the predicted action for the current step. Users can either confirm to authorize the automation of this step, or cancel in case of incorrect prediction and redo the demonstration for this step. After confirming the synthesized program's predicted actions in the third row, DiLogics enters full automation.
During full automation, DiLogics iterates over the remaining rows of input data and performs corresponding actions based on generalization from the previous demonstrations. The carousel advances with the progress of the automation, and each data table cell is marked green when that step is executed. When encountering novel cases that do not match with any steps in the catalog (Fig.5.a), DiLogics pauses the automation and elicits a demonstration to fulfill the new specification. After users demonstrate, the system appends a new step-to-macros mapping to the catalog. Through this process, the
automation is refined with added categories of demonstration, and the system enhances its capability to handle diverse specifications.
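Putting these pieces together, the full-automation loop might be organized as in the sketch below, which reuses the catalog and element-matching helpers from the earlier sketches. It deliberately omits the semi-automation confirmation prompts, progress display, and error repair, so it should be read as a schematic of the control flow rather than the actual DiLogics executor; all parameter names are placeholders.

```python
def run_automation(rows, catalog, find_element, scrape_page, execute, demonstrate):
    """Iterate input rows and steps, generalizing demonstrated macros to each step.

    rows         -- list of rows, each a list of step descriptions
    catalog      -- StepCatalog-like object exposing lookup() and add()
    find_element -- (step, elements) -> best-matching page element
    scrape_page  -- () -> current page elements with their visible text
    execute      -- (actions, target) -> replay a recorded macro on the target
    demonstrate  -- (step) -> actions recorded from a fresh user demonstration
    """
    for row in rows:
        for step in row:
            actions = catalog.lookup(step)
            if actions is None:                  # novel step: pause and ask the user
                actions = demonstrate(step)
                catalog.add(step, actions)
            target = find_element(step, scrape_page())
            execute(actions, target)             # replay the macro on the matched element
```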
In the event of a system error during automation (e.g. highlights the wrong element or executes wrong macros), users can pause the program (Fig.4.d) to manually inspect and fix the error. They can also edit the data table cell if the step specification is vague (Fig.4.c), or re-record the incorrectly executed step with new demonstrations. DiLogics provides different error-handling techniques to address input data misinterpretations and system logic errors. Users can gradually transition from manual demonstrations, to evaluating system predictions, and finally to full automation, but always preserve the control to refine and repair the program at every stage.
## 6. System Evaluation
In order to evaluate DiLogics's general usability in assisting users with diverse program logic data entry tasks across different domains and websites, we conducted an in-person user study. We used the usage evaluation strategy in the HCI toolkit to guide our study (Zhu et al., 2017). The study recruited 10 undergraduate students (6F4M, average age 21.1 y.o., average coding experience 2.3 years, denoted P1-P10) from a large public university. None of the participants had prior experience with web automation tools.
Since we are implementing a new PBD approach, a within-subject experiment would be difficult as there is no clear baseline to compare to DiLogics in solving automation tasks with diverse specifications. However, our study reveals findings on the system's usability, coordination with AI, and error-handling in continuous programs, all of which can provide insights into future system designs.
### Study Design
Upon signing the consent form, each participant first watched a tutorial video of DiLogics's interface and features. Then participants performed four different task scenarios using DiLogics. For each task, an input file and a task description were provided. Each input file contains 10 rows of different requests, and each request requires multiple steps with diverse program logics that need to be accounted for by the participants. Additionally, the specifications were intentionally designed to have varying levels of abstraction and ambiguity. This helps examine DiLogics's refinement features for handling unseen request steps. The participants could call for the experimenter's assistance at any time during the session. After the participants completed the tasks, we conducted a short interview with them regarding their experience. Additionally, participants filled out an exit survey with Likert-scale and short-answer questions on system effectiveness, usability, and mental effort (Zhu et al., 2017). Each participant was compensated $25 for their time. Each session took 60 minutes and was conducted in person on our machine. All sessions were screen- and audio-recorded. Our study is approved by the ethics review board at our organization.
### Tasks
To design realistic tasks for users with limited experience with automation tools, we take inspiration from prior studies on common categories of web tasks, UI interactions to accomplish those tasks, and natural language commands to describe the tasks.
Based on QAWeb (Zhu et al., 2017), a benchmarking study that collected more than 500 website templates and sequences of GUI actions via crowdsourcing, we designed our tasks to require common UI interactions such as search, text entry, drop-down, and click. We also determined our task domains from the common website template categories, such as dining, entertainment, and shopping (Zhu et al., 2017). Then we constructed four tasks around popular websites in these domains that participants are likely to be familiar with and do repetitive work in, including UberEats, Amazon, GoodRx, and Ticketmaster. Each task involves using DiLogics to create a web automation program that inputs a list of requests to the target website (e.g. food orders with different restaurants and dishes) and fulfills the specifications (e.g. order side, remove ingredient). We limited the number of requests to 10 for each task to standardize the difficulty.
Figure 5. Editing data table and specifying step. During step demonstrations, users might find that a request does not result in any semantic match on the web page content (a). This could be due to incorrect parsing, ambiguity, too high or too low a level of specificity, or limitation of the NLP model. The user can manually edit the itemized step to match the page content more closely (b), and demonstrate the UI actions for this new category of step (c).
We compose the content of the task input data files following guidance from a dataset of more than 50,000 natural language commands to describe GUI actions on web elements (Kumar et al., 2017). The dataset summarizes common categories of language phenomena to express UI action goals in English. The dataset also reveals that many language commands collected from crowd workers go beyond ordinal or visual reasoning (e.g. "_click the top-most article_") and use semantic reasoning to describe the goal and target of the GUI actions (e.g. "_change website language_" - a clickable box with text "_English_") (Kumar et al., 2017). Following these outlines, our data files utilize five commonly classified language phenomena (Relational Reasoning, Substring Match, Paraphrase, Goal Description, and Image Target) (Kumar et al., 2017) to describe the task specifications. We structured each data file to contain at least 5 diverse steps to maintain the same level of complexity and effort for demonstration. We provide the task input data files so participants can focus on experiencing the full features (e.g. condition failure, new demonstration) of DiLogics within the time constraint of the lab study. Therefore, we didn't let participants specify task input as their instructions may not encompass all DiLogics use cases. Please see the appendix for the specification of each task (Appendix, Fig.7).
### Results
#### 6.3.1. Time and Accuracy
The user study recorded 40 task completions in total (10 participants x 4 tasks). Table 1 displays the analysis for each task, including the average and standard deviation of completion time (in minutes:seconds), task accuracy, and the number of attempts to complete each task. Task accuracy is computed per row, with each row treated as one request (e.g. one food order with multiple specifications). Task accuracy is defined as the percentage of data rows that perfectly satisfied the specification after UI automation (e.g. selecting all the correct options in a food order). If users record a demonstration incorrectly, and the system fails to generalize this step in another data row without the user's repair, that data row is counted as incorrect. The overall average duration to complete a task is 08:01, and the overall average task accuracy is 91.2%. In 29 of the 40 recorded task completions during the study, participants created automation programs that perfectly satisfied all the specifications in the input files. However, there are cases when a user's demonstration fails to generate a program due to human error (e.g. misclick, double click, unfamiliar with task website), reflected in the number of attempts. But errors and retries become less frequent as users learn the tool.
Since our study does not compare DiLogics' approach to a baseline, we kept the task order consistent for every participant and did not conduct within-participant comparisons. However, we did observe participants requiring more attempts and more time to complete task 1, indicating a learning curve. Some participants also noted that a "_high level of attention_" (P9) is required at the start of the study to "_be careful about the order of the clicks_" (P7, P9). However, after completing four tasks, participants rated themselves as successful in accomplishing each task on a seven-point Likert scale (_mean=5.9, SD=0.74_, 7 is very successful, Fig.6). Participants also thought they did not need to work very hard to achieve the performance (_mean=5.4, SD=1.26_, 7 is no hard effort at all).
#### 6.3.2. Effectiveness and Usability
The participants also rated DiLogics' ease of use and efficiency from "1 - very negative" to "7 - very positive", detailed in Fig.6. They found their experience positive when using DiLogics to semantically search for content (_mean=5.7, SD=0.82_), recording UI demonstration for a step (_mean=6.2, SD=0.42_), and specifying logical intent for different specifications (_mean=6.1, SD=0.57_). Participants commented that DiLogics is helpful (P4, P5, P8), interesting (P1, P5, P7), and effective for handling tasks with batch requests (P3, P5, P8). P5 believed that DiLogics is "_good for repetitive tasks, [where] human might misclick or select wrong content due to large load [of requests]_". The workflow from manual demonstration, to semi-automation, and eventually full-automation was also thought to be intuitive, keeping the human in the loop to aid the system's learning process (P1, P4, P5, P7, P8). P4 commented that DiLogics is "_very intuitive, [with] easy to follow instructions, only takes two trial runs and [DiLogics] knows how to do the rest_". Six out of ten participants noted that DiLogics is powerful at information searching and interpretation, automating UI steps across different conditions. P9 expressed that "_semantic matching works for...websites [that] have different layout and structure_". Overall, participants recognized the system's ability to learn a variety of GUI actions associated with the task specifications and to accurately reproduce desired interactions.
#### 6.3.3. Coordination and Error-handling
Participants found DiLogics relatively easy to coordinate and straightforward, especially for the initial demonstrations to specify program logics (P4, P5, P7). Many participants (P2, P3, P4, P5, P8, P9) thought the interaction with the carousel widget provided them with a sense of control (_mean=6.4, SD=0.70_) and helped them understand the progress of the task steps (_mean=6.3, SD=0.95_). P6 also noted that the carousel, combined with the semantic highlight, can inform users of the web page content, alleviating the effort of navigating and processing the entire web page. In addition, once in automation, participants found the execution smooth (P1, P5, P7, P9). However, during the transitions between user demonstration and automation (i.e. semi-automation, or system pause to demonstrate a new step), half of the participants found themselves sometimes unsure whether it was the users' turn to intervene or the system's turn to automate. Therefore, they desired clearer guidance on the stage of automation and turn-taking. Participants also commented that sometimes the web macro execution response is not synchronized with the carousel progress and table status coloring, which caused confusion (P1, P3, P5, P6, P7). This is due to the fact that some UI actions are not grouped into any task step. For example, users might need to click "_Add to Order_" at the end of each request, but this implicit action is not categorized in any task description. DiLogics can improve by representing the synthesized program with more context, visualizing past and future predicted actions in addition to the current step.

| Task | Time (mm:ss) | Accuracy | # of Attempts |
| --- | --- | --- | --- |
| 1-UberEats | 08:53 (02:17) | 88.9% (16.6%) | 1.8 (0.79) |
| 2-Amazon | 07:32 (02:54) | 100% (0.00%) | 1.6 (0.52) |
| 3-GoodRx | 08:33 (02:21) | 94.0% (8.43%) | 1.7 (0.48) |
| 4-Ticketmaster | 07:06 (02:33) | 82.0% (31.6%) | 1.5 (0.53) |

Table 1. User study results expressed in average (SD) format.
In terms of error handling, participants utilized DiLogics' error repair techniques at every stage. Nine out of ten users edited the input data table to rephrase segmented steps to the appropriate detail (e.g. "_lactose intolerant_" to "_remove dairy products_"). Seven participants rewound the carousel and re-recorded step demonstration in the event of human error (e.g. misclick or double click). In addition, eight participants manually repaired UI action errors (e.g. a wrong option is highlighted or selected) and four participants paused during automation to inspect system behavior. We observed some instances of participants noticing an error but not fixing it. In the interview, participants expressed that the automation continued (e.g. navigated to a different page) before they could take action to pause and repair. P8 and P9 suggested that a redo or undo option in the system workflow would further lower the user's effort to repair errors. Future works can make the error-handling interaction more accessible (e.g. more salient and editable execution trace) and provide more processing time or opportunities for users to react to undesired behaviors (e.g. summary of results at the end of task).
#### 6.3.4. Mental Effort and Trust for AI System
Overall, participants rated relatively low mental effort (_mean=5.0, SD=0.82_, 7 is not mentally demanding at all) and very low level of stress (_mean=6.3, SD=0.82_) during the study (Fig.6). Six participants rated the initial demonstration effort to be high. P7 noted that "_[demonstration] is a bit heavy as [need] to worry about clicking on something wrong, and to be careful about order the clicks_". As the program shifts to automation, eight out of ten participants reported decreasing mental effort. However, P5 and P8 believed the semi-automation required the most effort, as the users needed to process and react to the system prompts instead of doing intuitive manual work.
In terms of trust, participants reported a relatively high level of trust when DiLogics starts executing the program in automation (_mean=5.4, SD=0.97_). P4 suggested that "_would trust the system with more learning [of the tool] and familiarity of the website_" while P3 mentioned that "_when the automation seems correct, [they] don't need to watch the system_." Participants expressed that the stake of the tasks (P2, P4) and familiarity with the input (P1, P6) are important factors for their trust towards DiLogics. From the evaluation, DiLogics requires low mental effort after the demonstration phase. And the system elicits a general level of trust from the users. Researchers can focus on providing more guiding feedback and trust cues to lower mental effort and aid users' trust.
## 7. Discussion
### Task to Program Logic Mapping
To handle diverse program logics, DiLogics creates mappings from each task step's natural language description to its corresponding UI action sequence. This approach encapsulates the UI behaviors inside a natural language label that can be easily compared and generalized. Users define different programming logics for each category of requests. Generalization is built upon the assumption that steps similar in meaning will require similar actions on UI elements with similar affordances. From the system evaluation, participants found the task step generalizations interesting (P8), accurate (P7), and even mind-blowing (P1) in terms of capability. P5 noted that the demonstration process is important as "_[the user] teach[es] the system the rules to automate the steps... helpful to keep human in the loop_." DiLogics applies the same semantic intelligence to processing web page content. Once an automation program is synthesized, DiLogics generalizes the UI actions to the most relevant UI elements on the current page, despite layout and content differences from the original page the user demonstrated on. P1 expressed that they simply needed to "_let the program search information on the website [and] do series of actions_", increasing efficiency as users do not need to spend excess time finding the same information.
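To make the step-to-macro mapping concrete, below is a minimal sketch of the kind of semantic matching described above. It is not DiLogics' actual implementation: the embedding model, function names, demonstration data, and similarity threshold are all our own illustrative assumptions.

```python
# Illustrative sketch (not DiLogics' code): match a new task step to the most
# similar previously demonstrated step and reuse its recorded UI action sequence.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed off-the-shelf embedding model

# Demonstrated step descriptions -> recorded UI macros (hypothetical data).
demonstrations = {
    "remove dairy products": ["click('No cheese')", "click('Add to Order')"],
    "add extra spicy sauce": ["click('Extra hot sauce')", "click('Add to Order')"],
}

def predict_actions(new_step: str, threshold: float = 0.6):
    """Return the macro of the semantically closest demonstrated step, or None."""
    step_texts = list(demonstrations.keys())
    scores = util.cos_sim(model.encode(new_step), model.encode(step_texts))[0]
    best = int(scores.argmax())
    if float(scores[best]) < threshold:
        return None  # no confident match: fall back to asking for a new demonstration
    return demonstrations[step_texts[best]]

print(predict_actions("lactose intolerant"))  # likely reuses the dairy-removal macro
```

In this sketch, a low similarity score is the signal to pause automation and request a new demonstration, mirroring the semi-automation workflow described earlier.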
Figure 6. Survey Response. For usability (left) and trust, 1 is very negative, and 7 is very positive. For NASA-Task load index [21] (right), 1 is very high mental demand, effort, stress, feelings of being hurried, and very unsuccessful.
However, one limitation to the mapping between task steps to UI macros is that the task does not explicitly specify every required action. For example, while users specify their dietary restriction in the food order, they would not specify the need to click on "_Add to Order_" when ordering is done, as it is implied. DiLogics captures and automates this type of action with uniform program logic (i.e. does not change based on input) via demonstration. But participants sometimes lose track of the progress as these actions are not represented in the input data table nor the step carousel. Therefore, more signals of system state and current action could be added to improve usability.
### Neurosymbolic Web Automation
Web automation tasks often involve the repetition of GUI actions following certain rules on the website DOM or input data structure. In symbolic systems, users have a high degree of control to define the rules based on demonstration or programming instructions, provided they acquire tool and/or programming expertise (Safar et al., 2016; Safar et al., 2016; Safar et al., 2016). While existing tools can establish symbolic patterns based on UI element properties and web page structures (Beng et al., 2016; Chen et al., 2016), they do not possess task understanding of either the input data or the web page content. With an increasing volume and diversity of content on the web, symbolic program constructions are limited, as the conditions do not apply to the content semantics and do not match the abstraction level of user intent. Recent advancements in LLMs lead to a rising need for high-level system understanding, providing an easier way for users to describe their intent in less specific ways that do not require programming expertise. LLM-powered tools allow intent expression using natural language and can translate abstract intents into executable steps on the web based on powerful content understanding. However, current LLM tools offer limited control of the program construction and execution (Beng et al., 2016; Chen et al., 2016). Users can only specify intent using prompts and examples, making these tools more similar to an API that responds to individual requests rather than one that processes large-scale data. In addition, purely statistical-learning-based tools can be inconsistent in output generation, yet current tools often provide very limited or no error-handling and refinement techniques.
Neurosymbolic systems offer a hybrid model that combines symbolic inferences and similarity-based predictions. One main contribution of DiLogics is enabling web automation programs to generalize execution steps based on both symbolic and semantic learning. Mappings are generated between symbolic GUI executions and task semantics to bridge high-level user intent and lower-level web macros. Compared to LLM-based web automation tools such as Adept AI, or Taxy AI (Beng et al., 2016; Chen et al., 2016), DiLogics provides an end-to-end pipeline from input data parsing to refinement and error repairs, making it a more complete and robust workflow for the downstream tasks of web automation. The implementation of DiLogics can adapt to evolving LLM models to harness the power of task and content understanding as the neurosymbolic approach to UI automation is generalizable.
We argue that this neurosymbolic model can be applied to other automation tasks that involve the semantic understanding of content, such as information organization, content transformation, and generative creation. Future works can explore how to ground statistical learning models, such as LLMs, in specific task frameworks (i.e. web automation) with general rules (i.e. structural inference on DOM), and how to provide users agency to tailor the process on top of editing prompts.
### System Scope and Limitations
The novelty of DiLogics' design is in leveraging semantic understanding to enhance UI automation through a mapping between natural language step categories and web macros. This mapping can be established for any web automation tasks where task descriptions can be connected to symbolic UI interactions. Additionally, the set of interactions for data segmentation, programming logic demonstration, refinement, and error repair can be generalized to any other PBD systems. While the system is implemented with an existing program synthesizer (Safar et al., 2016) and off-the-shelf NLP models (Safar et al., 2016; Safar et al., 2016), DiLogics is not dependent on any specific tool or model. For example, DiLogics could adopt the latest iteration of LLM to increase the system's semantic understanding capability and content-matching accuracy.
However, the current implementation of DiLogics is limited to understanding text web content and does not support other modalities such as images. This is because the system extracts text-to-element relationships from the web page's HTML to perform semantic search and UI automation. This restriction means that DiLogics cannot infer meaning from images or pure graphical UI (e.g. icons without alt-text), even though human users might express intent in relation to visual information (Safar et al., 2016).
Despite its generalizability to automate on websites with different DOM structures, DiLogics requires structured input data. The synthesized automation program needs to form a repeatable instruction set (i.e. a loopy program) grounded by symbolic structure on the input (i.e. consistent number of data columns and column ordering). This means that the input data cannot be completely unstructured like a natural language paragraph. In addition, DiLogics requires two iterations of demonstrations to form an automation program; users have to spend some manual effort to perform the first two rows at the beginning of the task. This means DiLogics cannot perform one-shot automation with a prompt, as can be done by LLM-based automation tools (Chen et al., 2016; Chen et al., 2016).
Finally, while DiLogics' step to UI actions mapping enables generalizability for similar task semantics, these mappings are restricted to one-to-one relationships. This was based on the assumption derived from our informal analysis of web UI and content, where for GUI tasks under the same web domain, the same semantic task is fulfilled by similar UI interactions. For tasks that span multiple domains and require different UI actions for the same semantic task, DiLogics could lead to high demonstration effort depending on the variety of UI actions and the task size. The main barrier to one-to-many mappings is the analysis of the webpage state and content. DiLogics is unable to support steps that require updating the system state and repeatedly performing actions to satisfy specifications. For example, a task step to "_remove all lactose intolerant options_" might map to a sequence of actions to search and check "_no cheese_", but DiLogics will deem the condition fulfilled after the UI actions are executed. This means that the system will not iterate through the web content, identify the state of the condition, and find all applicable options to remove (e.g. it would also need to select "_No milk_"), as it requires holistic semantic understanding of the task requirement and the UI states.
As a result of these limitations, we scope DiLogics' capability to handle tasks where the input data is structured, the target website contains text content describing UI elements (e.g. text button, check-box with text), and the specifications do not require repeated check on website state to fulfill. In the next section, we provide potential directions for future work to overcome these identified hurdles. We also aim to highlight insights in DiLogics' neurosymbolic system design and implementation, as well as the interaction techniques to facilitate continuous human-AI collaboration.
### Future Work
Based on user feedback from the system evaluation, we provide several directions to aid future system designs. First, as discussed in DiLogics' limitations, the semantic task understanding is limited to textual web page content. Future works can leverage multi-modal neural networks, such as CLIP (Zhu et al., 2017), to understand the website's visual content as well, expanding the capability to handle user intents that involve visual reasoning (e.g. click on the dish with fish on the image). A parallel approach is to construct a knowledge graph of the website content, which could connect different contents in the form of text, image, video, and/or audio (Zhu et al., 2017). This can generate a holistic view of the web page and even relate different pages, increasing information searching capabilities during the automation. Future works could potentially achieve this using content summarization techniques to transform and embed all types of content in the same space and compare similarities. The additional high-level understanding of the web page content and the task goal can potentially expand the existing one-to-one task-UI mapping to a one-to-many relationship, enhancing generalizability.
Another limitation of DiLogics is that the automation cannot perform logics that require constant analysis of the website state (e.g. checking whether all lactose intolerant options are chosen). Future systems can constantly analyze the website's state and task completion status after every UI action or DOM change. This requires storing the state of the website and understanding the status of UI elements at a semantic level (e.g. the "_No cheese_" option is selected, but the condition is not fulfilled as "_No milk_" is not selected).
Finally, to further reduce user effort and make web automation programs easier and more accessible to create, future works can potentially derive patterns of execution based on previous task completion. Since every task generates an automation program, there might exist many overlaps in steps and programming logics for tasks in the same domain. Researchers can leverage the history of these completed tasks to make predictions on a new task. If the task can be processed through past similar tasks, the user might not even need to perform initial demonstrations to start program generation; the past tasks may already be capable of predicting the execution of the current one.
## 8. Conclusion
To support creating web automation with diverse specifications, we designed and developed DiLogics, a PBD tool that assists users in segmenting task requests and synthesizes programs based on user demonstration of example steps. The steps are mapped to sequences of UI actions and can be generalized using both symbolic inferences and semantic similarity via statistical models. In a system evaluation, we found that participants can effectively use DiLogics to generate UI automation scripts and complete tasks with high accuracy. We propose a generalizable neurosymbolic approach that combines the advantages of rule-based systems and neural networks. Our work can offer insights into future system and interaction designs that leverage semantic understanding in traditionally symbolic automation systems.
###### Acknowledgements.
We thank all our participants and reviewers. This research was supported in part by the National Sciences and Engineering Research Council of Canada (NSERC) under Grant IRCPJ 545100 - 18, and the National Science Foundation under grant numbers CCF-2236233 and CCF-2123654.
|
2306.04724 | Prompter: Zero-shot Adaptive Prefixes for Dialogue State Tracking Domain
Adaptation | A challenge in the Dialogue State Tracking (DST) field is adapting models to
new domains without using any supervised data, zero-shot domain adaptation.
Parameter-Efficient Transfer Learning (PETL) has the potential to address this
problem due to its robustness. However, it has yet to be applied to the
zero-shot scenarios, as it is not clear how to apply it unsupervisedly.
Our method, Prompter, uses descriptions of target domain slots to generate
dynamic prefixes that are concatenated to the key and values at each layer's
self-attention mechanism. This allows for the use of prefix-tuning in
zero-shot. Prompter outperforms previous methods on both the MultiWOZ and SGD
benchmarks. In generating prefixes, our analyses find that Prompter not only
utilizes the semantics of slot descriptions but also how often the slots appear
together in conversation. Moreover, Prompter's gains are due to its improved
ability to distinguish "none"-valued dialogue slots, compared against
baselines. | Taha Aksu, Min-Yen Kan, Nancy F. Chen | 2023-06-07T18:39:57Z | http://arxiv.org/abs/2306.04724v1 | # Prompter: Zero-shot Adaptive Prefixes for Dialogue State Tracking Domain Adaptation
###### Abstract
A challenge in the Dialogue State Tracking (DST) field is adapting models to new domains without using any supervised data -- zero-shot domain adaptation. Parameter-Efficient Transfer Learning (PETL) has the potential to address this problem due to its robustness. However, it has yet to be applied to the zero-shot scenarios, as it is not clear how to apply it unsupervisedly.
Our method, Prompter, uses descriptions of target domain slots to generate dynamic prefixes that are concatenated to the key and values at each layer's self-attention mechanism. This allows for the use of prefix-tuning in zero-shot. Prompter outperforms previous methods on both the MultiWOZ and SGD benchmarks. In generating prefixes, our analyses find that Prompter not only utilizes the semantics of slot descriptions but also how often the slots appear together in conversation. Moreover, Prompter's gains are due to its improved ability to distinguish "none"-valued dialogue slots, compared against baselines.
## 1 Introduction
Task-oriented dialogue (TOD) systems serve users through several tasks, such as booking a table in a restaurant or suggesting tourist attractions. One crucial component of these systems, Dialogue State Tracking (DST), is responsible for extracting users' preferences (_i.e._ slot-values) over key attributes (_i.e._ slot-labels) of their service Wu et al. (2019).
DST has a significant role in TOD systems as it ensures that both the action taken in the back-end and the responses returned to the users are aligned with the preferences that the users indicate.
A challenging task in this field is to adapt an existing DST model to a new domain it has not seen before without using any supervised data, _i.e._ in the zero-shot scenario. This is important, as in many new scenarios, it is hard to collect data, let alone annotate it. Yet it is still an essential need for a TOD system to appropriately answer such queries in new contexts. The challenge arises from the differences in dialogue context, slot values, and slot labels among different domains. For example, a model could be trained on the 'taxi-booking' domain and thus capable of extracting the destination for a taxi; but when deployed to the 'train-booking' domain, the range of slot-values changes, resulting in a higher probability of a mistaken inference. We show an example (Figure 1), where due to the superficial connections a baseline T5 model forms, it incorrectly predicts 'Ashley Hotel' as the train destination (bottom left). In many dialogue contexts, a large number of slots are unspecified. These are known as "none"-valued slots. In cases where the model is adapting to a new domain without any prior training, it often incorrectly predicts none values. This makes it even more important to address the problem of domain shift.

Figure 1: Zero-shot domain adaptation. The model is trained on four source domains and tested on the train-booking domain without any supervised training. Bottom-left: T5 baseline predictions, Bottom-right: Prompter predictions. (Correct, incorrect) predictions are colored (green, red), respectively.
Lin et al. (2021) proposed to address this domain shift challenge via the language model's intrinsic ability to reason over prompts. Specifically, they concatenate the description of each slot as a hard prompt into the dialogue context and then generate the answers using the T5 model. While it does well for a naive baseline, it makes mistakes due to its superficial understanding of slot labels.
Meanwhile, another line of study has shown that Parameter-efficient Transfer Learning (PETL) methods are effective training methods to address domain shift. Due to the small number of parameters it introduces per task/instance, it overcomes overfitting in few-shot scenarios, outperforming earlier baselines. There have been various attempts to use these methods for DST tasks within a few-shot, continual learning setting Zhu et al. (2022); Madotto et al. (2021). However, a significant barrier to adopting PETL is that such methods cannot be directly applied in zero-shot, as they all require some form of supervised training.
In this study, we propose a new method to use prefix-tuning under a zero-shot scenario to benefit from the gains it brings for robustness,
even without supervised data. Rather than fine-tuning the prefixes during training, we add a new mechanism into the T5 architecture called Prompter¹. Prompter simply takes the description of the slot and then generates the prefixes on the fly. We then append these prefixes at each layer of the encoder to represent the dialogue from the perspective of the subject slot label. This method makes minimal changes to LM parameters while generating unsupervised prefixes. This ensures both the preservation of general-purpose traits and extrapolation to new domains.
Footnote 1: Implementation available at [https://github.com/cuthalionn/Prompter](https://github.com/cuthalionn/Prompter)
We conduct experiments with the MultiWOZ 2.1 and SGD datasets.
Prompter improves average JGA results across domains by 1.7 for MultiWOZ, and 9.1 points for the SGD dataset (considering 4 domains reported in prior studies) compared to the strongest baseline. This shows that PETL methods' robustness advantage is also favorable for unsupervised domain adaptation scenarios. To the best of our knowledge, these are the highest results achieved so far using a small language model.
Through further analysis, we have discovered that Prompter not only considers the semantic similarities of slot descriptions but also the frequencies in which slots co-appear in the dialogue context. Furthermore, Prompter proves to be more effective in identifying slots that have no value within a conversation in comparison to previous methods.
## 2 Related Work
**Dialogue State Tracking.** DST has a long history of models working with a static, ontology-based problem definition (_i.e._ slot-values are fixed) Balaraman et al. (2021). The static-ontology DST is a simplified classification problem where the model selects a value from each slot's value pool Zhang et al. (2020); Lee et al. (2019); Rastogi et al. (2017); Zhong et al. (2018). Recently, _dynamic_ ontologies have received attention, adding flexibility at inference time Wu et al. (2019); Rastogi et al. (2019); Heck et al. (2020).
**Low-resource Domain Adaptation.** Dynamic ontology introduces slot-value level flexibility, but its ability to work with new slot-labels is limited. Domain adaptation of DST systems aims to make the model adaptable to new domains and slot-labels. A few studies have attempted to utilize language models' intrinsic reasoning abilities by mapping DST to a question-answering task Lin et al. (2020); Zhou and Small (2019). Shin et al. (2022), on the other hand, map DST to a dialogue summarization task, and Xie et al. (2022) map it to a structured-knowledge grounding task. Many use data augmentation to address the lack of supervision in the target domain Qiu et al. (2022); Mi et al. (2021); Gritta et al. (2021); Aksu et al. (2022); Li et al. (2020). Finally, the remaining studies focus on improving the model's architecture and training strategies for robustness toward domain changes Feng et al. (2022); Balaraman and Magnini (2020); Madotto and Liu (2020); Huang et al. (2020); Coope et al. (2020); Wu et al. (2019); Lei et al. (2018); Lin et al. (2021); Yang et al. (2022). Wang et al. (2022) have a similar goal to our own, but they use a different method. They create cross-slot dependency by combining multiple slot prompts to create a final prompt, which encourages the model to apply what it has learned in one slot to other slots.
**PETL for DST Domain Adaptation.** Parameter Efficient Transfer Learning (PETL) is a recently trending set of methods that aims to adapt models more efficiently by significantly reducing the number of parameters that need to be fine-tuned Pfeiffer et al. (2020); Lester et al. (2021); Liu et al. (2022); Li and Liang (2021); Houlsby et al. (2019). Many studies have found that PETL is advantageous for low-resource domain adaptation settings due to its efficient parameter training scheme. This scheme minimizes changes in LM parameters and is thus believed to prevent over-fitting Li and Liang (2021); Liu et al. (2022). However, He et al. (2022) argues that tuning the entire language model does not negatively impact its robustness advantage. Researchers in the DST field have also utilized PETL methods for their robust capabilities. In their work, Zhu et al. (2022) employed soft prompts and fine-tuned them for each domain in a continual learning setting, utilizing validation sets from target domains to decide which previous prompts to use for initialization. Madotto et al. (2021) also tackled the problem of continual learning, using unique adapters for each domain and relying on a classifier to select which adapter to use during inference. Both studies only explored the use of PETL methods for DST with few-shot availability. In contrast, this study aims to investigate a well-known PETL method, prefix-tuning Li and Liang (2021), for zero-shot domain adaptation of DST models.
## 3 Background
### Dialogue State Tracking Task
A task-oriented dialogue consists of a number of consecutive system and user utterances, together referred to as a turn, \(t_{i}=(s_{i},u_{i})\). Each turn is annotated with a belief state that shows the user's preferences over a number of attributes from various domains up to and including that turn, \(B_{i}=(D_{0},D_{1},...,D_{K})\) where \(D_{j}\) is the belief state for domain \(j\), and \(K\) is the total number of domains. The belief state for each domain is made up of a list of slot-label (_e.g._'restaurant-area') and slot-value pairs (_e.g._ 'center'), \(D_{j}=\{s_{0}:v_{0},s_{1}:v_{1},...,s_{N}:v_{N}\}\), where \(N\) is the number of slots within domain \(j\). Each \(s_{i}\) is further annotated with a description that explains the attribute in the context of the domain (_e.g._'restaurant-area':'The area of the city where the restaurant is located.'). For each \(v_{i}\), if \(s_{i}\) is not discussed in the dialogue context, \(v_{i}\) is set to 'none'. Otherwise, \(v_{i}\) is a sequence of tokens. The task of DST is to predict the belief state \(B_{i}\) for a given dialogue context \(DC\), _i.e._ dialogue turn history up to and including turn \(i\), \(DC=(t_{0},t_{1},...,t_{i})\).
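For concreteness, a MultiWOZ-style belief state at a given turn can be represented as a nested mapping from domains to slot-label/slot-value pairs; the values below are illustrative only and do not come from the datasets used later.

```python
# Illustrative belief state B_i after turn i (example values are made up).
belief_state = {
    "restaurant": {
        "restaurant-area": "centre",
        "restaurant-food": "italian",
        "restaurant-bookpeople": "none",  # slot not discussed yet -> 'none'
    },
    "taxi": {
        "taxi-departure": "none",
        "taxi-destination": "none",
    },
}

def update(state, domain, slot, value):
    """DST reduces to predicting these values from the dialogue context DC."""
    state.setdefault(domain, {})[slot] = value

update(belief_state, "taxi", "taxi-destination", "all saints church")
```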
### Prefix-Tuning
Prefix-tuning is a parameter-efficient alternative to fine-tuning which optimizes a small continuous task-specific vector called the prefix for each new task. These tunable prefix vectors are prepended to the keys and values of the multi-head attention at every layer of the transformer Li and Liang (2021); He et al. (2021). Li and Liang (2021) also report that prefix-tuning also improves extrapolation to unseen tasks in few-shot settings. However, there is no straightforward way to use this method for the zero-shot setting, as it requires supervision to fine-tune the prefixes.
## 4 Method
We propose to add a new mechanism into the T5 architecture Raffel et al. (2019), called Prompter, to take advantage of prefix-tuning's extrapolation capabilities without requiring supervision. Instead of fine-tuning the prefixes with source domain data, we generate them on the fly for each slot. However, we need a way to condition Prompter for a new domain without any supervised data. Task-oriented dialogue schemas provide a solution by annotating the slot descriptions for each slot-label. Using these slot descriptions Prompter can generate domain-specific prefixes which allow it to adapt to any domain without the need for supervised data. We can summarize the Prompter pipeline in three key parts: (1) Slot Prompt Generation, (2) Prefix Generation, and (3) Multi-head Self Attention.
**Slot Prompt Generation** is responsible for generating a prompt that is specific to each slot, using its unique description. Previous approaches to this problem, such as simply concatenating the description to the input, result in only a superficial understanding of the slots in zero-shot settings Lin et al. (2021). Additionally, using slot embeddings as soft prompts can cause unstable training and hinder zero-shot adaptation due to changes in the descriptions. Instead, we propose using a global prompt that is modified according to each slot's
description. This modification is applied through a cross-attention mechanism that attends the global prompt to the slot description's embedding, _c.f._ Figure 2a. This approach ensures that each slot prompt shares the same initialization addressing unstable training, and the modifications reflect changes in the slot-label addressing domain shift. It also has the advantage of making the final prompt's length fixed, regardless of the length of the description. The slot prompt is calculated as follows:
\[S=((GW_{q})(EW_{k})^{\top})(EW_{v}) \tag{1}\]
where \(W_{q}\), \(W_{k}\), and \(W_{v}\in\mathbb{R}^{d\times d}\) are the query, key, and value weights of the cross-attention mechanism, \(d\) is the model dimension, \(G\in\mathbb{R}^{N\times d}\) is the global prompt², \(E\in\mathbb{R}^{K\times d}\) is the slot embedding, \(K\) is the length of the slot description, and \(S\in\mathbb{R}^{N\times d}\) is the slot prompt.
Footnote 2: For N we try different values from [1,100] range and empirically found 10 to work best. Thus we set N=10 throughout conducted experiments.
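A minimal PyTorch sketch of Eq. (1) follows: the global prompt cross-attends to the slot-description embedding. The equation as written contains no softmax or scaling, so none is applied here; the module name and random initialization are our own assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class SlotPromptGenerator(nn.Module):
    """Sketch of Eq. (1): S = ((G W_q)(E W_k)^T)(E W_v)."""
    def __init__(self, d_model: int, n_prompt: int = 10):
        super().__init__()
        self.global_prompt = nn.Parameter(torch.randn(n_prompt, d_model))  # G, (N, d)
        self.w_q = nn.Linear(d_model, d_model, bias=False)
        self.w_k = nn.Linear(d_model, d_model, bias=False)
        self.w_v = nn.Linear(d_model, d_model, bias=False)

    def forward(self, slot_desc_emb: torch.Tensor) -> torch.Tensor:
        # slot_desc_emb: (K, d) token embeddings of the slot description (E)
        q = self.w_q(self.global_prompt)      # (N, d)
        k = self.w_k(slot_desc_emb)           # (K, d)
        v = self.w_v(slot_desc_emb)           # (K, d)
        attn = q @ k.transpose(0, 1)          # (N, K), as Eq. (1) is written
        return attn @ v                       # (N, d) slot prompt S

gen = SlotPromptGenerator(d_model=512)
slot_prompt = gen(torch.randn(12, 512))       # e.g. a 12-token slot description
```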
**Prefix Generation.** For the DST task, the dialogue context can make up the majority of the language model input (_i.e._ a 100-400 token dialogue context compared to a 10-15 token slot description); this creates challenges for the prompt-tuning method because the prompt's impact can vanish easily before decoding starts. This is why we opt for prefix-tuning: it ingests prompts at each layer, so the generated value has higher exposure to the prompt.
Following the generation of slot prompts, the next step is to generate key and value prefixes for each layer. For this step, we tried several different architectural designs, such as a simple MLP or a whole transformer block. We empirically observed that while the former lags behind due to its small number of parameters, the latter results in overfitting. Thus, inspired by He et al. (2022), we use a sequence of down and up projections separated by an activation function as prefix generators, _c.f._ Figure 2b. Note that each transformer layer has a pair of dedicated prefix generators to generate key and value prefixes:
\[K_{i}=RELU(SWk_{down_{i}})Wk_{up_{i}} \tag{2}\]
\[V_{i}=RELU(SWv_{down_{i}})Wv_{up_{i}} \tag{3}\]
where \(K_{i}\), and \(V_{i}\in\mathbb{R}^{N\times d}\) are key and value prefixes for the \(i^{th}\) layer; \(Wk_{down_{i}}\), \(Wv_{down_{i}}\in\mathbb{R}^{d\times r}\), \(Wk_{up_{i}}\) and \(Wv_{up_{i}}\in\mathbb{R}^{r\times d}\) are the respective down and up projectors for the \(i^{th}\) layer; \(r\) is the bottleneck dimension. \(r\) is set to \(d/4\) throughout our experiments.
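A sketch of Eqs. (2)-(3) for a single layer \(i\): dedicated down/up projections with a ReLU in between produce that layer's key and value prefixes, with the bottleneck \(r=d/4\) stated above. The class name is ours, and the sketch omits any initialization details not given in the text.

```python
import torch
import torch.nn as nn

class PrefixGenerator(nn.Module):
    """Sketch of Eqs. (2)-(3): K_i = ReLU(S Wk_down_i) Wk_up_i, and likewise for V_i."""
    def __init__(self, d_model: int):
        super().__init__()
        r = d_model // 4  # bottleneck dimension r = d/4 as used in the paper
        self.k_down = nn.Linear(d_model, r, bias=False)
        self.k_up = nn.Linear(r, d_model, bias=False)
        self.v_down = nn.Linear(d_model, r, bias=False)
        self.v_up = nn.Linear(r, d_model, bias=False)

    def forward(self, slot_prompt: torch.Tensor):
        # slot_prompt S: (N, d) -> per-layer prefixes K_i, V_i: (N, d)
        k_prefix = self.k_up(torch.relu(self.k_down(slot_prompt)))
        v_prefix = self.v_up(torch.relu(self.v_down(slot_prompt)))
        return k_prefix, v_prefix

# One dedicated generator pair per encoder layer, e.g. a 6-layer T5-small encoder:
prefix_gens = nn.ModuleList(PrefixGenerator(512) for _ in range(6))
```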
**Multi-head Self Attention.** After we get \(K_{i}\) and \(V_{i}\) for each layer \(i\), we split them into \(N_{h}\) head vectors \(K_{i}^{j}\) and \(V_{i}^{j}\in\mathbb{R}^{N\times d_{h}}\) for each head \(j\), where \(d_{h}=d/N_{h}\) is the dimension per head. Finally, we concatenate these key and value prefixes into the self-attention mechanism at each layer of the transformer encoder, completing our modifications to the original T5 architecture, _c.f._ Figure 2c.

Figure 2: The architecture of our proposed method, Prompter. Prompter leverages the prefix-tuning method to enable zero-shot learning without the need for supervised data and it is composed of three parts: (a) Slot Prompt Generation, where the information from the description is fused with a global prompt to generate slot-specific prompts, (b) Prefix Generation, which feeds slot prompts across two linear layers and an activation function to generate per-layer key and value prefixes, (c) finally, these prefixes are concatenated to the keys and values at every layer of the T5 encoder.
\[head^{j}_{i}=(h_{i}W^{j}_{q_{i}}[K^{j}_{i},h_{i}W^{j}_{k_{i}}]^{\top})[V^{j}_{i}, h_{i}W^{j}_{v_{i}}] \tag{4}\]
where \(head^{j}_{i}\) is the output from the \(j^{th}\) head of self-attention mechanism at layer \(i\); \(W^{j}_{q_{i}}\), \(W^{j}_{k_{i}}\), and \(W^{j}_{v_{i}}\in\mathbb{R}^{d\times d_{h}}\) are query, key and value weight matrices of the \(j^{th}\) head in the \(i\)th layer; and \(h_{i}\) is the input to the \(i^{th}\) layer.
The final output of the multi-head self-attention at layer \(i\) is calculated as:
\[MSA(h,i)=[head^{0}_{i},head^{1}_{i},...,head^{N_{h}}_{i}]W_{o_{i}} \tag{5}\]
where \(W_{o_{i}}\in\mathbb{R}^{d\times d}\).
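Putting Eqs. (4)-(5) together, the generated prefixes are simply prepended to the keys and values of every encoder self-attention layer. The sketch below follows the equations as written (no softmax or scaling shown), uses our own function name, and splits shared projection matrices per head for brevity rather than keeping separate per-head weights.

```python
import torch

def prefix_self_attention(h, w_q, w_k, w_v, w_o, k_prefix, v_prefix, n_heads):
    """Sketch of Eqs. (4)-(5) for one encoder layer: prepend prefixes to keys/values."""
    T, d = h.shape
    d_h = d // n_heads
    q = (h @ w_q).view(T, n_heads, d_h)
    k = torch.cat([k_prefix, h @ w_k]).view(-1, n_heads, d_h)  # (N+T, H, d_h)
    v = torch.cat([v_prefix, h @ w_v]).view(-1, n_heads, d_h)
    heads = []
    for j in range(n_heads):
        scores = q[:, j] @ k[:, j].transpose(0, 1)             # (T, N+T), Eq. (4)
        heads.append(scores @ v[:, j])                          # (T, d_h)
    return torch.cat(heads, dim=-1) @ w_o                       # Eq. (5)

d, T, N, H = 512, 20, 10, 8
out = prefix_self_attention(torch.randn(T, d),
                            *(torch.randn(d, d) for _ in range(4)),
                            torch.randn(N, d), torch.randn(N, d), H)
```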
## 5 Experimental Setup
### Datasets
We conduct experiments with two well-known DST benchmarks: MultiWOZ and SGD (Budzianowski et al., 2018; Rastogi et al., 2019). MultiWOZ is a task-oriented dialogue dataset collected in a wizard of oz setting using human speakers. It has 10k dialogues that span over 7 domains. It provides turn-level annotations and descriptions of each slot label. In line with previous studies, we limited our experiments to only 5 domains because the police and hospital domains do not have a sufficient number of examples in the test set. We use MultiWOZ version 2.1 which addresses the noisy state annotations within the original dataset (Eric et al., 2020). Similar to MultiWOZ, the SGD dataset also has turn-level annotations and descriptions, _i.e._ schema, for each domain and slot. It has over 20k annotated conversations between a human and a virtual assistant. These span over 20 domains. Besides, the SGD dataset has unseen domains in the test set specifically formed to evaluate zero-shot performance.
### Baseline Models
We compare our method with a range of DST models from the past as well as the recent state of the art. The only models we utilize that do not depend on a language model are **TRADE**(Wu et al., 2019) and **MA-DST**(Kumar et al., 2020). The former introduces the copy mechanism to ease predicting slots not seen during training, whereas the latter adds cross-attention to model relationships between the context and slots at different semantic levels and self-attention to resolve cross-domain coreferences to a base RNN layer. **SUMBT** by Lee et al. (2019) is built with BERT and again uses an attention mechanism to learn relations between domains and slots. **SGD-baseline**(Rastogi et al., 2019) feeds slots, domains, and value embeddings into a BERT encoder to create schema embedding and uses it to predict dialog state in the target domain under zero-shot. **Seq2seq-DU**(Feng et al., 2021) formalizes DST as a sequence-to-sequence task where the dialog history is transformed directly into semantic frames. Li et al. (2021) on the other hand use GPT-2 and define DST as a generative question-answering approach. **TransferQA** builds on a similar motivation but combines both extractive and multi-choice QA enabling tracking categorical and non-categorical slots simultaneously (Lin et al., 2021). **T5DST**(Lin et al., 2021) and **Wang et al. (2022)** both use the T5 architecture. The former concatenates slot descriptions with dialogue context and generates slot values in an auto-regressive manner. Whereas the latter proposes a unique design that models cross-slot dependency by composing multiple slots as the final prompt so that the model is forced to learn the relations among each slot.
### Training Details
For all experiments, we used a Tesla-V100 GPU. We use the small-sized PPTOD (Su et al., 2022) built on the T5 architecture for the T5DST baseline and our own Prompter. We empirically found PPTOD to be more suitable for prompt-tuning tasks most probably due to the nature of its pretraining tasks. We set the batch size to 8 with gradient accumulation every 8 steps. We use AdamW optimizer (Loshchilov and Hutter, 2017) for training and set the initial learning rate to \(1e-4\).
**Semi-frozen Training Scheme.** Contrary to what is typically recommended for limited-data scenarios by traditional PETL techniques, we discovered that freezing LM parameters does not improve performance in the zero-shot scenario. This is in line with what He et al. (2022) suggests. However, we also find that tuning all parameters is imperfect. In search of a better strategy, we experiment with different combinations of frozen layers and compare the results for zero-shot train domain performance. We found that the best strategy is a semi-frozen (S.F.) training scheme, where all LM parameters are trained for 1k steps and then all layers of the T5 model are frozen except the first and last layers of the encoder and decoder (_c.f._ Appendix B for more details). Thus, for the experiments conducted in this section, we employ this strategy to train the models.
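A sketch of how the semi-frozen switch could be implemented for a Hugging Face T5 model is shown below. The parameter-name matching and the `prompter` module-name check are our assumptions about one possible implementation, not the authors' code.

```python
def apply_semi_frozen(model, step: int, warmup_steps: int = 1000):
    """Train everything for `warmup_steps`, then keep only the first and last
    encoder/decoder layers (and any added Prompter modules) trainable."""
    if step < warmup_steps:
        return  # full fine-tuning during the first 1k steps
    last_enc = len(model.encoder.block) - 1
    last_dec = len(model.decoder.block) - 1
    for name, param in model.named_parameters():
        keep = (
            f"encoder.block.0." in name or f"encoder.block.{last_enc}." in name
            or f"decoder.block.0." in name or f"decoder.block.{last_dec}." in name
            or "prompter" in name.lower()  # hypothetical name for the added modules
        )
        param.requires_grad = keep
```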
### Evaluation
We evaluate the performance of all models using Joint Goal Accuracy (JGA) following prior studies. For MultiWOZ, a zero-shot setting is used where training occurs on four domains and the remaining domain is used for testing. For SGD, results are reported on domains that are not included in both the training and validation sets, as they have already been included in the PPTOD pretraining. We modified the official SGD evaluation script to reflect this change. Therefore, in our evaluation settings, unseen domains refer only to domains in the test data, contrary to the original definition by Rastogi et al. (2019) which considers domains only showing up in the validation data unseen as well.
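As a reference point, joint goal accuracy under its standard definition counts a dialogue turn as correct only when the full predicted belief state matches the gold state exactly; a minimal sketch:

```python
def joint_goal_accuracy(predictions, golds):
    """predictions/golds: lists of dicts mapping 'domain-slot' -> value, one per turn."""
    correct = sum(1 for pred, gold in zip(predictions, golds) if pred == gold)
    return correct / len(golds) if golds else 0.0

preds = [{"train-day": "wednesday", "train-destination": "bishops stortford"}]
gold = [{"train-day": "wednesday", "train-destination": "bishops stortford"}]
print(joint_goal_accuracy(preds, gold))  # 1.0
```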
## 6 Results and Analysis
In MultiWOZ (Table 1), our addition of Prompter shows improvements in all domains except Hotel, boosting the average JGA by 1.7 points, compared to the state-of-the-art model by Wang et al. (2022). We believe the lack of improvements in the hotel domain for Prompter is due to it having many unique slots (_i.e._ 'hotel-internet', 'hotel-parking', 'hotel-type', _etc._). This makes it harder to take advantage of earlier domains as they lack similar slots. This is also in line with the results from Wang et al. (2022), as their cross-slot dependency design also lags behind for hotel domain results.
We also present the results on the SGD dataset in Table 2, where Prompter shows improvements on average. We share results over 6 representative domains along with results for official unseen domain performance.
Once more, Prompter demonstrates superior performance on average in unfamiliar domains. Compared to the results reported in the original paper by Wang et al. (2022) for four domains (Columns 1 through 4 of Table 2), Prompter shows an average improvement of \(9.1\) in JGA. The Alarm domain is excluded from the comparison as PPTOD has been pretrained on it.

| Model | Lang. Model | Attraction | Hotel | Restaurant | Taxi | Train | Avg |
| --- | --- | --- | --- | --- | --- | --- | --- |
| TRADE | - | 20.06 | 14.20 | 12.59 | 59.21 | 22.39 | 25.69 |
| MA-DST | - | 22.46 | 16.28 | 13.56 | 59.27 | 22.76 | 26.87 |
| SUMBT | BERT-b | 22.60 | 19.08 | 16.50 | 59.50 | 22.50 | 28.18 |
| Li et al. | GPT2 | 23.67 | 18.54 | 21.05 | 59.1 | 24.34 | 29.34 |
| T5DST | T5-s | 31.92 | **20.72** | 20.09 | 64.12 | 28.83 | 33.56 |
| Wang et al. | T5-s | 33.92 | 18.85 | 20.75 | 66.25 | 36.96 | 35.55 |
| T5DST* | PPTOD-s | 35.5 ±1.7 | 20 ±0.9 | 25.3 ±0.8 | 65.6 ±0.6 | 35.3 ±1.0 | 36.4 ±6.9 |
| Prompter* | PPTOD-s | **35.8 ±0.7** | 19.2 ±0.8 | **26 ±0.7** | **66.3 ±0.2** | **39 ±0.5** | **37.2 ±7** |

Table 1: Zero-shot joint-goal accuracy (%) results on the MultiWOZ 2.1 dataset. Results for all baselines are reported from original papers. Models with * trained using the semi-frozen training scheme. For our trained models the results are averaged over three runs. The best results on each column are **bold**.

| JGA | Buses | Messaging | Trains | Payment | Media | Events | Unseen |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SGD-baseline | 9.7 | 10.2 | 13.6 | 11.5 | 18.0 | 23.5 | - |
| Seq2seq-DU | 16.8 | 4.9 | 16.8 | 7.2 | - | - | - |
| Transfer-QA | 15.9 | 13.3 | 17.4 | **24.7** | - | - | - |
| Wang et al. | 43.9 | 36.6 | 46.7 | 16.5 | - | - | - |
| T5DST* | 46.8 ±2.2 | 54 ±2.8 | **53 ±0.4** | 23.3 ±3.8 | 55.5 ±3.3 | 48.8 ±2.5 | 48.0 ±0.8 |
| Prompter* | **48.4 ±2.1** | **59.2 ±1.3** | 50.8 ±0.9 | 21.9 ±4.6 | **65.3 ±3.8** | **51.5 ±0.4** | **49.4 ±0.4** |

Table 2: Zero-shot joint-goal accuracy (%) results on the SGD dataset. Results for all baselines are reported from original papers. Models with * trained using the semi-frozen training scheme. For our trained models the results are averaged over three runs. The final column shows the average JGA on all unseen slots. The best results on each column are **bold**.
### Ablation Study
We further conducted an ablation study to analyze the contribution of Prompter's components (Table 3).
Adding the S.F. training scheme (second row) to the T5DST baseline introduces a performance increase across all domains. This demonstrates that the training scheme plays a significant role in the robustness of the model. If we switch the pre-trained model from T5 to PPTOD (third row), we see another round of improvement, but it is inconsistent across domains.
Finally, it is evident from the final row that adding Prompter increases the results by another margin, clearly showing its contribution.
### Fine Grained Analysis
**How does Prompter improve results?** We define two new metrics to better understand Prompter's improvements: _Miss-prediction_ (MP), where the model fails to correctly identify a gold slot-label, mistakenly labeling it as 'none' instead; and _Over-prediction_ (OP), where the model incorrectly predicts a 'none'-valued slot-label as something else. We then combine these metrics in _None Accuracy_, a metric that measures the accuracy of the model's predictions regarding the "activeness" of a slot-label. In other words, it measures how often the model correctly predicts whether a slot-label has the value 'none' or not. The results over all 5 domains can be found in Table 4. It is evident that Prompter's improvement comes from the None accuracy measure, as its results are in line with the change in JGA (_i.e._ improvements across all domains except the Hotel domain). Moreover, we find that this is mostly due to the reduction of over-prediction mistakes -- Prompter decreases this class of error in every domain.
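A sketch of how the three metrics above can be computed from per-slot predictions is shown below; the exact normalization is our own reading of the definitions in this section rather than the authors' evaluation script.

```python
def none_metrics(pred, gold):
    """pred/gold: dicts mapping slot-label -> value ('none' when unspecified).
    Returns miss-prediction rate, over-prediction rate, and 'none' accuracy."""
    miss = over = correct_none = active_gold = none_gold = 0
    for slot, gold_val in gold.items():
        pred_val = pred.get(slot, "none")
        if gold_val != "none":
            active_gold += 1
            if pred_val == "none":
                miss += 1          # MP: active slot wrongly predicted as 'none'
        else:
            none_gold += 1
            if pred_val != "none":
                over += 1          # OP: 'none' slot wrongly predicted as something else
        if (pred_val == "none") == (gold_val == "none"):
            correct_none += 1      # correct decision about the slot's "activeness"
    return (miss / max(active_gold, 1),
            over / max(none_gold, 1),
            correct_none / max(len(gold), 1))
```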
**How does Prompter connect slots?** To better understand the benefits of using Prompter, we look at how it connects target domain slots with source domain slots. This is done by aggregating the key prefixes across each layer and attention head for every slot and then comparing them to the source domain slot prefixes from the training set using cosine similarity.
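The comparison described here amounts to averaging each slot's generated key prefixes over layers, heads, and prefix positions, then taking cosine similarities between the resulting vectors. The sketch below assumes mean aggregation and the tensor shapes shown, since the text does not specify them.

```python
import torch
import torch.nn.functional as F

def slot_similarity(prefixes_a, prefixes_b):
    """prefixes_*: (num_layers, num_heads, N, d_h) key prefixes generated for two slots.
    Aggregate over layers/heads/positions, then compare with cosine similarity."""
    vec_a = prefixes_a.mean(dim=(0, 1, 2))   # (d_h,) aggregated slot representation
    vec_b = prefixes_b.mean(dim=(0, 1, 2))
    return F.cosine_similarity(vec_a, vec_b, dim=0).item()

sim = slot_similarity(torch.randn(6, 8, 10, 64), torch.randn(6, 8, 10, 64))
```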
Figure 3 highlights important similarities among some of the taxi and train domain slots (_c.f._ Appendix A for a comprehensive version that includes all domains and slots). Figure 3(a) shows that 'train-destination' has a high similarity with 'taxi-departure' and 'destination', as well as the 'attraction-name' slots. The first two connections are expected, but the latter is also relevant because the 'attraction-name' often appears as the 'taxi-destination' in training. This indicates that the model finds that the 'destination' slots can often contain named entities (such as locations) within the dialogue. For 'train-arriveby', the most similar slot is also the semantically closest: 'taxi-arriveby'. Finally, for the 'train-bookpeople' slot, the most similar slots are those related to booking from the hotel and restaurant domains, which makes sense as these often co-occur in the training data.
Figure 3(b) shows the results of adapting in the taxi domain. The similarity between the 'taxi-arriveby' slot and its train domain counterpart, 'train-arriveby', is high as expected. Moreover, for the 'taxi-departure' slot, the generated prefixes are most similar to slots for attraction, restaurant, and hotel names. This is likely because the 'train-departure' slot also has named entities as values.

| Model | Train | Rest | Hotel | Taxi | Attr |
| --- | --- | --- | --- | --- | --- |
| T5DST | 28.83 | 20.09 | 20.72 | 64.12 | 31.92 |
| + S.F. | 29.3 | 24.4 | **22.3** | 65.6 | 34.76 |
| + PPTOD | 35.3 | 25.3 | 20 | 65.6 | 35.5 |
| + Prompter | **39** | **26** | 19.2 | **66.3** | **35.8** |

Table 3: Ablation results on the test set of MultiWOZ 2.1. We cumulatively add semi-frozen (S.F.) training, PPTOD, and Prompter to the T5DST baseline and report results. The best results along each column are **bold**.

Figure 3: Heatmaps depicting the similarity of selected source and target domain slots. The generated prefixes are aggregated and compared with cosine similarity, where darker colors indicate higher similarity.
The findings show that Prompter not only utilizes slots with similar descriptions to create prefixes, but also accounts for other slots that co-occur in the same conversation with a similar source slot. This is important as slots may have different descriptions but exhibit significant semantic overlap (e.g., 'taxi-departure' and 'hotel-name' having location named entities as values).
### Case study
We use three dialogues from the MultiWOZ test set to demonstrate some of the phenomena observed in previous analysis studies (Table 5). The first example shows how the T5DST baseline is susceptible to overgeneralization from training data. When the T5DST model encounters a hotel name during zero-shot inference on the train domain, it mistakenly assumes that the hotel is the departure for the train because it has been trained to associate location names with taxi departure/destination. Prompter avoids this mistake through its deeper understanding of cross-slot relations. In the second case, the model has made predictions for the hotel type and area even though the dialogue does not mention a hotel. This happens because the model has learned to predict the same type of slots for the attraction domain and has overfitted them during training. In contrast, Prompter ameliorates this form of overprediction (SS6.2).
Our model has a weakness when it comes to dealing with slots that are unique and do not have similar slots in the source domain. In the third case, the model struggles to accurately predict the 'hotel-type' and 'hotel-internet' slots because they are dissimilar to all slots in the source domain.

| Domain | Model | MP↓ | OP↓ | None Acc↑ |
| --- | --- | --- | --- | --- |
| Attraction | T5DST | 76.58 | 13.78 | 65.96 |
| Attraction | Prompter | **69.80** | **11.16** | **68.81** |
| Hotel | T5DST | **59.51** | 24.03 | **76.30** |
| Hotel | Prompter | 65.77 | **21.66** | 75.34 |
| Restaurant | T5DST | 35.98 | 14.25 | 79.15 |
| Restaurant | Prompter | **29.14** | **13.40** | **82.07** |
| Taxi | T5DST | **53.57** | 10.96 | 83.94 |
| Taxi | Prompter | 54.40 | **8.92** | **84.05** |
| Train | T5DST | **3.93** | 12.81 | 90.91 |
| Train | Prompter | 7.66 | **9.08** | **91.50** |

Table 4: Fine Grained Analysis over MultiWOZ 2.1 dataset. MP and OP stand for miss-prediction and over-prediction respectively. Down arrow (↓) means lower the better, whereas up arrow (↑) means higher the better. The best results among each column are **bold**.

**Dialogue 1**

- U0: I am looking for a train that leaves on Wednesday, going to Bishops Stortford.
- S1: tr4404 departs **Cambridge** at 05:29 and arrives at 06:07 in Bishops Stortford. Would you like me to book it?
- U1: Thats pretty early. Would there be a later train that arrives by 15:00?
- S2: tr384 departs Cambridge Wednesday at 13:29 and arrives in Bishops Stortford by 14:07. Would you like to reserve a seat?
- U2: No. How long will the train take?
- S3: That train ride will take approximately 38 minutes.
- U3: Thanks. I also need a particular hotel. Its name is **Ashley hotel**.
- GT: train-destination: Bishops Stortford, train-day: Wednesday, train-arriveby: 15:00, train-departure: **Cambridge**
- T5DST: train-destination: Bishops Stortford, train-day: Wednesday, train-departure: **Ashley hotel**, train-arriveby: 15:00
- Prompter: train-destination: Bishops Stortford, train-day: Wednesday, train-arriveby: 15:00, train-departure: **Cambridge**

**Dialogue 2**

- U0: I am coming to Cambridge and would like to see some **architecture**. Do you have any located in the **centre**?
- S1: Yes, there are 5 places located in the centre. I recommend the All Saint Church on Jesus Lane.
- U1: Thanks! What is the entrance fee?
- S2: ...
- GT: {}
- T5DST: (hotel-type: **architecture**, hotel-area: **centre**)

**Dialogue 3**

- U0: Hello, I am looking for places to go in the centre?
- S1: There are many attractions in the centre like museums, architecture, boating, and concert halls. What are you interested in?
- U1: How about a boating attraction?
- S2: There are 2 in the centre of town. Scudamores punting co., and the cambridge punter. Would either of those interest you?
- U2: Could you give me the address for the Cambridge punter, please? I also need a place to stay, preferably somewhere **cheap**.
- GT: (hotel-pricerange: **cheap**)
- T5DST: (hotel-pricerange: **cheap**)
- Prompter: (hotel-pricerange: **cheap**, hotel-type: **cheap**, hotel-internet: **cheap**)

Table 5: Three example dialogues from the MultiWOZ 2.1 test set. Each dialogue consists of user and system turns and the ground-truth dialogue state (GT). We show a pair of predictions by the T5DST baseline and our Prompter.
### Why Prefix-Tuning?
We also try implementing Prompter using soft prompt-tuning rather than prefix-tuning. Under this setting, the learned prompts are fed directly at the input layer instead of as prefixes to the attention mechanism at each layer. We compare the performance of this method with the baseline T5DST, using T5-small as the language model. We find that prompt-tuning is not even comparable to the fine-tuning baseline let alone to prefix-tuning, _c.f._ Table 6. We believe this difference is due to the fact that prompts fed in the initial layer of the transformer have a diminishing effect on the output of the decoder. This is also evident in the original prefix-tuning paper where Li and Liang (2021) claim it performs better compared to prompt-tuning when it comes to generation tasks.
## 7 Conclusion
Parameter Efficient Transfer Learning methods have been frequently used for their strong robust features under a low-resource setting. However, there is no straightforward way to take advantage of these features under a zero-shot setting because they require at least some supervised data during adaptation. The dialogue state tracking (DST) task, on the other hand, has just the right annotation for this scenario as it contains schema annotations with slot label descriptions. We propose Prompter, which uses these descriptions to enable prefix-tuning, a well-known PETL method, for use under a zero-shot domain adaptation setting.
We show through experiments that this method improves the JGA metric for the two most common DST benchmarks. We further explain through analyses and a case study that the reason behind the Prompter's power is two-fold. (1) It has better capability to distinguish 'none' valued slots within the dialogue and (2) it can digest the frequency of slots co-occurrences within the dialogue context into the prefix generation process. We believe that this study shows PETL's hidden potential for DST domain adaptation under a zero-shot setting.
## 8 Acknowledgements
This research was supported by the SINGA scholarship from A*STAR. We would like to thank anonymous reviewers for their insightful feedback on how to improve the paper.
## 9 Limitations
One limitation of our study is that we only evaluated our method on the T5 architecture. Further experiments on other architectures could be useful to determine the generalizability of our findings. Additionally, as in previous SOTA, our model also did not produce better results for the hotel domain, even though it did improve performance in general. We have attempted to explain why this domain is more difficult, but more research is needed to fully understand the reasons for this variability and to create methods that can improve performance across all domains.
|
2307.13171 | Remarks on projected solutions for generalized Nash games | In this work, we focus on the concept of projected solutions for generalized
Nash equilibrium problems. We present new existence results by considering sets
of strategies that are not necessarily compact. The relationship between
projected solutions and Nash equilibria is studied for the generalized Nash
game proposed by Rosen. Finally, we demonstrate that every projected solution
of a game is associated with a Nash equilibrium, but in a different game. | Calderón Carlos, Cotrina John | 2023-07-24T23:36:33Z | http://arxiv.org/abs/2307.13171v1 | # Remarks on projected solutions for generalized Nash games
###### Abstract
In this work, we focus on the concept of projected solutions for generalized Nash equilibrium problems. We present new existence results by considering sets of strategies that are not necessarily compact. The relationship between projected solutions and Nash equilibria is studied for the generalized Nash game proposed by Rosen. Finally, we demonstrate that every projected solution of a game is associated with a Nash equilibrium, but in a different game.
**Keywords: Generalized Nash games, Shared constraints, Projected solution**
**MSC (2010)**: 91A10, 91B50, 91A99
## 1 Introduction
Nash games [27] focus on the strategic interaction between two or more players, where each player chooses a strategy and seeks to maximize their own outcome, taking into account the choices of the other players. A Nash equilibrium is reached when no player can achieve a better outcome by changing their strategy, provided that the other players maintain their strategies. Thus, the importance of finding suitable assumptions in order to guarantee the existence of Nash equilibria was taken into account for many researchers, see for instance [17, 26, 28, 29, 30] and their references.
On the other hand, the generalized Nash equilibrium problem was first proposed by Arrow and Debreu [2], who referred to it as an "abstract economy." These generalized games extend the focus of Nash games by considering more general scenarios where players can have different sets of available strategies and different preferences over outcomes. In these games, the objective is to find a generalized equilibrium that takes into account the constraints and individual preferences of the players. The concept of generalized equilibrium has been fundamental in economic theory in analyzing the efficient allocation of resources and the maximization of social welfare [12, 18, 22, 23, 32]. A particular generalized Nash game was introduced by Rosen in [31], but it was in the 1990s that many authors began addressing these generalized games in order to establish sufficient conditions to guarantee the existence of generalized Nash equilibria [3, 4, 5, 6, 11, 14, 15, 20, 21].
Currently, generalized Nash games are being used by researchers to model electricity markets, as seen in [5, 25]. However, in 2016, Aussel, Sultana, and Vetrivel [5] showed that these electricity market problems may not have generalized Nash equilibria because the strategies of each player depend on the strategies of their rivals, but these strategies do not necessarily fall within a fixed set of strategies. This absence of generalized equilibria has led to the introduction of a new concept called projected solution for generalized Nash equilibrium problems. In addition, the authors in [5] reformulated the generalized Nash game as a quasi-variational inequality problem to obtain projected solutions, assuming convexity and differentiability. Later, Cotrina and Zuniga [16] extended the result given in [5] by considering continuity instead of differentiability, by reformulating these generalized games as quasi-equilibrium problems. Similarly, Castellani et al. [10] extended the main result in [16] by relaxing the compactness assumption of each strategy set. However, all the above results require the convexity assumption. Recently, Bueno and Cotrina [8] established an existence result on projected solutions in the setting of quasi-convexity, considering a weak notion of continuity. Moreover, in [8] the authors showed that both the quasi-variational inequality problem and the quasi-equilibrium problem can be reformulated as a certain generalized Nash equilibrium problem. The aim of this manuscript is to prove the existence of projected solutions for generalized Nash games in the setting of quasi-convexity, which is independent of the one given in [8]. Moreover, we aim to study the relationship between projected solutions and generalized Nash equilibria, first for the generalized Nash game proposed by Rosen and later for the general case.
The paper is organized as follows. In Section 2, we provide some definitions and notations. Section 3 is divided into three subsections. In the first subsection, we establish two existence results on projected solutions. In the second subsection, we focus on the generalized Nash game proposed by Rosen. Finally, in the third subsection, we reformulate the problem of finding projected solutions as a certain generalized Nash game.
## 2 Preliminaries
From now on \(\|\cdot\|\) denotes a norm in \(\mathbb{R}^{n}\). Given a subset \(A\) of \(\mathbb{R}^{n}\), we denote by \(\mathrm{co}(A)\) the convex hull of \(A\) and by \(\overline{A}\) the closure of \(A\). For each \(z\in\mathbb{R}^{n}\), we denote by \(P_{A}(z)\) the projection of \(z\) onto \(A\), that is
\[P_{A}(z)=\{w\in A:\;\|z-w\|\leq\|z-x\|\text{ for all }x\in A\}.\]
We now recall continuity notions for set-valued maps. Let \(T:X\rightrightarrows Y\) be a set-valued map with \(X\) and \(Y\) two topological spaces. The map \(T\) is said to be _closed_ when \(\mathrm{gra}(T):=\big{\{}(x,y)\in X\times Y\;:\;y\in T(x)\big{\}}\) is a closed subset of \(X\times Y\). Moreover, the map \(T\) is _lower semicontinuous_ at \(x\in X\) if for each open set \(V\) such that \(T(x)\cap V\neq\emptyset\) there exists a neighbourhood \(\mathscr{V}_{x}\) of \(x\) such that \(T(x^{\prime})\cap V\neq\emptyset\) for every \(x^{\prime}\in\mathscr{V}_{x}\); it is _upper semicontinuous_ at \(x\in X\) if for each open set \(V\), with \(T(x)\subset V\), there exists a neighbourhood \(\mathscr{V}_{x}\) of \(x\) such that \(T(x^{\prime})\subset V\) for every \(x^{\prime}\in\mathscr{V}_{x}\). Finally, we say that the map \(T\) is _continuous_ when it is both upper and lower semicontinuous.
It is known that the projection onto \(A\subset\mathbb{R}^{n}\), \(P_{A}\), defines a set-valued map from \(\mathbb{R}^{n}\) onto \(A\).
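For intuition, here is a minimal Python sketch (our illustration, not part of the original development) of the projection map under the Euclidean norm: onto the box \([0,1]^{2}\), where it is single-valued and reduces to coordinate-wise clipping, and onto a finite set, where it can genuinely be set-valued.

```python
import numpy as np

def project_onto_box(z, lower=0.0, upper=1.0):
    # Euclidean projection onto [lower, upper]^n: clip each coordinate.
    return np.clip(z, lower, upper)

def project_onto_finite_set(z, points):
    # P_A(z) for a finite set A: all points attaining the minimal distance.
    dists = np.linalg.norm(points - z, axis=1)
    return points[np.isclose(dists, dists.min())]

print(project_onto_box(np.array([1.7, -0.3])))          # [1. 0.]
print(project_onto_finite_set(np.array([0.0, 0.0]),
                              np.array([[1.0, 0.0], [-1.0, 0.0]])))
# -> both points, illustrating that P_A is in general set-valued
```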
We recall the notion of pseudo-continuity [26] for functions. A real-valued function \(f:X\to\mathbb{R}\), where \(X\) is a topological space, is said to be _upper pseudo-continuous_ if, for any \(x,y\in X\) such that \(f(x)<f(y)\), there exists a neighbourhood \(\mathscr{V}_{x}\) of \(x\) satisfying
\[f(x^{\prime})<f(y),\text{ for all }x^{\prime}\in\mathscr{V}_{x}.\]
Moreover, the function \(f\) is _lower pseudo-continuous_ if \(-f\) is upper pseudo-continuous. Finally, \(f\) is said to be _pseudo-continuous_ if it is both lower and upper pseudo-continuous.
It is important to notice that any upper semi-continuous function is upper pseudo-continuous, but the converse is not true in general, see [13] and its references for more details on pseudo-continuity.
## 3 The generalized Nash equilibrium problem
The Nash equilibrium problem (NEP in short) [27] consists of a finite number of players, where each player has a strategy set and an objective function depending not only on his/her decision but also on the decisions of his/her rivals. Formally, let \(N\) be the set of players, which is a finite and non-empty set. Let us assume that each player \(\nu\in N\) chooses a strategy \(x^{\nu}\) in a strategy set \(K_{\nu}\), which is a subset of \(\mathbb{R}^{n_{\nu}}\). We denote by \(\mathbb{R}^{n}\), \(K\) and \(K_{-\nu}\) the Cartesian products \(\prod_{\nu\in N}\mathbb{R}^{n_{\nu}}\), \(\prod_{\nu\in N}K_{\nu}\) and \(\prod_{\mu\in N\setminus\{\nu\}}K_{\mu}\), respectively. We write \(x=(x^{\nu},x^{-\nu})\in K\) in order to emphasize the strategy of player \(\nu\), \(x^{\nu}\in K_{\nu}\), and the strategies of the other players \(x^{-\nu}\in K_{-\nu}\).
Given the strategies of all players except player \(\nu\), denoted \(x^{-\nu}\), player \(\nu\) chooses a strategy \(x^{\nu}\) that solves the following optimization problem
\[\min\theta_{\nu}(z^{\nu},x^{-\nu}),\text{ subject to }\ z^{\nu}\in K_{\nu}, \tag{1}\]
where \(\theta_{\nu}:\mathbb{R}^{n}\to\mathbb{R}\) is a real-valued function and \(\theta_{\nu}(x^{\nu},x^{-\nu})\) denotes the loss that player \(\nu\) suffers when the rival players have chosen the strategy \(x^{-\nu}\) and he/she takes \(x^{\nu}\). Thus, a _Nash equilibrium_ is a vector \(\hat{x}\in K\) such that \(\hat{x}^{\nu}\) solves (1) when the rival players take the strategy \(\hat{x}^{-\nu}\), for any \(\nu\). We denote by \(\operatorname{NEP}(\{\theta_{\nu},K_{\nu}\}_{\nu\in N})\) the set of Nash equilibria.
Arrow and Debreu [2] dealt with a more complex situation where the strategy set of each player also depends on the decision of his/her rivals. Nowadays, these kinds of games are called generalized Nash equilibrium problems (GNEP in short). Thus, in a GNEP each player \(\nu\) has a strategy that must belong to a set \(X_{\nu}(x)\subset K_{\nu}\) which depends on all strategies. The aim of player \(\nu\), given the other players' strategies \(x^{-\nu}\), is to choose a strategy \(x^{\nu}\) that solves the following minimization problem
\[\min\theta_{\nu}(z^{\nu},x^{-\nu}),\text{ subject to }\ z^{\nu}\in X_{\nu}(x). \tag{2}\]
Thus, a vector \(\hat{x}\in K\) is a _generalized Nash equilibrium_ if \(\hat{x}^{\nu}\) solves (2) when the rival players take the strategy \(\hat{x}^{-\nu}\), for any \(\nu\). We denote by \(\operatorname{GNEP}(\{\theta_{\nu},X_{\nu}\}_{\nu\in N})\) the set of generalized Nash equilibria.
It is clear that \(\hat{x}\in\operatorname{GNEP}(\{\theta_{\nu},X_{\nu}\}_{\nu\in N})\) if, and only if, \(\hat{x}\in\operatorname{NEP}(\{\theta_{\nu},X_{\nu}(\hat{x})\}_{\nu\in N})\). Furthermore, observe that for a GNEP, the constraint maps \(X_{\nu}:K\rightrightarrows K_{\nu}\) induce a set-valued map \(\mathcal{X}:K\rightrightarrows K\) defined as
\[\mathcal{X}(x)=\prod_{\nu\in N}X_{\nu}(x),\]
which is a self-map, i.e. \(\mathcal{X}(K)\subset K\). Consequently, any generalized Nash equilibrium is a fixed point of \(\mathcal{X}\).
Aussel _et al._[5] considered a more general situation: they assume that each constraint map \(X_{\nu}\) is defined from \(K\) onto \(\mathbb{R}^{n_{\nu}}\) instead of \(K_{\nu}\). In this case, a vector \(\hat{x}\in K\) is said to be a projected solution if there exists \(\hat{y}\in\mathcal{X}(\hat{x})\) such that the following two conditions hold:
1. \(\hat{x}\) is a projection of \(\hat{y}\) onto \(K\);
2. \(\hat{y}\in\mathrm{NEP}(\{\theta_{\nu},X_{\nu}(\hat{x})\}_{\nu\in N})\).
Clearly, any generalized Nash equilibrium is a projected solution, but the converse is not true, see Remark 3.1, part 2 in [8].
We divide this section into three parts: the first is devoted to the existence of projected solutions, the second concerns the generalized Nash game proposed by Rosen, and the third reformulates the problem of finding projected solutions as a particular GNEP.
### Existence result
Before establishing our first result we need the following proposition, which is a consequence of the maximum theorem.
**Proposition 3.1**.: _Let \(X,Y,Z\) be three topological spaces, \(T:X\rightrightarrows Y\) be a set-valued map and \(f:Y\times Z\to\mathbb{R}\) be a function. If \(f\) is pseudo-continuous and \(T\) is continuous with compact and non-empty values; then the map \(M:X\times Z\rightrightarrows Y\) defined as_
\[M(x,z)=\{y\in T(x):\;f(y,z)\leq f(w,z)\;\text{for all}\;w\in T(x)\}\]
_is upper semicontinuous with compact and non-empty values._
Proof.: Consider \(\hat{T}:X\times Z\rightrightarrows Y\) and \(\hat{f}:(X\times Z)\times Y\to\mathbb{R}\) defined as
\[\hat{T}(x,z)=T(x)\;\text{and}\;\hat{f}(x,z,y)=f(y,z).\]
Clearly \(\hat{f}\) is pseudo-continuous, and \(\hat{T}\) is continuous with compact and non-empty values. Thus, by Theorem 3.4 in [13], the map \(M\) is upper semicontinuous with compact and non-empty values.
Now, we are in position to state our first existence result, which generalizes Theorem 4.2 in [5].
**Theorem 3.2**.: _Consider an arbitrary norm on \(\mathbb{R}^{n}\) and assume that, for each player \(\nu\in N\):_
1. \(K_{\nu}\) _is convex, compact and non-empty subset of_ \(\mathbb{R}^{n_{\nu}}\)_,_
2. \(X_{\nu}\) _is continuous with convex, compact and non-empty values,_
3. \(\theta_{\nu}\) _is pseudo-continuous and_
4. \(\theta_{\nu}(\cdot,x^{-\nu})\) _is quasi-convex, for all_ \(x^{-\nu}\)_;_
_then there exists a projected solution._
Proof.: The projection map \(P_{K}\) is upper semicontinuous with compact, convex and non-empty values, see [19].
For each \(\nu\in N\), consider the sets \(D_{\nu}=\operatorname{co}(X_{\nu}(K))\). In addition, we also consider the sets \(D=\prod_{\nu\in N}D_{\nu}\) and \(C=\operatorname{co}(P_{K}(D))\), which are convex, compact and non-empty. For each \(\nu\in N\), we define the map \(M_{\nu}:C\times D\rightrightarrows D_{\nu}\) as
\[M_{\nu}(x,y)=\{z^{\nu}\in X_{\nu}(x):\;\theta_{\nu}(z^{\nu},y^{-\nu})\leq \theta_{\nu}(w^{\nu},y^{-\nu})\text{ for all }w^{\nu}\in X_{\nu}(x)\},\]
which is upper semicontinuous with compact and non-empty values, due to Proposition 3.1. Moreover, \(M_{\nu}\) is convex-valued because \(\theta_{\nu}\) is quasi-convex with respect to its own player's variable. Thus, the map \(M:C\times D\rightrightarrows D\) defined as
\[M(x,y)=\prod_{\nu\in N}M_{\nu}(x,y)\]
is upper semicontinuous with convex, compact and non-empty values. On the other hand, consider the map \(R:D\rightrightarrows C\times D\) defined as
\[R(y)=P_{K}(y)\times\{y\},\]
which is clearly upper semicontinuous with convex, compact and non-empty values. Consequently, the map \(M\circ R:D\rightrightarrows D\) is Kakutani factorizable, so by Lassonde's fixed point theorem [24] there exists \(\hat{y}\in D\) such that \(\hat{y}\in M\circ R(\hat{y})\). Thus, there exists \(\hat{x}\in C\) such that \(\hat{x}\in P_{K}(\hat{y})\) and \(\hat{y}\in M(\hat{x},\hat{y})\). Now, \(\hat{y}\in M(\hat{x},\hat{y})\) if, and only if, for each \(\nu\), we have
\[\theta_{\nu}(\hat{y})\leq\theta_{\nu}(w^{\nu},\hat{y}^{-\nu})\text{ for all }w^{\nu}\in X_{\nu}(\hat{x}).\]
Therefore, \(\hat{x}\) is a projected solution.
As a direct consequence of the previous result we have the following corollary, which is a slight modification of the result of Arrow and Debreu [2].
**Corollary 3.3**.: _Assume that for each player \(\nu\in N\):_
1. \(K_{\nu}\) _is convex, compact and non-empty subset of_ \(\mathbb{R}^{n_{\nu}}\)_,_
2. \(X_{\nu}:K\rightrightarrows K_{\nu}\) _is continuous with convex, compact and non-empty values,_
3. \(\theta_{\nu}\) _is pseudo-continuous and_
4. \(\theta_{\nu}(\cdot,x^{-\nu})\) _is quasi-convex, for all_ \(x^{-\nu}\)_;_
_then the set \(\operatorname{GNEP}(\{\theta_{\nu},X_{\nu}\}_{\nu\in N})\) is non-empty._
The following example shows that Theorem 3.2 is not a direct consequence of the one given by Bueno and Cotrina in [8].
**Example 3.4**.: Consider \(K_{1}=K_{2}=[0,1]\) and the maps \(X_{1},X_{2}:K\rightrightarrows\mathbb{R}\) defined as
\[X_{1}(x,y)=[x+1,y+2]\text{ and }X_{2}(x,y)=[y+1,x+2].\]
Consequently, the map \(\mathcal{X}:[0,1]^{2}\rightrightarrows\mathbb{R}^{2}\) is given by
\[\mathcal{X}(x,y)=[x+1,y+2]\times[y+1,x+2].\]
It is clear that \(\mathcal{X}\) is not a self-map. Moreover, it does not have fixed point and consequently the GNEP associated to any two functions does not have solutions.
On the other hand, consider the functions \(\theta_{1},\theta_{2}:\mathbb{R}^{2}\to\mathbb{R}\) defined as
\[\theta_{1}(x,y)=x^{3}-y\text{ and }\theta_{2}(x,y)=x+y^{3}.\]
We define the maps \(M_{1},M_{2}:[0,1]^{2}\rightrightarrows\mathbb{R}^{2}\) as
\[M_{1}(x,y)=\{z\in[x+1,y+2]:\;\theta_{1}(z,y)\leq\theta_{1}(w,y),\;\text{for all }w\in[x+1,y+2]\}\]
and
\[M_{2}(x,y)=\{z\in[y+1,x+2]:\;\theta_{2}(x,z)\leq\theta_{2}(x,w),\;\text{for all }w\in[y+1,x+2]\}.\]
Clearly \(M_{1}(x,y)=\{x+1\}\) and \(M_{2}(x,y)=\{y+1\}\). Thus, by considering the Euclidean norm in \(\mathbb{R}^{2}\), we can see that \((1,1)\) is the only projected solution for the GNEP. Theorem 3.2 guarantees the existence of such a projected solution, unlike Theorem 3.1 in [8], because the constraint maps also depend on their own player's strategy. Moreover, we cannot apply Theorem 9 in [16].
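To make Example 3.4 concrete, the following short Python sketch (our own illustration; the variable names are ours) iterates the map \(y\mapsto M(P_{K}(y),y)\) used in the proof of Theorem 3.2 for this particular game. Starting from an arbitrary point it reaches \(\hat{y}=(2,2)\) with \(P_{K}(\hat{y})=(1,1)\), in agreement with the projected solution identified above.

```python
import numpy as np

def best_responses(x):
    # theta_1(z, y) = z**3 - y is increasing in z, hence minimized at the left
    # endpoint of X_1(x) = [x_1 + 1, x_2 + 2]; player 2 is analogous.
    return np.array([x[0] + 1.0, x[1] + 1.0])

y = np.array([0.3, 0.8])           # arbitrary starting point
for _ in range(10):
    x = np.clip(y, 0.0, 1.0)       # projection of y onto K = [0, 1]^2
    y = best_responses(x)
print(x, y)                        # [1. 1.] [2. 2.] -> projected solution (1, 1)
```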
Using the same idea proposed by Castellani _et al._ in [10], we can relax the compactness of each strategy set \(K_{\nu}\). However, we need to consider a particular norm.
**Theorem 3.5**.: _Consider the Euclidean norm on \(\mathbb{R}^{n}\) and assume that, for each player \(\nu\in N\):_
1. \(K_{\nu}\) _is convex, closed and non-empty subset of_ \(\mathbb{R}^{n_{\nu}}\)_,_
2. \(X_{\nu}\) _is continuous with convex, compact and non-empty values,_
3. \(X_{\nu}(K)\) _is bounded,_
4. \(\theta_{\nu}\) _is pseudo-continuous and_
5. \(\theta_{\nu}(\cdot,x^{-\nu})\) _is quasi-convex, for all_ \(x^{-\nu}\)_;_
_then there exists a projected solution._
Proof.: The projection map \(P_{K}\) is single-valued and consequently continuous, see [19]. Consider the sets \(D\) and \(C\), and the map \(M\), as in the proof of Theorem 3.2. Now we define the map \(S:D\rightrightarrows D\) as
\[S(y)=M(P_{K}(y),y)\]
which is clearly upper semicontinuous, due to a theorem in [1]. Moreover, it has convex, compact and non-empty values. Consequently, by Kakutani's theorem there exists a fixed point of \(S\). Thus, it is enough to show that this fixed point produces a projected solution. Indeed, let \(\hat{y}\) be a fixed point of \(S\). Then \(\hat{y}^{\nu}\in M_{\nu}(\hat{x},\hat{y})\) for all \(\nu\in N\), where \(\hat{x}=P_{K}(\hat{y})\). This is equivalent to \(\hat{y}\in\mathrm{NEP}(\{\theta_{\nu},X_{\nu}(\hat{x})\}_{\nu\in N})\). Therefore, \(\hat{x}\) is a projected solution.
_Remark 3.6_.: In the above result, we can consider any norm such that the projection map \(P_{K}\) is single-valued and continuous.
### The jointly convex case
An important instance of generalized Nash equilibrium problem was presented by Rosen in [31]. More specifically, given a convex and non-empty subset \(X\) of \(\mathbb{R}^{n}\), the aim of player \(\nu\in N\) is to find \(x^{\nu}\), given the strategy of rival players \(x^{-\nu}\), such that it solves the problem
\[\min_{x^{\nu}}\theta_{\nu}(x^{\nu},x^{-\nu}),\ \ \text{subject to}\ \ (x^{\nu},x^{-\nu})\in X. \tag{3}\]
A vector \(\hat{x}\in X\) is a _generalized Nash equilibrium in the sense of Rosen_ if, for each player \(\nu\in N\), \(\hat{x}^{\nu}\) is a solution of the problem (3) associated to \(\hat{x}^{-\nu}\).
After the seminal paper of Rosen [31], the authors in [3] extended his existence result to the case of semistrict quasi-convexity, Bueno _et al._[7] dealt with the quasi-convexity case, and recently Calderon and Cotrina [9] considered the noncompact case.
Now, for each player \(\nu\in N\), we consider the set \(K_{\nu}\) as the projection of \(X\) onto \(\mathbb{R}^{n_{\nu}}\). Additionally, for each \(x\in X\) we consider the set
\[X_{\nu}(x):=\{y^{\nu}\in\mathbb{R}^{n_{\nu}}:\ (y^{\nu},x^{-\nu})\in X\}.\]
Thus, each \(X_{\nu}\) is defined from \(X\) onto \(K_{\nu}\). This allows us to define the map \(\mathcal{X}:X\rightrightarrows\mathbb{R}^{n}\) by
\[\mathcal{X}(x)=\prod_{\nu\in N}X_{\nu}(x),\]
which is not a self-map in general. Thus, a natural question arises: is every projected solution a classical solution? In other words, we want to know whether there can exist \(\hat{x}\in X\) and \(\hat{y}\in\mathcal{X}(\hat{x})\setminus\{\hat{x}\}\) such that
* \(\hat{x}\) is the projection of \(\hat{y}\) on \(X\), and
* \(\hat{y}\in\mathrm{NEP}(\{\theta_{\nu},X_{\nu}(\hat{x})\}_{\nu\in N})\).
We are ready for our main result of this subsection, which shows that no such pair with \(\hat{y}\neq\hat{x}\) can exist, and therefore gives a positive answer to our question.
**Proposition 3.7**.: _Let \(\|\cdot\|\) be a norm in \(\mathbb{R}^{n}\) and \(p\) be the number of players. Then any projected solution is a classical solution._
Proof.: Let \(\hat{x}\in X\) be a projected solution, that means there exists \(\hat{y}\in\mathcal{X}(\hat{x})\) such that
* \(\|\hat{y}-\hat{x}\|\leq\|\hat{y}-x\|\), for all \(x\in X\) and
* \(\theta_{\nu}(\hat{y})\leq\theta_{\nu}(y^{\nu},\hat{y}^{-\nu})\) for all \(y^{\nu}\in X_{\nu}(\hat{x})\).
First, notice that
\[\hat{y}=\sum_{\nu=1}^{p}\left((\hat{y}^{\nu},\hat{x}^{-\nu})-(0,\hat{x}^{-\nu })\right)\]
Now, for any \(t\in\mathbb{R}\) we have
\[t\hat{x}+(1-t)\hat{y} =t\hat{x}+(1-t)\sum_{\nu=1}^{p}\left((\hat{y}^{\nu},\hat{x}^{-\nu})- (0,\hat{x}^{-\nu})\right)\] \[=t\hat{x}+(1-t)\sum_{\nu=1}^{p}(\hat{y}^{\nu},\hat{x}^{-\nu})-(1-t )(p-1)\hat{x}\] \[=(t-(1-t)(p-1))\hat{x}+(1-t)\sum_{\nu=1}^{p}(\hat{y}^{\nu},\hat{x }^{-\nu}).\]
Clearly \(t-(1-t)(p-1)+\sum_{\nu=1}^{p}(1-t)=1\). Thus, for any \(1>t>\frac{p-1}{p}>0\), we deduce that \(z_{t}=t\hat{x}+(1-t)\hat{y}\in X\), because this point belongs to the convex hull of \(\hat{x},(\hat{y}^{1},\hat{x}^{-1}),\ldots,(\hat{y}^{p},\hat{x}^{-p})\in X\). Consequently, since \(\hat{x}\) is a projection of \(\hat{y}\) onto \(X\) and \(z_{t}\in X\),
\[\|\hat{y}-\hat{x}\|\leq\|\hat{y}-z_{t}\|=t\|\hat{y}-\hat{x}\|,\]
which, since \(t<1\), implies \(\|\hat{y}-\hat{x}\|=0\). Hence, \(\hat{y}=\hat{x}\).
### Equivalence between Nash equilibrium theorems
The existence of projected solutions for GNEPs which are not generalized Nash equilibria was shown in [8]. However, we will show that the problem of finding projected solutions for GNEPs can be associated with a particular GNEP obtained by adding a new player.
Assume that \(N=\{1,2,\cdots,p\}\) and consider \(M=N\cup\{p+1\}\). Thus, for each \(\nu\in M\) we consider the sets \(\hat{K}_{\nu}\) defined by
\[\hat{K}_{\nu}=\begin{cases}\mathrm{co}(K_{\nu}\cup X_{\nu}(K)),&\text{if }\nu \in N;\\ K,&\text{if }\nu=p+1\end{cases}\]
As usual we write \(\mathbf{x}=(\mathbf{x}^{\nu},\mathbf{x}^{-\nu})\in\hat{K}=\prod_{\nu\in M} \hat{K}_{\nu}\) in order to emphasize the strategy of player \(\nu\). Moreover, we write \(\mathbf{x}_{0}\) instead of \(\mathbf{x}^{-(p+1)}\). It is important to notice that for each \(\nu\in N\)
\[\mathbf{x}^{\nu}=\mathbf{x}_{0}^{\nu}.\]
Let us define the map \(\hat{X}_{\nu}:\hat{K}\rightrightarrows\hat{K}_{\nu}\) and the function \(\hat{\theta}_{\nu}:\mathbb{R}^{n}\times\mathbb{R}^{n}\rightarrow\mathbb{R}\), respectively, as
\[\hat{X}_{\nu}(\mathbf{x})=\begin{cases}X_{\nu}(\mathbf{x}^{p+1}),&\text{if }\nu \in N\\ K,&\text{if }\nu=p+1\end{cases}\text{and }\hat{\theta}_{\nu}(\mathbf{x})= \begin{cases}\theta_{\nu}(\mathbf{x}_{0}),&\text{if }\nu\in N\\ \|\mathbf{x}_{0}-\mathbf{x}^{p+1}\|,&\text{if }\nu=p+1.\end{cases}\]
We denote the set of projected solutions by \(\mathrm{PSGNEP}(\{\theta_{\nu},X_{\nu}\}_{\nu\in N})\), and we establish the relationship between the sets \(\mathrm{GNEP}(\{\hat{\theta}_{\nu},\hat{X}_{\nu}\}_{\nu\in M})\) and \(\mathrm{PSGNEP}(\{\theta_{\nu},X_{\nu}\}_{\nu\in N})\).
**Proposition 3.8**.: _The following implications hold:_
1. _If_ \(\hat{\mathbf{x}}\in\mathrm{GNEP}(\{\hat{\theta}_{\nu},\hat{X}_{\nu}\}_{\nu\in M})\)_, then_ \(\hat{\mathbf{x}}^{p+1}\in\mathrm{PSGNEP}(\{\theta_{\nu},X_{\nu}\}_{\nu\in N})\)_._
2. _If_ \(\hat{x}\in\mathrm{PSGNEP}(\{\theta_{\nu},X_{\nu}\}_{\nu\in N})\)_, then there exists_ \(\hat{y}\in\mathbb{R}^{n}\) _such that the vector_ \(\hat{\mathbf{x}}=(\hat{y},\hat{x})\in\mathrm{GNEP}(\{\hat{\theta}_{\nu},\hat{X }_{\nu}\}_{\nu\in M})\)_._
Proof.:
1. If \(\hat{\mathbf{x}}\in\mathrm{GNEP}(\{\hat{\theta}_{\nu},\hat{X}_{\nu}\}_{\nu\in M})\), then for each \(\nu\in M\) \[\hat{\mathbf{x}}^{\nu}\in\operatorname*{arg\,min}_{\hat{X}_{\nu}(\hat{\mathbf{x}})}\hat{\theta}_{\nu}(\cdot,\hat{\mathbf{x}}^{-\nu}). \tag{4}\] The previous relation (4) is equivalent to \[\hat{\mathbf{x}}_{0}^{\nu}\in\operatorname*{arg\,min}_{X_{\nu}(\hat{\mathbf{x}}^{p+1})}\theta_{\nu}(\cdot,\hat{\mathbf{x}}_{0}^{-\nu}),\text{ for all }\nu\in N;\] and for \(\nu=p+1\) \[\|\hat{\mathbf{x}}_{0}-\hat{\mathbf{x}}^{p+1}\|\leq\|\hat{\mathbf{x}}_{0}-\mathbf{x}^{p+1}\|,\text{ for all }\mathbf{x}^{p+1}\in K.\] Since \(P_{K}(\hat{\mathbf{x}}_{0})=\operatorname*{arg\,min}_{K}\|\cdot-\hat{\mathbf{x}}_{0}\|\), this last inequality implies \(\hat{\mathbf{x}}^{p+1}\in P_{K}(\hat{\mathbf{x}}_{0})\). Therefore, \(\hat{\mathbf{x}}^{p+1}\in\mathrm{PSGNEP}(\{\theta_{\nu},X_{\nu}\}_{\nu\in N})\).
2. For \(\nu\in N\) the claim is trivial, and for \(\nu=p+1\) the result follows from the fact that \(P_{K}(\hat{y})=\operatorname*{arg\,min}_{K}\|\cdot-\hat{y}\|\).
Now, we are in a position to state the following result, which shows that Theorem 3.2 can be deduced from Corollary 3.3.
**Theorem 3.9**.: _Corollary 3.3 implies Theorem 3.2._
Proof.: For each \(\nu\in N\) we have that
* the set \(\hat{K}_{\nu}\) is compact, convex and non-empty, because the set \(K_{\nu}\) is compact and non-empty and the map \(X_{\nu}\) is upper semicontinuous with compact values;
* the map \(\hat{X}_{\nu}\) is continuous with compact, convex and non-empty values, because the map \(X_{\nu}\) is so;
* the function \(\hat{\theta}_{\nu}\) is pseudo-continuous and quasiconvex in its own variable, because \(\theta_{\nu}\) is so.
For \(p+1\), we have that \(\hat{K}_{p+1}=K\), which is convex, compact and non-empty, and the map \(\hat{X}_{p+1}\) is constant and consequently continuous with convex, compact and non-empty values. Furthermore, since the norm is continuous and convex, the function \(\hat{\theta}_{p+1}\) is continuous and convex. Thus, by Corollary 3.3 there exists at least one element of \(\mathrm{GNEP}(\{\hat{\theta}_{\nu},\hat{X}_{\nu}\}_{\nu\in M})\). Hence, the result follows from Proposition 3.8, part 1.
## Conclusions
In this manuscript, we improve some existence results on projected solutions for generalized Nash equilibrium problems. We establish that the concept of projected solution coincides with the classical notion of generalized Nash equilibrium for generalized Nash games proposed by Rosen. Finally, we reformulate the problem of finding projected solutions for GNEPs as another GNEP by adding an extra player. |
2304.05243 | r-softmax: Generalized Softmax with Controllable Sparsity Rate | Nowadays artificial neural network models achieve remarkable results in many
disciplines. Functions mapping the representation provided by the model to the
probability distribution are the inseparable aspect of deep learning solutions.
Although softmax is a commonly accepted probability mapping function in the
machine learning community, it cannot return sparse outputs and always spreads
the positive probability to all positions. In this paper, we propose r-softmax,
a modification of the softmax, outputting sparse probability distribution with
controllable sparsity rate. In contrast to the existing sparse probability
mapping functions, we provide an intuitive mechanism for controlling the output
sparsity level. We show on several multi-label datasets that r-softmax
outperforms other sparse alternatives to softmax and is highly competitive with
the original softmax. We also apply r-softmax to the self-attention module of a
pre-trained transformer language model and demonstrate that it leads to
improved performance when fine-tuning the model on different natural language
processing tasks. | Klaudia Bałazy, Łukasz Struski, Marek Śmieja, Jacek Tabor | 2023-04-11T14:28:29Z | http://arxiv.org/abs/2304.05243v3 | # r-softmax: Generalized Softmax with Controllable Sparsity Rate
###### Abstract
Nowadays artificial neural network models achieve remarkable results in many disciplines. Functions mapping the representation provided by the model to the probability distribution are the inseparable aspect of deep learning solutions. Although softmax is a commonly accepted probability mapping function in the machine learning community, it cannot return sparse outputs and always spreads the positive probability to all positions. In this paper, we propose r-softmax, a modification of the softmax, outputting sparse probability distribution with controllable sparsity rate. In contrast to the existing sparse probability mapping functions, we provide an intuitive mechanism for controlling the output sparsity level. We show on several multi-label datasets that r-softmax outperforms other sparse alternatives to softmax and is highly competitive with the original softmax. We also apply r-softmax to the self-attention module of a pre-trained transformer language model and demonstrate that it leads to improved performance when fine-tuning the model on different natural language processing tasks.
Keywords:Sparse probability function Controlling sparsity level Softmax alternative.
## 1 Introduction
Deep learning models achieve state-of-the-art results in various domains such as computer vision, natural language processing (NLP), chemical sciences, and many others. Transforming the numerical output returned by a neural network into a probability distribution on a discrete set is an integral aspect of many machine learning models. In classification, it describes the probability over classes; in the attention mechanism for NLP, it indicates which words in a text are contextually relevant to other words. The generally accepted standard for a probability mapping function is the softmax function [4, 14]. Softmax is easy to evaluate and differentiate and it can be transformed into a convex loss function, which is especially appealing in classification problems.
Although softmax is the most widely applied probability mapping function in machine learning, it cannot return sparse outputs. In other words, softmax assigns a non-zero probability to every component. The representation that allows for zero probabilities would be more natural and more interpretable as certain
elements could be clearly marked as insignificant. Since softmax always spreads the positive probability to all positions, it does not return the number of relevant labels, i.e. those with non-zero probabilities. In consequence, applying softmax function in multi-label classification involves defining a threshold below which the label is considered negative, which requires the hyperparameter selection process that generates additional computational overhead.
In this paper, we introduce r-softmax, a sparse alternative to the softmax function that eliminates the problem of non-zero probabilities and allows for intuitive control of the sparsity rate. The sparsity rate \(r\), representing the desired fraction of zero values, can be specified by the user, or the model can be trained to select its appropriate value using a typical gradient descent procedure. In consequence, applying r-softmax in multi-label classification and training a model to predict an appropriate \(r\) eliminates the need for defining an additional mechanism, e.g. a threshold, for deducing the number of positive labels, see Figure 1.
We evaluate r-softmax as a function determining probabilities of classes in a multi-label classification problem and as a function determining the significance probability of elements in the attention mechanism. In the multi-label classification scenario, r-softmax is benchmarked on various synthetic and real datasets. Our experiments demonstrate that the performance of r-softmax is significantly better than other sparse alternatives to softmax, like sparsemax [15] and sparsehourglass [12], and is competitive with the original softmax with a selected optimal threshold determining if the label is positive. In the case of the attention mechanism, we replace softmax mapping with r-softmax in the pre-trained transformer language model. We show that our modification can improve the performance of the fine-tuned model on various NLP tasks.
Our contribution can be summarized as follows:
* We introduce r-softmax, a sparse probability mapping function that is a generalization of the original softmax. The desired sparsity rate \(r\) can be defined by the user or learned by the model itself.
* We provide an extensive evaluation of r-softmax on the multi-label classification problem that demonstrates the benefits of using our method.
* We show that replacing softmax with r-softmax in the pretrained transformer language model improves the performance of a fine-tuned model on most of the considered NLP tasks.

Figure 1: The difference between using softmax and r-softmax for multi-label classification. Both functions return the probability distribution over the specified classes based on the output provided by the neural network model. Since r-softmax is able to produce zero probabilities, we can consider them as an indication of a negative class. For softmax, we need to select an appropriate threshold below which a class will be classified as negative. Thus, the representation provided by r-softmax is more intuitive and more interpretable.
## 2 Related Work
Functions mapping the output of an artificial neural network into the probability distribution are indispensable components in machine learning. They are useful, for example, in determining class membership in a classification problem or in assessing the significance of the elements under consideration.
#### 2.0.1 Softmax
Softmax is a commonly used function in machine learning, which parametrizes a probability distribution over a discrete set of outputs [4, 14]. Its application ranges from classification through the attention mechanism [17] to reinforcement learning [16].
However, classification models with softmax frequently return overconfident predictions, which exceed model accuracy resulting in uncalibrated models [7]. Moreover, softmax rarely spreads similar probability to a few positions, which is in particular inconvenient in multi-label classification, where more than one label per example may be correct. Another disadvantage is caused by the fact that softmax cannot return sparse outputs with zero values at certain positions. In consequence, in multi-label classification, we need to find a threshold under which the label is considered negative. Moreover, non-sparse outputs generate computational overhead in the case of high-dimensional outputs.
#### 2.0.2 Alternatives to softmax
Given the broad range of applications for probability mapping functions in machine learning, various alternatives to softmax have been developed, each with its own set of benefits and drawbacks depending on the particular use case. Noteworthy alternatives to softmax include the spherical softmax [3], multinomial probit [1], softmax approximations [2] or Gumbel-Softmax [9], which provides a continuous probability distribution that serves as an approximation of the discrete distribution produced by softmax. As our paper introduces a novel sparse alternative to softmax, below we focus on existing sparse probability mapping functions.
Sparsemax [15] is defined as a projection of the input vector onto the probability simplex. Since the projection is very likely to hit the boundary of the simplex, sparsemax returns sparse outputs. The authors also constructed a natural convex loss for sparsemax, making an analogy with a derivative of the cross-entropy loss applied to softmax. Although the derivation of the model is theoretically justified, its performance is usually inferior to softmax models.
In [12], the authors defined a general family of probability mapping functions, which includes many popular functions, such as softmax or sparsemax, as special cases. By adding a regularization term and a component-wise transformation function to the sparsemax, they constructed a general formulation of probability mapping functions. They also proposed a general strategy for designing convex loss functions for their models, including an alternative loss for sparsemax, which increased its experimental performance. The theoretical contribution of the paper is further enriched by formulating desirable properties for probability mapping functions.
## 3 Sparse version of softmax
In this section, we introduce r-softmax, a sparse probability mapping function with a controllable sparsity rate. First, we describe the motivation behind the use of the sparse mapping function. Next, we define the weighted softmax - a generalization of the classical softmax [4]. Finally, we introduce r-softmax, where the sparsity rate can be easily defined by the user.
#### 3.0.1 Problem motivation
A probability mapping function is a key component in typical deep learning applications. It allows for transforming a real-valued response \({x=(x_{1},\ldots,x_{n})\in\mathbb{R}^{n}}\) of the neural network to the probability vector \({p=(p_{1},\ldots,p_{n})}\), where \(p_{i}\geq 0\) and \(\sum_{i=1}^{n}p_{i}=1\). To parameterize this probability, we usually use the softmax function:
\[\text{softmax}(x)=\Big{(}\frac{\exp(x_{1})}{\sum\limits_{i=1}^{n}\exp(x_{i})},\ldots,\frac{\exp(x_{n})}{\sum\limits_{i=1}^{n}\exp(x_{i})}\Big{)}.\]
Since softmax is in fact the normalized exponential function, it can be evaluated and differentiated efficiently, which makes it very appealing in training deep learning models. To discuss a specific softmax application, let us consider a classification problem. In this case, the component \(p_{i}\) describes the probability that the input example comes from the \(i\)-th class. If we know that every example has a single class label, then we return a class with maximal probability:
\[\text{class}(x)=\arg\max_{i}p_{i}.\]
If more than one class can be correct for a given example (multi-label classification), we return the \(k\) classes with the highest probabilities. A natural question arises: _how to select the number of classes \(k\) for a given input?_ Since the softmax function does not return zero probabilities, we cannot easily say which probabilities should be converted to positive labels and which should not. In consequence, we arrive at the problem of manually introducing a threshold below which the class label is considered negative.
The above example illustrates the basic problem with softmax that it cannot return sparse outputs. If the probability mapping function would be able to zero
out probabilities, then we could interpret zero probabilities as negative labels and the remaining ones as positive labels. This requirement is also important for other machine learning problems. The main building block of recent transformer architecture [17] is a self-attention layer, which is responsible for selecting key information from a given representation. By applying softmax, we force the model to consider all components as relevant, which usually is not the case. The attention module should be able to ignore unnecessary information by assigning zero probability to selected components.
#### 3.2.1 The weighted softmax
Keeping the above motivation in mind, we focus on constructing an alternative to softmax mapping, which is capable of returning sparse output vectors. We first define the weighted softmax - a general form of the probability mapping function. By a proper parameterization of its weights, the weighted softmax can reduce to a typical softmax, or binary one-hot vector, in which the coordinate containing maximal probability is rounded to 1 and the remaining coordinates are clipped to 0. It can also parametrize sparse probability mapping functions, which lay between softmax and one-hot vectors.
Let \(x=(x_{1},\ldots,x_{n})\in\mathbb{R}^{n}\) be a point associated with a vector of weights \(w=(w_{1},\ldots,w_{n})\in\mathbb{R}^{n}_{+}\), where \(\sum_{i=1}^{n}w_{i}>0\). We define the weighted softmax by the following formula:
\[\text{softmax}(x,w)=\Big{(}\tfrac{w_{1}\exp(x_{1})}{\sum\limits_{i=1}^{n}w_{ i}\exp(x_{i})},\ldots,\tfrac{w_{n}\exp(x_{n})}{\sum\limits_{i=1}^{n}w_{i}\exp(x_{ i})}\Big{)}.\]
All components of the weighted softmax are non-negative and sum to 1, which means that it is a proper parametrization of a discrete probability distribution. For a constant weight vector \(w\), the weighted softmax reduces to classical softmax. A crucial difference between softmax and weighted softmax is that the weighted softmax is able to return zeros at some coordinates. To zero out the \(i\)-th coordinate it is enough to set \(w_{i}=0\). In the extreme case, the weighted softmax can produce one-hot vectors by setting exactly one non-zero weight.
We are interested in such a parametrization of weights in the weighted softmax that allows for a smooth transition between softmax and binary one-hot vectors. For this purpose, we construct t-softmax, in which all weights depend on a single parameter \(t>0\):
\[\text{t-softmax}(x,t)=\text{softmax}(x,w_{t}), \tag{1}\]
where \(w_{t}=(w_{t}^{1},\ldots,w_{t}^{n})\) and \(w_{t}^{i}=\text{ReLU}(x_{i}+t-\max(x))\). Clearly, all weights \(w_{t}^{i}\) are nonnegative and there is at least one positive weight, which is consistent with the definition of weighted softmax. We can observe that the \(i\)-th weight is zero if the absolute difference between \(x_{i}\) and the maximum value \(\max(x)\) is greater than or equal to \(t\).
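For illustration, the weighted softmax and t-softmax can be implemented directly from the definitions above; the following NumPy sketch is our own (not the authors' released code) and the checks at the bottom preview the two limiting regimes formalized in Theorem 3.1 below.

```python
import numpy as np

def weighted_softmax(x, w):
    # softmax(x, w)_i = w_i * exp(x_i) / sum_j w_j * exp(x_j);
    # shifting x by max(x) leaves the ratio unchanged and avoids overflow.
    e = np.exp(x - np.max(x))
    return w * e / np.sum(w * e)

def t_softmax(x, t):
    # t-softmax(x, t) = softmax(x, w_t) with w_t^i = ReLU(x_i + t - max(x)).
    w = np.maximum(x + t - np.max(x), 0.0)
    return weighted_softmax(x, w)

x = np.array([1.0, 2.0, 3.0, 4.0])
print(t_softmax(x, 1e6))   # ~ softmax(x): [0.032 0.087 0.237 0.644]
print(t_softmax(x, 0.5))   # one-hot at the arg max: [0. 0. 0. 1.]
print(t_softmax(x, 1.5))   # sparse output with two zero entries
```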
The following examines how t-softmax changes with varying values of \(t\):
Theorem 3.1: _Let \(x\in\mathbb{R}^{n}\) be a data point and let \(t\in(0,\infty)\). Then_
* _the limit of_ \(\text{t-softmax}(x,t)\) _is_ \(\text{softmax}(x)\) _as_ \(t\) _approaches infinity,_
* _if_ \(x\) _attains its unique maximum at index_ \(k\)_, then_ \[\text{t-softmax}(x,t)=\text{onehot}(\operatorname*{arg\,max}_{i}(x)), \tag{2}\] _for_ \(t\in(0,x_{k}-\max_{i\neq k}(x)]\)_, where_ \(\text{onehot}(k)\in\mathbb{R}^{n}\) _is the vector consisting of zeros everywhere except the_ \(k\)_-th position, where_ \(1\) _is located._
Proof.: The first property is a consequence of \(\text{t-softmax}(x,t)=\text{softmax}(x,\frac{w_{t}}{t})\), and if \(t\) approaches infinity then \(\frac{w_{t}}{t}\) goes to \(1\), leading to \(\text{softmax}(x,1)=\text{softmax}(x)\). The last property follows directly from the definition of \(\text{t-softmax}\).
In practice, we can treat \(t\) as a model parameter, which will be tuned together with the remaining parameters in a training phase. This strategy is especially useful in a multi-label classification because we cannot decide a priori what is the correct number of positive labels for a given example. In this case, the model predicts both the number of positive labels as well as the distribution over classes. Experimental results show that this strategy gives promising results.
#### 2.0.1 Controlling the number of non-zero values using r-softmax
Instead of learning the optimal value of \(t\) as discussed above, there are situations in which we would like to have the ability to explicitly decide how many components returned by t-softmax should be zero. For this purpose, we introduce a parameter \(r\in[0,1]\) that we call a _sparsity rate_. Sparsity rate \(r\) is an intuitive parameter that will represent the fraction of zero components we would like to obtain in the output probability distribution.
Recall from Equation (1) that \(w_{t}^{i}=0\) if \(|x_{i}-\max(x)|\geq t\), for \(i=1,\ldots,n\). To control the number of non-zero weights, we can inspect the range \([\min(x),\max(x)]\) and select a cut-off value \(v\) such that \(x_{i}<v<x_{j}\), where \(x_{i}\) and \(x_{j}\) are two distinct elements of \(x_{1},\ldots,x_{n}\) in increasing order; setting \(t=\max(x)-v\) will zero out the \(i\)-th component while keeping the \(j\)-th component non-zero. We can use the quantile of the set of \(x\)'s coordinates \(x_{1},\ldots,x_{n}\) to implement this rule. The \(q\)-quantile quantile\((x,q)\) outputs the value \(v\) in \([\min(x),\max(x)]\) such that the fraction of coordinates \(x_{i}\) with \(x_{i}\leq v\) equals \(q\). If the quantile lies between \(x_{i}\) and \(x_{j}\) with consecutive indices \(i\) and \(j\) in the sorted order, we use linear interpolation to compute the result as \(x_{i}+\alpha\cdot(x_{j}-x_{i})\), where \(\alpha\) is the fractional part of the computed quantile index. Setting \(q=0\) or \(q=1\) in quantile\((x,q)\) returns the lowest or highest value of \(x\), respectively.
Following the above motivation, we fix the sparsity rate \(r\in[0,1]\) to quantify the requested fraction of zeros in a probability mapping function. The \(r\)-softmax is defined by:
\[\text{r-softmax}\left(x,r\right)=\text{t-softmax}(x,t_{r}). \tag{3}\]
where
\[t_{r}=-\text{quantile}(x,r)+\max(x).\]
The above parameterization of \(t_{r}\) ensures that a fraction \(r\) of the components will be zero. In particular, applying \(\text{r-softmax}(x,r)\) to \(x=(x_{1},\ldots,x_{n})\in\mathbb{R}^{n}\) with \(r=\frac{k}{n}\), for \(k\leq n\), will output a probability distribution with \(k\) zero coordinates.
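The following self-contained NumPy sketch (again our own illustration rather than the official implementation) computes r-softmax exactly as in Equation (3): the threshold \(t_{r}\) is derived from the empirical quantile of the coordinates and plugged into the t-softmax weights.

```python
import numpy as np

def r_softmax(x, r):
    # t_r = max(x) - quantile(x, r): coordinates lying more than t_r below the
    # maximum get zero weight, so roughly a fraction r of the entries is zeroed.
    t_r = np.max(x) - np.quantile(x, r)       # np.quantile interpolates linearly
    w = np.maximum(x + t_r - np.max(x), 0.0)
    e = np.exp(x - np.max(x))
    return w * e / np.sum(w * e)

x = np.array([1.0, 2.0, 3.0, 4.0])
print(r_softmax(x, 0.25))   # one zero entry  (25% of the 4 coordinates)
print(r_softmax(x, 0.50))   # two zero entries (50% of the 4 coordinates)
```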
Using the r-softmax function allows us to reduce model complexity and eliminate less probable components. Experiments demonstrate that this mechanism is beneficial, for example, in the self-attention mechanism applied in NLP tasks.
#### 3.0.1 Summary
In summary, we propose a new function that maps an input to a sparse probability distribution. Our function has two versions: (1) the t-softmax version (see Equation (1)), which produces an output with a sparsity level guided by the parameter \(t\) that can be learned automatically during model training through backpropagation (no need to select it manually), and (2) the r-softmax version (see Equation (3)), which introduces an intuitive parameter \(r\) that allows the user to specify the desired fraction of zero elements in the output. The parameter \(r\) may be learned through backpropagation (as we demonstrate in the multi-label classification experiments in Section 4.1), or it can be chosen manually by the user (as we show in the self-attention experiments in Section 4.2). It is worth noting that while the use of the \(r\) parameter in r-softmax offers interpretability and control over the model's behavior, it comes with an increased computational cost due to the need to calculate the \(t\) parameter using the \(quantile\) function, which requires sorting the input vector. Therefore, when computational complexity is a concern, the t-softmax version may be a more suitable option than the r-softmax version.
## 4 Experiments
In this section, we benchmark r-softmax function against the basic softmax and other sparse probability mapping functions such as sparsemax and sparsehouglass.
First, we consider the multi-label classification problem and show that r-softmax is in most cases the best probability mapping function. Next, we fine-tune a pre-trained language model with different functions applied in self-attention blocks and show that r-softmax is the most beneficial choice1.
Footnote 1: Code with r-softmax is available at [https://github.com/gmmm/rsoftmax](https://github.com/gmmm/rsoftmax)
### Alternative to softmax in multi-label classification
The multi-label classification problem is an important problem that arises in many domains. For example, the image classification problem, where describing an image by a single class is often not sufficient as it usually consists of objects belonging to different classes [6, 11, 13]. The last element of the architecture, in multi-label classification models, is typically a function that maps the output of the network to a vector representing the probability of belonging to different classes [19]. In many cases, this function is softmax [19], but many other functions are also investigated, such as those that introduce sparse probability distributions [12, 15].
#### 3.2.2 R-softmax for multi-label classification
To use r-softmax in multi-label classification, we need to select a proper loss function. Unfortunately, we cannot directly apply cross-entropy loss as r-softmax can return zeros for certain positions, which makes the log function undefined. To resolve this issue, we follow the reasoning used in [12]. For this purpose, let \(z\) denote the logits returned by a neural network for the input \(x\) and let \(\eta=y/\|y\|_{1}\) describe a probability distribution over the labels. Our loss function is defined as follows:
\[\mathcal{L}(z,y)= \|y\cdot\left(\text{r-softmax}(z,r)-\eta\right)\|_{2}^{2}+\sum_{y_ {i}=1,y_{j}=0}\max\left(0,\eta_{i}-(z_{i}-z_{j})\right), \tag{4}\]
where \(y_{i}\) is the \(i\)-th coordinate of the vector \(y\) (and similarly for \(z\) and \(\eta\)). The first term focuses on approximating the probability on positive labels \(\eta_{i}\) by \(\text{r-softmax}(z,r)_{i}\). The second term is responsible for pushing the logits of negative labels away from the positive ones by the margin \(\eta_{i}\).
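A forward-pass sketch of this loss in NumPy is given below; it is our own illustration of Equation (4) for a single example (the authors' implementation, which must also handle batching and automatic differentiation, may differ).

```python
import numpy as np

def r_softmax(z, r):
    t_r = np.max(z) - np.quantile(z, r)
    w = np.maximum(z + t_r - np.max(z), 0.0)
    e = np.exp(z - np.max(z))
    return w * e / np.sum(w * e)

def multilabel_loss(z, y, r):
    # Equation (4): squared error on the positive labels plus a margin term
    # pushing the logits of negative labels below those of positive labels.
    eta = y / np.sum(y)                               # eta = y / ||y||_1
    sq = np.sum((y * (r_softmax(z, r) - eta)) ** 2)   # first term
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    margins = eta[pos][:, None] - (z[pos][:, None] - z[neg][None, :])
    return sq + np.sum(np.maximum(0.0, margins))

z = np.array([2.0, -1.0, 0.5, -3.0])   # logits for one example
y = np.array([1.0, 0.0, 1.0, 0.0])     # two positive labels
print(multilabel_loss(z, y, r=0.5))
```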
#### 3.2.3 Datasets
As preliminary experiments, we study a multi-label classification problem on synthetic data generated similarly to [12] using the scikit-learn library2. We evaluate different probability mapping functions on varying average number of labels per sample (the document length is fixed at 2000) and on a different average document length which is the sum of the features per sample (in this case, the average number of labels is fixed at half the number of output classes). More specifically, these parameters are the expected values for Poisson distribution. Generated datasets consist of 5000 samples with 128 features, where 80% of the data is the training set and 20% is the validation set. We conducted experiments for 10, 20, and 30 possible output classes.
Footnote 2: [https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_multilabel_classification.html](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_multilabel_classification.html)
Finally, we analyze the performance of considered functions on multi-label classification task on two popular real datasets: VOC 2007 [6] and COCO [13]. For these datasets, we resize the images to a height and width of 224, scale them to \([0,1]\), and then normalize each channel.
#### 3.2.4 Experimental setting
As a baseline, we consider multi-label classification models whose probability mapping function is given by other sparse softmax alternatives, such as sparsemax [15] and sparsehourglass [12]. We assume that any non-zero value means that the model predicted membership to the given class. For completeness, we also report the results of the typical softmax [4]. Theoretically, it is impossible to get zero values using the softmax function (in practice, this can happen due to floating point precision), so we perform a search through various thresholds \(p_{0}\) below which we consider the model to recognize a class as negative.
For softmax function we use cross-entropy as a loss function, for sparsehourglass we use the cost function proposed by [12] and for sparsemax we test two functions, the one proposed originally by the authors [15] (sparsemax+huber) and the one proposed by [12] (sparsemax+hinge).
We use a simple two-layer neural network for synthetic datasets and pre-trained ResNet models [8] for real datasets (ResNet18 for VOC and ResNet101 for COCO) with an additional linear layer for classification followed by the evaluated activation function. We train the models with a learning rate \(\lambda=10^{-3}\) for synthetic datasets and with \(\lambda\in\{10^{-3},10^{-4},10^{-5}\}\) for VOC and COCO. For all scenarios, we use the Adam algorithm for gradient-based optimization [10].
Our r-softmax is parameterized by the sparsity rate \(r\), which corresponds to the desired fraction of zero labels in the multi-label experiment. To find its optimal value, we add an additional layer to the neural network which is responsible for predicting the sparsity rate that is later passed as an argument to r-softmax function. We supplied the multi-label classification cost function with the cross-entropy loss component responsible for evaluating the correctness of the number of labels indicated by the model.
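The paper does not spell out the exact form of this additional layer, so the following PyTorch sketch is a hypothetical reading of the mechanism: a second linear head predicts the number of positive labels, the sparsity rate is derived as \(r=1-k/n\), and an auxiliary cross-entropy compares the predicted count with the true one. All names and the count-based parameterization are our assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiLabelHead(nn.Module):
    # Hypothetical head: one linear layer produces the class logits z and a
    # second one scores how many labels are positive (counts 1..n_classes).
    def __init__(self, feat_dim, n_classes):
        super().__init__()
        self.cls = nn.Linear(feat_dim, n_classes)
        self.count = nn.Linear(feat_dim, n_classes)
        self.n_classes = n_classes

    def forward(self, h):
        z = self.cls(h)
        count_logits = self.count(h)
        k = count_logits.argmax(dim=-1) + 1      # predicted number of positive labels
        r = 1.0 - k.float() / self.n_classes     # fraction of zeros per example
        return z, count_logits, r

def count_loss(count_logits, y):
    # Auxiliary cross-entropy on the true number of positive labels
    # (assumes every example has at least one positive label).
    target = y.sum(dim=-1).long() - 1
    return F.cross_entropy(count_logits, target)

head = MultiLabelHead(feat_dim=16, n_classes=5)
h = torch.randn(4, 16)                           # features from the backbone
y = torch.tensor([[1, 0, 1, 0, 0], [1, 1, 1, 0, 0],
                  [0, 0, 0, 0, 1], [1, 0, 0, 1, 1]]).float()
z, count_logits, r = head(h)
print(r, count_loss(count_logits, y))
```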
Figure 2: Different probability mapping functions for multi-label classification on the various synthetic datasets for different possible output class number (10, 20, 30). For datasets with fewer output classes (plots on the left) all functions produce similar results. However, for datasets with larger number of output classes (graphs in the middle and right) r-softmax seems to be the most beneficial choice.
In all settings, we report the best results on the validation set after the models' results on it have stabilized. For synthetic datasets, we train models for 150 epochs, and on the VOC and COCO datasets we train models for 100 epochs. We use the F1 score as the quality metric for the multi-label classification models, as it operates on the returned classes rather than on target scores (unlike, e.g., the mean average precision metric).
#### 4.1.1 Results on synthetic datasets
Figure 2 presents the performance of r-softmax function and its competitors (softmax, sparsemax, sparsehourglass) for multi-label classification experiments on the synthetic data validation set. For clarity of the graphs, we truncate the y-axis, omitting the notably lower results achieved by specific softmax versions with a particular \(p_{0}\).
In Figure 2(a) we compare the model behavior depending on the average number of positive labels. We can observe that all functions produce similar results for a small average number of positive labels. However, as the average number of positive labels increases, our method produces the best results, especially when the dataset has a large number of possible output classes.
In Figure 2(b) we show the impact of the average document length in the data. In these experiments, we also observe superior or comparable performance of r-softmax for most configurations. As before, for a small number of output classes our method is comparable to the other functions, while for a larger number of possible output classes it obtains the best results. Please note that the results for softmax with \(p_{0}\in\{0.1,0.2\}\) and output classes 20 and 30 are not included in the plots as they produce significantly worse results. This can be caused by the fact that for a larger number of output classes, probabilities are distributed over more components. This may lead to a situation where the output values are very small and it is more difficult to choose an appropriate threshold. Taking both of these experiments into consideration, we conclude that in the investigated scenarios our method is the preferred choice as it generally provides the most benefits.

\begin{table}
\begin{tabular}{l c c} \hline \hline \multirow{2}{*}{Experimental setup} & VOC & COCO \\ & (F1) & (F1) \\ \hline Softmax (\(p_{0}\)=0.05) & 75.05 & 71.38 \\ Softmax (\(p_{0}\)=0.10) & 78.87 & 72.29 \\ Softmax (\(p_{0}\)=0.15) & **79.43** & 69.22 \\ Softmax (\(p_{0}\)=0.20) & 79.07 & 64.88 \\ Softmax (\(p_{0}\)=0.30) & 75.88 & 54.76 \\ \hline Sparsemax+huber & 66.84 & 52.30 \\ Sparsemax+hinge & 71.91 & 65.67 \\ Sparsehourglass & 71.35 & 64.85 \\ r-softmax & 77.90 & **72.56** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Effect of using different probability mapping functions for the multi-label classification problem on the VOC and COCO validation datasets. Our r-softmax performs better than the other tested sparse probability mapping functions (sparsemax and sparsehourglass) and is also competitive with softmax itself, which requires the additional selection of a class indication threshold.
We also evaluate r-softmax on real, multi-label classification datasets VOC and COCO, see Table 1. Our r-softmax outperforms other sparse softmax alternatives and is very competitive with the original softmax. Although the performance of r-softmax is comparable to specific parametrization of softmax, the model with softmax requires the selection of appropriate threshold \(p_{0}\) to indicate positive labels. In practice, such selection has to be performed on the validation set, which generates additional computational costs.
Additionally, in Figure 3 we report the F1 score learning curves for these experiments to observe how the model performance changes during learning depending on the considered probability mapping function. On the plots, we may observe that model with r-softmax is learning much better than models with other sparse alternatives. Some softmax versions with a particular threshold converge faster than r-softmax, but this most likely happens because the model has to learn the appropriate sparsity rate \(r\), which requires a little more time.
Figure 3: Learning process (F1 score) when using different probability mapping functions for multi-label classification on VOC and COCO validation datasets.
An advantage, however, is that there is no need to adjust any further thresholds afterward.
### Alternative to softmax in the self-attention block in transformer-based model
Nowadays, models based on the transformer architecture [17] are the foundation for many state-of-the-art solutions in different fields, including natural language processing (NLP). A core element of the transformer is the attention mechanism, which is responsible for identifying important information for the neural network. In general, an attention block produces output based on input vectors: queries, keys, and values. The output is a sum of weighted values, where each weight is determined based on a query and corresponding key. For efficient computations, sets of queries, keys, and values are combined into matrices Q, K, and V.
In more detail, each layer of the transformer contains a self-attention module, which is designed to indicate which tokens (parts of words in the text) in a sequence are contextually relevant to other tokens of the same sequence. Each of the self-attention blocks applies a softmax function that maps the resulting vector of the scaled dot product of queries \(Q\) and keys \(K\) of dimension \(d_{k}\) into probabilities that represent weights for all values \(V\), as shown below:
\[Attention(Q,K,V)=softmax(\frac{QK^{T}}{\sqrt{d_{k}}})V. \tag{5}\]
It is worth noting here that using softmax in this formula imposes an assignment of non-zero weight to each of the tokens in the sequence. In other words, every token, even insignificant one, has to be taken into account in further calculations.
In this section, we will demonstrate that replacing the softmax function with r-softmax that can return a sparse probability distribution is beneficial, as the model is able to ignore irrelevant tokens in the sequence.
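To make the replacement concrete, here is a small NumPy sketch (our own illustration, not the actual modification of the Huggingface BERT code used in the experiments) of scaled dot-product attention in which r-softmax is applied row-wise to the score matrix, so that a fraction \(r\) of the attention weights in each row is zeroed.

```python
import numpy as np

def r_softmax_rows(s, r):
    # Apply r-softmax independently to every row of the score matrix s.
    out = np.zeros_like(s)
    for i, x in enumerate(s):
        t_r = np.max(x) - np.quantile(x, r)
        w = np.maximum(x + t_r - np.max(x), 0.0)
        e = np.exp(x - np.max(x))
        out[i] = w * e / np.sum(w * e)
    return out

def attention(Q, K, V, r=0.0):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    return r_softmax_rows(scores, r) @ V          # softmax replaced by r-softmax

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))                        # 3 query tokens, head dim 4
K = rng.normal(size=(5, 4))                        # 5 key tokens
V = rng.normal(size=(5, 2))
print(attention(Q, K, V, r=0.4))                   # with r=0.4, two of the five
                                                   # attention weights per row are zero
```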
#### 4.3.1 Experimental setting
In our experiments we use a pre-trained transformer language model BERT [5], in which we focus on the probability mapping function in each of the self-attention blocks while fine-tuning the model. We report the performance of the baseline scenario with softmax as well as with its replacements: sparsemax, sparsehourglass, and r-softmax. The implementation is based on the transformers library from Huggingface [20].
We evaluate BERT model versions on several GLUE benchmark classification tasks [18], namely MRPC, RTE, SST-2, QNLI and QQP. We fine-tune the model for 5 epochs for the MRPC task and for 3 epochs for the other tasks. We report the final score on the validation datasets. We test different values of the learning rate for all models, \(\lambda\in\{10^{-5},2\cdot 10^{-5},5\cdot 10^{-5},10^{-4},5\cdot 10^{-4}\}\).
Since we would like to check several possible final sparsity rates \(r\) for r-softmax, we linearly increase the hyperparameter \(r\) during training from 0 (dense output) to the desired sparsity \(r\in\{0.05,0.1,0.15,0.2,0.5\}\). During preliminary experiments, we observed that linearly increasing the fraction of zeros is beneficial, as the model has time to adapt to a given sparsity rather than losing information all at once.
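A minimal sketch of such a linear annealing schedule is given below; the function name and the example step counts are ours, chosen only for illustration.

```python
def sparsity_rate(step, total_steps, r_final):
    """Linearly anneal the sparsity rate r from 0 (dense output) to r_final,
    so the model can adapt gradually instead of losing information at once."""
    return r_final * min(step / max(total_steps, 1), 1.0)

# e.g. the rate used in the self-attention blocks at fine-tuning step 150 of 1000:
r = sparsity_rate(150, 1000, r_final=0.15)
```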
#### 4.3.2 Results
Table 2 summarizes results for different GLUE downstream tasks obtained by the best run in the grid search described in the previous section. We may observe that in most cases, applying r-softmax instead of the softmax function improves the performance of the fine-tuned transformer-based model. Other sparse alternatives like sparsemax and sparsehourglass have demonstrated poor performance in this application.
We examined r-softmax performance for various final sparsity rates. We linearly increased the sparsity rate from \(r=0\) until it reached the desired value. The gradual incorporation of sparsity is intended to give the model time to adapt to the changes. We found that introducing only a small sparsity (small \(r\)) into the self-attention output produces the best results while enforcing too many zeros (large \(r\)) makes the results worse. The best performance for tasks QQP, MRPC, QNLI, RTE and SST-2 was achieved by \(r=0.1,0.15,0.15,0.2,0.2\) respectively. Results suggest that in general it is beneficial for the model to eliminate distracting elements that are irrelevant to the considered sample. However, excluding a larger number of elements (by zeroing their importance) is not advantageous because either the model loses too much context or because the gradient flow during learning becomes more challenging.
## 5 Conclusions
In this paper, we proposed r-softmax, a generalization of softmax producing a sparse probability distribution with a controllable sparsity rate. We applied r-softmax as an output layer in the multi-label classification problem and as a scoring function in the self-attention module used in NLP tasks. The obtained results confirm that in most cases r-softmax is highly competitive with or superior to baseline softmax and other sparse probability mapping functions. Furthermore, r-softmax offers a more intuitive representation of the data that is adjustable by a single parameter determining what fraction of the data should be zero.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
Experiment setup & MRPC (Acc) & RTE (Acc) & SST-2 (Acc) & QNLI (Acc) & QQP (Acc) \\ \hline
Softmax & 84.56 & 68.95 & 92.32 & **91.76** & 91.12 \\
Sparsemax & 68.38 & 52.71 & 79.82 & 55.57 & 77.18 \\
Sparsehourglass & 68.38 & 52.71 & 79.24 & 70.99 & 76.04 \\
r-softmax & **85.54** & **71.84** & **92.89** & 91.73 & **91.13** \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Using different probability mapping functions in self-attention blocks of pretrained BERT language model. We report results after finetuning a model on several GLUE benchmark tasks. Our r-softmax, introducing a specific sparsity level, outperforms other proposals.
#### Acknowledgements
The work of Klaudia Balazy and Lukasz Struski was supported by the National Centre of Science (Poland) Grant No. 2020/39/D/ST6/ 01332. The research of Jacek Tabor was carried out within the research project "Bio-inspired artificial neural network" (grant no. POIR.04.04.00-00-14DE/18-00) within the Team-Net program of the Foundation for Polish Science co-financed by the European Union under the European Regional Development Fund. The work of Marek Smieja was supported by the National Centre of Science (Poland) Grant No. 2022/45/B/ST6/01117. Klaudia Balazy is affiliated with Doctoral School of Exact and Natural Sciences at the Jagiellonian University.
|
2307.03362 | Adaptation and Communication in Human-Robot Teaming to Handle
Discrepancies in Agents' Beliefs about Plans | When agents collaborate on a task, it is important that they have some shared
mental model of the task routines -- the set of feasible plans towards
achieving the goals. However, in reality, situations often arise that such a
shared mental model cannot be guaranteed, such as in ad-hoc teams where agents
may follow different conventions or when contingent constraints arise that only
some agents are aware of. Previous work on human-robot teaming has assumed that
the team has a set of shared routines, which breaks down in these situations.
In this work, we leverage epistemic logic to enable agents to understand the
discrepancy in each other's beliefs about feasible plans and dynamically plan
their actions to adapt or communicate to resolve the discrepancy. We propose a
formalism that extends conditional doxastic logic to describe knowledge bases
in order to explicitly represent agents' nested beliefs on the feasible plans
and state of execution. We provide an online execution algorithm based on Monte
Carlo Tree Search for the agent to plan its action, including communication
actions to explain the feasibility of plans, announce intent, and ask
questions. Finally, we evaluate the success rate and scalability of the
algorithm and show that our agent is better equipped to work in teams without
the guarantee of a shared mental model. | Yuening Zhang, Brian C. Williams | 2023-07-07T03:05:34Z | http://arxiv.org/abs/2307.03362v1 | # Adaptation and Communication in Human-Robot Teaming to Handle Discrepancies in Agents' Beliefs about Plans
###### Abstract
When agents collaborate on a task, it is important that they have some shared mental model of the task routines - the set of feasible plans towards achieving the goals. However, in reality, situations often arise that such a shared mental model cannot be guaranteed, such as in ad-hoc teams where agents may follow different conventions or when contingent constraints arise that only some agents are aware of. Previous work on human-robot teaming has assumed that the team has a set of shared routines, which breaks down in these situations. In this work, we leverage epistemic logic to enable agents to understand the discrepancy in each other's beliefs about feasible plans and dynamically plan their actions to adapt or communicate to resolve the discrepancy. We propose a formalism that extends conditional doxastic logic to describe knowledge bases in order to explicitly represent agents' nested beliefs on the feasible plans and state of execution. We provide an online execution algorithm based on Monte Carlo Tree Search for the agent to plan its action, including communication actions to explain the feasibility of plans, announce intent, and ask questions. Finally, we evaluate the success rate and scalability of the algorithm and show that our agent is better equipped to work in teams without the guarantee of a shared mental model.
## Introduction
When agents collaborate on a task, it is important that they have some shared mental model of the task routines - the set of feasible plans towards achieving the goals. However, in reality, situations often arise that such a shared mental model cannot be guaranteed. For example, in online multi-player games or search-and-rescue missions, people trained separately could form an ad-hoc team where they may follow different conventions. Even if the team has a set of shared routines, novel situations may still occur in which some contingent constraint that forbids certain plans to be taken becomes known only by some agents. In these situations, experienced teammates keep in mind what others know and what actions they may take, and communicate when necessary to make sure the team converges on a feasible plan of action.
Previous work on human-robot teaming, _Pike_(Levine and Williams, 2018), assumed that agents share common knowledge of the feasible plans for the task encoded in a knowledge base. Under an equal partner setting, each agent observes the actions taken by others and adapts their actions accordingly, only taking what is still feasible. This approach allows fluid human-robot interaction but breaks down when the common knowledge assumption no longer holds.
In this work, we generalize the approach to handle situations where there may be discrepancies in agents' beliefs about plans by incorporating epistemic logic (Van Ditmarsch et al., 2015), as it provides an explicit representation of agents' nested beliefs towards each other and a mechanism to model communication between agents.
The contribution of this paper is threefold: (1) We propose the formalism of conditional doxastic logic (Baltag and Smets, 2008) extended to knowledge bases in order to represent agents' nested beliefs on the set of feasible plans for the task and the state of execution. (2) We model both execution and a rich set of communication actions within the framework, including explanation, intent announcement, and question-asking actions, that allows agents to explicitly talk about the feasibility of plans and exchange their intent. (3) We provide an online execution algorithm based on Monte Carlo Tree Search (MCTS) for the agent to dynamically plan its action to adapt to others or communicate to resolve the discrepancy. Finally, we evaluate the success rate and performance of the algorithm through experiments.
## Motivating Example
Consider a pedagogical example where a robot (our agent) and a human collaborate to prepare a drink. The robot has a manipulator arm that can fetch a mug or a glass as the container, and the human can brew some coffee or take some orange juice from the fridge for the drink. For the task to succeed, it must satisfy that: (C1) the mug has to go with the coffee and the glass has to go with the orange juice. Under an equal partner setting, from the robot's perspective:
**Case 1** If the human doesn't believe constraint C1 holds and thinks that any container can go with any drink, then the robot can adapt to the human by waiting for the human to take the drink first, then fetch the corresponding container. The robot can also explain constraint C1 to the human, especially if the task requires the robot to fetch the container first. The robot can also announce the intent for the human
to choose coffee, in which case it can just fetch the mug.
**Case 2** If the human has determined a choice of coffee or juice, but the robot doesn't know which one, the robot may wait for the human to pick first so that it can distinguish their intent, or it can ask the human about their intent.
**Case 3** If the human picked up the juice but doesn't know that the robot couldn't reach the glass, the robot may explain the constraint and that the task has failed.
## Background
In order to represent the agents' nested beliefs, our representation builds on top of conditional doxastic logic [1], which is one variant in the broader epistemic logic literature. Compared to epistemic logic, it allows the modeling of false beliefs and belief revision by pre-encoding the conditional belief of the agents within the model. Given a set of agents \(Ag\) and a set of atomic propositions \(At\), _conditional doxastic logic_\(\mathcal{L}(At,Ag)\) is defined by the following Backus-Naur Form (BNF):
\[\varphi:=p|\neg\varphi|(\varphi\land\varphi)|B_{a}^{\varphi}\varphi,\]
where \(p\in At\), \(a\in Ag\). \(B_{a}^{\psi}\varphi\) reads as "agent \(a\) believes \(\varphi\) given \(\psi\)". Denoting \(\top\) as tautology, \(B_{a}\varphi:=B_{a}^{\top}\varphi\) means that agent \(a\) believes \(\varphi\). Its semantic model is a _plausibility model_, which is a tuple \(M=\langle W,\{\leq_{a}\}_{a\in Ag},L\rangle\), where
* \(W\): a non-empty set of possible worlds,
* \(\leq_{a}\subseteq W\times W\): binary relation on \(W\) imposing a relative plausibility order between any two worlds for agent \(a\),
* \(L:W\to 2^{At}\): valuation function mapping each world to the set of atomic propositions that hold in the world.
\(w\leq_{a}v\) means that agent \(a\) considers \(w\) to be at least as plausible as \(v\). \(<_{a}:=\leq_{a}\setminus\geq_{a}\) denotes a strict plausibility order. \(\simeq_{a}:=\leq_{a}\cap\geq_{a}\) denotes an equi-plausibility order. \(\sim_{a}:=\leq_{a}\cup\geq_{a}\) denotes epistemic indistinguishability, and \(cc_{a}(w):=\{v\in W\mid w\sim_{a}^{*}v\}\) is the set of worlds that agent \(a\) finds (possibly more or less) plausible given world \(w\), where \(\sim_{a}^{*}\) is the transitive closure of \(\sim_{a}\). Note that the plausibility relation is reflexive, transitive, locally connected, that is, \(v\in cc_{a}(w)\) implies \(v\leq_{a}w\) or \(w\leq_{a}v\), and well-founded, that is, \(min_{a}(S):=\{w\in S\mid\forall v\in S:v\not<_{a}w\}\) is well-defined, which is the subset of worlds in \(S\) that agent \(a\) finds most plausible. A pair \((M,w)\) is a _pointed plausibility model_, which describes a conditional doxastic state with a pointed view at world \(w\in W\), i.e. taking \(w\) as the true world. The truth of a formula \(\varphi\in\mathcal{L}(At,Ag)\) on \((M,w)\), i.e. \((M,w)\vDash\varphi\), can be defined inductively as follows:
* \((M,w)\vDash p\) iff \(p\in L(w)\)
* \((M,w)\vDash\neg\varphi\) iff \((M,w)\nvDash\varphi\)
* \((M,w)\vDash\varphi\wedge\psi\) iff \((M,w)\vDash\varphi\) and \((M,w)\vDash\psi\)
* \((M,w)\vDash B_{a}^{\psi}\varphi\) iff \(min_{a}([\psi]_{M}\cap cc_{a}(w))\subseteq[\varphi]_{M}\), where \([\varphi]_{M}:=\{w\in W\mid M,w\vDash\varphi\}\) is the set of worlds in \(M\) in which \(\varphi\) holds.
Figure 1 shows an example state represented by a pointed plausibility model \((M,w_{1})\) with agents \(a\) and \(b\). The two worlds \(w_{1}\) and \(w_{2}\) are labeled with the atomic propositions that hold in the respective worlds. The pointed world \(w_{1}\) highlighted in bold represents the true world in which \(p\) holds. The single arrow pointing from \(w_{1}\) to \(w_{2}\) labeled with \(b\) indicates that agent \(b\) considers \(w_{2}\) to be strictly more plausible than \(w_{1}\). We say \((M,w_{1})\vDash B_{b}\neg p\) since \(min_{b}(cc_{b}(w_{1}))=\{w_{2}\}\subseteq[\neg p]_{M}\). If it is instead a double-headed arrow, then it means that agent \(b\) considers \(w_{1}\) and \(w_{2}\) to be equally plausible. The lack of any arrow between \(w_{1}\) and \(w_{2}\) for agent \(a\) indicates that \(w_{1}\not\sim_{a}w_{2}\), that is, when in \(w_{1}\) or \(w_{2}\), agent \(a\) does not consider the other world plausible at all. Since the plausibility relation is reflexive, the self-loops indicate that whichever world it is, the agents find the world plausible. \((M,w_{1})\vDash B_{a}p\) since \(min_{a}(cc_{a}(w_{1}))=\{w_{1}\}\subseteq[p]_{M}\), and \((M,w_{1})\vDash B_{a}B_{b}\neg p\) since \(min_{a}(cc_{a}(w_{1}))=\{w_{1}\}\subseteq[B_{b}\neg p]_{M}=\{w_{1},w_{2}\}\).
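To make these semantics concrete, the following is a small Python sketch (our own, not part of any existing DEL library) of a finite plausibility model with the belief check \(B_{a}^{\psi}\varphi\) evaluated as \(min_{a}([\psi]_{M}\cap cc_{a}(w))\subseteq[\varphi]_{M}\); it reproduces the Figure 1 example.

```python
class PlausibilityModel:
    """Finite plausibility model: leq[a] is a set of pairs (w, v) meaning agent a
    finds w at least as plausible as v; val[w] is the set of atoms true at w."""

    def __init__(self, worlds, leq, val):
        self.worlds, self.leq, self.val = worlds, leq, val

    def cc(self, a, w):
        """Worlds agent a considers possible at w (component of the closure of ~_a)."""
        sym = self.leq[a] | {(v, u) for (u, v) in self.leq[a]}
        comp, frontier = {w}, [w]
        while frontier:
            u = frontier.pop()
            for (x, y) in sym:
                if x == u and y not in comp:
                    comp.add(y)
                    frontier.append(y)
        return comp

    def minimal(self, a, worlds):
        """Most plausible worlds of `worlds` for agent a."""
        return {w for w in worlds
                if not any((v, w) in self.leq[a] and (w, v) not in self.leq[a]
                           for v in worlds)}

    def believes(self, a, w, phi, psi=lambda v: True):
        """(M, w) |= B_a^psi phi, with phi and psi given as predicates on worlds."""
        best = self.minimal(a, {v for v in self.cc(a, w) if psi(v)})
        return all(phi(v) for v in best)


# Figure 1: p holds in w1 only; agent b finds w2 strictly more plausible than w1.
M = PlausibilityModel(
    worlds={"w1", "w2"},
    leq={"a": {("w1", "w1"), ("w2", "w2")},
         "b": {("w1", "w1"), ("w2", "w2"), ("w2", "w1")}},
    val={"w1": {"p"}, "w2": set()},
)
p = lambda v: "p" in M.val[v]
assert M.believes("a", "w1", p)                   # B_a p
assert M.believes("b", "w1", lambda v: not p(v))  # B_b not p
assert M.believes("a", "w1", lambda v: M.believes("b", v, lambda u: not p(u)))  # B_a B_b not p
```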
An action is defined by a _plausibility action model_\(A=\langle\Sigma,\{\leq_{a}\}_{a\in Ag},pre,post\rangle\), which has a similar structure except instead of a set of worlds \(W\), it has a set of events \(\Sigma\) representing possible events that may occur in the action. \(pre\) and \(post\) are functions that assign to each event \(\sigma\in\Sigma\) a precondition and a postcondition in \(\mathcal{L}(At,Ag)\) respectively, where the postcondition of an event is restricted to a conjunction of literals over \(At\) or \(\top\). A _pointed plausibility action model_ is a pair \((A,\sigma),\sigma\in\Sigma\), which describes an action where \(\sigma\) is the true event.
In general, a pointed plausibility model for state or action can point at multiple worlds, such as \((M,W_{d})\) or \((A,\Sigma_{d})\). \(W_{d}\) and \(\Sigma_{d}\) are called the _designated_ worlds or events. For example, given a state \((M,w)\), \((M,W_{d})\) with \(W_{d}=min_{a}(cc_{a}(w))\) represents agent \(a\)'s local perspective of the state, where \(W_{d}\) includes all the worlds that agent \(a\) finds the most plausible. \((M,W_{d})\) is a _global_ state if \(|W_{d}|=1\). Additionally, \((M,W_{d})\vDash\varphi\) iff \((M,w)\vDash\varphi\) for all \(w\in W_{d}\). An action \(act\) updates a state \(s\) through _action-priority update_\(s\otimes act\), which we refer the readers to the details in [1, 1].
## Approach Overview
Our solution requires answering three questions: (1) what representation to use to capture the agents' nested beliefs of the set of feasible plans and the state of execution, (2) how to model execution and communication actions and how they update the state, and (3) how to strategically choose the next action. Our key insight is to extend conditional doxastic logic to describe knowledge bases, and use the knowledge bases to encode the feasible plans and state of execution, so that we can describe agents' beliefs on the plan space instead of their beliefs on state. As a result of this new logic,
Figure 1: Example pointed plausibility model with legend
execution and communication actions can be defined which operate by adding or removing constraints from the knowledge bases. With the state and action models defined, we use an MCTS-based algorithm to simulate forward in the next \(k\)-step horizon to decide what is the best action to take.
For example, Figure 2 captures the agents' nested beliefs on plans from Case 1. Each world in the plausibility model is now a knowledge base that contains the constraints of the task. Since \(H\) (human) finds \(w_{2}\) more plausible, \(H\) believes that constraint C1 does not need to hold. An example action where the robot announces the intent of coffee is shown in Figure 3 (left). The action has a single event whose precondition is that \(R\) (robot) must believe that adding the constraint of coffee is satisfiable given its belief of the current feasible plans. As a result of the action, all worlds now have the constraint of coffee added, including \(w_{2}\), which \(H\) believes in.
## Representing Team's Nested Beliefs on Plans
We describe our representation in two parts: (1) conditional doxastic logic on knowledge bases and its semantics, (2) our task representation and its encoding in the knowledge base.
### Conditional Doxastic Logic on Knowledge Bases
Given a finite set of atomic propositions \(At\), and a finite set of agents \(Ag\), _conditional doxastic logic on knowledge bases_\(\mathcal{L}_{KB}(At,Ag)\) is defined by the BNF:
\[\varphi:=in(c)|entailed(c)|\neg\varphi|(\varphi\wedge\varphi)|B_{a}^{\varphi}\varphi,\]
in which \(a\in Ag\), \(c\in\mathcal{C}(At)\), where \(\mathcal{C}(At)\) is the classical propositional logic \(c:=p|\neg c|(c\wedge c)\), \(p\in At\). Note that the formulation naturally extends to constraint systems with finite-domain variables, which is what we use. We hence refer to \(c\) as a constraint. \(in(c)\) means constraint \(c\) is an explicit member of the constraints in the knowledge base, \(entailed(c)\) means constraint \(c\) is entailed by the knowledge base, and we define \(sat(c):=\neg entailed(\neg c)\), which means constraint \(c\) is satisfiable by the knowledge base.
The plausibility model for \(\mathcal{L}_{KB}(At,Ag)\) is a tuple \(M=\langle W,\{\leq_{a}\}_{a\in Ag},KB\rangle\), where \(W\) and \(\leq_{a}\) are the same as before and \(KB:W\rightarrow\mathbf{KB}_{C(At)}\) is a function that maps each world to an associated knowledge base in \(\mathcal{C}(At)\). When determining the truth of a formula \(\varphi\in\mathcal{L}_{KB}(At,Ag)\) on a pointed plausibility model \((M,w)\), we replace the first rule on \((M,w)\vDash p\) in the inductive rules with the following:
* \((M,w)\vDash in(c)\) iff \(c\in KB(w)\)
* \((M,w)\vDash entailed(c)\) iff \(KB(w)\vDash c\)
We can say the following about the state in Figure 2:
* \(B_{R}in((\mathtt{mug}\wedge\mathtt{coffee})\vee(\mathtt{glass}\wedge\mathtt{juice}))\)
* \(B_{R}B_{H}\neg in((\mathtt{mug}\wedge\mathtt{coffee})\vee(\mathtt{glass}\wedge\mathtt{juice}))\)
* \(\neg B_{R}entailed(\mathtt{mug}\wedge\mathtt{coffee})\)
* \(B_{R}\neg sat(\mathtt{mug}\wedge\mathtt{juice})\wedge B_{R}B_{H}sat(\mathtt{mug}\wedge\mathtt{juice})\)
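These judgments reduce to satisfiability queries over the knowledge base in each world. Below is a hedged sketch of such queries using z3 (the CSP solver used in our experiments); the code is our own illustration, and the boolean shorthands with explicit mutual-exclusion constraints stand in for the finite-domain variables of the actual encoding.

```python
from z3 import Bools, And, Or, Not, Solver, unsat

mug, coffee, glass, juice = Bools("mug coffee glass juice")

def entailed(kb, c):
    """KB |= c  iff  KB together with the negation of c is unsatisfiable."""
    s = Solver()
    s.add(list(kb) + [Not(c)])
    return s.check() == unsat

def satisfiable(kb, c):
    """sat(c) := not entailed(not c): c is consistent with the KB."""
    return not entailed(kb, Not(c))

# Knowledge base of world w1: constraint C1 plus the domain constraints.
kb_w1 = [Or(And(mug, coffee), And(glass, juice)),
         mug == Not(glass), coffee == Not(juice)]

assert not entailed(kb_w1, And(mug, coffee))    # not entailed(mug and coffee) in w1
assert not satisfiable(kb_w1, And(mug, juice))  # not sat(mug and juice) in w1
```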
### Task Representation & Encoding
The set of feasible plans towards achieving the goals of the task forms a _plan library_ for the task. Additionally, actions may be ordered in the plan, such as requiring the container to be picked up first before the drink. Therefore, our task representation is a _temporal plan library_\(\langle V,E,O,C\rangle\), where:
* \(V\) is a set of decision variables with \(domain(v),v\in V\).
* \(E\) is a set of time points with guard condition \(guard(e)\) for each \(e\in E\), a conjunction of decision variable assignments. \(e\) should be executed iff \(guard(e)\) is satisfied.
* \(O\) is a set of ordering constraints \(o=\langle e_{i},e_{j},guard(o)\rangle\), \(o\in O\), requiring time point \(e_{i}\) to precede time point \(e_{j}\) in execution order if its guard condition \(guard(o)\) is satisfied. We assume \(guard(o)\vDash guard(e_{i})\wedge guard(e_{j})\).
* \(C\) is a set of constraints scoped on \(V\).
The time points represent the actual events of taking the actions. In multi-agent case, a _multi-agent temporal plan library_\(\langle V,E,O,C,Ag,f\rangle\) additionally has a set of agents \(Ag\) and a function \(f:E\to Ag\) that maps each time point to an agent that it belongs to. In our formulation, the decision variables do not have ownership. This reflects our equal partner setting in which decisions do not belong to any agent and an announced intent can affect multiple agents' actions.
The plan library represents a set of _candidate subplans_\(G\), where a subplan \(g\in G\) is a full assignment to all the decision variables \(V\). We use \(E_{g}\) and \(O_{g}\) to denote the set of time points and ordering constraints activated by \(g\), i.e. those whose guard conditions are satisfied. A subplan induces a set of total orderings on \(E_{g}\) that satisfies \(O_{g}\), which we denote by \(T_{g}\). A subplan \(g\) is _feasible_ iff all the constraints \(C\) are satisfied, i.e. \(\forall c\in C\), \(g\vDash c\), and there exists a total ordering of \(E_{g}\) that satisfies \(O_{g}\), i.e. \(T_{g}\neq\emptyset\).
**Execution** As execution progresses, decision variables are gradually grounded either implicitly from the execution of time points or explicitly from announcement of intent. The _execution state_ is a tuple \(\langle t,C_{I}\rangle\), where \(t\) is an _execution history_, which is a total ordering of time points \((e_{i},e_{j},...,e_{k})\) that have been executed, and \(C_{I}\) is the set of intents that have been announced during execution. An intent, in its most general form, can be an arbitrary constraint scoped on \(V\), but is commonly an assignment to a specific decision variable. The subplans that are _feasible with respect to \(\langle t,C_{I}\rangle\)_ include any feasible subplan \(g\) such that there exists \(t_{g}\in T_{g}\), where
Figure 3: intent announcement (left) & resulting state (right)
Figure 2: Plausibility model for nested beliefs on plans
\(t\) is a prefix of \(t_{g}\), and \(g\) satisfies \(C_{I}\). Execution _fails_ when there exists no feasible subplan with respect to \(\langle t,C_{I}\rangle\). Execution _succeeds_ when there exists a feasible subplan \(g\) with respect to \(\langle t,C_{I}\rangle\) such that \(t\in T_{g}\). Note that execution can succeed without ever converging to a unique subplan, and it is possible for further time points to be executed and move away from the success state.
**Encoding in Knowledge Base** We encode the plan library and the execution state in the knowledge base, so that at any point during execution, the knowledge base contains all the feasible subplans with respect to \(\langle t,C_{I}\rangle\). We ensure that the knowledge base is consistent iff execution has not failed.
The variables of the encoding include (1) a discrete variable for each decision variable \(v_{i}\in V\) with the same domain \(domain(v_{i})\), and (2) a boolean variable for each time point \(e_{i}\in E\) with domain \(\{\texttt{T},\texttt{F}\}\), representing if the time point is executed. We add the following constraints to the knowledge base prior to execution:
* The constraints \(C\) as defined in the plan library.
* For each time point \(e_{i}\), \(((e_{i}=\texttt{T})\to guard(e_{i}))\), i.e. if time point \(e_{i}\) is executed, its guard condition must hold.
* Negation of _nogoods_[10] that represent any combination of choices of \(V\) that would result in an inconsistent ordering of time points. This can be computed from the ordering constraints \(O\).
During execution, we may additionally add to the KB:
* Announced intents \(C_{I}\).
* For each execution of time point \(e_{j}\), a conjunction of (1) assignment of variable \(e_{j}\) to T, and (2) the negation of the guard condition \(\neg guard(o)\) for any ordering constraint \(o=\langle e_{i},e_{j},guard(o)\rangle\) in which the predecessor \(e_{i}\) has not been executed by the time \(e_{j}\) is executed.
The last rule ensures that for any ordering constraint \(o=\langle e_{i},e_{j},guard(o)\rangle\), if the guard condition holds, then if \(e_{j}\) is executed, \(e_{i}\) must also have been executed, hence satisfying the ordering constraint. Note that we only encode the set of time points that have been executed, instead of their actual order of execution. With the above encoding, given a knowledge base KB, execution fails iff KB \(\vDash\bot\). Execution succeeds iff there exists a subplan \(g\), i.e. a full assignment of \(V\), such that KB \(\wedge g\nvDash\bot\) and \(\forall e_{i}\in E_{g}\), KB \(\vDash(e_{i}=\texttt{T})\). We denote the success condition by \(suc_{(V,E)}\), and say that execution succeeds iff KB \(\vDash\mathit{suc}_{(V,E)}\). Additionally, \((M,w)\vDash\mathit{suc}_{(V,E)}\) iff \(KB(w)\vDash\mathit{suc}_{(V,E)}\).
Take Case 1 as an example, the knowledge base contains discrete variables \(container\) with domain \(\{\texttt{mug},\texttt{glass}\}\) and \(drink\) with domain \(\{\texttt{coffee},\texttt{juice}\}\), and boolean variables \(e_{mug}\), \(e_{glass}\), \(e_{coffee}\), \(e_{juice}\) representing the events of picking up each item. Using \(\texttt{mug}\) as a shorthand for \((container=\texttt{mug})\) and similarly for others for the purpose of decluttering, the constraints include:
1. \((\texttt{mug}\wedge\texttt{coffee})\vee(\texttt{glass}\wedge\texttt{juice})\)
2. \((e_{mug}=\texttt{T})\rightarrow\texttt{mug}\), similarly for other time points
Note that we use the same shorthands throughout the rest of the paper. In this example, when the robot picks up the mug, constraint \((e_{mug}=\texttt{T})\) is added to the knowledge base. From 2 above, we now have KB \(\vDash\texttt{mug}\), and consequently from 1, we have KB \(\vDash\texttt{coffee}\), which limits the human's choice of drink to coffee. Picking up juice is no longer feasible since KB \(\wedge(e_{juice}=\texttt{T})\vDash\bot\). Consider another case where the robot's action must precede the human's action, i.e. there are ordering constraints \(\langle e_{mug},e_{coffee},\texttt{mug}\wedge\texttt{coffee}\rangle\), \(\langle e_{glass},e_{coffee},\texttt{glass}\wedge\texttt{coffee}\rangle\), etc. If the human picks up the coffee before the robot takes any action, then \((e_{coffee}=\texttt{T})\wedge\neg(\texttt{mug}\wedge\texttt{coffee})\wedge \neg(\texttt{glass}\wedge\texttt{coffee})\) is added, resulting in an inconsistent knowledge base.
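As a concrete illustration, the following is a hedged z3 sketch of this Case 1 encoding; the enum sorts and variable names are our own rendering of the finite-domain variables, not code from the paper. It reproduces the propagation described above.

```python
from z3 import EnumSort, Const, Bool, Solver, Implies, And, Or, Not, unsat

# Finite-domain decision variables and boolean time-point variables.
Container, (MUG, GLASS) = EnumSort("Container", ["mug", "glass"])
Drink, (COFFEE, JUICE) = EnumSort("Drink", ["coffee", "juice"])
container, drink = Const("container", Container), Const("drink", Drink)
e_mug, e_glass, e_coffee, e_juice = (Bool(n) for n in
                                     ("e_mug", "e_glass", "e_coffee", "e_juice"))

kb = [
    # 1. constraint C1
    Or(And(container == MUG, drink == COFFEE),
       And(container == GLASS, drink == JUICE)),
    # 2. executing a time point forces its guard condition
    Implies(e_mug, container == MUG), Implies(e_glass, container == GLASS),
    Implies(e_coffee, drink == COFFEE), Implies(e_juice, drink == JUICE),
]

def check(constraints):
    s = Solver()
    s.add(constraints)
    return s.check()

# The robot picks up the mug: (e_mug = T) is added to the knowledge base ...
kb.append(e_mug)
# ... which now entails drink = coffee (the KB plus its negation is unsat),
assert check(kb + [Not(drink == COFFEE)]) == unsat
# ... so the human picking up the juice would make the knowledge base inconsistent.
assert check(kb + [e_juice]) == unsat
```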
## Dynamic Model of Evolution
In this section, we describe how the model evolves as a result of execution or communication actions. We first introduce the plausibility action model for our extended logic, then describe how to model each type of action. In this work, we assume that agents observe all actions that are taken, that is, all actions are public.
### Plausibility Action Model for Knowledge Bases
A _plausibility action model_\(A\) for \(\mathcal{L}_{KB}(At,Ag)\) is a tuple \(\langle\Sigma,\{\leq_{a}\}_{a\in Ag},pre,post\rangle\), where \(\Sigma\) and \(\leq_{a}\) are the same, and \(pre\) and \(post\) are functions that map each event to a formula in \(\mathcal{L}_{KB}(At,Ag)\). The postcondition is restricted to a conjunction of \(in(c)\), which adds constraint \(c\) to the knowledge base, and \(\neg in(c)\), which removes constraint \(c\) from the knowledge base if it exists, as well as \(\top\), i.e. nothing changes. For this paper, we further restrict the postcondition to be either \(in(c)\) or \(\top\), i.e. adding at most one constraint to the knowledge base. The _action-priority update_ updates the knowledge bases as described accordingly.
**Execution Action** An execution action is the action of an agent executing a time point, such as the robot picking up the mug. Recall that in our setting, each time point is assigned to an agent who can execute it. Given that the time point being executed is \(e_{i}\in E\), and the agent who executes it is \(a=f(e_{i})\), the simplest case of execution of time point \(e_{i}\) that has no potential predecessors is shown in Figure 4 (left). We assume agents are rational and for agent \(a\) to execute time point \(e_{i}\), it needs to believe that executing \(e_{i}\) is feasible, i.e. \(B_{a}sat(e_{i}=\texttt{T})\). All agents observing the action also observe the truth of agent \(a\) having such belief. For the postcondition, as \(e_{i}\) is executed, the constraint \((e_{i}=\texttt{T})\) is added to the knowledge base.
When there are potential predecessors for \(e_{i}\), we need to make sure the corresponding ordering constraints are satisfied. The postcondition of the event should always add
Figure 4: Action representations
\((e_{i}=\mathsf{T})\) to the knowledge base, and for any ordering constraint \(o=\langle e_{j},e_{i},guard(o)\rangle\), add \((\neg guard(o))\) on condition that \(\neg entailed(e_{j}=\mathsf{T})\), that is, if \(e_{j}\) has not been executed. While a more succinct action model specification is possible [20], we use the standard form defined above by taking the cross product of all the predecessors, and creating equi-plausible events for them as shown in Figure 5. Even though the size of the action model is exponential in the number of potential predecessors each time point has, because the preconditions of these events are mutually exclusive, the model size for the updated state will not increase as a result of the action update.
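The event enumeration can be sketched as follows; this is our own illustrative rendering (formulas kept as plain strings), not the action-model code used by the planner.

```python
from itertools import product

def execution_events(e_i, agent, predecessors):
    """Enumerate the equi-plausible events for executing time point e_i
    (cf. Figure 5): one event per combination of each potential predecessor
    being already executed or not.  `predecessors` maps a predecessor time
    point e_j to the guard of its ordering constraint."""
    events = []
    for done_mask in product([True, False], repeat=len(predecessors)):
        pre = [f"B_{agent} sat({e_i}=T)"]
        post = [f"in({e_i}=T)"]
        for (e_j, guard), done in zip(predecessors.items(), done_mask):
            pre.append(f"{'entailed' if done else 'not entailed'}({e_j}=T)")
            if not done:  # predecessor missing: its guard must be false
                post.append(f"in(not ({guard}))")
        events.append({"pre": pre, "post": post})
    return events

# e.g. the human executing e_coffee when e_mug and e_glass are potential predecessors:
events = execution_events("e_coffee", "H",
                          {"e_mug": "mug and coffee", "e_glass": "glass and coffee"})
assert len(events) == 4  # 2^2 combinations, with mutually exclusive preconditions
```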
**Intent Announcement** The model for an intent announcement action is shown in Figure 4 (middle). For agent \(a\) to announce the intent, it must believe that it is satisfiable, hence the precondition \(B_{a}sat(c)\). The intent is added as a postcondition. Figure 3 shows an example intent announcement.
**Explanation** The model for an explanation action where agent \(a\) explains its belief of \(\varphi\in\mathcal{L}_{KB}(At,Ag)\) is shown in Figure 4 (right). The precondition says that agent \(a\) has to believe \(\varphi\), i.e. agents cannot lie about their belief. To the other agents, the explanation is essentially a public announcement that agent \(a\) believes \(\varphi\). This means that whether a particular agent adopts the explainer's belief depends on the conditional belief pre-encoded in the initial pointed plausibility model, which specifies how an agent's belief gets revised when a new piece of evidence is received.
An example explanation action where the robot explains constraint C1 is shown in Figure 6. Based on the pointed plausibility model in Figure 2, upon the announcement that the robot in fact believes C1 holds, \(w_{2}\) is eliminated as it does not satisfy the precondition, and the human is left with \(w_{1}\) in which C1 holds. Depending on the initial conditional belief, it is also possible to have situations where the human does not trust the robot and does not adopt its belief.
In this paper, we restrict the explained formula to be of the BNF form \(\varphi:=\neg\varphi|B_{a}\varphi|in(c)\), where \(a\in Ag\), \(c\in\mathcal{C}(At)\). This simplifies the explanations in that (1) the explained formula cannot be arbitrarily complex, such as \(B_{a}in(c)\to B_{b}in(c)\), and (2) the explanation must be about whether the knowledge base contains a constraint or not, instead of the satisfiability or entailment of an arbitrary constraint. This is similar in spirit to the idea of abductive explanations, where we want to give an explanation \(c\) such that together with the existing theory \(T\), it explains an explanandum \(O\), i.e. \(T\cup\{c\}\models O\). In this case, what is satisfiable or entailed is often the explanandum, and what constraints should or should not be in the theory is what we explain.
**Question-Asking** An agent can ask another agent about something that it is uncertain of. Since we assume public actions, the answer is observed by all agents. Given that agent \(a\) is asked about its belief on formula \(\varphi\in\mathcal{L}_{KB}(At,Ag)\), the pointed plausibility action model is shown in Figure 7. We place the same restriction on \(\varphi\) as in the explanation actions.
Using Case 2 as an example, the robot does not know which drink the human has decided on, which is represented by the pointed plausibility model in Figure 8. The robot can ask a question about the human's belief on \(in(\mathtt{coffee})\), i.e. whether its intent is to take coffee. The resulting state would have the double-headed arrow in the middle labeled with \(R\) removed, i.e. the robot will be able to distinguish the human's intent.
## Online Execution Problem
We assume that execution is asynchronous, all actions are public, and communication has a cost. We also assume the discrepancies in beliefs come only from the agents' initial beliefs on constraints \(C\), i.e. they share the belief on the rest of the plan library such as the ordering constraints and the guard for the time points. In this paper, we assume that a task involves two agents (e.g. robot and human), though there is no theoretical barrier to applying it to more agents. The online execution problem from a single agent's perspective,
Figure 8: Example pointed plausibility model where the robot is uncertain about the human’s choice
Figure 5: Example execution action of \(e_{i}\) with ordering constraints \(\langle e_{j},e_{i},guard(o_{j})\rangle\) and \(\langle e_{k},e_{i},guard(o_{k})\rangle\)
Figure 6: Explanation action (left) and resulting state (right)
Figure 7: Question-asking action
say agent \(a\), involves taking the input of the following prior to execution:
* A multi-agent temporal plan library \(\langle V,E,O,\{\},Ag,f\rangle\).
* A pointed plausibility model \(s_{0}=(M,W_{d})\) capturing agents' initial nested beliefs on constraints \(C\) from agent \(a\)'s perspective, such that \(W_{d}=cc_{a}(w),\forall w\in W_{d}\).
Note that the constraint set \(C\) is empty in the plan library, as it is captured by the input \(s_{0}\). \(W_{d}\) includes any world that agent \(a\) finds plausible (not necessarily most plausible), and we assume that \((M,W_{d})\) captures the ground truth state \((M,w^{*})\) as one of its possibilities, so that the agent's belief can also be revised if needed. During execution, the agent receives the input of a stream of actions that are taken by itself or others in real-time, including execution and communication actions. Each action triggers a callback and the agent outputs an action to be taken or none.
**Overall Algorithm** Upon receiving an action, our agent determines how it should act next - either take an execution action, communicate, or wait for others to act. It simulates forward to predict the utility of each possible action, e.g. if others may follow up with incorrect actions, or if many communication actions will be needed. The algorithm draws insight from epistemic planning for implicit coordination [1] and relies on the agent's ability to take others' perspectives to predict their actions.
The overall algorithm is illustrated in Algorithm 1, which we name it _Epistemic Pike_ (_EPike_) after _Pike_[11]. Prior to execution, the agent compiles the initial state \(s\) from \(s_{0}\) and the plan library as described in the knowledge base encoding section (line 1). Upon receiving an action \(act^{\prime}\), the updated state (line 3) is checked for several conditions. If the agent believes that execution has failed (line 4), then it explains the failure when some agent might not know (line 5 - 6). If the agent is unsure about whether execution has failed (line 7), then it sees if it can ask someone to distinguish it (line 8). If both conditions do not apply, then execution has not failed, and the agent checks if execution has succeeded (line 9). If so, then the agent explains it when some agent might not know (line 10 - 11). If execution has not succeeded, then the agent searches for the next action to take, if any, to progress toward completing the task (line 13). Note that when a question-asking action is taken, we wait until the answer is announced before encoding the answer as an explanation action that gets observed.
```
Input: \(V\), \(E\), \(O\), \(Ag\), \(f\), \(s_{0}=\langle M,W_{d}\rangle\), agent \(a\); Online: \(act^{\prime}\), an observed action
Output: \(act\), an action or \(None\)
1  Offline: \(s\leftarrow\textsc{CompileInitialState}(s_{0},V,E)\)
2  Online upon observing \(act^{\prime}\):
3    \(s\gets s\otimes act^{\prime}\)
4    if \(s\vDash B_{a}entailed(\bot)\) then
5      if \(s\nvDash B_{a}(\wedge_{i\in Ag}B_{i}entailed(\bot))\) then
6        return ExplainFailure\((s)\)
7    else if \(s\nvDash B_{a}\neg entailed(\bot)\) then
8      return AskIfFailure\((s)\)
9    else if \(s\vDash B_{a}\mathit{suc}_{(V,E)}\) then
10     if \(s\nvDash B_{a}(\wedge_{i\in Ag}B_{i}\mathit{suc}_{(V,E)})\) then
11       return ExplainSuccess\((s)\)
12   else
13     return SearchAction\((s)\)
14   return \(None\)
```
**Algorithm 1** Online Execution for Ego Agent \(a\)
Each online subroutine in Algorithm 1 calls an MCTS algorithm with a different configuration. The MCTS algorithm simulates the team's possible execution in the next \(k\)-step horizon, and based on the result of the simulations, the agent decides if it should take an action now and which action to take. Our MCTS algorithm can be configured on (1) the termination conditions, including the horizon \(k\), (2) which types of actions to consider for both the ego agent and the other agents, and (3) the penalties for communication actions, since we assume communication has a cost. We add a fifth type of action, _noop_, to represent agent taking no action and waiting for others to act.
For SearchAction, a node terminates if its state \(s\) satisfies either \(s\vDash entailed(\bot)\), which gives a utility of 0 (execution fails), or \(s\vDash suc_{(V,E)}\), which gives a utility of 1 (execution succeeds), or if simulation reaches a horizon of \(k\), which gives a utility of 1 (execution has not failed). Note that only execution actions increment the horizon, since we care about the outcome after the next \(k\) physical actions. During search, we consider all five types of actions (including noop) from all agents, except for the intent announcement and question-asking actions from the other agents. They can be reasonably omitted to reduce the search space, since they may be unpredictable and ignoring them does not prevent the simulated execution from reaching the success state.
For the rest of the subroutines, a node terminates if its state \(s\) satisfies \(s\vDash\wedge_{i\in Ag}B_{i}entailed(\bot)\) for ExplainFailure, \(s\vDash B_{a}entailed(\bot)\lor B_{a}\neg entailed(\bot)\) for AskIfFailure, and \(s\vDash\wedge_{i\in Ag}B_{i}suc_{(V,E)}\) for ExplainSuccess, all giving a utility of 1. For ExplainFailure and ExplainSuccess, only explanation and question-asking actions of the ego agent are considered. Asking a question may still be useful if the agent is uncertain about what others currently believe. For AskIfFailure, only question-asking actions for the ego agent are considered. In these cases, since the ego agent is just looking to inform others or ask a question, it is reasonable to ignore what other agents may do. To penalize communication, we set a penalty factor of 0.9 for explanation actions and question-asking actions, and 0.85 for intent announcement actions, though the values may change depending on applications. The penalty is a multiplicative factor applied to the utility of the node. Execution actions and the noop action are not penalized.
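The per-subroutine settings can be summarized in a small configuration object; the sketch below uses field names and groupings of our own choosing and is not the planner's actual interface.

```python
from dataclasses import dataclass, field

@dataclass
class MCTSConfig:
    """MCTS configuration for one subroutine: horizon, which action types
    each side may take, and multiplicative penalties on communication."""
    horizon: int = 3  # k execution steps (SearchAction)
    ego_actions: tuple = ("execute", "noop", "intent", "explain", "ask")
    other_actions: tuple = ("execute", "noop", "explain")
    penalties: dict = field(default_factory=lambda: {
        "explain": 0.9, "ask": 0.9, "intent": 0.85,  # communication has a cost
        "execute": 1.0, "noop": 1.0,                  # physical and no-op actions are free
    })

SEARCH_ACTION   = MCTSConfig()
EXPLAIN_FAILURE = MCTSConfig(ego_actions=("explain", "ask"), other_actions=())
EXPLAIN_SUCCESS = MCTSConfig(ego_actions=("explain", "ask"), other_actions=())
ASK_IF_FAILURE  = MCTSConfig(ego_actions=("ask",), other_actions=())
```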
**Search Tree** We describe the expansion of the search tree using SearchAction as an example, before discussing the details of MCTS. A partially expanded search tree is shown in Figure 9. There are four types of nodes in the tree: root decision node (bold circle), split nodes (diamonds), predict nodes (squares), and decision nodes (circles). Each node has
its state \(s_{i}\) and has a utility score in \([0,1]\).
The root decision node is only used as the root of the tree and finds the best action to take for the ego agent (including noop). Given input of \(s=(M,W_{d})\) from the subroutine, the state of the root decision node is the ego agent's current belief of the state \(s_{ego}=(M,min_{a}(W_{d}))\). The node branches on all the possible actions the ego agent can take based on its current belief, creating children split nodes. We discuss the generation of possible actions in the Appendix. If there exist children with positive scores, the agent chooses the action that leads to the child with the maximum score, and prefers non-noop actions when there is a tie.
The split node represents the state after the application of the action, which may point at multiple worlds. The split node splits the state into a set of global states where only one unique world is pointed at, and answers the question of: of all the possible states that the action can lead to, what is the worst-case situation that can happen.
Each predict node predicts what may happen from the global state. To do so, it expands into a set of decision nodes for the agents and predicts how each agent contributes to progressing the state toward success. If the parent split node results from an agent taking noop, then the predict node does not expand on the decision node for the same agent, that is, the agent has to wait for someone else to take action.
Each agent's decision node expands on the set of possible actions the agent can take. Assuming the parent predict node has state \(s=(M,w)\), this will be the set of actions that the agent finds feasible from its perspective \((M,min_{a}(cc_{a}(w)))\). For each action, we expand on it both from the agent's subjective view of the state (a thick arrow) to determine how good the action is from the agent's perspective, and from the objective view of the state (a single arrow), i.e. the same perspective as the parent predict node, to determine how good the action actually is. The root decision node can be considered as a special decision node where the subjective and objective views are the same. We assume that the agent only takes the best actions from its perspective, i.e. the ones with the highest subjective score that is greater than 0, and has a uniform probability of choosing any action from that set, with the exception that if there exist perfect execution actions with subjective scores of 1, then the agent would not consider taking noop action. For each node, we can determine what perspective the state is viewed from by traversing the thick arrows from the root, which represent perspective shifts. A node reaches termination state if it satisfies the termination condition defined earlier.
**Tree Policy & Simulation (Default) Policy** Regarding the tree policy, for each node, we compute the UCB1 score of the children to select which one to descend down the tree. For the split node, we use the negative score of each child as the exploitation term, to prioritize simulations of the worst-case situation. For the decision node, we use the subjective score of each action as the exploitation term, to prioritize simulations of the actions that are likely taken by the agent. Once the action is selected, out of the two children split nodes from the objective view and the subjective view, we select the one that is less expanded.
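For reference, the UCB1 selection score has the usual form; in our experiments the exploration constant is 4 for SearchAction and \(\sqrt{2}\) for the other subroutines. The sketch below is generic, and the exploitation term passed in is whichever quantity the node type prescribes (the negative child score for split nodes, the subjective action score for decision nodes).

```python
import math

def ucb1(exploit, child_visits, parent_visits, c=math.sqrt(2)):
    """UCB1 selection score: exploitation term plus exploration bonus."""
    if child_visits == 0:
        return float("inf")  # always try unvisited children first
    return exploit + c * math.sqrt(math.log(parent_visits) / child_visits)
```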
For the simulation policy, at each decision node, we only consider the execution actions of the agent, and an agent randomly selects an action with uniform probability if it is feasible from its perspective. The predict node goes through each agent in random order to find an action to simulate forward. If none exists, simulation ends with a score of 0. This means that in the ideal case where all agents share common knowledge of plans, simulation always returns a score of 1.
**Back Up** We take a more customized approach to computing the utility score of each node during the back-up phase. The split node takes the minimum score of the children predict nodes since it cares about the worst-case outcome, similar to the work of (Reifsteck et al., 2019), then multiplies it by the penalty factor of the action that leads to the split node.
The decision node computes the expected utility of the agent's action (including noop) towards contributing to the progression of the task from the perspective of the parent predict node, denoted by \(\mathbb{E}_{a}\) for agent \(a\). Given that the subjective (objective) score of an action \(act\) is \(sc_{act}\) (\(oc_{act}\)) and the set of best actions for agent \(a\) is \(Act\), the probability of action \(act\in Act\) being taken, denoted by \(P_{a}(act)\), is \(sc_{act}/\sum_{act^{\prime}\in Act}sc_{act^{\prime}}\). The utility score of the decision node is then \(\sum_{act\in Act}P_{a}(act)\cdot oc_{act}\). Additionally, we set the objective score of the noop action to 0, since it does not contribute to the progression of the task.
The score of the predict node is computed as:
\[\left(1-\prod_{a\in Ag}P_{a}(noop)\right)\left(\sum_{a\in Ag}\frac{\mathbb{E}_{a}}{\sum_{i\in Ag}\left(1-P_{i}(noop)\right)}\right),\]
which is the probability of at least some agent will act, multiplied by the expected utility of action taken by the first agent who gets to act, since execution is asynchronous. Given that some agent will act, we assume that the probability of agent \(a\) acting first is proportional to its probability of taking a non-noop action, i.e. \(1-P_{a}(noop)\). Therefore, the expected utility is the sum of the normalized probability of each agent \(a\) acting first \(\frac{1-P_{a}(noop)}{\sum_{i\in Ag}1-P_{i}(noop)}\) multiplied by the expected utility of agent \(a\) taking a non-noop action \(\frac{\mathbb{E}_{a}}{1-P_{a}(noop)}\). This
Figure 9: Partially expanded search tree
means that if every agent prefers a noop action, then the predict node has a score of 0, i.e. execution is stuck. Note that in reality, agents may decide to act if nobody else does instead of waiting forever. We do not take into account such interactive behavior, but assume this is a reasonable way to approximate the utility of the predict node.
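These back-up rules amount to a few lines of arithmetic; the sketch below mirrors the formulas above, with function and argument names of our own choosing.

```python
def decision_node_score(subjective, objective):
    """Expected utility of an agent's choice among its best actions.
    `subjective` / `objective` map action -> score; 'noop' contributes 0."""
    total = sum(subjective.values())
    probs = {act: sc / total for act, sc in subjective.items()}
    score = sum(p * (0.0 if act == "noop" else objective[act])
                for act, p in probs.items())
    return probs, score

def predict_node_score(agents):
    """`agents` maps agent -> (P_a(noop), E_a) taken from its decision node."""
    p_all_noop = 1.0
    for p_noop, _ in agents.values():
        p_all_noop *= p_noop
    denom = sum(1.0 - p_noop for p_noop, _ in agents.values())
    if denom == 0.0:
        return 0.0  # every agent prefers to wait: execution is stuck
    return (1.0 - p_all_noop) * sum(e for _, e in agents.values()) / denom
```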
**Implicit Belief Revision** We consider explanations to be an _explicit_ way of revising others' beliefs. We consider it an _implicit_ belief revision when an unexpected execution action or intent announcement action causes a less plausible world of an agent to be promoted to be a most plausible world. We assume agents do not wish to surprise others and penalize an action that causes implicit belief revision with a score of 0. This makes sure that our agent always explains its action before taking it if it is not expected by others. However, during execution, implicit belief revision may still occur, such as when others take an action unexpected by our agent which revises our agent's belief.
**Performance Optimization** The performance of our algorithm largely depends on the speed of solving constraint satisfaction problems (CSPs) from the knowledge bases. To optimize the performance, we implement incremental checking and caching for CSPs, since the CSPs are largely similar throughout the process. At the decision node, we lazily expand the actions in the order of execution actions, noop action, then communication actions. For example, communication actions do not need to be expanded if higher-priority actions have a better score than the maximum possible score for a communication action due to penalty.
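A hedged sketch of the incremental-checking-and-caching idea with z3 is shown below; the class, its interface, and the string-based cache key are our own simplifications, not the implementation used in the experiments.

```python
from z3 import Solver, sat

class IncrementalKB:
    """One z3 solver per knowledge base; push/pop frames make the many
    closely related satisfiability queries during search cheap, and a
    cache avoids repeating identical queries."""

    def __init__(self, base_constraints):
        self.solver = Solver()
        self.solver.add(base_constraints)
        self.cache = {}

    def is_sat_with(self, extra):
        key = tuple(sorted(str(c) for c in extra))
        if key not in self.cache:
            self.solver.push()           # temporary frame for this query
            self.solver.add(extra)
            self.cache[key] = (self.solver.check() == sat)
            self.solver.pop()            # restore the base knowledge base
        return self.cache[key]
```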
## Experiment Results
We describe our experiment results on the success rate and the scalability of our algorithm EPike compared to Pike [10]. We implemented our own Pike as a naive version of EPike that assumes what it believes is believed by all, that is, it may falsely assume common knowledge when there is not. Note that the original Pike supports some additional functionalities not accounted for by this paper, such as scheduling. We use z3 as our CSP solver [14]. We use an exploration parameter of 4 for SearchAction, and \(\sqrt{2}\) for the rest. We use a horizon of \(k=3\) for SearchAction. We limit our focus to a 2-agent team. The experiments are run by instantiating two EPike or Pike agents that execute together in a task. We measure the runtime in seconds for one agent being the ego agent, who we assume gets to be the first agent to act if it decides to after each action is taken.
**Success Rate** Since MCTS is an anytime algorithm, we evaluate the success rate and failure rate of EPike and Pike under different timeouts (in seconds) for MCTS (or until it reaches 1000 simulation iterations, whichever comes first). The experiments are run for the domains of (1) **Breakfast**, which includes variations of our motivating example, (2) **Word Puzzles**, (3) **Search-and-Rescue (SAR)**, and (4) randomly generated sequential tasks. We run each hand-crafted test case 20 times for both Pike and EPike with no timeout, with results shown in Figure 10. We generate random test cases that vary in the size of the task (number of variables \(V\)) and the number of constraints that agents differ on, ranging from [0, 3], for 10 cases per condition, and report the result after running each case twice for both Pike and EPike under different timeouts, as shown in Figure 11. Note that it is possible for execution to neither succeed nor fail, in which case execution hangs as no agent plans to act. This could be because (1) the MCTS algorithm is stopped by the timeout before it finds a feasible next step, (2) EPike believes execution is bound to fail no matter its action, such as when the other agent would not trust its explanation, or (3) EPike falsely believes that the other agent will act. In practice, we can adopt mitigations such as allowing EPike to take the next best action after having waited for a long time.
From Figure 11, we see that as timeout increases, EPike's success rate increases, especially for larger-sized tasks, and is higher than Pike's success rate given enough time. Meanwhile, its failure rate is consistently low and always lower than Pike. This shows that EPike is conservative, and when it does not succeed, it is mainly because it has not found a good action to take within the timeout, but it does not take an incorrect action rashly as Pike tends to do. This is consistent with the result of the hand-crafted test cases in Figure 10.
**Scalability** To see how EPike scales, we run the MCTS algorithm for a fixed number of 500 iterations to see how long it takes to reach a certain level of certainty under different model parameters, such as the size of the initial plausibility model (embodied by the number of constraints that agents differ on, _Diff_ shown by hue of the plot), the size of
Figure 11: Success rate and failure rate of EPike and Pike on randomly generated test cases
Figure 10: Success rate and failure rate of EPike and Pike on a set of hand-crafted test cases
the task (the number of variables \(V\), _Num Variables_), concurrency level of the actions (the number of ordering constraints \(O\), _Num Orders_), and the number of constraints \(C\) in the task (_Num Constraints_). We measure the average runtime for each callback for Pike and EPike. As shown in Figure 12, runtime for EPike is heavily affected by how much agents' beliefs differ, and it also increases as the task size increases. Pike takes less time than EPike, as expected, but when under common knowledge, EPike's runtime is closer to Pike and can also finish relatively quickly.
## Related Work
**Human-Robot Teaming** Our work is related to human-robot teaming, as it considers the collaborative process of humans and robots working together to achieve tasks. Work in this field focuses primarily on recognizing and adapting to humans' intent and, in some cases, communicating about intent, which we inherit in our work. Pike inspired us to take a constraint-based approach for concurrent intent recognition and adaptation, in which a library of precompiled plans is encoded in a knowledge base Levine and Williams (2018). Pike is later extended to a probabilistic setting, called _Riker_ Levine (2019), where the robot can further ask the human about their intent Broida (2021). Other than constraint-based approaches, work has been proposed using techniques from classical planning and MDPs, such as PReTCIL Freedman and Zilberstein (2017), which uses probabilistic plan recognition, and NOPA Puig et al. (2023), which leverages inverse planning for goal recognition. In Unhelkar et al. (2020), a human behavior model is learned through semi-supervised learning and incorporated into the robot's POMDP model that supports bi-directional communication on intent. However, most work assumes common knowledge of the task, as opposed to implementing an explicit Theory of Mind.
**Epistemic Planning** In the field of epistemic planning, there are two main categories of approaches - the semantic approach based on Dynamic Epistemic Logic (DEL) Bolander and Andersen (2011); Le et al. (2018); Fabiano et al. (2020) and the symbolic approach Muise et al. (2015). Our work leverages the DEL approach and carries over their insight on how to model announcement and question-asking actions. In particular, we take an implicit coordination approach, following from the work of Engesser et al., where the agent takes into account the spontaneous cooperation of other agents in achieving the goal, which requires recursive perspective-taking in order to predict their actions Engesser et al. (2017). In Bolander et al. (2018), the authors further discussed the impact of eager and lazy agents in the framework, and in Reifsteck et al. (2019), an MCTS algorithm is developed that shares similar insights with our work. Compared to the work by Engesser et al., we differ in that our framework based on conditional doxastic logic allows the modeling of false beliefs and the revision of false beliefs, and our explanations refer directly to the plan space instead of states as a result of extending the logic to knowledge bases.
**XAIP** Our work is related to Explainable AI in Planning (XAIP), especially to the work on plan explanations taking into account the differences in agents' mental models. In Chakraborti et al. (2017), model reconciliation is proposed, which allows robots to explain the model differences upon misalignment between the human's mental model of the robot and the robot's actual model. In Vasileiou et al. (2022), a logic-based approach to model reconciliation is proposed, where the planning problem is encoded as a SAT problem using SatPlan, and the model differences are computed with respect to the human and the robot's knowledge bases. Since these approaches consider the entire PDDL planning model, plan explanations go beyond explaining the differences in the initial states and can also address agents' discrepancies in goal states and action models. In Chakraborti et al. (2019), model reconciliation is balanced with explicable planning, which allows robots to find (potentially sub-optimal) plans that are expected by the human based on the human's understanding of the robot Zhang et al. (2017), and in Sreedharan et al. (2020), the two are unified in an expectation-aware planning framework with additional explanatory actions. This inspired us in thinking about how the robot can balance its adaptation and communication with the human. However, most of their work considers humans as observers without much human-robot cooperation. In Zahedi et al. (2022), the authors pointed out the need for a richer mental modeling framework that allows human-robot collaboration, a gap that our work provides a viable way of filling.
Another line of work from Shvo et al. Shvo et al. (2020) provides explanations by considering agents' Theory of Mind represented using epistemic logic. In particular, to resolve the human's misconceptions about plans, a symbolic epistemic planner RP-MEP Muise et al. (2015) is used for the robot to either take actions to align the true state with the human's belief or explain the true state to the human Shvo et al. (2022). However, their explanations are also about states rather than plans.
## Conclusion
In this work, we combine insights from epistemic logic and knowledge-base encoding of plans to allow agents to understand discrepancies in their beliefs of feasible plans. We develop an online execution algorithm Epistemic Pike for the agent to dynamically plan its actions to adapt to others and communicate to resolve any discrepancy. We show that our agent is effective in working in teams where a shared mental model of plans cannot be guaranteed. A natural next step is to consider cases where actions are partially observable.
Figure 12: Runtime of EPike and Pike on random tests
## Acknowledgements
This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001120C0035. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Defense Advanced Research Projects Agency (DARPA).
|
2308.04288 | Cloth2Tex: A Customized Cloth Texture Generation Pipeline for 3D Virtual
Try-On | Fabricating and designing 3D garments has become extremely demanding with the
increasing need for synthesizing realistic dressed persons for a variety of
applications, e.g. 3D virtual try-on, digitalization of 2D clothes into 3D
apparel, and cloth animation. It thus necessitates a simple and straightforward
pipeline to obtain high-quality texture from simple input, such as 2D reference
images. Since traditional warping-based texture generation methods require a
significant number of control points to be manually selected for each type of
garment, which can be a time-consuming and tedious process. We propose a novel
method, called Cloth2Tex, which eliminates the human burden in this process.
Cloth2Tex is a self-supervised method that generates texture maps with
reasonable layout and structural consistency. Another key feature of Cloth2Tex
is that it can be used to support high-fidelity texture inpainting. This is
done by combining Cloth2Tex with a prevailing latent diffusion model. We
evaluate our approach both qualitatively and quantitatively and demonstrate
that Cloth2Tex can generate high-quality texture maps and achieve the best
visual effects in comparison to other methods. Project page:
tomguluson92.github.io/projects/cloth2tex/ | Daiheng Gao, Xu Chen, Xindi Zhang, Qi Wang, Ke Sun, Bang Zhang, Liefeng Bo, Qixing Huang | 2023-08-08T14:32:38Z | http://arxiv.org/abs/2308.04288v1 | # Cloth2Tex: A Customized Cloth Texture Generation Pipeline for 3D Virtual Try-On
###### Abstract
Fabricating and designing 3D garments has become extremely demanding with the increasing need for synthesizing realistic dressed persons for a variety of applications, e.g. 3D virtual try-on, digitalization of 2D clothes into 3D apparel, and cloth animation. It thus necessitates a simple and straightforward pipeline to obtain high-quality texture from simple input, such as 2D reference images. Traditional warping-based texture generation methods require a significant number of control points to be manually selected for each type of garment, which can be a time-consuming and tedious process. We therefore propose a novel method, called **Cloth2Tex**, which eliminates the human burden in this process. Cloth2Tex is a self-supervised method that generates texture maps with reasonable layout and structural consistency. Another key feature of Cloth2Tex is that it can be used to support high-fidelity texture inpainting. This is done by combining Cloth2Tex with a prevailing latent diffusion model. We evaluate our approach both qualitatively and quantitatively and demonstrate that Cloth2Tex can generate high-quality texture maps and achieve the best visual effects in comparison to other methods. Project page: tomguluson92.github.io/projects/cloth2tex/
## 1 Introduction
The advancement of AR/VR and 3D graphics has opened up new possibilities for the fashion e-commerce industry. Customers can now virtually try on clothes on their avatars in 3D, which can help them make more informed purchase decisions. However, most clothing assets are currently presented in 2D catalog images, which are incompatible with 3D graphics pipelines. Thus it is critical to produce 3D clothing assets automatically from these existing 2D images, aiming at making 3D virtual try-on accessible to everyone.
Towards this goal, the research community has been developing algorithms [19, 20, 37] that can transfer 2D images into 3D textures of clothing mesh models. The key to producing 3D textures from 2D images is to determine the correspondences between the catalog images and the UV textures. Conventionally, this is achieved via the Thin-Plate-Spline (TPS) method [3], which approximates the dense correspondences from a small set of corresponding key points. In industrial applications, these key points are annotated manually and densely for each clothing instance to achieve good quality. With deep learning models, automatic key point detectors [19, 35] have been proposed to detect key points automatically for clothing. However, as seen in Fig. 2, the inherent self-occlusions (_e.g_. sleeves occluded by the main fabric) of TPS warping-based approaches are intractable, leading to erroneous and incomplete texture maps. Several works have attempted to use generative models to refine texture maps. However, such a refinement strategy has demonstrated success only in a small set of clothing types, _i.e_. T-shirts, pants, and shorts. This is because TPS cannot produce satisfactory initial texture maps on all clothing types, and a large training dataset covering high-quality texture maps of diverse clothing types is missing. Pix2Surf [20], a SMPL [18]-based virtual try-on algorithm, has automated the process of texture generation with no apparent cavity or void. However, due to its clothing-specific model, Pix2Surf is limited in its ability to generalize to clothes with arbitrary shapes.
This paper aims to automatically convert 2D reference clothing images into 3D textured clothing meshes for a larger diversity of clothing types. To this end, we first contribute template mesh models for 10+ different clothing types (well beyond current SOTAs: Pix2Surf (**4**) and [19] (**2**)). Next, instead of using the Thin-Plate-Spline (TPS) method as previous methods, we incorporate neural mesh rendering [17] to directly establish dense correspondences between 2D catalog images and the UV textures of the meshes. This results in higher-quality initial texture maps for all clothing types. We achieve this by optimizing the 3D clothing mesh models and textures to align with the catalog images' color, silhouette, and key points.
Although the texture maps from neural rendering are of higher quality, they still need refinement due to missing regions. Learning to refine these texture maps across different clothing types requires a large dataset of high-quality 3D textures, which is infeasible to acquire. We tackle this problem by leveraging the recently emerging latent diffusion model (LDM) [24] as a data simulator. Specifically, we use ControlNet [39] to generate large-scale, high-quality texture maps with various patterns and colors based on its _canny edge_ version. In addition to the high-quality ground-truth textures, the refinement network requires the corresponding initial defective texture maps obtained from neural rendering. To get such data, we render the high-quality texture maps into catalog images and then run our neural rendering pipeline to re-obtain the texture map from the catalog images, which now contain defects as desired. With these pairs of high-quality complete texture maps and the defective texture maps from the neural renderer, we train a high-resolution image translation model that refines the defective texture maps.
Our method can produce high-quality 3D textured clothing from 2D catalog images of various clothing types. In our experiments, we compare our approach with state-of-the-art techniques of inferring 3D clothing textures and find that our method supports more clothing types and demonstrates superior texture quality. In addition, we carefully verify the effectiveness of individual components via a thorough ablation study.
In summary, we contribute **Cloth2Tex**, a pipeline that can produce high-quality 3D textured clothing in various types based on 2D catalog images, which is achieved via
* _a)_ 3D parametric clothing mesh models of 10+ different categories that will be publicly available,
* _b)_ an approach based on neural mesh rendering to transferring 2D catalog images into texture maps of clothing meshes,
* _c)_ a data simulation approach for training a texture refinement network, built on top of blendshape-driven meshes and LDM-based textures.
## 2 Related Works
**Learning 3D Textures.** Our method is related to learning texture maps for 3D meshes. Texturify [27] learns to generate high-fidelity texture maps by rendering multiple 2D images from different viewpoints and aligning the distribution of rendered images and real image observations. Yu _et al._[38] adopt a similar method, rendering images from different viewpoints and then discriminating the images by separate discriminators. With the emergence of diffusion models [7, 31], recent work Text2Tex [5] exploits 2D diffusion models for 3D texture synthesis. Due to the mighty generalization ability of the diffusion model [11, 24] trained on the largest corpus LAION-5B [26], _i.e_. stable diffusion [24], the textured meshes generated by Text2Tex are of superior quality and contain rich details. Our method is related to these approaches in that we also utilize diffusion models for 3D texture learning. However, different from previous approaches, we use latent diffusion models only to generate synthetic texture maps to train our texture inpainting model, and our focus lies in learning 3D texture corresponding to a specific pair of 2D reference images instead of random or text-guided generation.
Figure 2: Problem of warping-based texture generation algorithm: partially filled UV texture maps with large missing holes as highlighted in yellow.
**Texture-based 3D Virtual Try-On.** Wang _et al_. [34] provide a sketch-based network that infers both 2D garment sewing patterns and the draped 3D garment mesh from 2D sketches. In real applications, however, many applications require inferring 3D garments and the texture from 2D catalog images. To achieve this goal, Pix2Surf [20] is the first work that creates textured 3D garments automatically from front/back view images of a garment. This is achieved by predicting dense correspondences between the 2D images and the 3D mesh template using a trained network. However, due to the erroneous correspondence prediction, particularly on unseen test samples, Pix2Surf has difficulty in preserving high-frequency details and tends to blur out fine-grained details such as thin lines and logos.
To avoid such a problem, Sahib _et al_. [19] propose to use a warping-based method (TPS) [3] instead, together with a deep texture inpainting network built upon MADFNet [40]. However, as mentioned in the introduction, warping-based methods generally require dense and accurate corresponding key points in images and UV maps and have only demonstrated successful results on two simple clothing categories, T-shirts and trousers. In contrast to previous work, Cloth2Tex aims to achieve automatic high-quality texture learning for a broader range of garment categories. To this end, we use neural rendering instead of warping, which yields better texture quality on more complex garment categories. We further utilize latent diffusion models (LDMs) to synthesize high-quality texture maps of various clothing categories to train the inpainting network.
## 3 Method
We propose Cloth2Tex, a two-stage approach that converts 2D images into textured 3D garments. The garments are represented as polygon meshes, which can be draped and simulated on 3D human bodies. The overall pipeline is illustrated in Fig. 3. The pipeline's first stage (Phase I) is to determine the 3D garment shape and coarse texture. We do this by registering our parametric garment meshes onto catalog images using a neural mesh renderer. The pipeline's second stage (Phase II) is to recover fine textures from the coarse estimate. We use image translation networks trained on large-scale data synthesized by pre-trained latent diffusion models. The mesh templates for individual clothing categories are a pre-requirement for our pipeline. We obtain these templates by manual artist design and will make them publicly available.
Implementation details are placed in the supp. material due to the page limit.
### Pre-requirement: Template Meshes
For the sake of both practicality and convenience, we design a cloth template mesh (with fixed topology) \(\mathcal{M}\) for each common garment type (_e.g_., T-shirts, sweatshirts, baseball jackets, trousers, shorts, skirts, _etc_.). We then build a deformation graph \(\mathcal{D}\)[29] to optimize the template mesh vertices. This is because per-vertex image-based optimization is subject to errors and artifacts due to the high degrees of freedom. Specifically, we construct \(\mathcal{D}\) with \(k\) nodes, which are parameterized with axis angles \(\mathbf{A}\in\mathbb{R}^{3}\) and translations \(\mathbf{T}\in\mathbb{R}^{3}\). The vertex displacements are then derived from the deformation nodes (the number of nodes \(k\) depends on the garment type since different templates have different numbers of vertices and faces). We also manually select several vertices on the mesh templates as landmarks \(\mathcal{K}\). The specific requirements of the template mesh are as follows: fewer than 10,000 vertices \(V\), uniform mesh topology, and integrity of the UV. The vertex number of all templates ranges from **skirt** (_6,116_) to **windbreaker** (_9,881_). For uniformity, we set the downsampling factor of \(\mathcal{D}\) for all templates to 20 (details of the template meshes are placed in the supp. material). The integrity of the UV means that the UV should be placed as a whole in terms of front and back, without further subdivision, as used in traditional computer graphics. Fabricating an integral UV is straightforward and makes it a strong candidate for the later diffusion-based texture generation. See Sec. 3.3.1 for more details.
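As a concrete illustration of how such a deformation graph drives the template, the sketch below applies the standard embedded-deformation blending of Sumner et al. [29]: each vertex is displaced by a weighted combination of the rigid transforms of its nearest graph nodes. The tensor names, the neighbor/weight scheme, and the use of PyTorch3D's axis-angle conversion are illustrative assumptions rather than the released Cloth2Tex code.

```python
import torch
from pytorch3d.transforms import axis_angle_to_matrix

def deform_vertices(verts, nodes, node_axis_angle, node_trans, weights, nbrs):
    """Embedded-deformation sketch: each vertex is moved by a convex
    combination of the rigid transforms of its nearest graph nodes.

    verts:           (V, 3) template vertices
    nodes:           (k, 3) deformation-graph node positions
    node_axis_angle: (k, 3) per-node axis-angle rotations A
    node_trans:      (k, 3) per-node translations T
    weights:         (V, m) blending weights over each vertex's m nearest nodes
    nbrs:            (V, m) indices of those nearest nodes
    """
    R = axis_angle_to_matrix(node_axis_angle)            # (k, 3, 3)
    g = nodes[nbrs]                                       # (V, m, 3)
    Rn = R[nbrs]                                          # (V, m, 3, 3)
    tn = node_trans[nbrs]                                 # (V, m, 3)
    local = verts[:, None, :] - g                         # vertex relative to node
    rotated = torch.einsum('vmij,vmj->vmi', Rn, local)    # rotate about each node
    per_node = rotated + g + tn                           # rigidly transformed copies
    return (weights[..., None] * per_node).sum(dim=1)     # (V, 3) blended result
```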
### Phase I: Shape and Coarse Texture Generation
The goal of Phase I is to determine the garment shape and a coarse estimate of the UV textures \(\mathcal{T}\) from the input catalog (_Front & Back_ view). We adopt a differentiable rendering approach [17] to determine the UV textures in a self-supervised way without involving trained neural networks. Precisely, we fit our template model to the catalog images by minimizing the difference between the 2D rendering of our mesh model and the target images. The fitting procedure consists of two stages, namely _Silhouette Matching_ and _Image-based Optimization_. We will now elaborate on these stages below.
#### 3.2.1 Silhouette Matching
We first align the corresponding template mesh to the 2D images based on the 2D landmarks and silhouette. Here, we use BCRNN [35] to detect landmarks \(L_{2d}\) and DenseCLIP [22] to extract the silhouette \(M\). To fit our various types of garments, we finetune BCRNN with 2,000+ manually annotated clothing images per type.
After the mask and landmarks of the input images are obtained, we first perform a global rigid alignment using an automatic cloth scaling method that adjusts the scaling factor of the mesh vertices according to the overlap between the initial silhouettes of the mesh and the input images, which ensures a rough agreement of the yielded texture map (see Fig. 8). Specifically, we implement this mechanism by comparing the silhouettes of the rendered and reference images, and then enlarging or shrinking the scale of the mesh vertices accordingly. Once an optimum **Intersection over Union (IoU)** has been achieved, we fix the coefficient and send the scaled template to the next step.
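A minimal sketch of this automatic scaling step is shown below; the candidate scale factors and the helper functions `render_silhouette` and `mask_iou` are hypothetical placeholders for the differentiable renderer's silhouette pass and the IoU metric.

```python
import torch

def auto_scale(verts, ref_mask, render_silhouette, mask_iou,
               factors=(0.8, 0.85, 0.9, 0.95, 1.0, 1.05, 1.1, 1.15, 1.2)):
    """Coarse search for the global vertex scale whose rendered silhouette
    best overlaps the reference clothing mask; verts is a (V, 3) tensor."""
    center = verts.mean(dim=0, keepdim=True)
    best_scale, best_iou = 1.0, -1.0
    for s in factors:
        scaled = (verts - center) * s + center        # isotropic scaling about the centroid
        iou = mask_iou(render_silhouette(scaled), ref_mask)
        if iou > best_iou:
            best_scale, best_iou = s, iou
    return (verts - center) * best_scale + center, best_scale
```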
We then fit the silhouette and the landmarks of the template mesh (the landmarks on the template mesh are pre-defined as described in Sec. 3.1) to those detected from the 2D catalog images. To this end, we optimize the deformations of the nodes in the deformation graph by minimizing the following energy terms:
**2D Landmark Alignment \(E_{\text{lmk}}\)** measures the distance between the 2D landmarks \(L_{\text{2d}}\) detected by BCRNN and the 2D projection of the 3D template mesh keypoints:
\[E_{\text{lmk}}=\|\prod\mathcal{K}-L_{\text{2d}}\|_{2} \tag{1}\]
where \(\prod\) denotes the 2D projection of 3D keypoints.
**2D Silhouette Alignment \(E_{\text{sil}}\)** measures the overlap between the silhouette of \(\mathcal{M}\) and the predicted \(M\) from DenseCLIP:
\[E_{\text{sil}}=\text{MaskIoU}(S_{\text{proj}}(\mathcal{M}),M) \tag{2}\]
where \(S_{\text{proj}}(\mathcal{M})\) is the silhouette rendered by the differentiable mesh renderer SoftRas [17] and _MaskIoU_ loss is derived from Kaolin [9].
Merely minimizing \(E_{\text{lmk}}\) and \(E_{\text{sil}}\) does not lead to satisfactory results, and the optimization procedure can easily get trapped in local minima. To alleviate this issue, we introduce a couple of regularization terms. We first regularize the deformation using the as-rigid-as-possible loss \(E_{\text{arp}}\)[28], which penalizes the deviation of estimated local surface deformations from rigid transformations. Moreover, we further enforce the normal consistency \(E_{\text{norm}}\), which measures the normal consistency for each pair of neighboring faces. The overall optimization objective is given as:
\[w_{\text{sil}}E_{\text{sil}}+w_{\text{lmk}}E_{\text{lmk}}+w_{\text{arp}}E_{ \text{arp}}+w_{\text{norm}}E_{\text{norm}} \tag{3}\]
where \(w_{*}\) are the respective weights of the losses.
Figure 3: **Method overview**: Cloth2Tex consists of two stages. In Phase I, we determine the 3D garment shape and coarse texture by registering our parametric garment meshes onto catalog images using a neural mesh renderer. Next, in Phase II, we refine the coarse estimate of the texture to obtain high-quality fine textures using image translation networks trained on large-scale data synthesized by pre-trained latent diffusion models. Note that the only component that requires training is the inpainting network. Please watch our video on the project page for an animated explanation of Cloth2Tex.
We set large regularization weights \(w_{\text{arp}}\), \(w_{\text{norm}}\) at the initial iterations. We then reduce their values progressively during the optimization procedure, so that the final rendered texture aligns with the input images. Please refer to the supp. material for more details.
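The silhouette-matching objective of Eq. (3), together with the progressive relaxation of the regularization weights, could be organized as in this sketch; the iteration count, decay rate, initial weights, and the loss callables are placeholders rather than the paper's actual settings.

```python
import torch

def fit_shape(graph_params, losses, optimizer, n_iters=500, decay=0.98,
              w=dict(sil=1.0, lmk=1.0, arp=50.0, norm=10.0)):
    """losses: dict of callables returning E_sil, E_lmk, E_arp, E_norm
    for the current deformation-graph parameters."""
    w = dict(w)
    for _ in range(n_iters):
        optimizer.zero_grad()
        total = (w['sil'] * losses['sil'](graph_params)
                 + w['lmk'] * losses['lmk'](graph_params)
                 + w['arp'] * losses['arp'](graph_params)
                 + w['norm'] * losses['norm'](graph_params))
        total.backward()
        optimizer.step()
        # start with strong regularization, then relax it so the rendered
        # silhouette can match the catalog image more closely
        w['arp'] *= decay
        w['norm'] *= decay
    return graph_params
```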
#### 3.2.2 Image-based Optimization
After the shape of the template mesh is aligned with the image silhouette, we then optimize the UV texture map \(\mathcal{T}\) to minimize the difference between the rendered image \(I_{\text{rend}}=S_{\text{rend}}(\mathcal{M},\mathcal{T})\) and the given input catalog images \(I_{\text{in}}\) from both sides simultaneously. To avoid any outside interference during the optimization, we only preserve the ambient color and set both diffuse and specular components to be zero in the settings of SoftRas [17], PyTorch3D [23].
Since the front and back views do not cover the full clothing texture, _e.g_. the seams between the front and back bodice cannot be recovered well due to occlusions, we use the total variation method [25] to fill in the blanks of the seam-affected UV areas. The total variation loss \(E_{\text{tv}}\) is defined as the norm of the spatial gradients of the rendered image, \(\nabla_{x}I_{\text{rend}}\) and \(\nabla_{y}I_{\text{rend}}\):
\[E_{tv}=\|\nabla_{x}I_{\text{rend}}\|_{2}+\|\nabla_{y}I_{\text{rend}}\|_{2} \tag{4}\]
In summary, the energy function for the image-based optimization is defined as below:
\[w_{\text{img}}\|I_{\text{in}}-I_{\text{rend}}\|_{2}+w_{\text{tv}}E_{\text{tv}} \tag{5}\]
where \(I_{\text{in}}\) and \(I_{\text{rend}}\) are the reference and rendered image. As shown in Fig. 3, \(\mathcal{T}\) implicitly changes towards the final coarse texture \(\mathcal{T}_{coarse}\), which ensures the final rendering is as similar as possible with the input. Please refer to our attached video for a vivid illustration.
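For reference, the image-based objective of Eqs. (4)-(5) can be written in a few lines of PyTorch; the weight values here are arbitrary placeholders.

```python
import torch

def texture_objective(I_in, I_rend, w_img=1.0, w_tv=1e-3):
    """Photometric term ||I_in - I_rend||_2 (Eq. 5) plus total-variation
    smoothing of the rendered image (Eq. 4); inputs are (B, C, H, W) tensors."""
    img_term = torch.norm(I_in - I_rend)
    tv_x = torch.norm(I_rend[..., :, 1:] - I_rend[..., :, :-1])  # horizontal gradients
    tv_y = torch.norm(I_rend[..., 1:, :] - I_rend[..., :-1, :])  # vertical gradients
    return w_img * img_term + w_tv * (tv_x + tv_y)
```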
### Phase II: Fine Texture Generation
In Phase II, we refine the coarse texture from Sec. 3.2 and fill in the missing regions. Our approach takes inspiration from the strong and comprehensive capacity of Stable Diffusion (SD), which on its own performs remarkably well on image inpainting, completion, and text-to-image tasks. In fact, there is also an entire, growing ecosystem around it: LoRA [12], ControlNet [39], textual inversion [10], and Stable Diffusion WebUI [1]. Therefore, a straightforward idea is to resolve our texture completion via SD.
However, we find poor content consistency between the inpainted blank regions and the original textured UV. This is because UV data in our setting rarely appears in LAION-5B [26], the training dataset of SD. In other words, the semantic compositions of LAION-5B and UV textures (cloth) are quite different, making it challenging for SD to generalize.
To address this issue, we first leverage ControlNet [39] to generate \(\sim 2,000+\) HQ complete textures per template and render emission-only images under the front and back view. Next, we use Phase I again to recover the corresponding coarse textures. After collecting the pairs of coarse and fine textures, we train an inpainting network to fill the missing regions in the coarse texture maps.
#### 3.3.1 Diffusion-based Data Generation
We employ diffusion models [7, 24, 39] to generate realistic and diverse training data.
We generate texture maps following the UV template configuration, adopting the pre-trained ControlNet with edge map as input conditions. ControlNet finetunes text-to-image diffusion models to incorporate additional structural conditions as input. The input edge maps are obtained through canny edge detection on clothing-specific UV, and the input text prompts are generated by applying image captioning models, namely Lavis-BLIP [16], OFA [32] and MPlug [15], on tens of thousands of clothes crawled from Amazon and Taobao.
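One possible realization of this edge-conditioned texture synthesis with the `diffusers` library is sketched below; the checkpoint identifiers, Canny thresholds, template file name, and prompt are illustrative choices, not the exact configuration used in the paper.

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16).to("cuda")

# edge map of the clothing-specific UV layout serves as the structural condition
uv_template = np.array(Image.open("tshirt_uv_template.png").convert("L"))
edges = cv2.Canny(uv_template, 100, 200)
condition = Image.fromarray(np.stack([edges] * 3, axis=-1))

# in the pipeline, the prompt would come from an image-captioning model
prompt = "flat UV texture of a green t-shirt with a white floral print"
texture = pipe(prompt, image=condition, num_inference_steps=30).images[0]
texture.save("synthetic_texture.png")
```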
After generating the fine UV texture maps, we are already able to generate synthetic front and back 2D catalog images, which will be used to train the inpainting network. We leverage the rendering power of Blender's native EEVEE engine to get the best visual result. A critical step of our approach is to perform data augmentation so that the inpainting network captures invariant features instead of details that differ between synthetic and testing images and therefore do not generalize. To this end, we vary the blendshape parameters of the template mesh to generate 2D catalog images in different shape and pose configurations and to simulate self-occlusions, which frequently occur in reality and lead to erroneous textures, as shown in Fig. 2. We hand-craft three common blendshapes (Fig. 4) that are sufficient to simulate the diverse cloth-sleeve correlations/layouts observed in reality.
Next, we run Phase I to produce coarse textures from the rendered synthetic 2D catalog images, yielding the coarse, defect textures corresponding to the fine textures. These pairs of coarse-fine textures serve as the training data for the subsequent inpainting network.
#### 3.3.2 Texture Inpainting
Given the training data simulated by LDMs, we then train our inpainting network. Note that we train a single network for all clothing categories, making it general-purpose.
For the inpainting model, we choose Pix2PixHD [33], which shows better results than alternative approaches such as conditional TransUNet [6] and ControlNet. One issue of Pix2PixHD is that it produces color-consistent output \(\mathcal{T}_{o}\), in contrast to prompt-guided ControlNet (please check our supp. material for a visual comparison). These results are obtained with the full input UV as the condition. To address this issue, we first locate the missing holes, continuous edges, and lines in the coarse UV as the residual mask \(M_{r}\) (bottom-left corner of Fig. 9). We then linearly blend those blank areas with the model's output during texture repairing. Formally speaking, we compute the output as below:
\[\mathcal{T}_{\text{fine}}=\text{BilateralFilter}(\mathcal{T}_{\text{coarse}}+M _{r}*\mathcal{T}_{o}) \tag{6}\]
where BilateralFilter is a non-linear filter that can smooth the irregular and rough seams between \(\mathcal{T}_{\text{coarse}}\) and \(\mathcal{T}_{o}\) very well while keeping edges fairly sharp. More details can be seen in our attached video.
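Equation (6) can be realized with a few OpenCV operations, as in the sketch below; the way the residual mask \(M_{r}\) is detected and the bilateral filter parameters are simplified assumptions for illustration.

```python
import cv2
import numpy as np

def refine_texture(coarse, net_output, d=9, sigma_color=75, sigma_space=75):
    """Fill the still-empty regions of the coarse UV map (H, W, 3, uint8) with
    the inpainting network's output, then smooth the seams with a bilateral
    filter (Eq. 6)."""
    # residual mask M_r: 1 where the coarse texture has no content yet
    m_r = (coarse.sum(axis=-1, keepdims=True) == 0).astype(np.float32)
    blended = coarse.astype(np.float32) + m_r * net_output.astype(np.float32)
    blended = np.clip(blended, 0, 255).astype(np.uint8)
    return cv2.bilateralFilter(blended, d, sigma_color, sigma_space)
```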
## 4 Experiments
Our goal is to generate 3D garments from 2D catalog images. We verify the effectiveness of Cloth2Tex via thorough evaluation and comparison with state-of-the-art baselines. Furthermore, we conduct a detailed ablation study to demonstrate the effects of individual components.
### Comparison with SOTA
We first compare our method with SOTA virtual try-on algorithms, both 3D and 2D approaches.
**Comparison with 3D SOTA:** We compare Cloth2Tex with SOTA methods that produce 3D mesh textures from 2D clothing images, including model-based Pix2Surf [20] and TPS-based Warping [19] (we replace the original MADF with our locally modified, UV-constrained Navier-Stokes method; the differences between the UV-constrained Navier-Stokes method and the original version are described in the supp. material). As shown in Fig. 5, our method produces high-fidelity 3D textures with sharp, high-frequency details of the patterns on clothing, such as the leaves and characters in the top row. In addition, our method accurately preserves the spatial configuration of the garment, particularly the overall aspect ratio of the patterns and the relative locations of the logos. In contrast, the baseline method Pix2Surf [20] tends to produce blurry textures due to a smooth mapping network, and the Warping [19] baseline introduces undesired spatial distortions (e.g., second row in Fig. 5) due to sparse correspondences.
**Comparison with 2D SOTA:** We further compare Cloth2Tex with 2D virtual try-on methods: flow-based DAFlow [2] and StyleGAN-enhanced Deep-Generative-Projection (DGP) [8]. As shown in Fig. 6, Cloth2Tex achieves better quality than 2D virtual try-on methods in sharpness and semantic consistency. More importantly, our outputs, namely 3D textured clothing meshes, are naturally compatible with cloth physics simulation, allowing the synthesis of realistic try-on effects in various body poses. In contrast, 2D methods rely on priors learned from training images and are hence limited in their generalization ability to extreme poses outside the training distribution.
Figure 4: Illustration of the three sleeve-related blendshapes of our template mesh model. These blendshapes allow rendering clothing images in diverse pose configurations to facilitate simulating real-world clothing image layouts.
Figure 5: Comparison with Pix2Surf [20] and Warping [19] on T-shirts. Please zoom in for more details.
**User Study:** Finally, we conduct a user study to evaluate the overall perceptual quality of our method and the 2D and 3D baselines, as well as their consistency with the provided input catalog images. We consider DGP the 2D baseline and TPS the 3D baseline due to their best performance among existing work. Each participant is shown three randomly selected pairs of results, one produced by our method and the other by one of the baseline methods. The participant is requested to choose the one that appears more realistic and matches the reference clothing image better. In total, we received 643 responses from 72 users aged between 15 and 60. The results are reported in Fig. 7. Compared to DGP [8] and TPS, Cloth2Tex is favored by the participants with preference rates of 74.60% and 81.65%, respectively. This user study verifies the quality and consistency of our method.
### Ablation Study
To demonstrate the effect of individual components in our pipeline, we perform an ablation study for both stages in our pipeline.
**Neural Rendering vs. TPS Warping:** TPS warping has been widely used in previous work on generating 3D garment textures. However, we found that it suffers from the challenging cases illustrated in Fig. 2, so we propose a new pipeline based on neural rendering. We compare our method with TPS warping quantitatively to verify this design choice. Our test set consists of 10+ clothing categories, including T-shirts, Polos, sweatshirts, jackets, hoodies, shorts, trousers, and skirts, with 500 samples per category. We report the structural similarity (SSIM [36]) and peak signal-to-noise ratio (PSNR) between the recovered textures and the ground truth textures.
As shown in Tab. 1, our neural rendering-based pipeline achieves superior SSIM and PSNR compared to TPS warping. This improvement is also preserved after inpainting and refinement, leading to a much better quality of the final texture. A comprehensive comparison of various inpainting methods is provided in the supp. material; please refer to it if needed.
**Total Variation Loss & Automatic Scaling (Phase I)** As shown in Fig. 8, when the total variation loss \(E_{tv}\) and automatic scaling are dropped, the textures are incomplete and cannot maintain a semantically correct layout. With \(E_{tv}\), Cloth2Tex produces more complete textures by exploiting the local consistency of textures. Further applying automatic scaling results in better alignment between the template mesh and the input images, resulting in a more semantically correct texture map.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Baseline & Inpainting & SSIM \(\uparrow\) & PSNR \(\uparrow\) \\ \hline TPS & _None_ & 0.70 & 20.29 \\ TPS & _Pix2PixHD_ & 0.76 & 23.81 \\ Phase I & _None_ & 0.80 & 21.72 \\ Phase I & _Pix2PixHD_ & **0.83** & **24.56** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Neural Rendering vs. TPS Warping. We evaluate the texture quality of neural rendering and TPS-based warping, with and without inpainting.
Figure 8: Ablation Study on Phase I. From left to right: base, base + total variation loss \(E_{\text{tv}}\), base + \(E_{\text{tv}}\) + automatic scaling.
Figure 6: Comparison with 2D Virtual Try-On methods, including DAFlow [2] and DGP [8].
Figure 7: User preferences among 643 responses from 72 participants. Our method is favored by significantly more users.
**Inpainting Methods (Phase II)** Next, to demonstrate the need for training an inpainting model specifically for UV clothing textures, we compare our task-specific inpainting model with general-purpose inpainting algorithms, including the Navier-Stokes [4] algorithm and off-the-shelf deep learning models such as LaMa [30], MADF [40], and Stable Diffusion v2 [24] with pre-trained checkpoints. Here, we modify the traditional Navier-Stokes [4] algorithm into a UV-constrained version because a texture map occupies only part of the whole square image grid, and the plentiful non-UV regions have an adverse effect on texture inpainting (please see the supp. material for a comparison).
As shown in Fig. 9, our method, trained on our synthetic dataset generated by the diffusion model, outperforms general-purpose inpainting methods in the task of refining and completing clothing textures, especially in terms of the color consistency between inpainted regions and the original image.
### Limitations
As shown in Fig. 10, Cloth2Tex can produce high-quality textures for common garments such as T-shirts, shorts, and trousers (blue bounding box (bbox)). However, we have observed that it has difficulty recovering textures for garments with complex patterns: _e.g._, inaccurate and inconsistent local textures (belt, collar band) occur for the windbreaker (red bbox). We attribute this to the extra accessories on the garment, which inevitably add partial textures on top of the main UV.
Another imperfection is that our method cannot maintain the uniformity of checked shirts with densely packed grids: as shown in the second row of Fig. 6, our method is inferior to 2D VTON methods in preserving textures composed of thousands of fine, tiny checkerboard-like grids; checked shirts and pleated skirts are representative garments of this type.
We attribute this to the subtle vertex position changes during the deformation graph optimization period, which eventually make the template mesh less uniform, since the regularization terms (_i.e._, as-rigid-as-possible) are not strong enough constraints to obtain a conformal mesh. We acknowledge this challenge and leave exploring the generation of a homogeneous mesh with uniformly spaced triangles to future work.
## 5 Conclusion
This paper presents a novel pipeline, Cloth2Tex, for synthesizing high-quality textures for 3D meshes from pictures taken from only the front and back views. Cloth2Tex adopts a two-stage process for obtaining visually appealing textures, where Phase I offers coarse texture generation and Phase II performs texture refinement. Training a generalized texture inpainting network is non-trivial due to the high topological variability of UV space; therefore, obtaining paired data under such circumstances is important. To the best of our knowledge, this is the first study to combine a diffusion model with a 3D engine (Blender) to collect coarse-fine paired textures for 3D texturing tasks. We show the generalizability of this approach on a variety of examples.
To avoid distortion and stretched artifacts across clothes, we automatically adjust the scale of the vertices of the template meshes and thus best prepare them for the later image-based optimization, which effectively guides the implicitly learned texture toward a complete and distortion-free structure. Extensive experiments demonstrate that our method can effectively synthesize consistent and highly detailed textures for typical clothes without extra manual effort.
Figure 9: Comparison with SOTA inpainting methods (Navier-Stokes [4], LaMa [30], MADF [40] and Stable Diffusion v2 [24]) on texture inpainting. The upper left corners of each column are the conditional mask input. Blue in the first column shows that our method is capable of maintaining consistent boundary and curvature _w.r.t_ the reference image, while Green highlights the blank regions that need inpainting.
Figure 10: Visualization of 3D virtual try-on. We obtain textured 3D meshes from 2D reference images shown on the left. The 3D meshes are then draped onto 3D humans.
In summary, we hope our work can inspire more future research in 3D texture synthesis and shed some light on this area.
|
2301.06194 | Geometric Graph Learning with Extended Atom-Types Features for
Protein-Ligand Binding Affinity Prediction | Understanding and accurately predicting protein-ligand binding affinity are
essential in the drug design and discovery process. At present, machine
learning-based methodologies are gaining popularity as a means of predicting
binding affinity due to their efficiency and accuracy, as well as the
increasing availability of structural and binding affinity data for
protein-ligand complexes. In biomolecular studies, graph theory has been widely
applied since graphs can be used to model molecules or molecular complexes in a
natural manner. In the present work, we upgrade the graph-based learners for
the study of protein-ligand interactions by integrating extensive atom types
such as SYBYL and extended connectivity interactive features (ECIF) into
multiscale weighted colored graphs (MWCG). By pairing with the gradient
boosting decision tree (GBDT) machine learning algorithm, our approach results
in two different methods, namely $^\text{sybyl}\text{GGL}$-Score and
$^\text{ecif}\text{GGL}$-Score. Both of our models are extensively validated in
their scoring power using three commonly used benchmark datasets in the drug
design area, namely CASF-2007, CASF-2013, and CASF-2016. The performance of our
best model $^\text{sybyl}\text{GGL}$-Score is compared with other
state-of-the-art models in the binding affinity prediction for each benchmark.
While both of our models achieve state-of-the-art results, the SYBYL atom-type
model $^\text{sybyl}\text{GGL}$-Score outperforms other methods by a wide
margin in all benchmarks. | Md Masud Rana, Duc Duy Nguyen | 2023-01-15T21:30:21Z | http://arxiv.org/abs/2301.06194v1 | Geometric Graph Learning with Extended Atom-Types Features for Protein-Ligand Binding Affinity Prediction
###### Abstract
Understanding and accurately predicting protein-ligand binding affinity are essential in the drug design and discovery process. At present, machine learning-based methodologies are gaining popularity as a means of predicting binding affinity due to their efficiency and accuracy, as well as the increasing availability of structural and binding affinity data for protein-ligand complexes. In biomolecular studies, graph theory has been widely applied since graphs can be used to model molecules or molecular complexes in a natural manner. In the present work, we upgrade the graph-based learners for the study of protein-ligand interactions by integrating extensive atom types such as SYBYL and extended connectivity interactive features (ECIF) into multiscale weighted colored graphs (MWCG). By pairing with the gradient boosting decision tree (GBDT) machine learning algorithm, our approach results in two different methods, namely \({}^{\text{sybyl}}\)GGL-Score and \({}^{\text{ecif}}\)GGL-Score. Both of our models are extensively validated in their scoring power using three commonly used benchmark datasets in the drug design area, namely CASF-2007, CASF-2013, and CASF-2016. The performance of our best model \({}^{\text{sybyl}}\)GGL-Score is compared with other state-of-the-art models in the binding affinity prediction for each benchmark. While both of our models achieve state-of-the-art results, the SYBYL atom-type model \({}^{\text{sybyl}}\)GGL-Score outperforms other methods by a wide margin in all benchmarks.
_Keywords--_ geometric graph learning, protein-ligand binding affinity, atom-type interaction, weighted colored subgraph, machine learning
## 1 Introduction
In recent years, graph theories have been widely used in chemical, biological, physical, social, and computer sciences. This is because graphs are useful for representing and analyzing a wide range of practical problems. In molecular modeling, graph representation is widely used since it is a natural way to model their structures, in which graph vertices represent atoms and graph edges represent possible interactions between them. In general, graph theories can be divided into three categories: geometric graph theory, algebraic graph theory, and topological graph theory. Geometric graph theory studies a graph's geometric connectivity, which refers to the pairwise relations among graph nodes or vertices [1]. Algebraic graph theory concerns the algebraic connectivity via the characteristic polynomial, eigenvalues, and eigenvectors of matrices associated with the graph, such as the adjacency matrix or the Laplacian matrix [2, 3]. In topological graph theory, embedding and immersion of graphs are studied along with their association with topological spaces, such as abstract simplicial complexes [4, 5].
There are numerous applications of graphs in chemical analysis and biomolecular modeling [6, 7, 8, 9], such as normal-mode analysis (NMA) [10, 11, 12, 13] and elastic network model (ENM) [14, 15, 16, 17, 18, 19] used to study protein B-factor prediction. Algebraic graph theory has been utilized in some of the most popular elastic network models (ENMs) such as the Gaussian network model (GNM) and the anisotropic network model (ANM). However, due to the matrix-diagonalization procedure, these methods have a computational complexity of \(\mathcal{O}(N^{3})\), with \(N\) being the number of matrix elements. Furthermore, these methods suffer from limited accuracy in protein B-factor prediction, with average Pearson correlation coefficients less than 0.6 in all datasets. A geometric graph theory-based weighted graph approach, called flexibility-rigidity index (FRI), was introduced to bypass matrix diagonalization in GNM [20, 21, 22, 23]. FRI assumes that protein interactions, including interactions with its environment, completely determine its structure in a given environment, which in turn, fully determines protein flexibility and functions. Therefore, it is not necessary to invoke a high-dimensional protein interaction Hamiltonian as in spectral graph theory to analyze protein flexibility when the accurate structure of the protein and its environment are known. While the computational complexity of earlier FRI [20] is of \(\mathcal{O}(N^{2})\), the fast FRI [21] is of \(\mathcal{O}(N)\). In order to capture multiscale interactions in macromolecules, multiscale FRI (mFRI) was introduced [24], resulting in a number of graphs with parallel edges, i.e. multiple graphs. Despite the fact that mFRI is about 20% more accurate than the GNM on a set of 364 proteins, the average Pearson's correlation coefficient in B-factor prediction falls below 0.7, which is insufficient to provide a reliable assessment of protein flexibility. The limited accuracy of these graph-based models is due to the fact that they do not distinguish different chemical element types in a molecule or biomolecule, resulting in a severe loss of important chemical and biological information.
To address the aforementioned problem, a multiscale weighted colored graph (MWCG) model was introduced for protein flexibility analysis [25]. In MWCG, the graph of a protein structure is colored according to the type of interaction between nodes in the graph, and subgraphs are defined according to colors. This process is commonly referred to as graph coloring, which is an important technique in graph theory that allows graph vertices or edges to be treated differently. MWCG weights the importance of graph edges by scaling their Euclidean distances in radial basis functions so that the nearest neighbors have the strongest edges in the sense of the Euclidean metric. Mathematical properties of MWCGs include low dimensionality, simplicity, robustness, and invariance of rotations, translations, and reflections. Subgraphs constructed from vertex-labeled and edge-labeled graphs provide powerful representations of intermolecular and intramolecular interactions, such as hydrogen bonds, electrostatics, van der Waals interactions, hydrophilicity, hydrophobicity, etc [1, 25]. The MWCG models offer 40% more accuracy than the GNM in protein B-factor prediction [25].
Molecular interactions between proteins and substrate molecules (ligands) are the principal determinant of many vital processes, such as cellular signaling, transcription, metabolism, and immunity. Therefore, understanding protein-ligand interactions is a central issue in biochemistry, biophysics, and molecular biology. Moreover, an accurate prediction of protein-ligand binding affinity plays a critical role in computational drug design, particularly in virtual screening and lead optimization. Various scoring functions (SFs) have been developed over the past few decades to evaluate protein-ligand interactions in structure-based drug design. These SFs can be classified mainly into four categories: force-field-based or physics-based SFs, empirical SFs, knowledge-based SFs, and machine-learning-based SFs. Force-field-based SFs offer physical insight and are not dependent on existing data. Empirical SFs utilize a number of physical sub-models and use regression to fit existing data. Knowledge-based SFs use available datasets to derive binding patterns for proteins and ligands without requiring further training. Finally, machine learning-based SFs are data-driven and are capable of capturing non-linear and complex relationships in the data. They can also easily handle large and diverse datasets. The performance of machine learning-based SFs strongly depends on the training set, in addition to their descriptors and machine learning algorithms. These scoring functions often take the top place in several standard benchmarks and community-wide competitions [26, 27, 28, 29, 30].
In recent years, due to the increasing availability of structural and binding affinity data for protein-ligand complexes, machine-learning SFs have become increasingly popular for binding affinity prediction. The RF-Score [31] is considered one of the first machine-learning-based SFs to outperform other SFs in the CASF-2007 benchmark dataset. The model uses the random forest algorithm and employs element-type pair counts as features to describe protein-ligand complexes. The model was later extended to incorporate a more precise chemical description, including SYBYL atom-type pair counts features [32]. Including SYBYL atom types into the model permits deconvoluting the element into a hybridization state and bonding environment. For example, instead of having a single Carbon (C) element atom type, the SYBYL scheme allows the following subtypes: C.1, C.2 C.3, C.ar, and C.cat. A number of SYBYL atom-type-based models have been developed in the past years [33], including SYBYL::ChemScore, SYBYL::G-Score, and SYBYL::D-Score. In a separate study, it has been shown that the connectivity of the atoms [34] can improve the performance of a machine learning model in the binding affinity prediction task [35]. In [35], the authors used a set of protein-ligand atom-type pair counts features, called the extended connectivity interactive features (ECIF), that considers each atom's connectivity to define the atoms involved in the pairs. The atom definition in ECIF is based on the atom environment concept initially introduced in the development of Extended Connectivity Fingerprints (ECFP) [36]. Paired with a machine learning algorithm, the ECIF model significantly improves the performance of the binding affinity prediction with Pearson's correlation of 0.866 for the CASF-2016 benchmark. A number of machine-learning-based SF with different types of descriptors including differential geometry [37, 38], persistent homology [39, 5], and graph theory [1, 2] have emerged in the past few years for protein-ligand binding affinity prediction. Among them, the element-type graph coloring-based MWCG descriptors have particularly been successful in the task [1, 2].
In the present work, we propose geometric graph theory-based multiscale weighted colored graph (MWCG) descriptors for the protein-ligand complex, where the graph coloring is based on SYBYL atom types and ECIF atom-type connectivity. By pairing with advanced machine learning architectures, our approach results in two different methods, namely \({}^{\text{sybyl}}\)GGL-Score and \({}^{\text{ecif}}\)GGL-Score. We verify the scoring power of our proposed models against three commonly used benchmarks in drug design, namely CASF-2007 [33], CASF-2013 [40], and CASF-2016 [41]. Several experiments confirm that both of our models achieve state-of-the-art results and outperform other models by a wide margin.
## 2 Methods and Materials
### Multiscale Weighted Colored Geometric Subgraphs
A graph \(\mathcal{G}\) of a biomolecule consists of a set of vertices \(\mathcal{V}\) and edges \(\mathcal{E}\) and can be used to describe the noncovalent interaction of atoms in the molecule. In recent years, graph theory descriptors of protein-ligand binding interactions have been developed for massive and diverse datasets [2, 42]. To improve the graph theory representation, different types of elements are labeled, which is known as graph coloring. A colored graph is used to encode different types of interactions between atoms and gives rise to a basis for the collective coarse-grained description of the dataset. Labeled atoms of a molecule are classified into subgraphs where colored edges correspond to element-specific interactions.
To account for details of physical interactions in protein-ligand complexes, such as hydrophobicity, hydrophilicity, etc., we are interested in constructing the subgraphs in an atomic interactive manner. In our previous work [1, 2], we used the combination of the element symbols of the interacting protein-ligand atoms to classify the interaction, e.g., C-C or N-O. In the present work, instead of the element symbol, we consider the following two schemes to classify the atomic interaction. In the first approach, we consider the atom name (excluding hydrogen) for the protein and the SYBYL atom type for the ligand to define a range of protein-ligand atom pairs, e.g., CD1-C.2, CG-C.ar, OE1-N.am, etc. In the second scheme, we consider the extended connectivity interaction features (ECIF) described in [35] to extract protein-ligand atom-type pairs that take each atom's connectivity into account. The ECIF atom type in a molecule is defined by considering six atomic features: atom symbol, explicit valence, number of attached heavy atoms, number of attached hydrogens, aromaticity, and ring membership. Each of these properties can be represented textually, with the properties separated by semicolons. For example, the ECIF atom type for the \(\alpha\) carbon CA is C;4;3;1;0;0.
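For illustration, the six-field ECIF atom type described above can be assembled with RDKit as in the following sketch; the exact feature extraction in [35] may differ in minor details, and the molecule used here is just an arbitrary example.

```python
from rdkit import Chem

def ecif_atom_type(atom):
    """Build the ECIF-style descriptor:
    symbol;explicit valence;heavy neighbors;attached H;aromatic;in ring."""
    heavy_neighbors = sum(1 for nbr in atom.GetNeighbors() if nbr.GetAtomicNum() > 1)
    fields = [
        atom.GetSymbol(),
        atom.GetExplicitValence(),
        heavy_neighbors,
        atom.GetTotalNumHs(),
        int(atom.GetIsAromatic()),
        int(atom.IsInRing()),
    ]
    return ";".join(str(f) for f in fields)

mol = Chem.MolFromSmiles("CC(=O)Nc1ccc(O)cc1")   # paracetamol, used only as an example
print([ecif_atom_type(a) for a in mol.GetAtoms()])
```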
For convenience, let \(\mathcal{T}\) be the set of all atom types of interest in a given biomolecular dataset for either of the two schemes described above. To reduce the notation complexity, we denote the atom type at the \(i\)th position in the set \(\mathcal{T}\) as \(\mathcal{T}_{i}\). Assuming that a biomolecule has \(N\) atoms of interest, we denote
\[\mathcal{V}=\{(\mathbf{r}_{i},\alpha_{i})|\mathbf{r}_{i}\in\mathbb{R}^{3}; \alpha_{i}\in\mathcal{T};i=1,2,\cdots,N\} \tag{1}\]
a subset of \(N\) atoms (i.e. subgraph vertices) that are members of \(\mathcal{T}\). Note that the \(i\)th atom is labeled by both its coordinate \(\mathbf{r}_{i}\) and atom type \(\alpha_{i}\). We assume that all the pairwise non-covalent interactions between atom types \(\mathcal{T}_{k}\) and \(\mathcal{T}_{k^{\prime}}\) in a molecule or molecular complex can be represented by fast-decay weight functions
\[\mathcal{E}=\{\Phi(\|\mathbf{r}_{i}-\mathbf{r}_{j}\|;\eta_{kk^{ \prime}})|\alpha_{i}=\mathcal{T}_{k},\,\alpha_{j}=\mathcal{T}_{k^{\prime}};\] \[i,j=1,2,\cdots,N;\,\|\mathbf{r}_{i}-\mathbf{r}_{j}\|\leq c\}, \tag{2}\]
where \(\|\mathbf{r}_{i}-\mathbf{r}_{j}\|\) is the Euclidean distance between the \(i\)th and \(j\)th atom and \(c\) is a predefined cutoff distance that defines the binding site of the atom type \(\mathcal{T}_{k}\) and \(\mathcal{T}_{k^{\prime}}\). Here \(\eta_{kk^{\prime}}\) is a characteristic distance between the atoms, and \(\Phi\) is a subgraph weight that satisfies the following admissibility conditions
\[\Phi(\|\mathbf{r}_{i}-\mathbf{r}_{j}\|;\eta_{kk^{\prime}})=1, \quad\text{as}\;\|\mathbf{r}_{i}-\mathbf{r}_{j}\|\to 0, \tag{3}\] \[\Phi(\|\mathbf{r}_{i}-\mathbf{r}_{j}\|;\eta_{kk^{\prime}})=0, \quad\text{as}\;\|\mathbf{r}_{i}-\mathbf{r}_{j}\|\to\infty,\] \[\alpha_{i}=\mathcal{T}_{k},\,\alpha_{j}=\mathcal{T}_{k^{\prime}}. \tag{4}\]
Although most radial basis functions can be used as the subgraph weight, the generalized exponential function
\[\Phi_{E}(\|\mathbf{r}_{i}-\mathbf{r}_{j}\|;\eta_{kk^{\prime}})=e^{-(\|\mathbf{r}_{i}-\mathbf{r}_{j}\|/\eta_{kk^{\prime}})^{\kappa}},\quad\kappa>0,\]
and the generalized Lorentz function
\[\Phi_{L}(\|\mathbf{r}_{i}-\mathbf{r}_{j}\|;\eta_{kk^{\prime}})=\frac{1}{1+( \|\mathbf{r}_{i}-\mathbf{r}_{j}\|/\eta_{kk^{\prime}})^{\kappa}},\quad\kappa>0,\]
were shown to work very well for biomolecules [21]. Now, we have a weighted colored subgraph \(G(\mathcal{V},\mathcal{E})\) for a molecule or a molecular complex and we can use it to construct atomic-level collective molecular descriptors. We define the multiscale weighted colored geometric subgraph (MWCGS) interaction between \(k\)th atom type \(\mathcal{T}_{k}\) and \(k^{\prime}\)th atom type \(\mathcal{T}_{k^{\prime}}\) by
\[\mu^{G}(\eta_{kk^{\prime}})=\sum_{i}\mu^{G}_{i}(\eta_{kk^{\prime }})=\sum_{i}\sum_{j}\Phi(\|\mathbf{r}_{i}-\mathbf{r}_{j}\|;\eta_{kk^{\prime}}),\] \[\alpha_{i}=\mathcal{T}_{k},\,\alpha_{j}=\mathcal{T}_{k^{\prime}}, \tag{5}\]
where \(\mu^{G}_{i}(\eta_{kk^{\prime}})\) is the geometric subgraph centrality for the \(i\)th atom of type \(\mathcal{T}_{k}\) and all atoms of type \(\mathcal{T}_{k^{\prime}}\). The summation over the geometric centrality \(\mu^{G}_{i}(\eta_{kk^{\prime}})\) in equation (5) can be interpreted as the total interaction strength for the selected atom type pair \(\mathcal{T}_{k}\) and \(\mathcal{T}_{k^{\prime}}\), which provides the atomic-level coarse-grained description of the molecular properties. The equation (5) is a generalization of a bipartite subgraph discussed in [1] for the predictions of protein-ligand binding affinities and free energy ranking. A bipartite subgraph of a protein-ligand complex is a graph in which each of its edges connects one atom in the protein and another in the ligand. We intend to capture the hydrogen bonds, polarization, electrostatics, van der Waals interactions, hydrophilicity, hydrophobicity, etc. of a protein-ligand complex through the bipartite graph coloring, i.e., atom-specific descriptions and subgraph weight.
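A minimal NumPy sketch of Eq. (5) for a single atom-type pair is given below; the kernel forms follow \(\Phi_{E}\) and \(\Phi_{L}\) above, while the default \(\kappa\), \(\eta\), and cutoff values are placeholders rather than the optimized parameters of the paper.

```python
import numpy as np

def mwcgs_feature(coords_k, coords_kp, eta, kappa=2.0, kernel="exponential", cutoff=12.0):
    """mu^G(eta_kk') for atoms of type T_k (protein side) against T_k' (ligand side).

    coords_k, coords_kp: (N_k, 3) and (N_k', 3) coordinate arrays."""
    if len(coords_k) == 0 or len(coords_kp) == 0:
        return 0.0
    d = np.linalg.norm(coords_k[:, None, :] - coords_kp[None, :, :], axis=-1)
    if kernel == "exponential":
        phi = np.exp(-((d / eta) ** kappa))        # generalized exponential Phi_E
    else:
        phi = 1.0 / (1.0 + (d / eta) ** kappa)     # generalized Lorentz Phi_L
    phi[d > cutoff] = 0.0                          # binding-site cutoff c
    return float(phi.sum())
```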
The multiscale behavior of the MWCGS arises when different selections of the characteristic distance \(\eta_{kk^{\prime}}\) for a pair of atom types \(k\) and \(k^{\prime}\) are considered. Therefore, for a molecule or a biomolecule, the MWCGS allows us to systematically construct a family of collective, scalable, multiscale graph-based descriptors by an appropriate selection of the atom-type pair \(k\) and \(k^{\prime}\), the characteristic distance \(\eta_{kk^{\prime}}\), and the subgraph weight \(\Phi\). An illustration of the weighted colored subgraph under the SYBYL atom-type system for the molecule xanthine (\(\mathrm{C_{5}H_{4}N_{4}O_{2}}\)) is presented in Figure 1.
### Geometric Graph Learning
The multiscale weighted colored geometric subgraph (MWCGS) descriptors for a molecule or molecular complex can be paired with any machine learning or deep learning algorithm to predict molecular properties. In a supervised machine learning setting (either classification or regression), the labeled dataset is divided into two parts: a training set and a test set. Let \(\mathcal{X}_{i}\) be the labeled data for the \(i\)th molecule or molecular complex in the training set. Furthermore, let \(\mathcal{G}(\mathcal{X}_{i},\lambda)\) be a function that encodes the geometric information of the molecule or molecular complex into suitable graph representations with a set of parameters \(\lambda\). The training of a machine learning model can then be translated into a minimization problem,
\[\min_{\lambda,\theta}\sum_{i\in I}\mathcal{L}(\mathbf{y}_{i},\mathcal{G}( \mathcal{X}_{i},\lambda);\theta) \tag{6}\]
where \(\mathcal{L}\) is a scalar loss function to be minimized and \(\mathbf{y}_{i}\) is the label of the \(i\)th sample in the training set \(I\). Here, \(\theta\) is the set of hyperparameters that depends on the chosen machine learning algorithm and is typically optimized for the best performance. A wide range of machine learning algorithms, such as support vector machines, random forests, gradient boosting trees, artificial neural networks, and convolutional neural networks, can be implemented in conjunction with the present geometric subgraph descriptors. However, to focus on the descriptive power of the proposed geometric subgraph features, we only employ gradient boosting decision trees (GBDT) in the present work and avoid optimizing the machine learning algorithm selection. Although relatively simple, GBDT is still powerful, robust against overfitting, and a widely used ensemble algorithm. An illustration of the proposed geometric graph learning strategy is presented in Figure 2.
We use the GBDT module in the scikit-learn v0.24.1 package with the following parameters: n_estimators = 20000, max_depth = 8, min_samples_split = 2, learning_rate = 0.005, loss = ls, subsample = 0.7, and max_features = sqrt. These parameter values were selected based on extensive tests on PDBbind datasets and are uniformly used in all our validation tasks in this work.
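With the hyperparameters listed above, the regressor can be instantiated as in this sketch; `X_train`/`y_train` stand for the GGL feature matrix and the experimental binding affinities and are not defined here.

```python
from sklearn.ensemble import GradientBoostingRegressor

gbdt = GradientBoostingRegressor(
    n_estimators=20000,
    max_depth=8,
    min_samples_split=2,
    learning_rate=0.005,
    loss="ls",           # least-squares loss (parameter name used in scikit-learn 0.24)
    subsample=0.7,
    max_features="sqrt",
)
# gbdt.fit(X_train, y_train)      # GGL features -> binding affinities
# preds = gbdt.predict(X_test)
```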
### Dataset
For protein-ligand binding affinity prediction, the most commonly used benchmarks are the PDBbind datasets. In this work, we use the three most popular PDBbind benchmark datasets, CASF-2007, CASF-2013, and CASF-2016, to test the performance of our model. The PDBbind datasets consist of a general set, a refined set, and a core set, where the latter set is a subset of the previous one. In the present work, we explore two different training sets to build predictive models for the binding affinity of the complexes in the test set, which is the core set of the corresponding benchmark. The first training set, denoted by \(S_{R}\), is the refined set excluding the core set of the corresponding benchmark. As a second training set, denoted by \(S_{G}\), we use the general set excluding the core set of the corresponding benchmark. More information about these datasets is offered on the PDBbind website [http://pdbbind.org.cn/](http://pdbbind.org.cn/). A summary of the dataset is provided in Table 1.
\(|S_{G}|\): Number of complexes in the general set excluding the core set of the corresponding benchmark. \(|S_{R}|\): Number of complexes in the refined set excluding the core set of the corresponding benchmark. \(|S_{C}|\): Number of complexes in the core set of the corresponding benchmark.
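The split construction amounts to simple set differences over complex identifiers; the sketch below uses placeholder PDB codes, since the real lists come from the PDBbind index files.

```python
# Placeholder ID sets; in practice these are read from the PDBbind index files.
general_ids = {"1abc", "2def", "3ghi", "4jkl", "5mno"}
refined_ids = {"1abc", "2def", "3ghi"}   # refined set is contained in the general set
core_ids = {"1abc"}                      # core set is contained in the refined set

S_C = core_ids                   # test set: the core set of the benchmark
S_R = refined_ids - core_ids     # first training set: refined set excluding the core set
S_G = general_ids - core_ids     # second training set: general set excluding the core set
```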
### Model Parametrization
For the sake of convenience, we use the notation GGL\({}^{\alpha}_{\kappa,\tau}\) to indicate the geometric graph learning features generated by using kernel type \(\alpha\) and corresponding kernel parameters \(\kappa\) and \(\tau\). Here, \(\alpha=E\) and \(\alpha=L\) refer to the generalized exponential and generalized Lorentz kernels, respectively. And \(\tau\) is used such that \(\eta_{kk^{\prime}}=\tau(\hat{r}_{k}+\hat{r}_{k^{\prime}})\), where \(\hat{r}_{k}\) and \(\hat{r}_{k^{\prime}}\)
\begin{table}
\begin{tabular}{l c c c} \hline Dataset & \(|S_{G}|\) & \(|S_{R}|\) & \(|S_{C}|\) \\ \hline CASF–2007 benchmark & 2852 & 1105 & 195 \\ CASF–2013 benchmark & 11713 & 3516 & 195 \\ CASF–2016 benchmark & 12998 & 3772 & 285 \\ \hline \end{tabular}
\end{table}
Table 1: Summary of PDBbind datasets used in the present work.
Figure 1: Illustration of the weighted colored subgraph. Part (a) is a diagram of the structure of the xanthine molecule (C\({}_{5}\)H\({}_{4}\)N\({}_{4}\)O\({}_{2}\); ligand name: XAN; PDB ID: 2u29), and (b) the weighted colored subgraphs, from left to right, G\({}_{\text{N.am--O.2}}\), G\({}_{\text{N.pl3--O.2}}\), and G\({}_{\text{N.2--O.2}}\) consisting of SYBYL atom-type pair N.am–O.2, N.pl3–O.2, and N.2–O.2, respectively. The dashed line in (b) represents the edges of the graph.
are the van der Waals radii of atom type \(k\) and atom type \(k^{\prime}\), respectively. Kernel parameters \(\kappa\) and \(\tau\) are selected based on the cross-validation with a random split of the training data. We propose a GGL representation in which multiple kernels are parametrized at different scale (\(\eta\)) values. In this work, we consider at most two kernels. As a straightforward notation extension, two kernels can be parametrized by \(\text{GGL}^{\alpha_{1},\alpha_{2}}_{\kappa_{1},\tau_{1};\kappa_{2},\tau_{2}}\). Each of these kernels gives rise to one set of features. Finally, as we consider two different schemes to extract the protein-ligand atom-type pair, we introduce the following two notations \({}^{\text{sybyl}}\text{GGL}^{\alpha_{1},\alpha_{2}}_{\kappa_{1},\tau_{1};\kappa_{2},\tau_{2}}\) and \({}^{\text{ecif}}\text{GGL}^{\alpha_{1},\alpha_{2}}_{\kappa_{1},\tau_{1};\kappa_{2},\tau_{2}}\).
## 3 Results and Discussion
In this section, we present the scoring power of our proposed geometric graph learning (GGL) model for the benchmark datasets discussed above.
### Hyperparameter Optimization and Model Performance
The performance of a machine learning model depends critically on the optimization of its key parameters. To achieve the best performance of our GGL model on each benchmark, we optimize the kernel parameters \(\kappa\) and \(\tau\). We use five-fold cross-validation (CV) and a grid search method to find the optimal parameters \(\tau\) in the range \([0.5,10]\) and \(\kappa\) in the range \([0.5,20]\) with an increment of \(0.5\) for both parameter ranges. High values of the power parameter \(\kappa\) are included to approximate the ideal low-pass filter (ILF) [43].
As a general strategy to optimize the model hyperparameters on each benchmark, we carry out a five-fold CV on the training set \(S_{R}\), which is the refined set excluding the core set of the corresponding benchmark. Once we find the best model for each benchmark dataset, we test the performance of the model on the test set \(S_{C}\) (i.e., the core set of the corresponding benchmark). For the prediction task, our first strategy is to train the model using the training set \(S_{R}\) (i.e., the refined set excluding the core set) and observe the performance on the test set. Secondly, we train the best model using the training set \(S_{G}\) (i.e., the general set excluding the core set) and test the performance on the test set. As the general set of each benchmark contains more diverse complexes than the refined set, we expect our model to perform better when trained with the training set \(S_{G}\). Below we discuss the optimization of our model hyperparameters \(\tau\) and \(\kappa\) and the model's performance on each benchmark.
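A minimal sketch of evaluating one grid point of this five-fold CV search is given below. The helpers `build_features` and `make_model` stand in for the descriptor generation and the GBDT constructor described earlier; they are placeholders, not functions from the released code.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import KFold

def median_cv_pearson(build_features, y, kappa, tau, make_model, n_splits=5):
    """Median Pearson R_p over a 5-fold CV for a single (kappa, tau) grid point."""
    X = build_features(kappa, tau)          # GGL descriptors for this parametrization
    scores = []
    for train_idx, valid_idx in KFold(n_splits=n_splits, shuffle=True).split(X):
        model = make_model()
        model.fit(X[train_idx], y[train_idx])
        scores.append(pearsonr(y[valid_idx], model.predict(X[valid_idx]))[0])
    return np.median(scores)

# grid used in the search: tau in [0.5, 10] and kappa in [0.5, 20], step 0.5
taus = np.arange(0.5, 10.0 + 0.5, 0.5)
kappas = np.arange(0.5, 20.0 + 0.5, 0.5)
# best = max(((k, t) for k in kappas for t in taus),
#            key=lambda kt: median_cv_pearson(build_features, y, kt[0], kt[1], make_model))
```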
Figure 2: Illustration of the geometric graph learning strategy using the molecular complex with PDBID: 5bwc (first column). The second column represents the protein-ligand atom-type pair CA–O.3, OE1–N.pl3, and NE1–C.2, respectively from top to bottom. The corresponding weighted colored geometric subgraphs are shown in the third column. The fourth column presents the statistics of the subgraph rigidity. In the final column, the advanced machine learning models such as the gradient boosting trees integrate these statistical features for training and prediction.
#### 3.1.1 CASF-2016
The first benchmark we consider is the CASF-2016, the latest of the three benchmark datasets in the PDBbind database. We carry out five-fold cross-validation (CV) on the first training set, which is the refined set excluding the core set of this benchmark. The CV results for both the single-scale and two-scale SYBYL atom-type GGL models are presented in Figure 3. The parameter set \((\kappa,\tau)=(2.5,1.5)\) gives the best median Pearson's correlation coefficient \(R_{p}\)=0.795 for the single-scale exponential kernel (Figure 3a). For the single-scale Lorentz kernel model the parameters are \((\kappa,\tau)=(14.0,1.5)\) with median \(R_{p}\)=0.795 (Figure 3b). The two-scale kernel model is built on top of the previously optimized single-scale kernel parameters, so we only optimize the parameters for the second kernel. Figures 3c and 3d plot the CV results for the second kernel parameters \(\kappa_{2}\) and \(\tau_{2}\) of the two-scale kernel SYBYL atom-type model \({}^{\text{sybyl}}\text{GGL}^{\alpha_{1},\alpha_{2}}_{\kappa_{1},\tau_{1};\kappa_{2},\tau_{2}}\) with \(\kappa_{1}\) and \(\tau_{1}\) fixed at their optimal values from the single-scale model. We observe that the best two-scale exponential kernel model is \({}^{\text{sybyl}}\text{GGL}^{\text{E,E}}_{2.5,1.5;15.0,8.5}\) with median \(R_{p}\)=0.796 (Figure 3c) and the best two-scale Lorentz kernel model is \({}^{\text{sybyl}}\text{GGL}^{\text{L,L}}_{14.0,1.5;16.0,0.5}\) with median \(R_{p}\)=0.797 (Figure 3d).
To find the optimal parameters for the ECIF atom-type models, we carry out a process similar to the one discussed above. Figure 4 plots the CV performance of the single-scale kernel ECIF atom-type model \({}^{\text{ecif}}\text{GGL}^{\alpha}_{\kappa,\tau}\). We find that the best parameters for the single-scale exponential kernel model are \(\kappa\)=13.0 and \(\tau\)=2.5 with median \(R_{p}\)=0.790 (Figure 4a) and the best parameters for the single-scale Lorentz kernel model are \(\kappa\)=14.0 and \(\tau\)=1.5 with median \(R_{p}\)=0.789 (Figure 4b). The optimal parameters for the two-scale kernel model are also explored in a similar fashion as above. The CV results of each combination of the second kernel parameters are presented in Figure 4. The figure confirms that the best two-scale exponential kernel model is \({}^{\text{ecif}}\text{GGL}^{\text{E,E}}_{13.0,2.5;15.0,9.0}\) with median \(R_{p}\)=0.792 (Figure 4c) and the best two-scale Lorentz kernel model is \({}^{\text{ecif}}\text{GGL}^{\text{L,L}}_{14.0,1.5;13.5,9.0}\) with median \(R_{p}\)=0.791 (Figure 4d).
After finding the best models for this benchmark, we are interested in validating their performance on the test set, i.e., the CASF-2016 core set. The performance is measured using Pearson's correlation coefficient between the predicted and the experimental binding affinities of the test set complexes. First, we train each model with the smaller training set \(S_{R}\), i.e., the PDBbind v2016 refined set excluding the CASF-2016 core set. Then we use the trained model to predict the test set. To this end, we repeat the training up to 50 times and use the average of all predicted values as the final prediction. As a second approach, we train the model with the bigger training set \(S_{G}\), i.e., the PDBbind v2016 general set excluding the CASF-2016 core set. For the prediction task, we again repeat the training 50 times and use the average of all predictions.
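The repeated-training protocol averages out the randomness introduced by the stochastic row and feature subsampling of the GBDT. A minimal sketch with placeholder data names is:

```python
import numpy as np

def averaged_prediction(make_model, X_train, y_train, X_test, n_runs=50):
    """Train a fresh GBDT n_runs times and average the test-set predictions."""
    predictions = []
    for _ in range(n_runs):
        model = make_model()              # new model, new random subsampling
        model.fit(X_train, y_train)
        predictions.append(model.predict(X_test))
    return np.mean(predictions, axis=0)   # final prediction for the core set
```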
The performance of the best models (both SYBYL atom-type and ECIF atom-type) on the test set is listed in Table 2. We find that the performance of all models significantly improved when the model is trained with the bigger training data \(S_{G}\). The results in Table 2 indicate that the two-scale models perform slightly better than their single-scale counterparts, as expected. We also observe that the SYBYL atom-type models, both single-scale and two-scale, outperform their ECIF atom-type counterparts. The best model for this benchmark is the two-scale Lorentz kernel SYBYL atom-type model \({}^{\text{sybyl}}\text{GGL}^{\text{L,L}}_{14.0,1.5;16.0,0.5}\) with reported Pearson's correlation \(R_{p}\)=0.873. In addition, we compare the scoring power of our proposed GGL-Score against various state-of-the-art scoring functions in the literature [33, 31, 44, 45, 46, 47]. Figure 9c illustrates such a comparison for the CASF-2016 benchmark, and our model clearly stands at the top. The second best is TopBP-DL with reported \(R_{p}\)=0.848. It must be stressed that the base geometric and algebraic graph learning models that consider the element-specific interactions instead of the atom-type interactions have a comparatively lower performance with \(R_{p}\)=0.815 [1] and \(R_{p}\)=0.835 [2], respectively. The above comparison and Figure 9c confirm the scoring power and the effectiveness of considering atom-type pair interactions in the present model. Moreover, to highlight that the current model's impressive performance is due to the incorporation of the atom-type pair interactions and not because of the use of larger training data \(S_{G}\), we explore the performance of the base GGL models with element-specific interactions that are trained on the set \(S_{G}\). The details of this experiment and results are presented in Appendix A. While the use of the bigger training data improves the performance of the base GGL model, our extended atom-type models still outperform them by a wide margin (see Table A1).
#### 3.1.2 CASF-2013
As a second benchmark dataset among the CASF family, we consider the CASF-2013 benchmark. For both SYBYL atom-type and ECIF atom-type models, we carry out a similar hyperparameter optimization to that of the CASF-2016 benchmark. We use the smaller training set \(S_{R}\) of this benchmark which is the PDBbind v2015 refined set excluding
\begin{table}
\begin{tabular}{l c c c c} \multicolumn{3}{c}{Pearson’s \(R_{p}\) of single-scale Model} & \multicolumn{2}{c}{Pearson’s \(R_{p}\) of two-scale Model} \\ \hline Model & Trained with \(S_{R}\) & Trained with \(S_{G}\) & Model & Trained with \(S_{G}\) \\ \hline \({}^{\text{sybyl}}\text{GGL}^{\text{E}}_{2.5,1.5}\) & 0.838 & 0.872 & \({}^{\text{sybyl}}\text{GGL}^{\text{E,E}}_{2.5,1.5;15.0,8.5}\) & 0.872 \\ \({}^{\text{sybyl}}\text{GGL}^{\text{L}}_{14.0,1.5}\) & 0.832 & 0.872 & \({}^{\text{sybyl}}\text{GGL}^{\text{L,L}}_{14.0,1.5;16.0,0.5}\) & 0.873 \\ \hline \({}^{\text{ecif}}\text{GGL}^{\text{E}}_{13.0,2.5}\) & 0.824 & 0.867 & \({}^{\text{ecif}}\text{GGL}^{\text{E,E}}_{13.0,2.5;15.0,9.0}\) & 0.868 \\ \({}^{\text{ecif}}\text{GGL}^{\text{L}}_{14.0,1.5}\) & 0.822 & 0.865 & \({}^{\text{ecif}}\text{GGL}^{\text{L,L}}_{14.0,1.5;13.5,9.0}\) & 0.868 \\ \hline \end{tabular}
\end{table}
Table 2: Performance of various GGL models on CASF–2016 test set.
Figure 4: Optimized parameters for \({}^{\rm ecif}\)GGL model for CASF–2016 benchmark. The best parameters locations are marked by “x”. The optimal parameters for (a) single-scale exponential kernel model are \((\kappa,\tau)=(13.0,2.5)\) with the corresponding median \(R_{p}=0.790\) and (b) single-scale Lorentz kernel model are \((\kappa,\tau)=(14.0,1.5)\) with corresponding median \(R_{p}=0.789\). The optimal second kernel parameters for (c) two-scale exponential kernel model are \((\kappa,\tau)=(15.0,9.0)\) with the corresponding median \(R_{p}=0.792\) and (d) two-scale Lorentz kernel model are \((\kappa,\tau)=(13.5,9.0)\) with the corresponding median \(R_{p}=0.791\).
Figure 3: Optimized parameters for \({}^{\rm sybyl}\)GGL model for CASF–2016 benchmark. The best parameters locations are marked by “x”. The optimal parameters for (a) single-scale exponential kernel model are \((\kappa,\tau)=(2.5,1.5)\) with the corresponding median \(R_{p}=0.795\) and (b) single-scale Lorentz kernel model are \((\kappa,\tau)=(14.0,1.5)\) with corresponding median \(R_{p}=0.795\). The optimal second kernel parameters for (c) two-scale exponential kernel model are \((\kappa,\tau)=(15.0,8.5)\) with the corresponding median \(R_{p}=0.796\) and (d) two-scale Lorentz kernel model are \((\kappa,\tau)=(16.0,0.5)\) with the corresponding median \(R_{p}=0.797\).
the CASF-2013 core set for the cross-validation process. Figure 5 reveals the optimal parameters for the SYBYL atom-type model. The best parameters for the single-scale exponential kernel are found to be \(\kappa\)=5.5 and \(\tau\)=2.0 with median \(R_{p}\)=0.796 (Figure 5a) and the best parameters for the single-scale Lorentz kernel are \(\kappa\)=5.5 and \(\tau\)=0.5 with median \(R_{p}\)=0.795 (Figure 5b). For the two-scale kernel models, we fix the first kernel parameters at their optimal values and optimize the second kernel parameters. Figure 5c shows that the best two-scale exponential kernel model is \({}^{\text{sybyl}}\text{GGL}^{\text{E,E}}_{5.5,2.0;4.0,0.5}\) with median \(R_{p}\)=0.798 and Figure 5d shows that the best two-scale Lorentz kernel model is \({}^{\text{sybyl}}\text{GGL}^{\text{L,L}}_{5.5,0.5;12.0,9.5}\) with median \(R_{p}\)=0.798.
For the ECIF atom-type model hyperparameter optimization, we follow the same procedure as above. We find that the best single-scale exponential kernel model is \({}^{\text{ecif}}\text{GGL}^{\text{E}}_{12.0,2.5}\) with median \(R_{p}\)=0.792 (Figure 6a) and the best single-scale Lorentz kernel model is \({}^{\text{ecif}}\text{GGL}^{\text{L}}_{18.0,2.0}\) with median \(R_{p}\)=0.791 (Figure 6b). For the two-scale kernel models, the best two-scale exponential kernel model is found to be \({}^{\text{ecif}}\text{GGL}^{\text{E,E}}_{12.0,2.5;15.0,8.5}\) with median \(R_{p}\)=0.795 (Figure 6c). Finally, from Figure 6d, we find that the best two-scale Lorentz kernel model is \({}^{\text{ecif}}\text{GGL}^{\text{L,L}}_{18.0,2.0;15.0,8.5}\) with median \(R_{p}\)=0.795.
Furthermore, we utilize the best models of this benchmark to predict the binding affinity of the 195 complexes in the CASF-2013 test set. Like the CASF-2016 benchmark, we first train each model using the smaller training set of this benchmark, i.e., the PDBbind v2015 refined set excluding the CASF-2013 core set, and then we generate a prediction for the test set from the average of 50 runs. Secondly, we use the more extensive training set, PDBbind v2015 general set, excluding the CASF-2013 core set to train the model and use it to get the prediction for the test set.
The performance of all models on the CASF-2013 test set is reported in Table 3. We see a similar trend: the performance of all models improved significantly when the model is trained on the bigger training data \(S_{G}\). We also observe that the SYBYL atom-type models consistently outperform their ECIF atom-type counterparts. With the two-scale kernel model performing slightly better than the single-scale kernel model, the best-performing model for this benchmark is the two-scale exponential kernel SYBYL atom-type model \({}^{\text{sybyl}}\text{GGL}^{\text{E,E}}_{5.5,2.0;4.0,0.5}\) with reported Pearson's correlation coefficient \(R_{p}\)=0.848. Additionally, Figure 9b shows that our model dominates other published models in scoring power for this benchmark. The reported \(R_{p}\)=0.848 of our best model is significantly higher than the \(R_{p}\)=0.808 of the runner-up model TopBP. Furthermore, Table A1 in Appendix A confirms that the outstanding performance of our model is due to the incorporation of the atom-type interactions in the model.
#### 3.1.3 CASF-2007
Our last benchmark is the CASF-2007. The hyperparameter optimization for this benchmark is similar to the previous two benchmarks. The smaller training set \(S_{R}\), which is the PDBbind v2007 refined set excluding the CASF-2007 core set, is used for the five-fold CV. The CV performances of the SYBYL atom-type models are plotted in Figure 7. The optimal kernel parameters for the single-scale exponential model are \(\kappa\)=2.5 and \(\tau\)=0.5 (Figure 7a) with median \(R_{p}\)=0.745. For the single-scale Lorentz kernel, Figure 7b, the best parameters are \(\kappa\)=13.5 and \(\tau\)=0.5 with median \(R_{p}\)=0.746. Since the two-scale models are built on top of the optimized single-scale models, we only search for the optimal second kernel parameters. Figure 7c shows that the two-scale exponential model \({}^{\rm sybyl}\)GGL\({}_{2.5,0.5,19.0,9.0}^{\rm E,E}\) gives the best median \(R_{p}\)=0.747 while Figure 7d reveals that the best two-scale Lorentz kernel model is \({}^{\rm sybyl}\)GGL\({}_{13.5,0.5,13.0,9.5}^{\rm L,L}\) with median \(R_{p}\)=0.747.
The hyperparameter optimization for the ECIF atom-type models is carried out in a similar fashion. Figure 8 displays the best parameters and the CV performance. We find that the best single-scale exponential model is \({}^{\rm ecif}\)GGL\({}_{17.5,1.5}^{\rm E}\) with median \(R_{p}\)=0.739 (Figure 8a) and the best single-scale Lorentz model is \({}^{\rm ecif}\)GGL\({}_{15.5,1.5}^{\rm L}\) with median \(R_{p}\)=0.738 (Figure 8b). The best two-scale exponential kernel model is \({}^{\rm ecif}\)GGL\({}_{17.5,1.5,16.5,8.5}^{\rm E,E}\) with median \(R_{p}\)=0.741 (Figure 8c). Finally, Figure 8d shows that the best two-scale Lorentz kernel model is \({}^{\rm ecif}\)GGL\({}_{15.5,1.5,15.0,7.5}^{\rm L,L}\) with median \(R_{p}\)=0.742.
Having optimized the models' hyperparameters, we now predict the binding affinity of the 195 complexes in the CASF-2007 test set. Just like in the previous two benchmarks, we first train each model using the smaller training set \(S_{R}\) and produce a prediction for the test set from the average of 50 runs. Secondly, we use the bigger training set \(S_{G}\) of this benchmark, which is the PDBbind v2007 general set excluding the CASF-2007 core set, to train the model and use the trained model to predict the binding affinity of the test set.
The performance of all our selected models for this benchmark is reported in Table 4. We observe that all of these models perform significantly better when trained with the bigger training set \(S_{G}\). Following a similar trend as in the previous two benchmarks, the SYBYL atom-type models of this benchmark consistently perform better than their ECIF atom-type counterparts. Also, the two-scale kernel models improve the performance compared to their single-scale versions. The best-performing model for this benchmark is the two-scale Lorentz kernel SYBYL atom-type model \({}^{\rm sybyl}\)GGL\({}_{13.5,0.5,13.0,9.5}^{\rm L,L}\) with Pearson's correlation coefficient \(R_{p}\)=0.834. Moreover, Figure 9a reveals the scoring power of our model on this benchmark. Our proposed GGL model stands at the top with reported \(R_{p}\)=0.834 while AGL-Score is the runner-up with \(R_{p}\)=0.830.
\begin{table}
\begin{tabular}{l c c c c} \multicolumn{3}{c}{Pearson’s \(R_{p}\) of single-scale Model} & \multicolumn{2}{c}{Pearson’s \(R_{p}\) of two-scale Model} \\ \hline Model & Trained with \(S_{R}\) & Trained with \(S_{G}\) & Model & Trained with \(S_{G}\) \\ \hline \({}^{\rm sybyl}\)GGL\({}_{2.5,0.5}^{\rm E}\) & 0.803 & 0.824 & \({}^{\rm sybyl}\)GGL\({}_{2.5,0.5,19.0,9.0}^{\rm E,E}\) & 0.833 \\ \({}^{\rm sybyl}\)GGL\({}_{13.5,0.5}^{\rm L}\) & 0.807 & 0.827 & \({}^{\rm sybyl}\)GGL\({}_{13.5,0.5,13.0,9.5}^{\rm L,L}\) & 0.834 \\ \hline \({}^{\rm ecif}\)GGL\({}_{17.5,1.5}^{\rm E}\) & 0.794 & 0.807 & \({}^{\rm ecif}\)GGL\({}_{17.5,1.5,16.5,8.5}^{\rm E,E}\) & 0.811 \\ \({}^{\rm ecif}\)GGL\({}_{15.5,1.5}^{\rm L}\) & 0.792 & 0.805 & \({}^{\rm ecif}\)GGL\({}_{15.5,1.5,15.0,7.5}^{\rm L,L}\) & 0.809 \\ \hline \end{tabular}
\end{table}
Table 4: Performance of various GGL models on CASF–2007 test set.
Figure 6: Optimized parameters for \({}^{\rm ecif}\)GGL model for CASF–2013 benchmark. The best parameters locations are marked by “x”. The optimal parameters for (a) single-scale exponential kernel model are \((\kappa,\tau)=(12.0,2.5)\) with the corresponding median \(R_{p}=0.792\) and (b) single-scale Lorentz kernel model are \((\kappa,\tau)=(18.0,2.0)\) with corresponding median \(R_{p}=0.791\). The optimal second kernel parameters for (c) two-scale exponential kernel model are \((\kappa,\tau)=(15.0,8.5)\) with the corresponding median \(R_{p}=0.795\) and (d) two-scale Lorentz kernel model are \((\kappa,\tau)=(15.0,8.5)\) with the corresponding median \(R_{p}=0.795\).
Figure 8: Optimized parameters for \({}^{\rm ecif}\)GGL model for CASF–2007 benchmark. The best parameters locations are marked by “x”. The optimal parameters for (a) single-scale exponential kernel model are \((\kappa,\tau)=(17.5,1.5)\) with the corresponding median \(R_{p}=0.739\) and (b) single-scale Lorentz kernel model are \((\kappa,\tau)=(15.5,1.5)\) with corresponding median \(R_{p}=0.738\). The optimal second kernel parameters for (c) two-scale exponential kernel model are \((\kappa,\tau)=(16.5,8.5)\) with the corresponding median \(R_{p}=0.741\) and (d) two-scale Lorentz kernel model are \((\kappa,\tau)=(15.0,7.5)\) with the corresponding median \(R_{p}=0.742\).
Figure 7: Optimized parameters for \({}^{\rm sybyl}\)GGL model for CASF–2007 benchmark. The best parameters locations are marked by “x”. The optimal parameters for (a) single-scale exponential kernel model are \((\kappa,\tau)=(2.5,0.5)\) with the corresponding median \(R_{p}=0.745\) and (b) single-scale Lorentz kernel model are \((\kappa,\tau)=(13.5,0.5)\) with corresponding median \(R_{p}=0.746\). The optimal second kernel parameters for (c) two-scale exponential kernel model are \((\kappa,\tau)=(19.0,9.0)\) with the corresponding median \(R_{p}=0.747\) and (d) two-scale Lorentz kernel model are \((\kappa,\tau)=(13.0,9.5)\) with the corresponding median \(R_{p}=0.747\).
## 4 Conclusion
The binding affinity between a ligand and its receptor protein is a key component in structure-based drug design. Although significant progress has been made over the past decades, an accurate prediction of protein-ligand binding affinity remains a challenging task. Geometric graph theories are widely used in the study of molecular and biomolecular systems. Furthermore, the element-type graph coloring-based multiscale weighted colored graph (MWCG) approaches have particularly shown success in the task of binding affinity prediction [1, 2]. On the other hand, SYBYL atom-type interactions and extended connectivity interactive features (ECIF) have enjoyed their success in molecular property prediction [32, 35]. Therefore, with an aim to develop robust and reliable scoring functions for large and diverse protein-ligand datasets, the present work combines the graph learning model and extended atom types to give rise to novel geometric graph theory-based MWCG descriptors for the protein-ligand complex, where the graph coloring is based on SYBYL atom-type and ECIF atom-type interactions. By pairing with the gradient boosting decision tree (GBDT) machine learning algorithm, our approach results in two different methods, namely \({}^{\text{sybyl}}\)GGL-Score and \({}^{\text{ecif}}\)GGL-Score. We explore the optimal hyperparameters of our models using a five-fold cross-validation on the training set of three commonly used benchmarks in the drug design area, namely CASF-2007 [33], CASF-2013 [40], and CASF-2016 [41]. For the binding affinity prediction task of each benchmark's test set complexes, we consider two training sets: the refined set excluding the core set and the general set excluding the core set. Our model performs significantly better in each benchmark when trained with the larger training set. It is also found that the SYBYL atom-type models \({}^{\text{sybyl}}\)GGL-Score outperform the ECIF atom-type models \({}^{\text{ecif}}\)GGL-Score in most cases.
To demonstrate the scoring power of the proposed models, many state-of-the-art scoring functions are considered in each benchmark. Impressively, our \({}^{\text{sybyl}}\)GGL-Score outperforms other models by a wide margin in all three PDBbind benchmarks. In addition to its accuracy and robustness, our model is computationally inexpensive: the only required structural inputs are the atom types and coordinates. Moreover, our model can be applied to a wide range of molecular property prediction tasks, such as toxicity, solubility, protein mutation, protein folding, and protein-nucleic acid interactions.
## 5 Appendix A
In this section, we explore the performance of the basic geometric graph approach model that considers element-type interactions presented in [1] using the bigger training set \(S_{G}\), i.e., the general set excluding the core set of each benchmark. We carry out a similar experiment as we did for our present model. For simplicity, we use the notation GGL\({}^{\alpha}_{\kappa,\tau}\) for a single-scale kernel and GGL\({}^{\alpha_{1},\alpha_{2}}_{\kappa_{1},\tau_{1};\kappa_{2},\tau_{2}}\) for a two-scale kernel basic element-type geometric graph learning model. To find the optimized parameters for each benchmark, we carry out a five-fold CV on the training set \(S_{R}\), i.e., the refined set excluding the core set of the corresponding benchmark. For the CASF-2016 benchmark, the best single kernel models are
Figure 9: Performance comparison of different scoring functions on CASF benchmarks. Our proposed model in this work, GGL-Score, is highlighted in red, and the rest is in purple. a) CASF–2007: the performances of other methods are taken from previous studies [33, 31, 44, 45, 46, 47, 48]. Our \({}^{\text{sybyl}}\)GGL-Score achieves \(R_{p}\)=0.834. b) CASF–2013: the other results are extracted from [48, 40, 45]. Our \({}^{\text{sybyl}}\)GGL-Score achieves \(R_{p}\)=0.848. c) CASF–2016: our \({}^{\text{sybyl}}\)GGL-Score achieves \(R_{p}\)=0.873; other scoring functions are discussed in [41, 49, 48].
found to be \(\mathrm{GGL}_{15.5,2.0}^{\mathrm{E}}\) (Figure A1a) and \(\mathrm{GGL}_{16.0,2.0}^{\mathrm{L}}\) (Figure A1b) with median Pearson correlation \(R_{p}\)=0.769 for both models. The best two kernel models for CASF-2016 are \(\mathrm{GGL}_{15.5,2.0,16.0,3.0}^{\mathrm{E},\mathrm{E}}\) (Figure A1c) and \(\mathrm{GGL}_{16.0,2.0,12.0,1.5}^{\mathrm{L},\mathrm{L}}\) (Figure A1d) with median \(R_{p}\)=0.773 for both models. The performance of all models on the test set of the CASF-2016 benchmark is reported in Table A1. It is interesting to find that the performance of each model improved significantly when trained with the bigger training data \(S_{G}\), i.e., the PDBbind v2016 general set excluding the core set. The best-performing model for this benchmark is the two-scale exponential kernel model \(\mathrm{GGL}_{15.5,2.0,16.0,3.0}^{\mathrm{E},\mathrm{E}}\) with \(R_{p}\)=0.859. We note that both of our proposed GGL models, the SYBYL atom-type and ECIF atom-type models, perform notably better (with reported \(R_{p}\)=0.873 and 0.868, respectively) than the basic element-type model.
The CV performance for the CASF-2013 benchmark (Figure A2) reveals that the best models for this benchmark are \(\mathrm{GGL}_{15.0,2.0}^{\mathrm{E}}\) and \(\mathrm{GGL}_{16.0,2.0}^{\mathrm{L}}\) for the single kernel, and \(\mathrm{GGL}_{15.0,2.0,15.0,3.0}^{\mathrm{E},\mathrm{E}}\) and \(\mathrm{GGL}_{16.0,2.0,11.5,1.5}^{\mathrm{L},\mathrm{L}}\) for two kernels, with median \(R_{p}\)= 0.774, 0.773, 0.778, and 0.776, respectively. Table A1 indicates that the use of the bigger training set \(S_{G}\) improved the performance of these models. The best-performing model for this benchmark is the single-scale exponential model \(\mathrm{GGL}_{15.0,2.0}^{\mathrm{E}}\) with reported \(R_{p}\)=0.821. However, our proposed SYBYL atom-type GGL model outperforms the basic GGL model by a wide margin with reported \(R_{p}\)=0.848 for this benchmark.
Figure A3 plots the CV performance for the CASF-2007 benchmark. The best models for this benchmark are \(\mathrm{GGL}_{17.0,1.5}^{\mathrm{E}}\) and \(\mathrm{GGL}_{17.0,1.5}^{\mathrm{L}}\) for the single kernel, and \(\mathrm{GGL}_{17.0,1.5,16.5,3.0}^{\mathrm{E},\mathrm{E}}\) and \(\mathrm{GGL}_{17.0,1.5,6.5,10.0}^{\mathrm{L},\mathrm{L}}\) for two kernels, with median \(R_{p}\)= 0.724, 0.724, 0.733, and 0.730, respectively. The performance of these models is presented in Table A1. We observe that the use of the bigger training data significantly improves the performance of each model for this benchmark as well. While the best-performing basic GGL model for this benchmark is the two-scale exponential kernel model \(\mathrm{GGL}_{17.0,1.5,16.5,3.0}^{\mathrm{E},\mathrm{E}}\) with Pearson's \(R_{p}\)=0.833, our proposed SYBYL atom-type GGL model for this benchmark performs slightly better with \(R_{p}\)=0.834.
Figure A3: Optimized parameters for basic GGL model for CASF–2007 benchmark. The best parameters locations are marked by “x”. The optimal parameters for (a) single-scale exponential kernel model are \((\kappa,\tau)=(17.0,1.5)\) with the corresponding median \(R_{p}=0.724\) and (b) single-scale Lorentz kernel model are \((\kappa,\tau)=(17.0,1.5)\) with corresponding median \(R_{p}=0.724\). The optimal second kernel parameters for (c) two-scale exponential kernel model are \((\kappa,\tau)=(16.5,3.0)\) with the corresponding median \(R_{p}=0.733\) and (d) two-scale Lorentz kernel model are \((\kappa,\tau)=(6.5,10.0)\) with the corresponding median \(R_{p}=0.730\).
## 6 Data and Software Availability
The source code is available at Github: [https://github.com/NguyenLabUKY/GGL-ETA-Score](https://github.com/NguyenLabUKY/GGL-ETA-Score).
## 7 Competing interests
No competing interest is declared.
## 8 Acknowledgments
This work is supported in part by funds from the National Science Foundation (NSF: # 2053284 and # 2151802), and University of Kentucky Startup Fund.
|
2305.14067 | DIVA: A Dirichlet Process Mixtures Based Incremental Deep Clustering
Algorithm via Variational Auto-Encoder | Generative model-based deep clustering frameworks excel in classifying
complex data, but are limited in handling dynamic and complex features because
they require prior knowledge of the number of clusters. In this paper, we
propose a nonparametric deep clustering framework that employs an infinite
mixture of Gaussians as a prior. Our framework utilizes a memoized online
variational inference method that enables the "birth" and "merge" moves of
clusters, allowing our framework to cluster data in a "dynamic-adaptive"
manner, without requiring prior knowledge of the number of features. We name
the framework as DIVA, a Dirichlet Process-based Incremental deep clustering
framework via Variational Auto-Encoder. Our framework, which outperforms
state-of-the-art baselines, exhibits superior performance in classifying
complex data with dynamically changing features, particularly in the case of
incremental features. We released our source code implementation at:
https://github.com/Ghiara/diva | Zhenshan Bing, Yuan Meng, Yuqi Yun, Hang Su, Xiaojie Su, Kai Huang, Alois Knoll | 2023-05-23T13:44:12Z | http://arxiv.org/abs/2305.14067v3 | # DIVA: A Dirichlet Process Based Incremental Deep Clustering Algorithm via Variational Auto-Encoder
###### Abstract
Generative model-based deep clustering frameworks excel in classifying complex data, but are limited in handling dynamic and complex features because they require prior knowledge of the number of clusters. In this paper, we propose a nonparametric deep clustering framework that employs an infinite mixture of Gaussians as a prior. Our framework utilizes a memoized online variational inference method that enables the "birth" and "merge" moves of clusters, allowing our framework to cluster data in a "dynamic-adaptive" manner, without requiring prior knowledge of the number of features. We name the framework **DIVA**, a **D**irichlet Process-based **I**ncremental deep clustering framework via **V**ariational **A**uto-Encoder. Our framework, which outperforms state-of-the-art baselines, exhibits superior performance in classifying complex data with dynamically changing features, particularly in the case of incremental features.
## 1 Introduction
Clustering is a key task in unsupervised learning that aims to group data points based on similarity or dissimilarity metrics. Recently, deep clustering algorithms that combine deep neural networks with clustering methods have shown great promise in various applications, such as image segmentation [1; 2], document clustering [3; 4], and anomaly detection [5].
Generative model-based deep clustering algorithms have emerged as a promising research direction, with the variational auto-encoder (VAE) [6] being a popular choice due to its ability to learn data representations and generation [2]. VAE-based clustering methods typically involve two stages: training a VAE to learn the underlying data distribution and then using the learned latent variables for clustering. The advantage of VAE is its ability to handle non-linear and complex distributions [1].
A natural idea in this field is to combine the Gaussian mixture model (GMM) [7], which is a highly representative clustering module, with the VAE framework. This framework employs a GMM as the prior to provide a richer information capacity, while the VAE's powerful representation learning and reconstruction capabilities can compensate for the shallow structure of the GMM and its weak representation of non-linearity [1]. However, such frameworks still have limitations, including the need to specify the number of clusters beforehand, which can be challenging when the number is unknown or varies across datasets. In the Bayesian nonparametric field, previous work tried to replace the parameters of the isotropic Gaussian prior of the standard VAE with the stick-breaking proportions of a Dirichlet process [8]. Unfortunately, in this case the shape and density of individual components cannot be well defined. To address this issue, we propose using the Dirichlet process mixture model (DPMM) [9] from nonparametric Bayes as the clustering module for our framework. DPMM's random probability measure sampled from the base distribution through the stick-breaking process maintains both discrete and continuous characteristics, and the number of components can theoretically reach infinity, overcoming the problem of pre-specifying the number of components. Additionally, we use the "birth" and "merge" behavior provided by the DPMM for dynamic feature adaptation, allowing our framework to dynamically adjust the number of components according to the observed data to improve clustering performance. Our proposed framework demonstrates superior clustering performance and disentangled representation learning ability on various datasets, especially when facing incremental features, where the number of features in a dataset gradually increases during training. This study presents novel insights on incorporating the DPMM as a prior for the VAE and utilizing the "birth" and "merge" behavior to dynamically adjust the number of clusters in a generative deep clustering framework.
The contributions of our paper are summarized as follows: First, we eliminate pre-defining the cluster number in prior space by introducing nonparametric clustering module DPMM into our VAE-based framework, allowing for clustering data with infinite features. Second, we introduce a memoized online variational Bayes inference method into the framework, which enables dynamic changes in the number, density, and shape of clusters in the prior space according to the observations. This allows for "birth" and "merge" of clusters. Third, we verify the dynamic-adaptation ability of our proposed framework, DIVA, demonstrating its effectiveness against state-of-the-art baselines in facing incremental data features. We show that DIVA can dynamically adjust the clusters in the feature-increasing dataset to maintain a high level of unsupervised clustering accuracy. The dynamic adaptation capability of DIVA demonstrated in our study has the potential to inspire new approaches to tackling the challenge of catastrophic forgetting [10], and could be extended to domains such as continuous learning [11] and meta-reinforcement learning [12; 13; 14].
## 2 Related work
Clustering is a fundamental task in machine learning, with widely used models such as \(k\)-means [15] or Gaussian mixture models [7] from Bayes parametrics. Bayesian non-parametric methods provide a flexible framework to handle an unknown number of clusters, with representative models like Dirichlet process mixture model [16], Chinese Restaurant Process (CRP) [17], Pitman-Yor Process (PYP) [18], and Hierarchical Dirichlet Process (HDP) [19] commonly used in clustering. However, these models have limitations in handling complex data distributions due to their shallow structure. To address this, deep neural network-based clustering algorithms have been proposed. DEC [20] uses a \(k\)-means model with a Student's \(t\)-distribution kernel to estimate the similarity between feature embeddings and cluster centroids. DeepCluster [21] employs a similar approach, iteratively updating the cluster assignments using a \(k\)-means model. SCAN [22] uses a fine-tuning and pre-trained framework, fine-tuning the clustering results with self-labeling. In addition, generative model-based deep clustering algorithms have gained popularity due to their powerful representation learning ability. VaDE [23] employs a VAE to learn a low-dimensional embedding space and a mixture of Gaussians to model the cluster assignment. GMVAE [24], an extension of VaDE, models the data distribution with a more flexible GMM. Stick-breaking VAE [8] leverages stick-breaking as its prior distribution, allowing for infinite mixtures. Last but not least, Generative Adversarial Networks (GANs) have also been applied to clustering problems, examples include Mixture of GANs [25], DCGAN [26] and its extension [27], and MIC-GANs [28]. We refer to [1; 2] for complete literature surveys about generative model based deep clustering algorithms.
## 3 Preliminaries
In this section, we introduce the concept of Bayesian nonparametric models and then the variational inference methods, which provide the theoretical foundation of our framework.
### Dirichlet process and stick-breaking method
The Dirichlet process (DP) is a distribution over probability measures [29] whose marginal distributions are Dirichlet-distributed, resulting in random distributions. Given a base distribution \(H\) and a positive concentration parameter \(\alpha\), a random probability measure \(G\) is DP-distributed, denoted as \(G\sim\text{DP}(\alpha,H)\). For a detailed explanation of the DP's properties, please refer to [9; 29].
A DP can be defined constructively using the Stick-Breaking (SB) process [16] via the Beta distribution \(\mathcal{B}\), where for \(k\geq 1\): \(\beta_{k}\sim\mathcal{B}(1,\alpha)\), \(\pi_{k}=\beta_{k}\prod_{i=1}^{k-1}(1-\beta_{i})\). In this process, a unit-length stick is imagined to be broken into an infinite number of segments \(\pi_{k}\), with \(\alpha\) being a positive scalar. We first sample \(\beta_{1}\sim\mathcal{B}(1,\alpha)\) from a Beta distribution and break the stick with length \(\pi_{1}=\beta_{1}\). We then sample \(\beta_{2}\), and the length of the second segment will be \(\pi_{2}=\beta_{2}(1-\beta_{1})\). Continuing this process, we have \(\sum_{k=1}^{\infty}\pi_{k}=1\), and the resulting \(\pi\) follows a Griffiths-Engen-McCloskey (GEM) distribution \(\pi\sim\text{GEM}(\alpha)\). Figure 1a gives an intuitive picture of the SB process.
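A small numerical sketch of the truncated stick-breaking construction is shown below; the truncation level \(K\) and the value of \(\alpha\) are purely illustrative.

```python
import numpy as np

def gem_weights(alpha, K, rng=None):
    """Draw the first K stick-breaking proportions of pi ~ GEM(alpha).

    Truncating at K sticks is only for illustration; the process is infinite.
    """
    rng = np.random.default_rng() if rng is None else rng
    betas = rng.beta(1.0, alpha, size=K)
    # remaining[k] = prod_{i<k} (1 - beta_i), the length of stick left to break
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    return betas * remaining

pi = gem_weights(alpha=5.0, K=50)
print(pi[:5], pi.sum())   # early sticks carry most of the mass; the sum tends to 1 as K grows
```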
Since a random distribution \(G\) drawn from a DP is discrete, it can be expressed as a weighted sum of point masses, namely \(G=\sum_{k=1}^{\infty}\pi_{k}\delta_{\theta_{k}^{*}}\)[30], where \(\delta_{\theta_{k}^{*}}\) is the point mass located at \(\theta_{k}^{*}\): it equals \(1\) at \(\theta_{k}^{*}\) and equals \(0\) elsewhere. By sampling the weights \(\pi_{k}\) according to the SB process and sampling \(\theta_{k}\) from a base distribution \(\theta_{k}\sim H\), we can say that \(G\sim\text{DP}(\alpha,H)\), indicating that \(G\) is a Dirichlet process with the base distribution \(H\) and concentration parameter \(\alpha\). Figure 1b shows a draw from a DP with \(\alpha=5\).
### Dirichlet process mixture model
DP mixture is a representative generative Bayesian nonparametric model that uses an infinite mixture of clusters to model a set of observations \(\mathbf{x}=x_{1:N}\), where the number of cluster components is not predefined but rather determined by the observations.
The model assumes that each data point \(x_{i}\) is sampled from a distribution \(F(\theta_{i})\) parameterized by a latent variable \(\theta_{i}\) drawn independently from a probability measure \(G\). The DPMM assumes a DP prior \(G|\alpha,H\sim\text{DP}(\alpha,H)\), which introduces discreteness and clustering property where \(\theta_{i}\) takes on repeated values. Then all \(x_{i}\)'s drawn with the same value of \(\theta_{i}\) can be seen as one cluster, resulting in the clustering of observations. The number of unique values of \(\theta_{i}\) determines the active number of cluster components, which can be dynamically inferred during inference from the observed data. Let \(v_{i}\) be a cluster assignment variable that takes on value \(k\) with probability \(\pi_{k}\) drawn from a categorical distribution (Cat). The DPMM can be expressed via the stick-breaking process, where the mixing proportions \(\pi\) are sampled from a GEM distribution and the prior \(H\) over the cluster parameters is the base distribution of an underlying DP measure \(G\). Specifically, we have:
\[\theta_{k}^{*}|H\sim H,\ \ \ \pi|\alpha\sim\text{GEM}(\alpha),\ \ \ v_{i}|\pi\sim\text{Cat}(\pi),\ \ \ x_{i}|v_{i}\sim F(\theta_{v_{i}}^{*}), \tag{1}\]
where \(\theta_{k}^{*}\) are the cluster parameters, \(\pi\) is the mixing proportion, \(F(\theta_{v_{i}}^{*})\) is the distribution over observation in cluster \(k\), and \(H\) is the prior over cluster parameters. Typically, \(F\) is a Gaussian distribution. To provide an intuitive understanding, we draw the generative graphic model for DPMM in Figure 2.
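The generative process in equation (1) can be simulated directly once the stick is truncated. In the sketch below, the base distribution \(H\) is an isotropic Gaussian over cluster means and \(F\) is a spherical Gaussian; these choices and all numerical values are illustrative assumptions.

```python
import numpy as np

def sample_dpmm(n, alpha=2.0, K=50, dim=2, sigma=0.3, rng=None):
    """Draw n observations from a truncated DP mixture of Gaussians (eq. (1))."""
    rng = np.random.default_rng() if rng is None else rng
    betas = rng.beta(1.0, alpha, size=K)                       # beta_k ~ Beta(1, alpha)
    pi = betas * np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    pi = pi / pi.sum()                                         # renormalize the truncation
    means = rng.normal(0.0, 3.0, size=(K, dim))                # theta_k* ~ H
    v = rng.choice(K, size=n, p=pi)                            # v_i ~ Cat(pi)
    x = means[v] + sigma * rng.normal(size=(n, dim))           # x_i ~ F(theta_{v_i}*)
    return x, v

x, v = sample_dpmm(500)
print(len(np.unique(v)), "clusters are active among 500 draws")
```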
### Variational inference for DPMM
In this section, we introduce variational inference as a method for approximating the posterior density for models based on observed data, with a focus on Bayesian nonparametric models.
Figure 1: (a) Stick-breaking process. (b) Histogram of \(\text{DP}(\alpha=5,H=\mathcal{N}(0,1))\).
Figure 2: Generative graphic model of DPMM
The basic idea of variational inference is to convert the inference problem into an optimization problem. Refer from (1), we write the joint probability for DPMM as
\[p(\mathbf{x},\mathbf{v},\mathbf{\theta},\mathbf{\beta})=\prod_{n=1}^{N}F(x_{n}|\theta_{v_{n}}) \text{Cat}(v_{n}|\mathbf{\pi}(\mathbf{\beta}))\prod_{k=1}^{\infty}\mathcal{B}(\beta_{k} |1,\alpha)H(\theta_{k}|\lambda) \tag{2}\]
Since the true posterior \(p(\mathbf{v},\mathbf{\theta},\mathbf{\beta}|\mathbf{x})\) is intractable, we aim to find the best variational distribution \(q(\mathbf{v},\mathbf{\theta},\mathbf{\beta})\) that minimizes the KL divergence with the exact conditional:
\[\begin{split} q^{*}(\mathbf{v},\mathbf{\theta},\mathbf{\beta})& =\operatorname*{arg\,min}_{q}\mathbb{KL}(q(\mathbf{v},\mathbf{\theta}, \mathbf{\beta})||p(\mathbf{v},\mathbf{\theta},\mathbf{\beta}|\mathbf{x}))\\ \mathbb{KL}(q(\mathbf{v},\mathbf{\theta},\mathbf{\beta})||p(\mathbf{v},\mathbf{ \theta},\mathbf{\beta}|\mathbf{x}))&=\mathbb{E}[\log q(\mathbf{v},\mathbf{ \theta},\mathbf{\beta})]-\mathbb{E}[\log p(\mathbf{x},\mathbf{v},\mathbf{\theta},\mathbf{\beta}) ]+\log p(\mathbf{x})\end{split} \tag{3}\]
Notice that \(\log p(\mathbf{x})\) does not depend on \(q\), so instead of minimizing the KL divergence directly, we maximize the evidence lower bound (ELBO), which consists of the expected log-likelihood of the data \(\mathbb{E}[\log p(\mathbf{x}|\mathbf{v},\mathbf{\theta},\mathbf{\beta})]\) and the KL divergence between two priors \(\mathbb{KL}(q(\mathbf{v},\mathbf{\theta},\mathbf{\beta})||p(\mathbf{v},\mathbf{\theta},\mathbf{\beta}))\), according to the eq. (3), the ELBO can be rewritten as:
\[\begin{split}\text{ELBO}(q)&=\mathbb{E}[\log p(\bm {x},\mathbf{v},\mathbf{\theta},\mathbf{\beta})]-\mathbb{E}[\log q(\mathbf{v},\mathbf{\theta},\mathbf{ \beta})]\\ &=\mathbb{E}[\log p(\mathbf{x}|\mathbf{v},\mathbf{\theta},\mathbf{\beta})]- \mathbb{KL}(q(\mathbf{v},\mathbf{\theta},\mathbf{\beta})||p(\mathbf{v},\mathbf{\theta},\mathbf{\beta}) )\end{split} \tag{4}\]
Therefore, the optimization of the ELBO is interpreted as finding a solution that explains the observed data with minimal deviation from the prior.
For the DPMM model, based on the idea of variational inference, we define the variational distribution \(q\) following the mean-field assumption, where each latent variable has its variational factor and is mutually independent of each other, namely \(q(\mathbf{v},\mathbf{\theta},\mathbf{\beta})=\prod_{n=1}^{N}q_{v_{n}}\prod_{k=1}^{K}q_{ \beta_{k}}q_{\theta_{k}}\), where \(q_{v_{n}}=\text{Cat}(v_{n}|\hat{r}_{n_{1}:n_{K}})\), \(q_{\beta_{k}}=\mathcal{B}(\beta_{k}|\hat{\alpha}_{k_{1}},\hat{\alpha}_{k_{0}})\), \(q_{\theta_{k}}=H(\theta_{k}|\hat{\lambda}_{k})\). In the context of variational inference, the true posterior distribution is infinite, and only an approximate distribution can be obtained. However, as the number of components \(K\) in categorical factor increases, the optimized ELBO objective can result in a variational distribution that closely approximates the infinite posterior. Thus to enable computation, we limit the categorical factor to only finite \(K\) components, in which \(K\) is large enough to cover all potential features. We also consider a special case where the distributions \(H\) and \(F\) come from the exponential family. Hughes and Sudderth [9] showed that in this case, the ELBO is expressed in terms of the expected mass \(N_{k}\) and the expected sufficient statistic \(s_{k}(x)\) of each component \(k\):
\[\begin{split}\text{ELBO}(q)&=\sum_{k=1}^{K}\left[\mathbb{E}_{q}[\theta_{k}]^{T}s_{k}(x)-\hat{N}_{k}\,\mathbb{E}_{q}[a(\theta_{k})]+\hat{N}_{k}\,\mathbb{E}_{q}[\log\pi_{k}(\beta)]-\sum_{n=1}^{N}\hat{r}_{nk}\log\hat{r}_{nk}\right.\\ &\left.+\mathbb{E}_{q}\left[\log\frac{\mathcal{B}(\beta_{k}|1,\alpha)}{q(\beta_{k}|\hat{\alpha}_{k_{1}},\hat{\alpha}_{k_{0}})}\right]+\mathbb{E}_{q}\left[\log\frac{H(\theta_{k}|\lambda)}{q(\theta_{k}|\hat{\lambda}_{k})}\right]\right],\end{split} \tag{5}\]
Then each variational factor can be updated individually in an iterative manner. In the first stage, we update the _local_ variational parameters \(\hat{r}_{nk}\) in the categorical factor \(q_{v_{n}}\) for each cluster assignment. In the second stage, we update the _global_ parameters \(\hat{\alpha}_{k_{1}},\hat{\alpha}_{k_{0}},\hat{\lambda}_{k}\) in the factors for the stick-breaking proportions \(q_{\beta_{k}}\) and the factors for the base distribution \(q_{\theta_{k}}\)[31]. For the detailed derivation and implementation, please refer to our appendix Sec. A.1.
The computation of the summary statistics \(N_{k}\) and \(s_{k}(x)\) requires the full dataset. For inference on a large dataset, we use a batch-based approach called memoized online variational Bayes (memoVB) [9], which breaks the summary statistics of full data into a sum of the summary statistics of each batch. The DPMM has nonparametric nature, which means it can adjust to varying numbers of clusters. This property enables the development of heuristics for dynamically adding or removing clusters, which can help avoid local optima when using batch-based variational inference approaches.
MemoVB is an approach that implements birth and merge moves for dynamic cluster adjustment. To create new clusters, we collect subsamples \(x^{\prime}\) that are poorly described by any single cluster when passing through each batch and fit a separate DPMM model with \(K^{\prime}\) initial clusters. Assuming that the active number of clusters before the birth move is \(K\), we can either accept or reject the new cluster proposals by comparing the result of assigning \(x^{\prime}\) to \(K+K^{\prime}\) clusters with that of assigning \(x^{\prime}\) to \(K\) clusters. To complement the birth move, a merge move potentially combines a pair of clusters into one. Two clusters are merged if the merge improves the ELBO objective, leaving \(K-1\) clusters after the merge [9].
## 4 Methodology
In this section, we present DIVA, a novel deep clustering approach that integrates a Bayesian nonparametric model with a variational autoencoder.
Given an unlabeled set of data with \(n\) points \(\{x_{i}\}_{i=1}^{n}\in X\), we design a deep clustering method that simultaneously learns: (1) The latent representation \(z_{i}\) of each \(x_{i}\) in a \(D\)-dimensional Gaussian latent space \(Z\) and the mapping \(f_{\theta}:X\xrightarrow{}Z\) that projects the data points \(\{x_{i}\}_{i=1}^{n}\in X\) onto the latent space \(Z\). (2) The number \(K\) of Gaussian-distributed clusters and their associated means \(\mu_{k}\) and covariance matrices \(\Sigma_{k}\). (3) The cluster assignment \(v_{i}\) of each \(x_{i}\), where \(v_{i}\in\{1,\dots,K\}\). (4) The reconstructions \(x_{i}^{*}\) of the inputs via the decoder network.
To accomplish this, DIVA combines a standard VAE with a DPMM, where the cluster assignments are jointly determined by the learned representation and the cluster distributions. Our algorithm uses an alternating optimization scheme: First, we update the DPMM module using the latent variables \(z_{i}\), which are sampled from the encoder using the reparametrization trick during the last VAE update. Then the DPMM module is fixed, and the VAE parameters are updated, using the assigned clusters to each \(z_{i}\) to minimize the KL divergence. An overview of DIVA is shown in Figure 3.
### Updating the DPMM
When updating the parameters of the DPMM, we fit a DPMM on the latent samples \(z_{i}\)'s obtained from the VAE. We define the generative process of assigning data points to clusters according to the SB process. A generative process is assumed as follows: (1) The mean \(\mu_{k}\) and diagonal covariance matrix \(\Sigma_{k}\) of each cluster \(k\) are drawn from a Normal-Wishart distribution, which is the conjugate prior for a diagonal Gaussian with unknown means and unknown covariance matrix, namely \(\Sigma_{k}\sim\mathcal{W}(\mathbf{W},\nu)\), \(\mu_{k}\sim\mathcal{N}(\mu_{0},(\lambda\Sigma_{k})^{-1})\), assuming the latent space is D-dimensional. (2) The probabilities of each cluster are drawn from a GEM distribution with concentration parameter \(\alpha\), \(\pi\sim\text{GEM}(\alpha)\), as described in Sec. 3.1. (3) The cluster assignment \(v_{n}\) is drawn from a discrete distribution \(\text{Cat}(\pi)\) based on the cluster probabilities previously obtained, which is \(v_{n}\sim\text{Cat}(\pi)\). (4) In our DPMM module, when the latent variable \(z_{i}\) is assigned to a cluster \(v_{n}=k\), we assume that it is sampled from a multivariate Gaussian with mean \(\mu_{k}\) and variance \(\Sigma_{k}\), which means \(z_{i}|v_{n}=k\sim\mathcal{N}(\mu_{k},\Sigma_{k})\). (5) The original data are assumed to be generated by the decoder, namely \(f_{\theta}(z_{i})=x_{i}\), where \(\theta\) are the parameters of the decoder.
Given the overall generative process, the DPMM has a joint probability
\[\begin{split} p(\mathbf{z},\mathbf{v},\mathbf{\beta},\mathbf{\mu},\Sigma)=& \prod_{i=1}^{N}\mathcal{N}(z_{i}|\mu_{v_{n}},\Sigma_{v_{n}})\text{Cat}(v_{n}| \mathbf{\pi}(\mathbf{\beta}))\prod_{k=1}^{\infty}\mathcal{B}(\beta_{k}|1,\alpha)\\ &\prod_{k=1}^{\infty}\mathcal{N}(\mu_{k}|\mu_{0},(\lambda\Sigma_ {k})^{-1})\mathcal{W}(\Sigma_{k}|\mathbf{W},\nu)\end{split} \tag{6}\]
We then use variational inference to find the posterior estimates for the parameters. We construct the variational distribution \(q\) with the following factorization:
\[q(\mathbf{v},\mathbf{\beta},\mathbf{\mu},\Sigma)=\prod_{n=1}^{N}\text{Cat}(v_{n}|\hat{r}_{ n_{1}},\cdots,\hat{r}_{n_{k}})\prod_{k=1}^{K}\mathcal{B}(\beta_{k}|\hat{\alpha}_{k _{1}},\hat{\alpha}_{k_{0}})\prod_{k=1}^{K}\mathcal{N}(\mu_{k}|\hat{\mu}_{0},( \hat{\lambda}\Sigma_{k})^{-1})\mathcal{W}(\Sigma_{k}|\hat{\mathcal{W}},\hat{ \nu}). \tag{7}\]
We propose the iterative optimization procedure described in Sec. 3.3 that updates the Normal-Wishart distribution parameters \(\hat{\mu}_{0}\), \(\hat{\lambda}\), \(\hat{\mathcal{W}}\), and \(\hat{\nu}\) in each step. We also utilize the memoized online variational Bayes method to update the parameters of the stick-breaking process \(\hat{r}_{n_{1}:n_{k}},\hat{\alpha}_{k_{1}}\), and
Figure 3: Overview of the DIVA. We optimize the DPMM and the VAE alternately. When updating the DPMM, we use the current latent sample \(z\) obtained from the VAE. When updating the VAE, we assign the outputs of the encoder to the clusters of the current DPMM and minimize the KL divergence with respect to the assigned cluster.
to estimate the cluster probabilities and assignments. In addition, we dynamically adjust the total number of clusters \(K\) using birth-and-merge moves. A complete update for our DPMM module thus breaks down to updating three sets of parameters.
Each update of the DPMM module is performed after a training epoch of the VAE. Since we alternate between updating the DPMM module and the VAE, the DPMM module is not required to converge in each update and avoids the need to fit a new DPMM model from scratch every time. In each update, we initialize the DPMM with the parameters learned from the previous updates and apply this module to new latent samples produced by the updated VAE, enabling us to update the same DPMM while incorporating the latest changes in the latent space mappings.
### Updating the VAE
When training the VAE, we jointly minimize the reconstruction loss \(\mathcal{L}_{recon}\) and the KL divergence loss \(\mathcal{L}_{\text{KL}}\). The reconstruction loss \(\mathcal{L}_{recon}\) is the mean squared error between the observed data \(x\) and the decoder's reconstructions \(x^{*}\), which is kept from the standard VAE. In the standard VAE, \(\mathcal{L}_{\text{KL}}\) is simply the KL divergence between two isotropic Gaussian distributions [6]; here it is computed against the DPMM clusters as described below.
To compute \(\mathcal{L}_{\text{KL}}\), we obtain the cluster assignment \(v_{i}=k\) of each latent sample \(z_{i}\) from the current DPMM model. Using the DPMM, the mean and covariance matrix of assigned cluster \(k\) is \(\mu_{k}\) and \(\Sigma_{k}\). According to VAE, \(z_{id}=\mu_{d}(\mathbf{x};\mathbf{\phi})+\sigma_{d}(\mathbf{x};\mathbf{\phi})\eta\) for \(d\in\{1,2,\dots,D\}\), where \(\phi\) are encoder parameters, \(\mu(x_{i};\mathbf{\phi})\) and \(\sigma(x_{i};\mathbf{\phi})\) are encoder outputs and \(\eta\sim\mathcal{N}(0,1)\). The KL divergence between two multivariate Gaussian distributions is calculated as follows:
\[D_{\text{KL}_{ik}}=\frac{1}{2}\bigg{[}\log\frac{|\Sigma_{k}|}{| \Sigma(x_{i};\mathbf{\phi})|}-D+\text{tr}\{\Sigma_{k}^{-1}\Sigma(x_{i};\mathbf{\phi}) \}+(\mu_{k}-\mu(x_{i};\mathbf{\phi}))^{T}\Sigma_{k}^{-1}(\mu_{k}-\mu(x_{i};\mathbf{ \phi}))\bigg{]} \tag{8}\]
However, this hard assignment may place a sample in the wrong cluster, introducing errors into the training through the KL divergence. To overcome this issue, we introduce _soft assignment_, in which we compute the probability \(p_{ik}\) of assigning the latent sample \(z_{i}\) to cluster \(k\) using the DPMM for all possible \(k\in\{1,2,\dots,K\}\). The KL divergence is then defined as a weighted sum over the clusters:
\[D_{\text{KL}_{i}}=\sum_{k=1}^{K}p_{ik}D_{\text{KL}_{ik}} \tag{9}\]
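A minimal numerical sketch of Eqs. (8) and (9) is given below; the diagonal encoder covariance \(\Sigma(x_{i};\mathbf{\phi})=\mathrm{diag}(\sigma^{2}(x_{i};\mathbf{\phi}))\) and the variable names are assumptions made for illustration only.

```python
import numpy as np

def kl_to_cluster(mu_x, var_x, mu_k, Sigma_k):
    """Eq. (8): KL( N(mu_x, diag(var_x)) || N(mu_k, Sigma_k) )."""
    D = mu_x.shape[0]
    Sigma_inv = np.linalg.inv(Sigma_k)
    _, logdet_k = np.linalg.slogdet(Sigma_k)
    logdet_x = np.sum(np.log(var_x))            # log|diag(var_x)|
    diff = mu_k - mu_x
    return 0.5 * (logdet_k - logdet_x - D
                  + np.trace(Sigma_inv @ np.diag(var_x))
                  + diff @ Sigma_inv @ diff)

def soft_kl(mu_x, var_x, means, covs, probs):
    """Eq. (9): assignment-probability-weighted KL over all K clusters."""
    return sum(p * kl_to_cluster(mu_x, var_x, m, S)
               for p, m, S in zip(probs, means, covs))
```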
Although one could use a more complex weighting strategy, we found this simple weighting by assignment probabilities to be sufficient empirically. Algorithm 1 summarizes the DIVA training procedure.
```
0: Dataset \(\mathcal{D}\), batch size \(B\), DPMM train steps \(T\), parameters \(\phi\) of the encoder and \(\theta\) of the decoder.
1: Initialize \(\phi\), \(\theta\) of VAE, the DPMM with \(K=1\), including \(\mu_{0},\lambda,\mathcal{W},\nu\) and \(\alpha\).
2:repeat
3: Sample mini-batch \(\mathcal{M}=\{x_{0:B}\}\sim\mathcal{D}\).
4: With the VAE, obtain latent variables \(z_{0:B}\) and save to a buffer \(\mathcal{Z}=\mathcal{Z}\bigcup\{z_{0:B}\}\).
5: With the current DPMM, obtain the cluster assignments \(v_{0:B}\) and the assignment probabilities of each latent variable.
6: Compute \(\mathcal{L}_{KL}\) using (8), (9), \(\mathcal{L}_{recon}\) and update \(\phi\), \(\theta\).
7:if time to update the DPMM (e.g. at the end of an epoch) then
8:for step \(i=1,2,...,T\)do
9: Update DPMM variables \(\hat{\mu}_{0},\hat{\lambda},\hat{\mathcal{W}},\hat{\nu},\hat{r}_{n_{1}:n_{k}},\hat{\alpha}_{k_{1}},\hat{\alpha}_{k_{0}}\) using \(\mathcal{Z}\) and current DPMM.
10: Apply birth moves to try to add new clusters and merge moves to try to combine existing clusters.
11: Reset buffer \(\mathcal{Z}=\emptyset\).
12:until convergence.
```
**Algorithm 1** DIVA
## 5 Experiments
In this section, we verify the dynamic-adaptive clustering ability of DIVA and show its effectiveness when the number of data features increases dynamically.
### Implementation details
We select three widely adopted, representative datasets for verification: two image datasets, MNIST [32] and Fashion-MNIST [33], and a text classification dataset, Reuters10k [23]; images in MNIST and Fashion-MNIST are normalized. Using Reuters10k lets us evaluate the generalization ability of our model beyond pure image classification tasks.
We compared our proposed method to three baselines: GMM [7], GMVAE [24], and SB-VAE [8]. GMM is a representative clustering module in Bayesian parametrics. GMVAE builds upon GMM by incorporating it as a prior and jointly learning the representation and clustering. SB-VAE replaces the isotropic Gaussian with a stick-breaking distribution, which offers infinite clustering capability. Both GMVAE and SB-VAE are state-of-the-art algorithms in Bayesian parametric and non-parametric generative deep clustering. Since the number of clusters in the prior space must be pre-defined in GMM and GMVAE, we select the cluster numbers \(K=3,5,10\) for both baselines. For the image datasets, we use a 2-layer convolutional network as our VAE encoder with a latent dimension of \(16\). For the text dataset, we use 3 fully connected layers as our VAE architecture with a \(100\)-dimensional latent space. For more details please refer to the appendix Sec. A.2.
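As a rough illustration of the encoder just described (two convolutional layers, 16-dimensional latent space), one possible sketch is given below; the channel widths, kernel sizes, and the 28x28 single-channel input are assumptions, since they are not specified here.

```python
import torch.nn as nn

class ConvEncoder(nn.Module):
    """2-layer convolutional encoder producing a 16-dim diagonal Gaussian."""
    def __init__(self, latent_dim=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),   # 28x28 -> 14x14
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),  # 14x14 -> 7x7
            nn.Flatten(),
        )
        self.mu = nn.Linear(64 * 7 * 7, latent_dim)
        self.logvar = nn.Linear(64 * 7 * 7, latent_dim)

    def forward(self, x):
        h = self.features(x)
        return self.mu(h), self.logvar(h)
```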
To evaluate the unsupervised clustering performance of our framework on various datasets, we adopt the unsupervised clustering accuracy and kNN error metrics from previous works [20; 23] as our key metrics. For both image datasets we run all frameworks for 100 epochs and evaluate the performance on the test set. For Reuters10k, we train all frameworks for 15 epochs and evaluate on the last \(20\%\) of the dataset. We repeat each trial with 3 seeds and report the mean value and standard deviation of the corresponding metrics.
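The unsupervised clustering accuracy of [20; 23] is commonly computed by matching predicted clusters to ground-truth labels; a minimal sketch using the Hungarian one-to-one matching is shown below. Whether the authors use this exact matching or a majority-vote mapping (natural when \(K\) exceeds the number of classes) is not stated here, so the sketch is an assumption.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """Accuracy under the best one-to-one cluster-to-label assignment."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = max(y_true.max(), y_pred.max()) + 1
    counts = np.zeros((n, n), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        counts[p, t] += 1                        # co-occurrence of cluster p and label t
    rows, cols = linear_sum_assignment(-counts)  # maximize matched counts
    return counts[rows, cols].sum() / y_true.size
```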
### Incremental representation learning and clustering
Figure 4 displays the unsupervised clustering accuracy achieved by our proposed method on three datasets: (a) MNIST, (b) Fashion-MNIST, and (c) Reuters10k. We train the model on datasets with a fixed number of ground truth classes and gradually increase the class number in subsequent trials. The bar plot shows the mean clustering accuracy along with the corresponding standard deviation. Our framework consistently achieves high clustering accuracy, with average accuracies of \(91\%\) and \(71\%\) on the MNIST and Fashion-MNIST datasets, respectively, and over \(80\%\) on the Reuters10k dataset. Notably, our approach outperforms the baseline frameworks GMVAE and GMM, which require prior specification of the number of clusters. When the feature set is larger than the number of pre-defined clusters, the clustering accuracy of these baseline models declines significantly because the cluster number in their prior space cannot change.
To quantify the performance, Table 1 displays the accuracy statistics on the MNIST test set for our proposed framework DIVA and the baseline methods. The values in the table represent the mean percentage of three trials, with the standard deviation \(\sigma\) indicated in brackets. Our framework
Figure 4: Unsupervised cluster accuracy for (a) MNIST, (b) Fashion-MNIST and (c) Reuters10k. We evaluate our framework and baselines on test dataset with incremental features, e.g. for MNIST the x-axis with “3 features” means the dataset only contains 3 different types of digit to be classified, which are “0, 1, 2” respectively.
consistently maintains high clustering accuracy on the MNIST dataset, even as the number of features gradually increases in each new trial. In contrast, the accuracy of other baseline methods declines significantly when the number of features exceeds their corresponding predefined cluster number. Please see our supplementary material Sec. A.3 for additional results on other datasets.
To assess the effectiveness of our framework in learning a latent representation suitable for clustering, we train a _k_-Nearest Neighbor (kNN) [34] classifier on the latent samples generated by both our framework and the baselines. The kNN error rates with \(k=3,5,10\) are recorded as averages of 3 trials, expressed as percentages. All datasets contain their full set of features. The results presented in Table 2 demonstrate that our framework achieves the lowest error rate across all three datasets.
Additionally, we present the t-SNE [35] projection of the learned latent space on the full MNIST dataset in Figure 5 for both DIVA and the baselines, to provide an intuitive view. In the figures, each color represents one ground truth class, and classes shown in the same color are consistent across figures. As shown in the figures, DIVA learns a distinct, cluster-friendly representation, while other baseline methods, such as the Stick-breaking VAE, fail to separate the data points. The GMVAE model with an appropriate number of clusters (\(K=10\)) (Fig. 5c) can learn a better clustering representation, but some cluster groups still stick together. However, when the number of defined clusters is smaller than the number of features, the GMVAE model fails to learn a distinct latent representation (Fig. 5d, 5e).
### Dynamic adaptation
We evaluate the effectiveness of DIVA's "birth" and "merge" moves during training via trials with the incremental features mode on the MNIST dataset. The models are initially trained with three digits, and we increase the number of digits at epochs 30, 60, and 90 to 5, 7, and 10, respectively. The test accuracy of all models (solid lines) and the number of cluster changes in DIVA (dashed line)
\begin{table}
\begin{tabular}{l l l l l l l l l l l} \hline \hline & \multicolumn{3}{c}{MNIST kNN error} & \multicolumn{3}{c}{Fashion kNN error} & & \multicolumn{3}{c}{Reuters10k kNN error} \\ \cline{2-7} \cline{9-11} & k=3 & k=5 & k=10 & k=3 & k=5 & k=10 & & k=3 & k=5 & k=10 \\ \hline GMVAE K=3 & \(8.3\) & \(7.1\) & \(6.3\) & \(26.4\) & \(24.6\) & \(22.7\) & GMVAE K=2 & \(8.1\) & \(8.0\) & \(7.5\) \\ GMVAE K=5 & \(8.1\) & \(7.2\) & \(6.3\) & \(26.0\) & \(24.4\) & \(22.6\) & GMVAE K=3 & \(8.3\) & \(7.9\) & \(7.3\) \\ GMVAE K=10 & \(7.7\) & \(6.9\) & \(6.2\) & \(26.7\) & \(24.7\) & \(23.1\) & GMVAE K=4 & \(7.9\) & \(7.3\) & \(7.3\) \\ SB-VAE & \(13.2\) & \(12.7\) & \(12.4\) & \(24.7\) & \(23.6\) & \(22.5\) & & \(63.7\) & \(64.4\) & \(62.4\) \\ **DIVA** & \(\mathbf{3.1}\) & \(\mathbf{3.2}\) & \(\mathbf{3.1}\) & \(\mathbf{13.2}\) & \(\mathbf{12.7}\) & \(\mathbf{12.5}\) & & \(\mathbf{5.0}\) & \(\mathbf{4.8}\) & \(\mathbf{5.0}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Test error rate (%) for kNN on MNIST, Fashion-MNIST, and Reuters10k
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline \# features & **DIVA** & GMVAE K=10 & GMVAE K=5 & GMVAE K=3 & GMM K=10 & GMM K=5 & GMM K=3 \\ \hline 3 features & \(91.64\pm 1.31\) & \(91.31\pm 4.26\) & \(96.15\pm 5.01\) & \(89.11\pm 14.36\) & \(76.82\pm 3.30\) & \(74.35\pm 3.06\) & \(74.17\pm 0.01\) \\ 5 features & \(90.80\pm 0.81\) & \(94.81\pm 3.99\) & \(92.14\pm 5.00\) & \(59.85\pm 0.89\) & \(62.61\pm 7.32\) & \(47.57\pm 2.69\) & \(39.13\pm 4.83\) \\ 7 features & \(\mathbf{92.76}\) & & & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Unsupervised clustering accuracy (%) on the MNIST test set (mean of three trials \(\pm\) standard deviation)
are depicted in Figure 6. As new features are added to the dataset, DIVA dynamically "births" new components to fit them, which may cause a temporary decrease in accuracy during the early stages of training while both the VAE and the DP components are still converging. However, once training on the new subset has converged, the accuracy returns to the best achievable performance. In contrast, baseline models such as GMVAEs lack this dynamic-adaptive capability: once the number of features exceeds the number of components, their test accuracy displays a stepwise decline. For additional trial results, please refer to the appendix Sec. A.3 and the video visualization. The right plot in Figure 6 shows the latent space learned by the DPMM, colored according to the clusters. Each ground truth class is learned by 2 or 3 sub-clusters, resulting in a total number of components greater than the number of ground truth classes, as demonstrated by the left plot. This is because the DPMM captures not only the classes themselves but also the sub-features within each class. Moreover, after convergence on the provided observations, the mapping between clusters and ground truth remains consistent and does not undergo shuffling.
To gain more insight into what each individual sub-cluster has learned, we reconstructed images using the components of the DPMM. Specifically, we sampled latent variables from each sub-cluster and fed them to the decoder to rebuild images. The generated images from sub-clusters on MNIST (a)-(f) and Fashion-MNIST (g)-(l) are shown in Figure 7. The results demonstrate that each sub-cluster captures distinct sub-features of the corresponding class. For instance, the 3 sub-clusters that cluster the digit "0" in (a)-(c) have additionally learned different writing styles of "0". Similar conclusions can be drawn from the other examples. In summary, our proposed framework efficiently extracts informative sub-features from coarse class labels. In particular, when dealing with unknown data features, the disentangled representation learning capability of our framework is highly beneficial for uncovering deeper information in the data samples.
Figure 6: (a) “Birth” and “merge” moves of DIVA on MNIST. We introduce dynamic features during training: the model was initially trained on 3 digits, and the number of digits is increased to 5, 7, and 10 at epochs 30, 60, and 90. We record the test accuracy (solid lines) and the number of clusters for DIVA (dashed line). (b) t-SNE of DIVA on full MNIST, colored by clusters. Each true label is learned by 2 or 3 clusters, enabling the sub-features of individual digits to be captured.
Figure 7: Reconstruction from DPMM clusters on MNIST (a)-(f) and Fashion-MNIST (g)-(l). Each class is learned by 2 or 3 sub-clusters, and each cluster learns a sub-feature of the individual class.
## 6 Conclusion
Our proposed framework, DIVA, utilizes the infinite clustering property of Bayesian non-parametric mixtures and combines it with the powerful latent representation learning ability of VAEs to overcome the challenge of clustering complex or dynamically changing data without prior knowledge of the number of features. The dynamic adaptation exhibited by DIVA on three datasets outperforms state-of-the-art baselines in handling data with incremental features. In addition, our framework excels at discovering finer-grained features, and its adaptability to observed data suggests potential applications in domains such as continual learning. We encourage readers to explore further extensions and applications based on our framework.
|
2306.02355 | Observations of the Current Sheet Heating in X-ray during a Solar Flare | In the solar corona, magnetic reconnection occurs due to the finite
resistivity of the plasma. At the same time, resistivity leads to ohmic
heating. Therefore, the reconnecting current sheet should heat the surrounding
plasma. This paper presents experimental evidence of such plasma heating caused
by magnetic reconnection. We observed the effect during a C1.4 solar flare on
16 February 2003 at the active region NOAA 10278, near the solar limb. Thanks
to such a location, we successfully identified all the principal elements of
the flare: the flare arcade, the fluxrope, and, most importantly, the presumed
position of the current sheet. By analyzing the monochromatic X-ray images of
the Sun obtained by the CORONAS-F/SPIRIT instrument in the Mg XII 8.42 A
spectral line, we detected a high-temperature ($T \geq$ 4 MK) emission at the
predicted location of the current sheet. The high-temperature emission appeared
during the CME impulsive acceleration phase. We believe that this additionally
confirms that the plasma heating around the current sheet and magnetic
reconnection inside the current sheet are strongly connected. | Anton Reva, Sergey Bogachev, Ivan Loboda, Artem Ulyanov, Alexey Kirichenko | 2023-06-04T13:21:47Z | http://arxiv.org/abs/2306.02355v1 | # Observations of the Current Sheet Heating in X-ray during a Solar Flare
###### Abstract
In the solar corona, magnetic reconnection occurs due to the finite resistivity of the plasma. At the same time, resistivity leads to ohmic heating. Therefore, the reconnecting current sheet should heat the surrounding plasma. This paper presents experimental evidence of such plasma heating caused by magnetic reconnection. We observed the effect during a C1.4 solar flare on 16 February 2003 at the active region NOAA 10278, near the solar limb. Thanks to such a location, we successfully identified all the principal elements of the flare: the flare arcade, the fluxrope, and, most importantly, the presumed position of the current sheet. By analyzing the monochromatic X-ray images of the Sun obtained by the _CORONAS-F_/SPIRIT instrument in the Mg xii 8.42 A spectral line, we detected a high-temperature (\(T\geq 4\) MK) emission at the predicted location of the current sheet. The high-temperature emission appeared during the CME impulsive acceleration phase. We believe that this additionally confirms that the plasma heating around the current sheet and magnetic reconnection inside the current sheet are strongly connected.
Solar corona (1483); Solar x-ray emission (1536); Solar coronal mass ejection (310); Solar flares (1496)
Footnote †: journal: ApJ
A.A. Reva, S.A. Bogachev, I.P. Loboda, A.S. Ulyanov, and A.S. Kirichenko
## 1 Introduction
Plasma in the solar corona has such a low resistivity that its motion can be treated in the approximation of ideal magnetohydrodynamics (MHD). In this approximation, the connectivity of the magnetic field is conserved: two points that belonged to the same field line will continue to belong to the same field line during plasma motion (see chapter 2 in Priest, 2014).
Nonetheless, plasma in the solar corona has non-zero resistivity, and, therefore, the connectivity of the magnetic field lines can change. The most important manifestation of this process is magnetic reconnection: a mutual annihilation of two magnetic lines of opposite polarities at the'magnetic separator.'
Changes of the coronal magnetic field induce the electric current inside the separator. This current prevents reconnection of the magnetic field lines. If the magnetic field continues to change, the separator will bifurcate into a current sheet: a thin layer of electric current. Eventually, due to the finite resistivity of plasma, the induced current will slowly dissipate, and the magnetic structure will slowly relax to a potential configuration. This process is called'steady reconnection' (see chapter 1 in Somov, 2006).
If the magnetic energy is accumulated faster than it is dissipated by ohmic heating, the current sheet size will increase. Eventually, the current sheet can reach such a size that it becomes unstable, and the whole amount of accumulated energy will be released catastrophically through the 'impulsive reconnection.'
Impulsive reconnection is a central element of the standard model of a solar flare (Carmichael, 1964; Sturrock, 1966; Hirayama, 1974; Kopp & Pneuman, 1976). In this model, before the flare begins, the active region has the following configuration: a loop arcade, a fluxrope (cylindrical twisted magnetic structure) above the arcade, and a current sheet between the arcade and the fluxrope (see Figure 1). The flare starts when the current sheet becomes unstable, and impulsive reconnection begins inside it. The reconnection process produces two plasma flows, one of which pushes the fluxrope up and may lead to a coronal mass ejection (CME). At the same time, inside the reconnection region, the electrons are accelerated by the induced electric field. The accelerated electrons move along the magnetic field lines towards the chromosphere, where they slow down, heat plasma, and produce a bremsstrahlung hard X-ray (HXR) emission. The heated plasma fills the magnetic loops above the chromosphere, making them visible in soft X-ray and EUV spectral ranges.
The current sheet is a vital element of the standard flare model. At the same time, it is one of the hardest objects on the Sun to observe. One of the main reasons is that there are no effective ways to directly measure magnetic fields or electric currents in the solar corona. Another reason is that the current sheet is a very thin structure. It has a huge size in the \(Y\) and \(Z\) directions in Figure 1, but negligible size along the \(X\) axis (several meters, or even several cm). So the plasma emission inside the current sheet is expected to be negligible compared to the emission of the surrounding coronal plasma.
Despite this, several authors reported signatures of the current sheet observed during solar flares. In the imaging observations, a long thin linear structure above the flaring active region is usually interpreted as a current sheet signature (Lin et al., 2005; Savage et al., 2010; Reeves & Golub, 2011; Zhu et al., 2016; Seaton et al., 2017). Besides that, an elongated double Y-shaped dark structure can appear below the CME core in the Fe 171 A images (Reva et al., 2016). In the spectroscopic observations, researchers consider a high-temperature emission from the presumed location of the current sheet--below the CME core and above the flaring active region--as a signature of the current sheet (Ciaravella et al., 2002; Ko et al., 2003; Ciaravella & Raymond, 2008). Recently, Warren et al. (2018) presented multi-wavelength observations of the current sheet using both imaging (AIA; Lemen et al., 2012) and spectroscopic (EIS; Culhane et al., 2007) instruments.
In all the examples listed above, the authors reported, not the direct observations of the current sheet, but rather its indirect observational signatures. Among such indirect evidence, one of the most important is the plasma heating in the vicinity of the current sheet caused by the ohmic heating due to the finite resistivity of the coronal plasma.
Mechanisms of such heating are actively studied with the theoretical models and the numerical MHD simulations. These studies showed that ohmic heating (Reeves et al., 2010, 2019), adiabatic compression (Birn et al., 2009; Reeves et al., 2019), and turbulence (Ye et al., 2020) can effectively heat plasma inside the current sheet. Some of the thermal energy accumulated inside the current sheet may leak away due to thermal conduction, which will heat the surrounding plasma and widen the visible size of the current sheet ('thermal halo'; Yokoyama & Shibata, 1998; Seaton & Forbes, 2009; Reeves et al., 2010). Using MHD simulations, Reeves et al. (2010) showed that thermal conduction could leak up to 50 % of the energy released during the reconnection.
For many reasons, it is hard to experimentally study the relationship between the current sheet heating and magnetic reconnection rate. First of all, as mentioned above, the observations of current sheets (even indirect) are rare. Secondly, most solar telescopes--such as AIA/_SDO_ or XRT/_Hinode_(Golub et al., 2007)--cannot detect high-temperature plasma in a monochromatic mode. The hot plasma emission in their images is mixed with a low-temperature background, which complicates the analysis of the heating.
In this work, we report reliable signatures of the plasma heating up to temperatures higher than 4 MK detected in the vicinity of the coronal current sheet in a monochromatic mode. We also experimentally confirmed the relationship between the plasma heating and the reconnection rate, which we derived from the observation of CME acceleration. In section 2, we describe the experimental data used in the research. In section 3, we present the obtained results; then, in section 4, we discuss them and make the conclusions.
Figure 1: Standard flare/CME model. 1) the fluxrope; 2) the current sheet; 3) the flare arcade.
## 2 Experimental Data
The event (flare and CME) that we have studied in this paper occurred on 16 February 2003 near the western edge of the Sun. To study it, we used several instruments to get most of the information about plasma heating, details of the reconnection process, and CME structure. The full list of data used in the study is presented in Table 1.
The most important instrument for our study was the Mg xii spectroheliograph that operated on board the _CORONAS-F_/SPIRIT satellite from 2001 till 2003 (Oraevsky & Sobelman, 2002; Zhitnik et al., 2002). Considering that the instrument may not be well known to some readers, we will briefly describe some of its important features.
The instrument is an imaging spectroheliograph based on Bragg-crystal optics. It obtained monochromatic X-ray images of the solar corona in the Mg xii 8.42 A spectral line. During the selected period of observations, the spectroheliograph worked with a 2 min cadence and a binned resolution of 8''.
The main feature that distinguishes the Mg xii spectroheliograph from other imaging instruments is its temperature selectivity. The Mg xii 8.42 A line produces a noticeable signal only at temperatures higher than 4 MK. So, the corresponding images clearly outline the high-temperature plasma on the Sun without any contribution from the low-temperature background (see Figure 2). This gives an effective way to study the plasma heating processes in many objects: large-scale flares (Grechnev et al., 2006; Urnov et al., 2007; Reva et al., 2015), CMEs (Kirichenko & Bogachev, 2013; Reva et al., 2017), and even microflares (Reva et al., 2012; Kirichenko & Bogachev, 2017, 2018; Reva et al., 2018).
It is important to note that the instrument is equipped with a crystal mirror and, therefore, has dispersion. In the Mg xii images, each point of the Sun looks like a short profile of the Mg xii 8.42 A spectral line. To lessen this effect, we numerically deconvolved the Mg xii images. For more details, see Appendix A.
The pointing system of the _CORONAS-F_ spacecraft had a significant residual jitter. To correct it, we used data from the Solar X-ray Imager (SXI; Hill et al., 2005;
Figure 2: Comparison of the images obtained by the EIT 195 Å (left), SXI (middle), and the Mg xii spectroheliograph (right). The images were taken on 16 February 2003.
Pizzo et al., 2005) that worked on board the _GOES-12_ satellite. The telescope provided full-disk soft X-ray solar images in the 6-60 A wavelength range with a spatial resolution of \(\approx 10^{\prime\prime}\) and a \(5^{\prime\prime}\) pixel size. The 'Be-thin' channel of SXI is sensitive to the same temperatures as the Mg xii spectroheliograph, but with a noticeable contribution of low-temperature background. The orientation of the Sun in the SXI images is known. Using cross-correlation, we determined the shift between the Mg xii images and the 'Be-thin' SXI images. Then we shifted the Mg xii images by the corresponding value to correct the jitter.
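A minimal sketch of such a cross-correlation alignment is given below; the scipy-based implementation and the zero-lag convention are illustrative assumptions and do not describe the actual processing pipeline.

```python
import numpy as np
from scipy.signal import correlate2d

def jitter_shift(mg_img, sxi_img):
    """Estimate the (dy, dx) offset of the Mg XII image relative to the SXI image."""
    a = mg_img - mg_img.mean()
    b = sxi_img - sxi_img.mean()
    cc = correlate2d(a, b, mode='full')   # for large images an FFT-based correlation is faster
    iy, ix = np.unravel_index(np.argmax(cc), cc.shape)
    return iy - (b.shape[0] - 1), ix - (b.shape[1] - 1)

# The Mg XII image is then shifted by (-dy, -dx), e.g. with scipy.ndimage.shift.
```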
During the selected period of observations, all of the SPIRIT telemetry was allocated to the Mg xii data. This improved the cadence of the observations, but, as a result, the data of other instruments of the SPIRIT complex were not available.
Below we briefly describe other instruments used in this research.
The _Reuven Ramaty High Energy Solar Spectroscopic Imager_ (RHESSI; Lin et al., 2002) observes HXR spectra from 3 keV to 17 MeV. Using Fourier-based methods, RHESSI can synthesize HXR images in the same spectral range.
The Extreme ultraviolet Imaging Telescope (EIT; Delaboudiniere et al., 1995) on the _Solar and Heliospheric Observatory_ (_SoHO_; Domingo et al., 1995) takes solar images at the wavelengths centered at 171, 195, 285,
\begin{table}
\begin{tabular}{l l l} \hline \hline & Instrument & Reference \\ \hline Location and dynamics of high-temperature plasma & Mg xii spectroheliograph & Zhitnik et al. (2003) \\ & SXI & Hill et al. (2005); Pizzo et al. (2005) \\ Location of the HXR sources & RHESSI & Lin et al. (2002) \\ CME structure and dynamics & EIT & Delaboudinière et al. (1995) \\ & TRACE & Handy et al. (1999) \\ & Mk4 & Elmore et al. (2003) \\ & LASCO & Brueckner et al. (1995) \\ Magnetic field configuration & MDI & Scherrer et al. (1995) \\ Prominences and filaments in H\(\alpha\) & PICS & DOI: 10.5065/D65719TR \\ \hline \end{tabular}
\end{table}
Table 1: List of instruments.
Figure 3: Active region NOAA 10278 on the disc. Left: H\(\alpha\) image obtained with the PICS telescope. Right: magnetogram obtained with MDI. Images were taken on February 11, 2003. The yellow dashed line indicates the position of the limb at 23:00 UT on February 16, 2003.
and 304 A. The instrument has a pixel size of 2.6\({}^{\prime\prime}\) and a spatial resolution of 5\({}^{\prime\prime}\). The EIT had two observational modes: synoptic and 'CME watch.' In a synoptic mode, it takes images in all four channels every 6 hours. In the 'CME watch' mode, the telescope takes images in the 195 A channel every 12 min.
The Transition Region And Coronal Explorer (TRACE; Handy et al., 1999) is a space-based telescope that observes the Sun in EUV and white-light. It had a limited field of view (\(8.5^{\prime}\times 8.5^{\prime}\)) but a high spatial resolution (1\({}^{\prime\prime}\)).
Mk4 coronameter is a ground-based instrument located at Mauna Loa Solar Observatory (DOI: 10.5065/D66972C9; Elmore et al., 2003). It builds images of the solar corona from 1.14 to 2.86 \(R_{\odot}\) in the white light (700-900 nm) with a spatial resolution of 5.95\({}^{\prime\prime}\) and a cadence of 3 min.
Large Angle Spectroscopic Coronagraph (LASCO; Brueckner et al., 1995) is a set of white-light coronagraphs that observe solar corona from 1.1 \(R_{\odot}\) up to 30 \(R_{\odot}\) (C1, 1.1-3 \(R_{\odot}\); C2, 2-6 \(R_{\odot}\); C3, 4-30 \(R_{\odot}\)). In 1998, LASCO C1 stopped working, and for this research, only C2 and C3 data are available. Coronagraph C2 has a resolution of 11\({}^{\prime\prime}\), and C3 has a resolution of 56\({}^{\prime\prime}\).
The Michelson Doppler Imager (MDI; Scherrer et al., 1995) on the _SOHO_ satellite maps the line of sight component of the photospheric magnetic field with a 4\({}^{\prime\prime}\) resolution and 90-min cadence.
Polarimeter for Inner Coronal Studies (PICS, DOI: 10.5065/D65719TR) is a ground-based instrument located at Mauna Loa Solar Observatory. It takes H\(\alpha\) images (6563 A) with a field of view of 2.3 \(R_{\odot}\), a spatial resolution of 2.9\({}^{\prime\prime}\), and a cadence of 3 minutes.
## 3 Results
### Flare Topology and Dynamics
The studied event was a typical solar flare associated with a CME, which developed in full agreement with the standard views on how a solar flare should evolve. This is very important for our study because it is thanks to the standard configuration of the flare we are sure about where the current sheet was located.
Taking this into account, let us describe the flare topology and dynamics. The pre-flare configuration (5 days before the flare) is shown in Figure 3, where the left panel is the H\(\alpha\) image, and the right panel is the corresponding MDI magnetogram. The flare occurred in the active region NOAA 10278. On 11 February 2003, this active region was approximately in the center of the solar disk. The most prominent feature of the active region seen in H\(\alpha\) images (left panel) was a filament that was slightly tilted to the East-West direction and was
Figure 4: Evolution of the CME. Green: EIT telescope; blue: Mk4 coronagraph; red: LASCO C2 coronagraph.
Figure 5: Kinematics of the CME core. Top: the distance between the CME core and Sun’s center; middle: the CME core’s velocity; bottom: the CME core’s acceleration. Green: data of the EIT telescope; purple: data of the Mk4 coronagraph; red: data of the LASCO/C2 coronagraph; blue: data of the LASCO/C3 coronagraph.
located between two areas of opposite polarities. We match this filament to the fluxrope of the further CME.
This filament was seen in H\(\alpha\) images all five days from February 11 until February 16, when the active region reached the solar limb. At this moment, the region took the same position as in our sketch for the standard flare/CME model (see Figure 1). So, this was the best projection to observe the current sheet.
Approximately at this time, the CME started to erupt. To study this process, we used synthetic images combined from EIT data obtained in EUV and two white light images: one from the Mk4 coronameter and the second one from the LASCO C2 coronagraph (see Figure 4). The CME had a classic 3-part structure in white-light images: bright core, dark cavity, and bright frontal loop (Illing & Hundhausen, 1985; Webb & Hundhausen,
Figure 6: CME topology. Copper inner circle: TRACE image; copper outer circle: EIT image; blue: Mk4 image. The dotted line marks the presumed location of the current sheet. The images were taken on 16 February 2003.
1987). As we said above, we identify the CME core with the filament (fluxrope) that was seen in the H\(\alpha\) images before the CME (Illing & Athay, 1986). In this case, the fluxrope should be aligned along the \(Y\) axis in Figure 1.
Observation of the CME motion gives an indirect way to measure the reconnection rate during a solar flare. Generally, we can expect a simple relationship: the faster is the reconnection rate, the faster is the CME acceleration.
Taking this into account, we measured the CME coordinates during its motion (for details of the measurement method, see Appendix B). The result--the CME height, velocity, and acceleration as a function of time--is shown in Figure 5. From these plots, we see that CME evolution consists of three main phases:
1. before \(\approx\) 22:35, the CME structure was stable;
2. from \(\approx\) 22:35 to \(\approx\) 23:10, the CME impulsively accelerated from 0 to 250 km s\({}^{-1}\); the acceleration peaked at \(\approx\) 200 m s\({}^{-2}\);
3. after \(\approx\) 23:10, the CME gradually accelerated to \(\approx\) 500 km s\({}^{-1}\) during 9 hours with an acceleration of \(\approx\) 10-20 m s\({}^{-2}\).
The observed flare topology is consistent with the standard flare model. We can clearly see a flare arcade, a fluxrope, and a cavity that surrounds the fluxrope (see Figure 6). The CME evolution is also consistent with the standard model. It has a stable phase, an impulsive acceleration phase (probably caused by the impulsive reconnection), and a steady acceleration phase (probably caused by the solar wind; Yashiro et al., 2004). Since the studied flare looks and behaves exactly as the standard model predicts, it is natural to assume that a current sheet should exist between the flare arcade and the CME core.
### Observation of the Plasma Heating
As we said above, the main goal of this paper was to find clear evidence of plasma heating in the vicinity of the reconnecting current sheet. For this purpose, we carefully checked all the Mg xii images obtained during the studied event. Thanks to the specific temperature sensitivity of the Mg xii spectroheliograph (starting from \(T\geq 4\) MK), we consider the corresponding images as a good marker of plasma heating. As soon as the signal appears in a Mg xii image, we can conclude that the plasma temperature increases to 4 MK or higher.
The observed dynamics of hot plasma is shown in Figure 7. We found two high-temperature regions during the flare. The first one (the brightest one) appeared at 21:53 UT and existed for \(\approx\) 5 hours. The region was compact, and its location approximately coincides with the top of the flare loop seen in the TRACE 195 A image. Plasma heating near the looptop region is a typical feature of a solar flare, and for this reason, we do not consider it in detail. The looptop plasma heating takes place under the current sheet in the so-called 'cusp' region of the magnetic configuration. Such a source is often associated with a looptop hard X-ray emission: another typical detail of a solar flare. The HXR emission appears at the same place (see Figure 7) and at the same time (see Figure 8) as region-1.
The most interesting for us was the second high-temperature region. At 22:28 UT, a faint linear structure appeared above the flare arcade. It gradually increased its length but decreased its intensity. Eventually, the linear structure faded away, reaching the maximal length of 250 Mm. In other parts of the CME, there was no hot plasma.
The hot linear structure was located between the flare arcade and the CME core (observed in the Mk4 images) and inside the dark cavity observed in the EIT images (see Figure 9). Such a location coincides with the presumed position of the current sheet that should exist during CME.
Two important features distinguish this second plasma region from the first one. The first feature is the location. Region-2 is clearly not associated with the top of the flare loop and with a 'cusp' region. Another feature is a significant difference in brightness and size. Region-2 is much larger than the compact region-1, but, despite this, it is much fainter. The peak flux of the linear structure was \(\approx\) 4 % of the peak flux from the compact, bright source (see Figure 8), while the brightness ratio (pixel intensity ratio) was \(\approx\) 0.5-1 %. This seriously justifies that region-1 and region-2 were heated by two different mechanisms. We want to emphasize that region-2 is observed simultaneously with region-1. So, we cannot explain them as two consecutive stages of the same heating mechanism.
The timing of the plasma heating is in good agreement with the CME motion. The flare begins at \(\approx\) 21:53 UT when the plasma heating starts near the looptop region of the arcade, and the HXR emission starts to rise (see Figure 8). We did not find any signatures of plasma heating in the current sheet (region-2) during this stage. Approximately 30 minutes later, the second stage associated with the CME motion started. At \(\approx\) 22:30 UT, the CME impulsively accelerated from zero up to \(v\approx\) 300 km s\({}^{-1}\) with an acceleration of \(a\approx\) 100-200 m s\({}^{-2}\). Due to the fast CME motion, the current sheet should be elongating rapidly in the \(Z\)-direction
(see Figure 1), and just at this time, we observed an additional plasma heating around the presumable location of the current sheet. The CME acceleration lasted about 30 minutes, which is in excellent agreement with the observed duration of plasma heating in region-2 (see Figure 10).
Sadly, the RHESSI data were not available during the CME impulsive acceleration (see Figure 8). For this reason, we do not have information about the dynamics of the HXR emission.
## 4 Discussion and Conclusion
The appearance of high-temperature plasma in the solar corona is typical for solar flares and usually relates to the magnetic reconnection process. Bright sources of thermal X-ray emission usually appear above the top of magnetic loops during the impulsive phase of solar flares. They may be heated by energized electrons or by super-sonic plasma flows.
In a typical magnetic configuration of a flare region, the bright loop-top source is not the only high-temperature region that may appear above the loop during a flare. A much fainter but much larger source may appear higher in the corona due to a diffusion of thermal energy from the region of magnetic reconnection.
Usually, this second source (named region-2 in our study) cannot be observed due to its low brightness, which does not allow it to be distinguished from the low-temperature background. We succeeded in this study due to two main factors. The first one is the region's location and orientation: the CME occurred at the solar limb, and the current sheet was aligned along the line of sight. Such a location and orientation are favorable to detect faint emission of the current sheet. The second one is the temperature sensitivity of the Mg xii spectroheliograph. The instrument detects a high-temperature emission without the contribution of low-temperature background.
The low brightness ratio of the faint to the bright component (\(\sim\) 1 %) also adds to the difficulty of detection. We think that this ratio is typical for flares, but, of course, we have no confirmation of this since only one such event has been detected.
Figure 7: Hot plasma dynamics observed with the Mg xii spectroheliograph. Blue corresponds to low intensities, red and yellow correspond to high intensities. Contours mark the location of the RHESSI 6–12 keV emission.
Figure 8: X-ray flux dynamics. Top: the Mg xii lightcurves of region-1 (thin black line) and region-2 (thick black line). Middle: the RHESSI count rates. Black line: 3–6 keV channel; red line: 6–12 keV channel; blue line: 12–25 keV channel. Purple lines mark the RHESSI nights. Bottom: GOES flux. Red line: 1–8 Å channel; blue line: 0.5–4 Å channel.
The appearance of the faint component we associate with the heating caused by the reconnection inside the current sheet. Comparison of the emission and CME dynamics confirms this idea. The faint high-temperature component appears approximately at the same time as the CME impulsively accelerates. According to the standard CME model, the CME impulsively accelerates during impulsive reconnection inside the current sheet. We think that this clearly indicates that the energy for plasma heating in region-2 comes from magnetic reconnection, which is in good agreement with standard views on solar flares.
Another indirect evidence of the connection between heating and reconnection is the relative timing of region-1 and region-2 emissions. Region-2 appears during the rising phase of the Mg xii and GOES fluxes of region-1
Figure 9: Comparison of the Mg xii, TRACE, EIT, and Mk4 images. Inside yellow contour: signal of the Mg xii spectroheliograph (blue corresponds to low intensities, red and yellow correspond to high intensities). Copper inner circle: TRACE image; copper outer circle: EIT image; blue: Mk4 image. The images were taken on 16 February 2003.
(see Figure 8). According to the Neupert effect (Neupert, 1968), the HXR emission of a flare correlates with the derivative of the SXR emission. The GOES 1-8 A flux derivative--our estimate of the HXR emission--peaks when we observe region-2 (see Figure 10). Since HXR emission is a signature of the reconnection, we think that such a correlation further strengthens our interpretation.
The hot plasma was detected in the previous observations of the current sheets. Ciaravella et al. (2002); Ko et al. (2003); Ciaravella and Raymond (2008) studied current sheets observed with Ultraviolet Coronagraph Spectrometer (UVCS; Kohl et al., 1995) and reported temperatures around 6 MK. The current sheets analyzed by Zhu et al. (2016) and Seaton et al. (2017) had temperatures around 8-10 MK, while the ones analyzed by Hannah and Kontar (2013) and Warren et al. (2018) had temperatures around 10-20 MK. Finally, Landi et al. (2012) presented observations, in which current sheet temperature never exceeded 3 MK. Most likely, the difference in the current sheet temperatures is caused by the difference in the reconnection rate.
Our conclusion that the current sheet heating is caused by the reconnection inside it is consistent with the MHD theory and simulations (Yokoyama and Shibata, 1998; Seaton and Forbes, 2009; Reeves et al., 2010). However, it is difficult to compare this conclusion with other experimental works. Current sheet observations are rare, and only a couple of them allow studying the dynamics of the current sheet heating relative to the dynamics of the reconnection.
The first such example is a current sheet observed during an X8.3 flare on 2017 September 10 (Warren et al., 2018). At the time of writing, this is the most detailed observation of the current sheet (see references in Chen et al., 2020). In this event, some re-
Figure 10: Relative timings of the CME’s acceleration (black line at the top), the derivative of the GOES 1–8 Å flux (red line at the top), and the linear structure (region-2) intensity in the Mg xii images (green line at the bottom).
connection signatures--the CME impulsive acceleration (Gopalswamy et al., 2018; Veronig et al., 2018), the hard X-ray emission, the derivative of the soft X-ray emission, and the microwave emission (Gary et al., 2018)--correlated with each other and occurred during a small period of time (\(\approx\) 15:50-16:00 UT). On the other hand, other reconnection signatures--downflows (Longcope et al., 2018) and turbulence (Cheng et al., 2018; Warren et al., 2018) inside the current sheet--were observed for \(\approx\) 1 hour after the first reconnection signatures appeared (\(\approx\) 16:00-17:00 UT). The current sheet itself appeared after the first reconnection signatures (\(\approx\) 16:00 UT) and hot plasma inside it was observed for \(\approx\) 1 hour in the 94 A channel of Solar Ultraviolet Imager (SUVI; Seaton and Darnel, 2018).
Another example is the current sheet observed during the CME on 2009 April 17 (Reva et al., 2016). The event was observed with the TESIS EUV telescope that build images of the solar corona in the Fe 171 A line up to distances of 2 \(R_{\odot}\) from the Sun center (Kuzin et al., 2011; Reva et al., 2014). The current sheet looked like a double Y-shaped darkening in the Fe 171 A images. Such darkening indicates that the current sheet is heated (although, it is not clear up to what temperature). The heating (darkening) occurred simultaneously with the CME impulsive acceleration (signature of the reconnection). At the start of the CME impulsive acceleration phase, GOES and SphinX (Gburek et al., 2011) registered a short duration flux increase.
All of these three examples--this work, Reva et al. (2016), and Warren et al. (2018)--exhibit a similar pattern. During the CME eruption, a short-duration reconnection signatures appear, which are followed by the long-duration reconnection signatures. The heating inside the current sheet starts approximately when the short-duration reconnection signature appears. The hot plasma inside the current sheet is observed during the lifetime of the long-duration reconnection signatures. These similarities support our conclusion that heating inside the current sheet is caused by the reconnection.
However, there are differences between these events. In this work and Reva et al. (2016), the heating starts slightly before the short-duration reconnection signatures; in Warren et al. (2018), it starts after. In this work and Reva et al. (2016), the impulsive CME acceleration is a long-duration signature; in Warren et al. (2018), it is a short-duration signature. These differences and the fact that some reconnection signatures have short duration and some long duration show that the reconnection dynamics has a complex structure, which may vary from one event to another.
We are sure that the energy for the current sheet heating comes from the magnetic reconnection. However, the exact mechanism of the heating is unclear. As we said in the Introduction, ohmic heating (Reeves et al., 2010, 2019), adiabatic compression (Birn et al., 2009; Reeves et al., 2019), and turbulence (Ye et al., 2020) can heat current sheet. Sadly, our data don't allow us to determine the heating mechanism.
Another issue that we would like to discuss is the relative weakness of the studied flare. Usually, current sheets are observed during strong flares (X and M classes), while the flare in this work is only C1.4. Figure 3 shows that most of the flaring active region was visible. Even if we take into account partial occultation by the disk, the flare would still be of a C class.
We believe that the studied event is an example of reconnection heating occurring as a universal process in flares from C to X class. Most likely, reconnection heating is less intense in weak flares than in strong ones, and, therefore, plasma is heated to lower temperatures and has lower emission measure. As a result, such heating is rarely observed in weak flares because it is difficult to detect faint hot emission.
We think that monochromatic imagers similar to the Mg XII spectroheliograph--for example, see Kuznetsov et al. (2016); Kirichenko et al. (2021); Reva et al. (2021)--can help us study reconnection heating. We hope that such instruments will be created in future and that they will improve our understanding of the processes occurring inside the current sheets during solar flares.
This is the Accepted Manuscript version of an article accepted for publication in the Astrophysical Journal. IOP Publishing Ltd is not responsible for any errors or omissions in this version of the manuscript or any version derived from it. The Version of Record is available online at doi:10.3847/1538-4357/ac6b3d.
This research was funded by a grant from the Russian Science Foundation (grant No 21-72-10157, [https://rscf.ru/project/21-72-10157/](https://rscf.ru/project/21-72-10157/)).
Mk4 coronameter (DOI: 10.5065/D66972C9) and PICS (DOI: 10.5065/D65719TR) data are provided courtesy of the Mauna Loa Solar Observatory, operated by the High Altitude Observatory, as part of the National Center for Atmospheric Research (NCAR). NCAR is supported by the National Science Foundation. The RHESSI satellite is a NASA Small Explorer (SMEX) mission. SXI full-disk X-ray images are supplied courtesy of the Solar X-ray Imager (SXI) team.
The SOHO/LASCO data are produced by a consortium of the Naval Research Laboratory (USA), Max-Planck-Institut fur Aeronomie (Germany), Laboratoire d'Astronomie (France), and the University of Birmingham (UK). MDI and EIT data are supplied courtesy of the SOHO/MDI and SOHO/EIT consortia. SOHO is a project of international cooperation between ESA and NASA.
## Appendix A Mg XII deconvolution
The Mg XII spectroheliograph used Bragg crystal optics. As a result, the instrument has dispersion: its images are a convolution of the spatial component with the profile of the Mg XII 8.42 A line. More details about this effect can be found in Kuzin et al. (1994) or Reva et al. (2021).
The Mg XII \(\lambda=8.42\) A line is a Ly-\(\alpha\) doublet of a hydrogen-like Mg ion: \(\lambda_{1}=8.4192\) A (\(1s\ ^{2}S_{1/2}\) - \(2p\ ^{2}P_{3/2}\)) and \(\lambda_{2}=8.4246\) A (\(1s\ ^{2}S_{1/2}\) - \(2p\ ^{2}P_{1/2}\)). The ratio of the line intensities should be 2:1.
The Mg XII images consist of two slightly shifted images overlaid one onto another (see Figure 11a). The majority of the pixels in the Mg XII images do not contain a signal: the images consist of several isolated hot objects. In the direction of the dispersion, the boundaries of those objects contain a signal from only one of the doublet components. The profiles of the doublet components overlap only inside those objects.
To deconvolve the Mg XII images, we use the following algorithm.
1. We start at the boundary of the image that corresponds to the short wavelengths.
2. For each pixel, we calculate the intensity of the weaker component (divided by two).
3. We subtract this value from the pixel that corresponds to the location of the weaker component of the doublet.
4. We move in the direction of the dispersion to the next pixel that corresponds to a longer wavelength.
5. We repeat steps 2-4 until we reach the boundary of the image.
The result is shown in Figure 11b. The algorithm successfully eliminates the second component of the doublet. The images become less elongated and easier to interpret.
There are several issues that this algorithm does not address. Firstly, it does not deconvolve the line width. It cannot be deconvolved in a straightforward way because the line width is determined by the Doppler broadening, which varies from pixel to pixel. Secondly, the line ratio of the Ly\(\alpha\) doublet could deviate from the theoretical value (Sylvester et al., 1986; Laming, 1990). If this effect is present, it can distort the deconvolved images. Finally, if significant plasma motions along the line of sight are present, the Doppler shifts will distort the images.
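For illustration, steps 1-5 above can be sketched numerically as follows; the doublet separation in pixels (`shift_pix`) and the numpy-based implementation are assumptions of this sketch, not a description of the actual SPIRIT software.

```python
import numpy as np

def deconvolve_doublet(img, shift_pix, axis=-1):
    """Remove the weaker doublet component, walking along the dispersion axis.

    At each pixel (starting from the short-wavelength edge) the remaining
    signal is attributed to the stronger component; half of it is subtracted
    at the position of the weaker component, shift_pix pixels further along
    the dispersion direction (steps 2-4 of the algorithm).
    """
    out = np.moveaxis(np.array(img, dtype=float), axis, -1)
    n = out.shape[-1]
    for j in range(n - shift_pix):
        weak = out[..., j] / 2.0            # intensity of the weaker component
        out[..., j + shift_pix] -= weak     # subtract it at the shifted position
    return np.moveaxis(out, -1, axis)
```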
## Appendix B CME kinematics measurements
For the measurements of the CME coordinates, we used the data of the EIT telescope, Mk4 coronameter,
Figure 11: Deconvolution of the Mg XII images. Left: image before deconvolution; right: image after deconvolution.
and LASCO coronagraphs. We used a simple point-and-click method. To estimate the errors of the measurements, we repeated the procedure nine times for each image.
In the Mk4 and LASCO images, we aimed at the CME core's center. Since the core was not seen in the EIT images, we aimed for the lowest part of the dark cavity in the EIT images.
The coordinates measured in the EIT images (lowest part of the dark cavity) and in the white-light images (the CME's core) refer to different parts of the CME. We cannot simply combine these coordinates and compute derivatives (velocity and acceleration). Furthermore, the point-and-click method is subjective, and different instruments image the corona differently: the center of the core in images obtained by different instruments corresponds to slightly different parts of the CME. In order to compute derivatives, we need to recalculate the coordinates measured in the EIT images into the CME core coordinates and adjust the core coordinates measured by the different white-light instruments.
To solve the problem, we adopted the method from Reva et al. (2016). We assumed that the CME expands proportionally and that the temporal dependence of the heights of different parts of the CME could be linearly scaled. Then we picked scaling coefficients so that temporal dependence of the CME core height seamlessly transitions from one instrument to another.
After scaling the data, we numerically differentiated the radial distance and obtained radial velocity. Then we numerically differentiated velocity and obtained acceleration. For the differentiation, we used the local least-square approximation method (Wood, 1982; Reva et al., 2017). The result of the measurements is presented in Figure 5.
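A minimal sketch of such a local least-square differentiation is given below; the window half-width and polynomial degree are illustrative assumptions rather than the values used for Figure 5.

```python
import numpy as np

def local_lsq_derivatives(t, h, half_window=3, deg=2):
    """Velocity and acceleration of a height-time profile via local least squares."""
    t, h = np.asarray(t, float), np.asarray(h, float)
    v = np.full(h.shape, np.nan)
    a = np.full(h.shape, np.nan)
    for i in range(t.size):
        lo, hi = max(0, i - half_window), min(t.size, i + half_window + 1)
        if hi - lo <= deg:                       # not enough points for the fit
            continue
        p = np.poly1d(np.polyfit(t[lo:hi] - t[i], h[lo:hi], deg))
        v[i] = p.deriv(1)(0.0)                   # first derivative at the window centre
        a[i] = p.deriv(2)(0.0)                   # second derivative at the window centre
    return v, a
```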
|
2305.13708 | Sobolev type inequalities for fractional maximal functions and Riesz
potentials in half spaces | In this paper, we study Sobolev type inequalities for fractional maximal
functions $M_{{\mathbb H},\nu}f$ and Riesz potentials $I_{{\mathbb H},\alpha}
f$ of functions in weighted Morrey spaces of the double phase functional
$\Phi(x,t) = t^{p} + (b(x) t)^{q}$ in the half space, where $1<p<q$ and
$b(\cdot)$ is non-negative, bounded and H\"older continuous of order $\theta
\in (0,1]$. We also show that the Riesz potential operator $I_{{\mathbb
H},\alpha}$ embeds from weighted Morrey space of the double phase functional
$\Phi(x,t)$ to weighted Campanato spaces. Finally, we treat the similar
embedding for Sobolev functions. | Yoshihiro Mizuta, Tetsu Shimomura | 2023-05-23T05:46:58Z | http://arxiv.org/abs/2305.13708v1 | # Sobolev type inequalities for fractional maximal functions and Riesz potentials in half spaces
###### Abstract
In this paper, we study Sobolev type inequalities for fractional maximal functions \(M_{\mathbb{H},\nu}f\) and Riesz potentials \(I_{\mathbb{H},\alpha}f\) of functions in weighted Morrey spaces of the double phase functional \(\Phi(x,t)=t^{p}+(b(x)t)^{q}\) in the half space, where \(1<p<q\) and \(b(\cdot)\) is non-negative, bounded and Holder continuous of order \(\theta\in(0,1]\). We also show that the Riesz potential operator \(I_{\mathbb{H},\alpha}\) embeds from weighted Morrey space of the double phase functional \(\Phi(x,t)\) to weighted Campanato spaces. Finally, we treat the similar embedding for Sobolev functions.
Footnote †: Key words and phrases : fractional maximal functions, Riesz potentials, Sobolev's inequality, double phase functionals, weighted Morrey spaces, weighted Campanato spaces
## 1 Introduction
Morrey spaces were introduced by C. B. Morrey in 1938 to study the existence and regularity of partial differential equations ([34]). We also refer to [35]. The boundedness of the maximal operator was studied on Morrey spaces in [9]. For Herz-Morrey-Orlicz spaces on the half space, see [30]. The boundedness of the fractional maximal operator was studied on Morrey spaces in [15]. For local Morrey-type spaces, see [6]. There are many related results; see e.g. [2, 5, 8, 22, 23, 24, 25, 29, 36]. There has been an increasing interest in Sobolev spaces; see [13, 14, 18] and so on.
For a locally integrable function \(f\) on the half space \(\mathbb{H}=\{x=(x^{\prime},x_{n})\in\mathbb{R}^{n-1}\times\mathbb{R}^{1}:x_{n}>0\}\) and \(\nu\) with \(0\leq\nu\leq n\), the fractional Hardy-Littlewood maximal function \(M_{\mathbb{H},\nu}f\) is defined by
\[M_{\mathbb{H},\nu}f(x)=\sup_{\{r>0:B(x,r)\subset\mathbb{H}\}}\frac{r^{\nu}}{| B(x,r)|}\int_{B(x,r)}|f(y)|\,dy,\]
where \(B(x,r)\) is the ball in \(\mathbb{R}^{n}\) centered at \(x\) of radius \(r>0\) and \(|B(x,r)|\) denotes its Lebesgue measure. The mapping \(f\mapsto M_{\mathbb{H},\nu}f\) is called the fractional central maximal operator. When \(\nu=0\), we write \(M_{\mathbb{H}}f\) instead of \(M_{\mathbb{H},0}f\).
In view of the well-known theorem by Hardy and Littlewood, the usual maximal operator is bounded in \(L^{p}(\mathbb{H})\). However, this is not always true in the weighted
\(L^{p}\) spaces, as will be seen in Remarks 2.2 and 2.4 below. To overcome these difficulties, we consider the local maximal operators; for an application of the local maximal operators, see [21].
In the previous paper [33], we established a Sobolev type inequality for the fractional maximal function \(M_{\mathbb{H},\nu}f\) in weighted Morrey spaces. In fact, the following result is shown in [33, Theorem 2.1]:
Theorem A. Let \(p>1\), \(1/p^{*}=1/p-\nu/\sigma>0\) and \(0<\sigma<(n+1)/2\). Suppose \(\beta<(n+1)/(2p^{\prime})\), where \(1/p+1/p^{\prime}=1\). Then there exists a constant \(C>0\) such that
\[\sup_{\{r>0:B(x,r)\subset\mathbb{H}\}}\frac{r^{\sigma}}{|B(x,r)|} \int_{B(x,r)}\left(z_{n}^{\beta}M_{\mathbb{H},\nu}f(z)\right)^{p^{*}}dz \leq C\]
when \(\sup_{r>0,x\in\mathbb{H}}\frac{r^{\sigma}}{|B(x,r)|}\int_{\mathbb{H} \cap B(x,r)}\left(|f(y)|y_{n}^{\beta}\right)^{p}dy\leq 1\).
This is not true for the usual fractional maximal function \(M_{\nu}f\) defined by
\[M_{\nu}f(x)=\sup_{r>0}\frac{r^{\nu}}{|B(x,r)|}\int_{\mathbb{H} \cap B(x,r)}|f(y)|\,dy;\]
see Remarks 2.2 and 2.4.
The double phase functional was introduced by Zhikov [38] in the 1980s. Regarding regularity theory of differential equations, Mingione and collaborators [3, 4, 11, 12] investigated a double phase functional
\[\hat{\Phi}(x,t)=t^{p}+a(x)t^{q},\ x\in\mathbb{R}^{N},\ t\geq 0,\]
where \(1<p<q\) and \(a(\cdot)\) is nonnegative, bounded and Hölder continuous of order \(\theta\in(0,1]\). See [20, 37] for Calderón-Zygmund estimates, [26, 30] for Sobolev inequalities, [32] for Hardy-Sobolev inequalities and [27, 28] for Campanato-Morrey spaces for the double phase functional. We refer, for instance, to [7, 10, 16, 17, 19, 31] and references therein for other recent works.
In the present paper, relaxing the continuity of \(a(\cdot)\), we consider the double phase functional
\[\Phi(x,t)=t^{p}+(b(x)t)^{q},\]
where \(1<p<q\) and \(b(\cdot)\) is non-negative, bounded and Hölder continuous of order \(\theta\in(0,1]\) (cf. [11]); if we write \(\Phi(x,t)=t^{p}+a(x)t^{q}\) with \(a(x)=b(x)^{q}\), then \(a\) is not always Hölder continuous of order \(\theta q\) when \(\theta q>1\).
In connection with Theorem A above, our first aim in this paper is to give Sobolev type inequalities for \(M_{\mathbb{H},\nu}f\) of functions in weighted Morrey spaces of the double phase functional \(\Phi(x,t)\) (Theorem 2.1). We are mostly interested in functions \(f\) for which \(M_{\nu}f=\infty\); see Remark 2.4 given below.
For \(0<\alpha<n\) and a locally integrable function \(f\) on \(\mathbb{H}\), we define the Riesz potential of order \(\alpha\) in \(\mathbb{H}\) by
\[I_{\mathbb{H},\alpha}f(x)=\int_{B(x,x_{n})}|x-y|^{\alpha-n}f(y)dy.\]
Our arguments are applicable to the study of Sobolev's inequalities for \(I_{{\mathbb{H}},\alpha}f\) (Theorem 3.4), which have not been found in the literature. The sharpness of Theorem 3.4 will be discussed in Remark 3.5.
In Section 4, we are concerned with Sobolev's inequalities for \(I_{{\mathbb{H}},\alpha}f\) of functions in weighted Morrey spaces of the double phase functional \(\Phi(x,t)\) (Theorem 4.1).
In Section 5, we treat the case \(\sigma=\alpha p\). In fact, we show that \(I_{{\mathbb{H}},\alpha}\) embeds from weighted Morrey spaces to weighted Campanato spaces in the case \(\sigma=\alpha p<(n+1)/2\) (Theorem 5.1). Further, we show that \(I_{{\mathbb{H}},\alpha}\) embeds from weighted Morrey spaces of the double phase functional \(\Phi(x,t)\) to weighted Campanato spaces in the case \(\sigma=\alpha q=(\alpha+\theta)p\) (Theorem 5.2).
In the final section, we show the embedding for Sobolev functions in the same frame (Theorems 6.2 and 6.8).
Throughout this paper, let \(C\) denote various constants independent of the variables in question. The symbol \(g\sim h\) means that \(C^{-1}h\leq g\leq Ch\) for some constant \(C>0\).
## 2 Boundedness of fractional maximal operators
for double phase functionals
Throughout this paper, let
\[p>1\quad\mbox{and}\quad\sigma>0.\]
Our aim in this section is to study the boundedness of the fractional central maximal operator \(M_{{\mathbb{H}},\nu}\) in weighted Morrey spaces of the double phase functional \(\Phi(x,t)\).
Theorem 2.1.: Let \(p>1\), \(1/q=1/p-\theta/\sigma\), \(1/p^{*}=1/p-\nu/\sigma>0\) and \(1/q^{*}=1/q-\nu/\sigma>0\). Set
\[\Phi^{*}(x,t)=\Phi_{p^{*},q^{*}}(x,t)=t^{p^{*}}+(b(x)t)^{q^{*}}.\]
Suppose \(\beta<(n+1)/(2p^{\prime})\) and \(0<\sigma<(n+1)/2\), where \(1/p+1/p^{\prime}=1\). Then there exists a constant \(C>0\) such that
\[\sup_{\{r>0:B(x,r)\subset{\mathbb{H}}\}}\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x, r)}\Phi^{*}(z,z_{n}^{\beta}M_{{\mathbb{H}},\nu}f(z))dz\leq C\]
when \(\sup_{r>0,x\in{\mathbb{H}}}\frac{r^{\sigma}}{|B(x,r)|}\int_{{ \mathbb{H}}\cap B(x,r)}\Phi(y,|f(y)|y_{n}^{\beta})dy\leq 1\).
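To illustrate how the exponents in Theorem 2.1 fit together, consider the purely illustrative choice \(n=3\), \(\sigma=1\), \(p=2\), \(\theta=1/4\) and \(\nu=1/8\). Then
\[\frac{1}{q}=\frac{1}{2}-\frac{1/4}{1}=\frac{1}{4},\qquad\frac{1}{p^{*}}=\frac{1}{2}-\frac{1}{8}=\frac{3}{8},\qquad\frac{1}{q^{*}}=\frac{1}{4}-\frac{1}{8}=\frac{1}{8},\]
so that \(q=4\), \(p^{*}=8/3\) and \(q^{*}=8\), while \(\sigma=1<(n+1)/2=2\) and the weight condition reads \(\beta<(n+1)/(2p^{\prime})=1\); this choice is given only to fix ideas and is not exhaustive.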
Remark 2.2.: In Theorem 2.1, the assumption that \(B(x,r)\subset{\mathbb{H}}\) is needed. See [33, Remark 2.9].
Before a proof of Theorem 2.1, we recall some lemmas from [33, 15].
Lemma 2.3 ([33, Lemma 2.3]).: For \(\varepsilon>(n-1)/2\), set
\[I(x) = \int_{B(x,x_{n})}y_{n}^{\varepsilon-n}dy.\]
Then there exists a constant \(C>0\) such that
\[I(x) \leq Cx_{n}^{\varepsilon}.\]
Remark 2.4.: If \(f(y)=|y_{n}|^{-1}\), then
\[M_{\nu}f(x)=\infty\]
for all \(x\in\mathbb{R}^{n}\). However,
\[M_{\mathbb{H},1}f(x)\leq C\]
for \(x\in\mathbb{H}\), which is shown by Lemma 2.3.
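For the reader's convenience, here is a short sketch of the second claim (assuming \(n\geq 2\)). If \(B(x,r)\subset\mathbb{H}\) and \(0<r\leq x_{n}/2\), then \(y_{n}\geq x_{n}/2\) on \(B(x,r)\), so that
\[\frac{r}{|B(x,r)|}\int_{B(x,r)}y_{n}^{-1}\,dy\leq\frac{2r}{x_{n}}\leq 1,\]
while for \(x_{n}/2<r\leq x_{n}\), Lemma 2.3 with \(\varepsilon=n-1>(n-1)/2\) gives
\[\frac{r}{|B(x,r)|}\int_{B(x,r)}y_{n}^{-1}\,dy\leq Cx_{n}^{1-n}\int_{B(x,x_{n})}y_{n}^{-1}\,dy\leq Cx_{n}^{1-n}x_{n}^{n-1}=C,\]
so that \(M_{\mathbb{H},1}f(x)\leq C\) for all \(x\in\mathbb{H}\).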
Lemma 2.5 ([33, Lemma 2.4]).: For \(\varepsilon<(n-1)/2\), set
\[J(y) = \int_{\{x\in\mathbb{H}:|x-y|<x_{n}\}}x_{n}^{\varepsilon-n}dx.\]
Then there exists a constant \(C>0\) such that
\[J(y) \leq Cy_{n}^{\varepsilon}.\]
We know the following result on the boundedness of \(M_{\nu}\).
Lemma 2.6 ([15, Lemma 4], cf. [2, Corollary 2]).: Let \(1/p^{*}=1/p-\nu/\sigma>0\). Then there exists a constant \(C>0\) such that
\[\sup_{r>0}\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}\left(M_{\nu}g( x)\right)^{p^{*}}dx \leq C\]
when \(\sup_{r>0,x\in\mathbb{H}}\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x, r)}|g(y)|^{p}dy\leq 1\).
Lemma 2.7 ([33, Lemma 2.7]).: Set
\[K(x) = \frac{x_{n}^{\nu}}{|B(x,x_{n})|}\int_{B(x,x_{n})}|f(y)|dy.\]
Suppose \(1/p^{*}=1/p-\nu/\sigma>0\), \(0<\sigma<(n+1)/2\) and \(\beta<(n+1)/(2p^{\prime})\). Then there exists a constant \(C>0\) such that
\[\sup_{0<r<x_{n}}\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}\left(K( z)z_{n}^{\beta}\right)^{p^{*}}dz\leq C\]
when \(\sup_{x\in\mathbb{H}}\frac{x_{n}^{\sigma}}{|B(x,x_{n})|}\int_{B (x,x_{n})}\left(|f(y)|y_{n}^{\beta}\right)^{p}dy\leq 1\).
Let us prove Theorem 2.1.
Proof of Theorem 2.1.: Let \(f\) be a measurable function on \(\mathbb{H}\) such that
\[\sup_{r>0,x\in\mathbb{H}}\frac{r^{\sigma}}{|B(x,r)|}\int_{\mathbb{H}\cap B(x,r)} \Phi(y,|f(y)|y_{n}^{\beta})dy\leq 1.\]
First we see from [33, Theorem 2.1] that
\[\sup_{\{r>0:B(x,r)\subset\mathbb{H}\}}\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)} (z_{n}^{\beta}M_{\mathbb{H},\nu}f(z))^{p^{*}}dz\leq C.\]
Next we show that
\[\sup_{\{r>0:B(x,r)\subset\mathbb{H}\}}\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)} (b(z)z_{n}^{\beta}M_{\mathbb{H},\nu}f(z))^{q^{*}}dz\leq C.\]
Note that
\[\frac{r^{\nu}}{|B(x,r)|}b(x)\int_{B(x,r)}|f(y)|dy\] \[= \frac{r^{\nu}}{|B(x,r)|}\int_{B(x,r)}\{b(x)-b(y)\}|f(y)|dy+\frac{ r^{\nu}}{|B(x,r)|}\int_{B(x,r)}b(y)|f(y)|dy\] \[\leq C\frac{r^{\nu+\theta}}{|B(x,r)|}\int_{B(x,r)}|f(y)|dy+\frac{r^{ \nu}}{|B(x,r)|}\int_{B(x,r)}b(y)|f(y)|dy,\]
so that
\[L_{1}(x) = \sup_{0<r<x_{n}/2}\frac{r^{\nu}}{|B(x,r)|}b(x)\int_{B(x,r)}|f(y)|dy\] \[\leq \sup_{0<r<x_{n}/2}Cx_{n}^{-\beta}\frac{r^{\nu+\theta}}{|B(x,r)|} \int_{B(x,r)}|f(y)|y_{n}^{\beta}dy\] \[+\sup_{0<r<x_{n}/2}Cx_{n}^{-\beta}\frac{r^{\nu}}{|B(x,r)|}\int_{B (x,r)}b(y)|f(y)|y_{n}^{\beta}dy\] \[\leq Cx_{n}^{-\beta}M_{\nu+\theta}g(x)+Cx_{n}^{-\beta}M_{\nu}h(x),\]
where \(g(y)=|f(y)|y_{n}^{\beta}\chi_{\mathbb{H}}(y)\) and \(h(y)=b(y)|f(y)|y_{n}^{\beta}\chi_{\mathbb{H}}(y)\). We have
\[L_{2}(x) = \sup_{x_{n}/2<r<x_{n}}\frac{r^{\nu}}{|B(x,r)|}b(x)\int_{B(x,r)}|f (y)|dy\] \[\leq C\frac{x_{n}^{\nu+\theta}}{|B(x,x_{n})|}\int_{B(x,x_{n})}|f(y)| dy+C\frac{x_{n}^{\nu}}{|B(x,x_{n})|}\int_{B(x,x_{n})}b(y)|f(y)|dy\] \[= C\{L_{21}(x)+L_{22}(x)\}.\]
Hence
\[b(z)z_{n}^{\beta}M_{\mathbb{H},\nu}f(z)\leq CM_{\nu+\theta}g(z)+CM_{\nu}h(z)+ CL_{21}(z)z_{n}^{\beta}+CL_{22}(z)z_{n}^{\beta} \tag{2.1}\]
for \(z\in B(x,r)\). By Lemma 2.7, we obtain
\[\sup_{0<r<x_{n}}\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}\left(L_{21}(z)z_{n}^{ \beta}\right)^{q^{*}}dz\leq C\]
and
\[\sup_{0<r<x_{n}}\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}\left(L_{22}(z)z_{n}^{ \beta}\right)^{q^{*}}dz\leq C.\]
By (2.1) and Lemma 2.6, we obtain for \(r>0\) such that \(B(x,r)\subset\mathbb{H}\)
\[\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}(b(z)z_{n}^{\beta}M_{ \mathbb{H},\nu}f(z))^{q^{*}}dz\] \[\leq C\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}(M_{\nu+\theta}g(z))^{q^ {*}}dz+C\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}(M_{\nu}h(z))^{q^{*}}dz\] \[+C\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}\left(L_{21}(z)z_{n}^{ \beta}\right)^{q^{*}}dz+C\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}\left(L_{22} (z)z_{n}^{\beta}\right)^{q^{*}}dz\] \[\leq C.\]
Thus the proof is completed.
## 3 Sobolev's inequality
In this section we are concerned with the Riesz potential of order \(\alpha\) in \(\mathbb{H}\) defined by
\[I_{\mathbb{H},\alpha}f(x)=\int_{B(x,x_{n})}|x-y|^{\alpha-n}f(y)dy,\]
where \(0<\alpha<n\).
For \(0<r<x_{n}/2\), we see that
\[J_{1}(x)=\int_{B(x,x_{n}/2)}|x-y|^{\alpha-n}|f(y)|dy\leq Cx_{n}^{-\beta}\int_ {\mathbb{H}}|x-y|^{\alpha-n}g(y)dy, \tag{3.1}\]
where \(g(y)=|f(y)|y_{n}^{\beta}\chi_{\mathbb{H}}(y)\), as before.
For \(0<\alpha<n\) and a locally integrable function \(g\) on \(\mathbb{R}^{n}\), we define the usual Riesz potential \(I_{\alpha}g\) of order \(\alpha\) by
\[I_{\alpha}g(x)=\int_{\mathbb{H}}|x-y|^{\alpha-n}g(y)\,dy.\]
The following result is due to Adams [1].
Lemma 3.1 (_Sobolev's inequality for Morrey spaces_).: Let \(1/p^{*}=1/p-\alpha/\sigma>0\). Suppose \(\alpha p<\sigma\leq n\). Then there exists a constant \(C>0\) such that
\[\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}|I_{\alpha}g(z)|^{p^{*}}\ dz\leq C\]
for all \(x\in\mathbb{R}^{n}\), \(r>0\) and measurable functions \(g\) on \(\mathbb{R}^{n}\) with
\[\sup_{r>0,x\in\mathbb{R}^{n}}\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}|g(y)|^{ p}dy\leq 1.\]
Lemma 3.2. Suppose
\[\sup_{r>0,x\in\mathbb{H}}\frac{r^{\sigma}}{|B(x,r)|}\int_{\mathbb{H}\cap B(x,r)} \left(|f(y)|y_{n}^{\beta}\right)^{p}dy\leq 1.\]
If \(1/p^{*}=1/p-\alpha/\sigma>0\), then there exists a constant \(C>0\) such that
\[\sup_{\{r>0:B(x,r)\subset\mathbb{H}\}}\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)} \left(z_{n}^{\beta}J_{1}(z)\right)^{p^{*}}dz\leq C.\]
Proof.: Set \(g(y)=|f(y)|y_{n}^{\beta}\chi_{\mathbb{H}}(y)\) for simplicity. By (3.1), we have
\[z_{n}^{\beta}J_{1}(z)\leq CI_{\alpha}g(z),\]
so that by Lemma 3.1
\[\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}\left(z_{n}^{\beta}J_{1}(z)\right)^{p ^{*}}dz\leq C\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}|I_{\alpha}g(z)|^{p^{*}} \ dz\leq C,\]
as required.
Next let us treat
\[J_{2}(x) = \int_{B(x,x_{n})\setminus B(x,x_{n}/2)}|x-y|^{\alpha-n}|f(y)|dy \tag{3.2}\] \[\leq Cx_{n}^{\alpha-n}\int_{B(x,x_{n})\setminus B(x,x_{n}/2)}|f(y)|dy.\]
Here we prepare the following lemma.
Lemma 3.3. Suppose \(1/p^{*}=1/p-\alpha/\sigma>0\), \(\beta<(n+1)/(2p^{\prime})\) and \(0<\sigma<(n+1)/2\). Let \(f\) be a measurable function on \(\mathbb{H}\) satisfying
\[\sup_{r>0,x\in\mathbb{H}}\frac{r^{\sigma}}{|B(x,r)|}\int_{\mathbb{H}\cap B(x, r)}\left(|f(y)|y_{n}^{\beta}\right)^{p}dy\leq 1.\]
Then there exists a constant \(C>0\) such that
\[\sup_{\{r>0:B(x,r)\subset\mathbb{H}\}}\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r) }\left(z_{n}^{\beta}J_{2}(z)\right)^{p^{*}}dz\leq C.\]
Proof.: By Holder's inequality and Lemma 2.3, we have
\[\int_{B(x,x_{n})}|f(y)|dy \leq \left(\int_{B(x,x_{n})}\left(y_{n}^{\beta}|f(y)|\right)^{p}dy \right)^{1/p}\left(\int_{B(x,x_{n})}y_{n}^{-\beta p^{\prime}}dy\right)^{1/p^{ \prime}} \tag{3.3}\] \[\leq Cx_{n}^{-\beta+n/p^{\prime}}\left(\int_{B(x,x_{n})}\left(y_{n}^ {\beta}|f(y)|\right)^{p}dy\right)^{1/p}\] \[\leq Cx_{n}^{-\beta+n-\sigma/p}\]
since \(-\beta p^{\prime}+n>(n-1)/2\). By (3.2), we obtain
\[x_{n}^{\beta}J_{2}(x) \leq Cx_{n}^{\alpha+\beta-n}\int_{B(x,x_{n})}|f(y)|dy\leq Cx_{n}^{ \alpha-\sigma/p}=Cx_{n}^{-\sigma/p^{*}}.\]
If \(0<r<x_{n}/2\), then
\[\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}\left(z_{n}^{\beta}J_{2}(z) \right)^{p^{*}}dz \leq C\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}z_{n}^{-\sigma}dz\] \[\leq C\left(\frac{r}{x_{n}}\right)^{\sigma}\leq C\]
and if \(x_{n}/2\leq r<x_{n}\), then Lemma 2.3 gives
\[\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}\left(z_{n}^{\beta}J_{2}( z)\right)^{p^{*}}dz \leq C\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}z_{n}^{-\sigma}dz\] \[\leq C\frac{x_{n}^{\sigma}}{|B(x,x_{n})|}\int_{B(x,x_{n})}z_{n}^{- \sigma}dz\leq C\]
when \(\sigma<(n+1)/2\).
Theorem 3.4.: Let \(1/p^{*}=1/p-\alpha/\sigma>0\). Suppose \(\beta<(n+1)/(2p^{\prime})\) and \(0<\sigma<(n+1)/2\). Then there exists a constant \(C>0\) such that
\[\sup_{\{r>0:B(x,r)\subset\mathbb{H}\}}\frac{r^{\sigma}}{|B(x,r)|} \int_{B(x,r)}\left(z_{n}^{\beta}I_{\mathbb{H},\alpha}f(z)\right)^{p^{*}}dz \leq C\]
when \(f\geq 0\) such that \(\sup_{r>0,x\in\mathbb{H}}\frac{r^{\sigma}}{|B(x,r)|}\int_{\mathbb{H} \cap B(x,r)}\left(|f(y)|y_{n}^{\beta}\right)^{p}dy\leq 1\).
Proof.: Let \(f\) be a measurable function on \(\mathbb{H}\) such that
\[\sup_{r>0,x\in\mathbb{H}}\frac{r^{\sigma}}{|B(x,r)|}\int_{\mathbb{H}\cap B(x, r)}\left(|f(y)|y_{n}^{\beta}\right)^{p}dy\leq 1.\]
By Lemmas 3.2 and 3.3, we obtain for \(r>0\) such that \(B(x,r)\subset\mathbb{H}\)
\[\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}\left(z_{n}^{\beta}I_{ \mathbb{H},\alpha}f(z)\right)^{p^{*}}dz\] \[\leq C\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}\left(z_{n}^{\beta}J_{1 }(z)\right)^{p^{*}}dz+C\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}\left(z_{n}^{ \beta}J_{2}(z)\right)^{p^{*}}dz\] \[\leq C,\]
as required.
Remark 3.5.: If \(f(y)=|y_{n}|^{-1}(1+|y|)^{-m}\chi_{\mathbb{H}}(y)\), then
1. \(\sup_{r>0,x\in\mathbb{H}}\frac{r^{\sigma}}{|B(x,r)|}\int_{\mathbb{H}\cap B(x, r)}\left(f(y)y_{n}^{\beta}\right)^{p}dy<\infty\) when \((\beta-1)p+1>0\) and \(0\leq\sigma+(\beta-1)p<\sigma+mp-n\);
2. \(I_{\mathbb{H},\alpha}f(x)\geq C\int_{B(x,x_{n}/2)}|x-y|^{\alpha-n}f(y)dy\geq Cx _{n}^{\alpha-1}(1+|x|)^{-m}\) for \(x\in\mathbb{H}\);
3. \(\frac{x_{n}^{\sigma_{1}}}{|B(x,x_{n})|}\int_{B(x,x_{n})}\left(z_{n}^{\beta}I_{ \mathbb{H},\alpha}f(z)\right)^{p^{*}}dz\geq C\frac{x_{n}^{\sigma_{1}}}{|B(x,x_{ n})|}\int_{B(x,x_{n})}z_{n}^{(\beta+\alpha-1)p^{*}}dz\) \(\rightarrow\infty\) as \(x_{n}\to 0\) and \(x\in\mathbb{H}\cap B(0,1)\) when \(\sigma_{1}+(\beta+\alpha-1)p^{*}<0\), and thus the sharpness of exponent \(\sigma\) is seen in Theorem 3.4 when \(-\sigma/p\leq\ \beta-1<-\sigma_{1}/p-(\sigma-\sigma_{1})\alpha/\sigma\);
4. \(\int_{\mathbb{H}\cap B(0,1)}\left(x_{n}^{\beta}I_{\mathbb{H},\alpha}f(x) \right)^{p^{*}}dx=\infty\) when \((\beta+\alpha-1)p^{*}+1\leq 0\) or \((-1/p<)\,\beta-1\leq-1/p+(\sigma^{-1}-1)\alpha\).
Hence, in Theorem 3.4, the assumption that \(B(x,r)\subset\mathbb{H}\) is needed when \(1/p^{\prime}<\beta\leq 1/p^{\prime}+(\sigma^{-1}-1)\alpha\) and \(0<(\sigma^{-1}-1)\alpha<(n-1)/(2p^{\prime})\).
## 4 Sobolev's inequality for double phase functionals
In this section, we are concerned with Sobolev's inequality for \(I_{\mathbb{H},\alpha}f\) of functions in weighted Morrey spaces of the double phase functional \(\Phi(x,t)\).
Theorem 4.1.: Let \(1/q=1/p-\theta/\sigma\), \(1/p^{*}=1/p-\alpha/\sigma>0\) and \(1/q^{*}=1/q-\alpha/\sigma>0\). Suppose \(\beta<(n+1)/(2p^{\prime})\) and \(0<\sigma<(n+1)/2\). Then there exists a constant \(C>0\) such that
\[\sup_{\{r>0:B(x,r)\subset\mathbb{H}\}}\frac{r^{\sigma}}{|B(x,r)|} \int_{B(x,r)}\Phi^{*}\left(z,z_{n}^{\beta}I_{\mathbb{H},\alpha}f(z)\right)dz \leq C\]
when \(f\geq 0\) such that \(\sup_{r>0,x\in\mathbb{H}}\frac{r^{\sigma}}{|B(x,r)|}\int_{\mathbb{H}\cap B(x, r)}\Phi\left(y,f(y)y_{n}^{\beta}\right)dy\leq 1\).
Proof.: Let \(f\) be a nonnegative measurable function on \(\mathbb{H}\) such that
\[\sup_{r>0,x\in\mathbb{H}}\frac{r^{\sigma}}{|B(x,r)|}\int_{\mathbb{H}\cap B(x, r)}\Phi(y,f(y)y_{n}^{\beta})dy\leq 1.\]
First we see from Theorem 3.4 that
\[\sup_{\{r>0:B(x,r)\subset\mathbb{H}\}}\frac{r^{\sigma}}{|B(x,r)|} \int_{B(x,r)}(z_{n}^{\beta}I_{\mathbb{H},\alpha}f(z))^{p^{*}}dz\leq C.\]
Next we show that
\[\sup_{\{r>0:B(x,r)\subset\mathbb{H}\}}\frac{r^{\sigma}}{|B(x,r)|} \int_{B(x,r)}(b(z)z_{n}^{\beta}I_{\mathbb{H},\alpha}f(z))^{q^{*}}dz\leq C.\]
Note that
\[b(x)I_{\mathbb{H},\alpha}f(x)\] \[= b(x)\int_{B(x,x_{n}/2)}|x-y|^{\alpha-n}f(y)dy+b(x)\int_{B(x,x_{n} )\setminus B(x,x_{n}/2)}|x-y|^{\alpha-n}f(y)dy\] \[= T_{1}(x)+T_{2}(x).\]
Set \(g(y)=|f(y)|y_{n}^{\beta}\chi_{\mathbb{H}}(y)\) and \(h(y)=b(y)f(y)y_{n}^{\beta}\chi_{\mathbb{H}}(y)\) for simplicity. We have
\[T_{1}(x) = \int_{B(x,x_{n}/2)}|x-y|^{\alpha-n}\{b(x)-b(y)\}f(y)dy+\int_{B(x,x_ {n}/2)}|x-y|^{\alpha-n}b(y)f(y)dy\] \[\leq C\int_{B(x,x_{n}/2)}|x-y|^{\alpha-n+\theta}f(y)dy+\int_{B(x,x_{n} /2)}|x-y|^{\alpha-n}b(y)f(y)dy\] \[\leq Cx_{n}^{-\beta}\int_{B(x,x_{n}/2)}|x-y|^{\alpha-n+\theta}f(y)y_{n }^{\beta}dy\] \[+Cx_{n}^{-\beta}\int_{B(x,x_{n}/2)}|x-y|^{\alpha-n}b(y)f(y)y_{n} ^{\beta}dy\] \[\leq Cx_{n}^{-\beta}I_{\alpha+\theta}g(x)+Cx_{n}^{-\beta}I_{\alpha}h (x)\]
and
\[T_{2}(x) \leq C\int_{B(x,x_{n})\setminus B(x,x_{n}/2)}|x-y|^{\alpha-n+\theta}f (y)dy\] \[+\int_{B(x,x_{n})\setminus B(x,x_{n}/2)}|x-y|^{\alpha-n}b(y)f(y)dy\] \[= CT_{21}(x)+T_{22}(x).\]
Hence
\[b(z)z_{n}^{\beta}I_{\mathbb{H},\alpha}f(z)\leq CI_{\alpha+\theta}g(z)+CI_{ \alpha}h(z)+CT_{21}(z)z_{n}^{\beta}+T_{22}(z)z_{n}^{\beta} \tag{4.1}\]
for \(z\in B(x,r)\). By Lemma 3.3, we obtain
\[\sup_{0<r<x_{n}}\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}\left(T_{21}(z)z_{n}^{ \beta}\right)^{q^{*}}dz\leq C\]
and
\[\sup_{0<r<x_{n}}\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}\left(T_{22}(z)z_{n}^{ \beta}\right)^{q^{*}}dz\leq C.\]
By (4.1) and Lemma 3.1, we obtain for \(r>0\) such that \(B(x,r)\subset\mathbb{H}\)
\[\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}(b(z)z_{n}^{\beta}I_{ \mathbb{H},\alpha}f(z))^{q^{*}}dz\] \[\leq C\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}(I_{\alpha+\theta}g(z)) ^{q^{*}}dz+C\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}(I_{\alpha}h(z))^{q^{*}}dz\] \[+C\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}\left(T_{21}(z)z_{n}^{ \beta}\right)^{q^{*}}dz+\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}\left(T_{22}( z)z_{n}^{\beta}\right)^{q^{*}}dz\] \[\leq C.\]
Thus the proof is completed.
## 5 Weighted Campanato spaces for the double phase functionals
In this section, we are concerned with Sobolev type inequalities for \(I_{\mathbb{H},\alpha}f\) in the Campanato setting.
For a measurable function \(f\) on \(\mathbb{H}\), \(x\in\mathbb{H}\) and \(0<r<x_{n}\), we set
\[f_{B(x,r)}=\frac{1}{|B(x,r)|}\int_{B(x,r)}f(y)\,dy.\]
Set \(g=f\chi_{B(x,x_{n})}\) and
\[I_{\alpha}g(z)=\int_{\mathbb{H}}|z-y|^{\alpha-n}g(y)\,dy.\]
Theorem 5.1.: Suppose \(\beta<(n+1)/(2p^{\prime})\) and \(\sigma=\alpha p<(n+1)/2\). If \(0<\varepsilon<\min\{1,\alpha\}\) and \(1/p_{1}=1/p-(\alpha-\varepsilon)/\sigma=\varepsilon/\sigma\) and \(\beta p_{1}>-(n+1)/2\), then there exists a constant \(C>0\) such that
\[\sup_{\{r>0:B(x,r)\subset\mathbb{H}\}}\frac{1}{|B(x,r)|}\int_{B(x,r)} \left(z_{n}^{\beta}\left|I_{\mathbb{H},\alpha}f(z)-(I_{\alpha}g)_{B(x,r)} \right|\right)^{p_{1}}dz \leq C\]
when \(g=f\chi_{B(x,x_{n})}\) and
\[\sup_{r>0,x\in\mathbb{H}}\frac{r^{\sigma}}{|B(x,r)|}\int_{\mathbb{H}\cap B(x, r)}\left(|f(y)|y_{n}^{\beta}\right)^{p}dy\leq 1.\]
Proof.: Let \(f\) be a nonnegative measurable function on \(\mathbb{H}\) such that
\[\sup_{r>0,x\in\mathbb{H}}\frac{r^{\sigma}}{|B(x,r)|}\int_{\mathbb{H}\cap B(x, r)}\left(|f(y)|y_{n}^{\beta}\right)^{p}dy\leq 1.\]
Let \(0<\varepsilon<\min\{1,\alpha\}\). For \(0<r<x_{n}/4\) and \(z\in B(x,r)\), we see that
\[I_{\mathbb{H},\alpha}f(z)-(I_{\alpha}g)_{B(x,r)}\] \[= \frac{1}{|B(x,r)|}\int_{B(z,z_{n})}\left(\int_{B(x,r)}\{|z-y|^{ \alpha-n}-|w-y|^{\alpha-n}\}dw\right)f(y)dy\] \[+\frac{1}{|B(x,r)|}\int_{B(x,r)}\left(\int_{B(z,z_{n})}|w-y|^{ \alpha-n}f(y)dy-\int_{B(x,x_{n})}|w-y|^{\alpha-n}f(y)dy\right)dw\] \[= I_{1}+I_{2}.\]
Note that
\[|I_{1}| \leq C\int_{B(x,2r)}|z-y|^{\alpha-n}|f(y)|dy+Cr\int_{B(z,z_{n}) \setminus B(x,2r)}|z-y|^{\alpha-n-1}|f(y)|dy\] \[\leq C\int_{B(z,3r)\cap B(z,z_{n})}|z-y|^{\alpha-n}|f(y)|dy+Cr\int_{B (z,z_{n})\setminus B(z,r)}|z-y|^{\alpha-n-1}|f(y)|dy\] \[\leq Cr^{\varepsilon}\int_{B(z,3r)\cap B(z,z_{n})}|z-y|^{\alpha- \varepsilon-n}|f(y)|dy+Cr^{\varepsilon}\int_{B(z,z_{n})\setminus B(z,r)}|z-y| ^{\alpha-\varepsilon-n}|f(y)|dy\] \[\leq Cr^{\varepsilon}I_{\mathbb{H},\alpha-\varepsilon}|f|(z)\]
since \(B(x,2r)\subset B(z,z_{n})\). Moreover,
\[\int_{B(z,z_{n})}|w-y|^{\alpha-n}f(y)dy-\int_{B(x,x_{n})}|w-y|^{ \alpha-n}f(y)dy\] \[= \int_{B(z,z_{n})\setminus B(x,x_{n})}|w-y|^{\alpha-n}f(y)dy-\int_ {B(x,x_{n})\setminus B(z,z_{n})}|w-y|^{\alpha-n}f(y)dy,\]
so that by (3.3)
\[|I_{2}| \leq Cx_{n}^{\alpha-n}\int_{B(z,z_{n})\setminus B(x,x_{n})}|f(y)|dy+ Cx_{n}^{\alpha-n}\int_{B(x,x_{n})\setminus B(z,z_{n})}|f(y)|dy\] \[\leq Cz_{n}^{\alpha-n}\int_{B(z,z_{n})}|f(y)|dy+Cx_{n}^{\alpha-n} \int_{B(x,x_{n})}|f(y)|dy\] \[\leq Cz_{n}^{-\beta}+Cx_{n}^{-\beta}\] \[\leq Cz_{n}^{-\beta}\]
since \(\sigma=\alpha p\). Hence we find
\[\left|I_{\mathbb{H},\alpha}f(z)-(I_{\alpha}g)_{B(x,r)}\right| \leq Cr^{\varepsilon}I_{\mathbb{H},\alpha}|f|(z)+Cz_{n}^{-\beta} \tag{5.1}\]
for \(0<r<x_{n}/4\) and \(z\in B(x,r)\).
For \(x_{n}/4<r<x_{n}\) and \(z\in B(x,r)\), we see from (3.3) that
\[\left|I_{\mathbb{H},\alpha}f(z)-(I_{\alpha}g)_{B(x,r)}\right| \tag{5.2}\] \[\leq C\int_{B(z,z_{n})}|z-y|^{\alpha-n}|f(y)|dy+Cx_{n}^{\alpha-n} \int_{B(x,x_{n})}|f(y)|dy\] \[\leq Cr^{\varepsilon}I_{\mathbb{H},\alpha-\varepsilon}|f|(z)+Cx_{n}^ {-\beta}.\]
By (5.1), (5.2), Lemma 2.3 and Theorem 3.4, we obtain for \(0<r<x_{n}\)
\[\frac{1}{|B(x,r)|}\int_{B(x,r)}\left(z_{n}^{\beta}\left|I_{ \mathbb{H},\alpha}f(z)-(I_{\alpha}g)_{B(x,r)}\right|\right)^{p_{1}}dz\] \[\leq C\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}\left(z_{n}^{\beta}I_{ \mathbb{H},\alpha-\varepsilon}|f|(z)\right)^{p_{1}}dz+C\] \[\leq C\]
since \(\varepsilon p_{1}=\sigma\), when \(\beta p_{1}>-(n+1)/2\).
Thus this theorem is proved.
Our second aim in this section is to establish the following result in the double phase setting.
Theorem 5.2.: Let \(1/q=1/p-\theta/\sigma\) and \(\alpha q=\sigma=(\alpha+\theta)p\). For \(0<\varepsilon<\min\{1,\alpha\}\), set \(1/q_{1}=1/q-(\alpha-\varepsilon)/\sigma=\varepsilon/\sigma>0\). Suppose \(\beta<(n+1)/(2p^{\prime})\), \(0<\sigma<(n+1)/2\) and \(\beta q_{1}>-(n+1)/2\). Then there exists a constant \(C>0\) such that
\[\sup_{\{r>0:B(x,r)\subset\mathbb{H}\}}\frac{1}{|B(x,r)|}\int_{B(x,r)}\left(z_{ n}^{\beta}b(z)\left|I_{\mathbb{H},\alpha}f(z)-(I_{\alpha}g)_{B(x,r)}\right| \right)^{q_{1}}dz\leq C\]
when \(g=f\chi_{B(x,x_{n})}\) and
\[\sup_{r>0,x\in\mathbb{H}}\frac{r^{\sigma}}{|B(x,r)|}\int_{\mathbb{H}\cap B(x, r)}\Phi\left(y,|f(y)|y_{n}^{\beta}\right)dy\leq 1.\]
Proof.: Let \(f\) be a measurable function on \(\mathbb{H}\) such that
\[\sup_{r>0,x\in\mathbb{H}}\frac{r^{\sigma}}{|B(x,r)|}\int_{\mathbb{H}\cap B(x,r)} \Phi\left(y,|f(y)|y_{n}^{\beta}\right)dy\leq 1.\]
Let \(0<\varepsilon<\min\{1,\alpha\}\). Then, for \(0<r<x_{n}/4\) and \(z\in B(x,r)\), we see from the proof of Theorem 5.1 that
\[\left|I_{\mathbb{H},\alpha}f(z)-(I_{\alpha}g)_{B(x,r)}\right| \leq Cr^{\varepsilon}\int_{B(z,3r)\cap B(z,z_{n})}|z-y|^{\alpha- \varepsilon-n}|f(y)|dy\] \[\ \ +Cr^{\varepsilon}\int_{B(z,z_{n})\setminus B(z,r)}|z-y|^{ \alpha-\varepsilon-n}|f(y)|dy+|I_{2}|\] \[= C(U_{1}+U_{2})+|I_{2}|.\]
We have by (3.3)
\[b(z)|I_{2}| \leq Cx_{n}^{\alpha-n+\theta}\int_{B(z,z_{n})\setminus B(x,z_{n})}|f (y)|dy+Cx_{n}^{\alpha-n}\int_{B(z,z_{n})\setminus B(x,z_{n})}|b(y)f(y)|dy\] \[\ \ +Cz_{n}^{\alpha-n+\theta}\int_{B(x,x_{n})\setminus B(z,z_{n})}| f(y)|dy+Cz_{n}^{\alpha-n}\int_{B(x,x_{n})\setminus B(z,z_{n})}|b(y)f(y)|dy\] \[\leq Cx_{n}^{\alpha-n+\theta}z_{n}^{-\beta+n-\sigma/p}+Cz_{n}^{ \alpha-n}z_{n}^{-\beta+n-\sigma/q}\] \[\ \ +Cz_{n}^{\alpha-n+\theta}x_{n}^{-\beta+n-\sigma/p}+Cz_{n}^{ \alpha-n}x_{n}^{-\beta+n-\sigma/q}\] \[\leq Cz_{n}^{-\beta}\]
since \(\sigma=(\alpha+\theta)p=\alpha q\). Hence we find
\[b(z)\left|I_{\mathbb{H},\alpha}f(z)-(I_{\alpha}g)_{B(x,r)}\right| \leq Cb(z)(U_{1}+U_{2})+Cz_{n}^{-\beta}. \tag{5.3}\]
Note that
\[b(z)U_{1} = r^{\varepsilon}\int_{B(z,3r)\cap B(z,z_{n})}|z-y|^{\alpha- \varepsilon-n}\{b(z)-b(y)\}|f(y)|dy\] \[+r^{\varepsilon}\int_{B(z,3r)\cap B(z,z_{n})}|z-y|^{\alpha- \varepsilon-n}b(y)|f(y)|dy\] \[\leq Cr^{\varepsilon}\int_{B(z,3r)\cap B(z,z_{n})}|z-y|^{\alpha- \varepsilon-n+\theta}|f(y)|dy\] \[+r^{\varepsilon}\int_{B(z,3r)\cap B(z,z_{n})}|z-y|^{\alpha- \varepsilon-n}b(y)|f(y)|dy\] \[\leq Cr^{\varepsilon}I_{\mathbb{H},\alpha-\varepsilon+\theta}|f|(z)+r^ {\varepsilon}I_{\mathbb{H},\alpha-\varepsilon}(b|f|)(z).\]
On the other hand,
\[b(z)U_{2} = r^{\varepsilon}\int_{B(z,z_{n})\setminus B(z,r)}|z-y|^{\alpha- \varepsilon-n}\{b(z)-b(y)\}|f(y)|dy\] \[+r^{\varepsilon}\int_{B(z,z_{n})\setminus B(z,r)}|z-y|^{\alpha- \varepsilon-n}b(y)|f(y)|dy\] \[\leq Cr^{\varepsilon}\int_{B(z,z_{n})\setminus B(z,r)}|z-y|^{\alpha- \varepsilon-n+\theta}|f(y)|dy\] \[+r^{\varepsilon}\int_{B(z,z_{n})\setminus B(z,r)}|z-y|^{\alpha- \varepsilon-n}b(y)|f(y)|dy\] \[\leq Cr^{\varepsilon}I_{\mathbb{H},\alpha-\varepsilon+\theta}|f|(z)+r^ {\varepsilon}I_{\mathbb{H},\alpha-\varepsilon}(b|f|)(z).\]
Hence we find by (5.3)
\[b(z)\left|I_{\mathbb{H},\alpha}f(z)-(I_{\alpha}g)_{B(x,r)}\right| \tag{5.4}\] \[\leq C\left\{r^{\varepsilon}I_{\mathbb{H},\alpha-\varepsilon+\theta}|f| (z)+r^{\varepsilon}I_{\mathbb{H},\alpha-\varepsilon}(b|f|)(z)+z_{n}^{-\beta} \right\}.\]
For \(x_{n}/4<r<x_{n}\), we see from the proof of Theorem 5.1 that for \(z\in B(x,r)\)
\[b(z)\left|I_{\mathbb{H},\alpha}f(z)-(I_{\alpha}g)_{B(x,r)}\right| \tag{5.5}\] \[\leq C\int_{B(z,z_{n})}|z-y|^{\alpha-n+\theta}|f(y)|dy+\int_{B(z,z_{n} )}|z-y|^{\alpha-n}b(y)|f(y)|dy+Cx_{n}^{-\beta}\] \[\leq C\left\{r^{\varepsilon}I_{\mathbb{H},\alpha-\varepsilon+\theta}| f|(z)+r^{\varepsilon}I_{\mathbb{H},\alpha-\varepsilon}(b|f|)(z)+x_{n}^{-\beta} \right\}.\]
By (5.4), (5.5), Lemma 2.3 and Theorem 3.4 we have
\[\frac{1}{|B(x,r)|}\int_{B(x,r)}\left(z_{n}^{\beta}b(z)\left|I_{ \mathbb{H},\alpha}f(z)-(I_{\mathbb{H},\alpha}f)_{B(x,r)}\right|\right)^{q_{1} }dz\] \[\leq C\bigg{\{}\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}|z_{n}^{\beta} I_{\mathbb{H},\alpha-\varepsilon+\theta}f(z)|^{q_{1}}dz\] \[+\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}|z_{n}^{\beta}I_{\mathbb{ H},\alpha-\varepsilon}(bf)(z)|^{q_{1}}dz+1\bigg{\}}\] \[\leq C\]
since \(\varepsilon q_{1}=\sigma\), when \(\beta q_{1}>-(n+1)/2\).
Thus the proof is completed.
## 6 Sobolev functions
In this section, we are concerned with Sobolev type inequalities for Sobolev functions in the Campanato setting.
First let us show the following result.
Lemma 6.1.: If \(u\in C^{1}(\mathbb{H})\) and \(B(x,r)\subset\mathbb{H}\), then for \(z\in B(x,r)\)
\[\left|u(z)-u_{B(x,r)}\right|\leq C\int_{B(x,r)}|z-w|^{1-n}|\nabla u(w)|dw.\]
Proof.: By the fundamental theorem of calculus we find
\[\left|u(z)-u_{B(x,r)}\right|\] \[= \left|\frac{1}{|B(x,r)|}\int_{B(x,r)}\left\{u(z)-u(y)\right\}dy\right|\] \[= \left|\frac{1}{|B(x,r)|}\int_{B(x,r)}\left\{\int_{0}^{1}\frac{d}{dt}u(z+t(y-z))\,dt\right\}dy\right|\] \[\leq \frac{1}{|B(x,r)|}\int_{B(x,r)}\left\{\int_{0}^{1}|y-z||\nabla u(z+t(y-z))|\,dt\right\}dy\] \[\leq 2r\frac{1}{|B(x,r)|}\int_{0}^{1}\left\{\int_{B(x,r)}|\nabla u(z+t(y-z))|\,dy\right\}dt.\]
If \(w=z+t(y-z)\), then \(|w-z|=t|y-z|\leq 2rt\), so that
\[\left|u(z)-u_{B(x,r)}\right| \leq 2r\frac{1}{|B(x,r)|}\int_{B(x,r)}|\nabla u(w)|\left\{\int_{|w-z|/(2 r)}^{1}t^{-n}dt\right\}dw\] \[\leq C\int_{B(x,r)}|z-w|^{1-n}|\nabla u(w)|dw,\]
as required.
Theorem 6.2.: Suppose \(1/p^{*}=1/p-1/\sigma>0\), \(1-(n-1)/(2p^{\prime})-\sigma/p<\beta<(n+1)/(2p^{\prime})\) and \((1-(n-1)/(2p^{\prime})-\sigma/p)p^{*}+(n+1)/2>0\). Then there exists a constant \(C>0\) such that
\[\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}\left(z_{n}^{\beta}\left|u(z)-u_{B(x, x_{n})}\right|\right)^{p^{*}}dz \leq C\]
for \(x\in\mathbb{H}\), \(0<r<x_{n}\) and \(u\in C^{1}(\mathbb{H})\) with
\[\sup_{r>0,x\in\mathbb{H}}\frac{r^{\sigma}}{|B(x,r)|}\int_{\mathbb{H}\cap B(x, r)}\left(|\nabla u(y)|y_{n}^{\beta}\right)^{p}dy\leq 1.\]
Remark 6.3.: The condition that \((1-(n-1)/(2p^{\prime})-\sigma/p)p^{*}+(n+1)/2>0\) is written as
\[((n+1)/2-\sigma)/p^{*}>(n-1)/(2p^{\prime}),\]
which holds at least near \(p=1\) when \(1<\sigma<(n+1)/2\).
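Indeed, dividing the original condition by \(p^{*}\) and using \(\sigma/p=\sigma/p^{*}+1\), which follows from \(1/p^{*}=1/p-1/\sigma\), we obtain
\[1-\frac{n-1}{2p^{\prime}}-\frac{\sigma}{p}+\frac{n+1}{2p^{*}}>0\iff\frac{(n+1)/2-\sigma}{p^{*}}>\frac{n-1}{2p^{\prime}},\]
which is the displayed reformulation.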
For a proof of Theorem 6.2 we note : if \(z\in B(x,x_{n})\), then Lemma 6.1 gives
\[\left|u(z)-u_{B(x,x_{n})}\right| \tag{6.1}\] \[\leq C\int_{B(x,x_{n})}|z-y|^{1-n}|\nabla u(y)|dy\] \[= C\int_{B(z,z_{n})}|z-y|^{1-n}|\nabla u(y)|dy+C\int_{B(x,x_{n}) \setminus B(z,z_{n})}|z-y|^{1-n}|\nabla u(y)|dy\] \[= C\{I_{\mathbb{H},1}f(z)+I_{2}(z)\},\]
where \(f(y)=|\nabla u(y)|\chi_{\mathbb{H}}(y)\).
Lemma 6.4.: Suppose \(\beta<(n+1)/(2p^{\prime})\). Then there exists a constant \(C>0\) such that
\[\int_{B(x,x_{n})\cap B(z,r)}|f(y)|dy \leq Cx_{n}^{(n-1)/(2p^{\prime})}r^{(n+1)/(2p^{\prime})-\beta}\] \[\times\left(\int_{B(x,x_{n})\cap B(z,r)}\left(y_{n}^{\beta}|f(y)| \right)^{p}dy\right)^{1/p}\]
for \(z\in B(x,x_{n})\) and \(r>z_{n}\).
Proof.: For \(z\in B(x,x_{n})\) and \(r>z_{n}\) we have by Holder's inequality
\[\int_{B(x,x_{n})\cap B(z,r)}|f(y)|dy\] \[\leq \left(\int_{B(x,x_{n})\cap B(z,r)}\left(y_{n}^{\beta}|f(y)|\right)^ {p}dy\right)^{1/p}\left(\int_{B(x,x_{n})\cap B(z,r)}y_{n}^{-\beta p^{\prime}}dy \right)^{1/p^{\prime}}.\]
Here note that, for \(y\in B(x,x_{n})\cap B(z,r)\), one has \(y_{n}<z_{n}+r<2r\) and \(|y^{\prime}-x^{\prime}|^{2}\leq x_{n}^{2}-(y_{n}-x_{n})^{2}\leq 2x_{n}y_{n}\), so that the slice at height \(y_{n}\) has \((n-1)\)-dimensional measure at most \(C(x_{n}y_{n})^{(n-1)/2}\); hence
\[\int_{B(x,x_{n})\cap B(z,r)}y_{n}^{-\beta p^{\prime}}dy \leq C\int_{0}^{2r}(\sqrt{x_{n}y_{n}})^{n-1}y_{n}^{-\beta p^{\prime}}dy _{n}\] \[\leq Cx_{n}^{(n-1)/2}r^{(n-1)/2-\beta p^{\prime}+1}\]
since \((n-1)/2-\beta p^{\prime}+1>0\). Therefore
\[\int_{B(x,x_{n})\cap B(z,r)}|f(y)|dy\] \[\leq Cx_{n}^{(n-1)/(2p^{\prime})}r^{(n+1)/(2p^{\prime})-\beta}\left( \int_{B(x,x_{n})\cap B(z,r)}\left(y_{n}^{\beta}|f(y)|\right)^{p}dy\right)^{1/p},\]
which gives the result.
Lemma 6.5.: Suppose \(\alpha-(n-1)/(2p^{\prime})-\sigma/p<\beta<(n+1)/(2p^{\prime})\). Let \(f\) be a measurable function on \(\mathbb{H}\) satisfying
\[\sup_{x\in\mathbb{H},0<r<x_{n}}\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}\left(| f(y)|y_{n}^{\beta}\right)^{p}dy\leq 1.\]
Set
\[J_{\alpha}|f|(z)=\int_{B(x,x_{n})\setminus B(z,z_{n})}|x-y|^{\alpha-n}|f(y)|dy.\]
Then there exists a constant \(C>0\) such that
\[z_{n}^{\beta}J_{\alpha}|f|(z)\leq Cx_{n}^{(n-1)/(2p^{\prime})}z_{n}^{\alpha-(n -1)/(2p^{\prime})-\sigma/p}\]
for \(z\in B(x,x_{n})\).
Proof.: Let \(f\) be a nonnegative measurable function on \(\mathbb{H}\) satisfying
\[\sup_{x\in\mathbb{H},0<r<x_{n}}\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}\left( |f(y)|y_{n}^{\beta}\right)^{p}dy\leq 1.\]
By Lemma 6.4, we have
\[J_{\alpha}|f|(z) \leq C\int_{B(x,x_{n})\setminus B(z,z_{n})}|z-y|^{\alpha-n}|f(y)|dy\] \[\leq C\int_{z_{n}}^{\infty}\left(\frac{1}{|B(x,r)|}\int_{B(x,x_{n}) \cap B(z,r)}|f(y)|dy\right)r^{\alpha-1}dr\] \[\leq Cx_{n}^{(n-1)/(2p^{\prime})}\int_{z_{n}}^{\infty}r^{-\beta+(n+1 )/(2p^{\prime})-n+(n-\sigma)/p}r^{\alpha-1}dr\] \[\leq Cx_{n}^{(n-1)/(2p^{\prime})}z_{n}^{\alpha-\beta-(n-1)/(2p^{ \prime})-\sigma/p},\]
since \(\alpha-(n-1)/(2p^{\prime})-\sigma/p<\beta<(n+1)/(2p^{\prime})\), which completes the proof.
Lemma 6.6.: Suppose \(1/p^{*}=1/p-\alpha/\sigma>0\), \(\alpha-(n-1)/(2p^{\prime})-\sigma/p<\beta<(n+1)/(2p^{\prime})\) and \((\alpha-(n-1)/(2p^{\prime})-\sigma/p)p^{*}>-(n+1)/2\). Let \(f\) be a measurable function on \(\mathbb{H}\) satisfying
\[\sup_{x\in\mathbb{H},0<r<x_{n}}\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}\left( |f(y)|y_{n}^{\beta}\right)^{p}dy\leq 1.\]
Then there exists a constant \(C>0\) such that
\[\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}\left(z_{n}^{\beta}J_{\alpha}|f|(z) \right)^{p^{*}}dz\leq C.\]
for \(0<r<x_{n}\).
Proof.: By Lemma 6.5, we have for \(0<r<x_{n}\)
\[\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}\left(z_{n}^{\beta}J_{ \alpha}|f|(z)\right)^{p^{*}}dz\] \[\leq Cx_{n}^{(n-1)p^{*}/(2p^{\prime})}\frac{r^{\sigma}}{|B(x,r)|} \int_{B(x,r)}z_{n}^{(\alpha-(n-1)/(2p^{\prime})-\sigma/p)p^{*}}dz\] \[= Cx_{n}^{(n-1)p^{*}/(2p^{\prime})}\frac{r^{\sigma}}{|B(x,r)|} \int_{B(x,r)}z_{n}^{-\sigma-(n-1)p^{*}/(2p^{\prime})}dz\]
since \(\alpha-(n-1)/(2p^{\prime})-\sigma/p<\beta<(n+1)/(2p^{\prime})\). If \(0<r<x_{n}/2\), then
\[x_{n}^{(n-1)p^{*}/(2p^{\prime})}\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}z_{n} ^{-\sigma-(n-1)p^{*}/(2p^{\prime})}dz\leq C\left(\frac{r}{x_{n}}\right)^{ \sigma}\leq C\]
and if \(x_{n}/2\leq r<x_{n}\), then Lemma 2.3 gives
\[x_{n}^{(n-1)p^{*}/(2p^{\prime})}\frac{r^{\sigma}}{|B(x,r)|}\int_ {B(x,r)}z_{n}^{-\sigma-(n-1)p^{*}/(2p^{\prime})}dz\] \[\leq Cx_{n}^{(n-1)p^{*}/(2p^{\prime})}\frac{x_{n}^{\sigma}}{|B(x,x_{n })|}\int_{B(x,r)}z_{n}^{-\sigma-(n-1)p^{*}/(2p^{\prime})}dz\] \[\leq C\]
since \((\alpha-(n-1)/(2p^{\prime})-\sigma/p)p^{*}>-(n+1)/2\).
Thus we complete the proof.
Now let us prove Theorem 6.2.
Proof of Theorem 6.2.: Let \(u\in C^{1}(\mathbb{H})\) with
\[\sup_{r>0,x\in\mathbb{H}}\frac{r^{\sigma}}{|B(x,r)|}\int_{\mathbb{H}\cap B(x,r )}\left(|\nabla u(y)|y_{n}^{\beta}\right)^{p}dy\leq 1.\]
Set \(f(y)=|\nabla u(y)|\chi_{\mathbb{H}}(y)\). By (6.1), Theorem 3.4 and Lemma 6.6, we obtain for \(x\in\mathbb{H}\) and \(0<r<x_{n}\)
\[\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}\left(z_{n}^{\beta}\left| u(z)-u_{B(x,x_{n})}\right|\right)^{p^{*}}dz\] \[\leq C\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}\left(z_{n}^{\beta}I_{ \mathbb{H},1}f(z)\right)^{p^{*}}dz+C\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)} \left(z_{n}^{\beta}I_{2}(z)\right)^{p^{*}}dz\] \[\leq C,\]
as required.
Remark 6.7.: Let \(u(x)=x_{n}^{-\varepsilon}\). Then
1. \(\sup_{x\in\mathbb{H},0<r<x_{n}}\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}\left(|\nabla u(y)|y_{n}^{\beta}\right)^{p}dy<\infty\) when \(\sigma=-(\beta-\varepsilon-1)p<(n+1)/2\);
2. \(\frac{x_{n}^{\sigma}}{|B(x,x_{n})|}\int_{B(x,x_{n})}\left(|u(y)|y_{n }^{\beta}\right)^{p^{*}}dy=\infty\) when \((\beta-\varepsilon)p^{*}\leq-(n+1)/2\).
Now \(\varepsilon\) is taken so that
\[1-(n+1)/(2p)<\beta-\varepsilon\leq-(n+1)/(2p^{*})=(n+1)/(2\sigma)-(n+1)/(2p)\]
when \(p<\sigma<(n+1)/2\). In this case,
(3) \(\sup_{r>0,x\in\mathbb{H}}\frac{r^{\sigma}}{|B(x,r)|}\int_{ \mathbb{H}\cap B(x,r)}\left(|\nabla u(y)|y_{n}^{\beta}\right)^{p}dy=\infty\)
and we do not know whether Theorem 6.2 holds under a weaker condition such as (1).
Our final goal is to obtain the following result in the double phase setting.
Theorem 6.8.: Let \(1/q=1/p-\theta/\sigma\), \(1/p^{*}=1/p-1/\sigma>0\) and \(1/q^{*}=1/q-1/\sigma>0\). Suppose
\[-(n+1)/(2p^{*})<1+\theta-(n-1)/(2p^{\prime})-\sigma/p<\beta<(n+1)/(2p^{\prime})\]
and
\[-(n+1)/(2q^{*})<1-(n-1)/(2q^{\prime})-\sigma/q<\beta<(n+1)/(2q^{\prime}).\]
Then there exists a constant \(C>0\) such that
\[\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}\Phi^{*}\left(z,z_{n}^{ \beta}\left|u(z)-u_{B(x,x_{n})}\right|\right)dz \leq C\]
for \(x\in\mathbb{H}\), \(0<r<x_{n}\) and \(u\in C^{1}(\mathbb{H})\) with
\[\sup_{r>0,x\in\mathbb{H}}\frac{r^{\sigma}}{|B(x,r)|}\int_{\mathbb{H}\cap B(x, r)}\Phi\left(y,|\nabla u(y)|y_{n}^{\beta}\right)dy\leq 1.\]
Proof.: Let \(u\in C^{1}(\mathbb{H})\) with
\[\sup_{r>0,x\in\mathbb{H}}\frac{r^{\sigma}}{|B(x,r)|}\int_{\mathbb{H}\cap B(x, r)}\Phi\left(y,|\nabla u(y)|y_{n}^{\beta}\right)dy\leq 1.\]
First we see from Theorem 6.2 that
\[\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}\left(z_{n}^{\beta}\left|u(z)-u_{B(x, x_{n})}\right|\right)^{p^{*}}dz \leq C.\]
Next we show that
\[\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}\left(b(z)z_{n}^{\beta} \left|u(z)-u_{B(x,x_{n})}\right|\right)^{q^{*}}dz \leq C.\]
Set \(f(y)=|\nabla u(y)|\chi_{\mathbb{H}}(y)\). Recall from (6.1) that
\[\left|u(z)-u_{B(x,x_{n})}\right|\leq C\{I_{\mathbb{H},1}f(z)+I_{2}(z)\}, \tag{6.2}\]
Note that
\[b(z)I_{\mathbb{H},1}f(z)\] \[= \int_{B(z,z_{n})}|z-y|^{1-n}\{b(z)-b(y)\}|f(y)|dy+\int_{B(z,z_{n}) }|z-y|^{1-n}b(y)|f(y)|dy\] \[\leq C\int_{B(z,z_{n})}|z-y|^{1-n+\theta}|f(y)|dy+\int_{B(z,z_{n})}|z- y|^{1-n}b(y)|f(y)|dy\] \[\leq CI_{\mathbb{H},1+\theta}|f|(z)+I_{\mathbb{H},1}(b|f|)(z).\]
On the other hand,
\[b(z)I_{2}(z)\] \[= \int_{B(x,x_{n})\setminus B(z,z_{n})}|z-y|^{1-n}\{b(z)-b(y)\}|f(y )|dy\] \[+\int_{B(x,x_{n})\setminus B(z,z_{n})}|z-y|^{1-n}b(y)|f(y)|dy\] \[\leq C\int_{B(x,x_{n})\setminus B(z,z_{n})}|z-y|^{1-n+\theta}|f(y)| dy+\int_{B(x,x_{n})\setminus B(z,z_{n})}|z-y|^{1-n}b(y)|f(y)|dy\] \[= CI_{21}(z)+I_{22}(z).\]
By Lemma 6.6, we obtain
\[\sup_{0<r<x_{n}}\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}\left(I_{21}(z)z_{n}^{ \beta}\right)^{q^{*}}dz\leq C\]
and
\[\sup_{0<r<x_{n}}\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}\left(I_{22}(z)z_{n}^{ \beta}\right)^{q^{*}}dz\leq C.\]
By (6.2) and Theorem 3.4, we obtain for \(x\in\mathbb{H}\) and \(0<r<x_{n}\)
\[\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}\left(b(z)z_{n}^{\beta}\left|u(z)-u_{B(x,x_{n})}\right|\right)^{q^{*}}dz\] \[\leq C\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}(b(z)z_{n}^{\beta}I_{\mathbb{H},1}f(z))^{q^{*}}dz+C\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}(b(z)z_{n}^{\beta}I_{2}(z))^{q^{*}}dz\] \[\leq C\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}(z_{n}^{\beta}I_{\mathbb{H},1+\theta}f(z))^{q^{*}}dz+C\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}(z_{n}^{\beta}I_{\mathbb{H},1}(b|f|)(z))^{q^{*}}dz\] \[+C\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}\left(I_{21}(z)z_{n}^{\beta}\right)^{q^{*}}dz+C\frac{r^{\sigma}}{|B(x,r)|}\int_{B(x,r)}\left(I_{22}(z)z_{n}^{\beta}\right)^{q^{*}}dz\] \[\leq C.\]
Thus the proof is completed.
Acknowledgements. We would like to express our thanks to the referees for their kind comments and helpful suggestions. |
2308.07959 | Shedding light on the MRI driven dynamo in a stratified shearing box | We study the magneto-rotational instability (MRI) driven dynamo in a
geometrically thin disc ($H/R\ll 1$) using stratified zero net flux (ZNF)
shearing box simulations. We find that mean fields and EMFs oscillate with a
primary frequency $f_{\rm dyn} = 0.017$ ($\approx 9$ orbital period), but also
have higher harmonics at $3f_{\rm dyn}$. Correspondingly, the current helicity,
has two frequencies $2f_{\rm dyn}$ and $4f_{\rm dyn}$ respectively, which
appear to be the beat frequencies of mean fields and EMFs as expected from the
magnetic helicity density evolution equation. Further, we adopt a novel
inversion algorithm called the `Iterative Removal Of Sources' (IROS), to
extract the turbulent dynamo coefficients in the mean-field closure using the
mean magnetic fields and EMFs obtained from the shearing box simulation. We
show that an $\alpha-$effect ($\alpha_{yy}$) is predominantly responsible for
the creation of the poloidal field from the toroidal field, while shear
generates back a toroidal field from the poloidal field; indicating that an
$\alpha-\Omega$-type dynamo is operative in MRI-driven accretion discs. We also
find that both strong outflow ($\bar{v}_z$) and turbulent pumping ($\gamma_z$ )
transport mean fields away from the mid-plane. Instead of turbulent
diffusivity, they are the principal sink terms in the mean magnetic energy
evolution equation. We find encouraging evidence that a generative helicity
flux is responsible for the effective $\alpha$-effect. Finally, we point out
potential limitations of horizontal ($x-y$) averaging in defining the `mean' on
the extraction of dynamo coefficients and their physical interpretations. | Prasun Dhang, Abhijit Bendre, Kandaswamy Subramanian | 2023-08-15T18:00:04Z | http://arxiv.org/abs/2308.07959v2 | # Shedding light on the MRI driven dynamo in a stratified shearing box
###### Abstract
We study the magneto-rotational instability (MRI) dynamo in a geometrically thin disc (\(H/R\ll 1\)) using stratified zero net flux (ZNF) shearing box simulations. We find that mean fields and EMFs oscillate with a primary frequency \(f_{\rm dyn}=0.017\) (\(\approx 9\) orbital period), but also have higher harmonics at \(3f_{\rm dyn}\). Correspondingly, the current helicity, has two frequencies \(2f_{\rm dyn}\) and \(4f_{\rm dyn}\) respectively, which appear to be the beat frequencies of mean fields and EMFs as expected from the magnetic helicity density evolution equation. Further, we adopt a novel inversion algorithm called the 'Iterative Removal Of Sources' (IROS), to extract the turbulent dynamo coefficients in the mean-field closure using the mean magnetic fields and EMFs obtained from the shearing box simulation. We show that an \(\alpha-\)effect (\(\alpha_{yy}\)) is predominantly responsible for the creation of the poloidal field from the toroidal field, while shear generates back a toroidal field from the poloidal field; indicating that an \(\alpha-\Omega\)-type dynamo is operative in MRI-driven accretion discs. We also find that both strong outflow (\(\bar{v}_{z}\)) and turbulent pumping (\(\gamma_{z}\) ) transport mean fields away from the mid-plane. Instead of turbulent diffusivity, they are the principal sink terms in the mean magnetic energy evolution equation. We find encouraging evidence that a generative helicity flux is responsible for the effective \(\alpha\)-effect. Finally, we point out potential limitations of horizontal (\(x-y\)) averaging in defining the'mean' on the extraction of dynamo coefficients and their physical interpretations.
keywords: accretion,accretion discs - dynamo - instabilities - magnetic fields - MHD - turbulence - methods: numerical.
## 1 Introduction
The problem of angular momentum transport is a key issue for rotationally supported accretion discs (for a review, see Balbus & Hawley, 1998). The current consensus is that a weak-field magnetic instability, namely the magneto-rotational instability (MRI; Velikhov, 1959; Chandrasekhar, 1960; Balbus & Hawley, 1991; Balbus & Hawley, 1992), is responsible for outward angular momentum transport and drives mass accretion in a sufficiently ionized accretion disc. Although linear MRI ensures outward angular momentum transport, it must be studied in the non-linear phase to account for the different observable phenomena in accretion discs.
MRI in an accretion disc is either studied in a local set-up (shearing box; Balbus & Hawley, 1992; Brandenburg et al., 1995; Hawley et al., 1995; Davis et al., 2010; Shi et al., 2010; Bodo et al., 2014; Bhat et al., 2016) or in a global simulation (Stone et al., 1999; Hawley, 2001; Hawley et al., 2013; Beckwith et al., 2011; Parkin & Bicknell, 2013; Hogg & Reynolds, 2016; Dhang & Sharma, 2019; Dhang et al., 2023). While a global approach is more desirable, it is computationally expensive. On the other hand, the shearing box approach offers an alternate path which is computationally less costly and can provide deep insights into the local processes in MRI-driven turbulence.
In the shearing-box approach (Goldreich & Lynden-Bell, 1965), we expand the fluid equations to lowest order in \(H/R\), where \(H\) is the density scale height and \(R\) is the local radius. Therefore, this approach is valid only for geometrically thin discs with \(H/R\ll 1\). Depending on whether or not the vertical component of gravity (\(g_{z}=-\Omega^{2}z\)), which produces a vertically stratified gas density, is included in the momentum equation, shearing box simulations are of two types: stratified (\(g_{z}\neq 0\)) and unstratified (\(g_{z}=0\)). Further, depending on whether the computational domain contains a net magnetic flux, shearing box models can be classified into zero net flux (ZNF) and net flux (NF) models. Therefore, the four possible combinations of the shearing-box model are: i) unstratified ZNF, ii) unstratified NF, iii) stratified ZNF and iv) stratified NF. This work considers a stratified ZNF shearing box model to explore the MRI dynamo in saturation.
Shearing box simulations provide a wide range of behaviour (e.g., convergence, turbulence characteristics etc.) depending on the shearing box model used (for details, we refer the reader to Table 1 in Ryan et al. (2017)). However, it is to be noted that we will restrict our discussion to isothermal (i.e. constant sound speed) models without explicit dissipation, where the numerical algorithm provides the dissipation through truncation error at the grid scale. In the presence of an NF, unstratified shearing box simulations show convergence (in terms of accretion stresses) and sustained turbulence (Hawley et al., 1995; Guan et al., 2009; Simon et al., 2009). On the other hand, stratified NF simulations present sustained turbulence with accretion stresses that depend on the net flux strength (Guan & Gammie, 2011; Bai & Stone, 2013).
Unstratified ZNF models showed intriguing behaviour. Earlier isothermal unstratified ZNF studies (Fromang & Papaloizou, 2007; Pessah et al., 2007) found decreased accretion stress and turbulence with increased resolution, implying non-convergence. However, later Shi et al. (2016) recovered convergence using a box with a larger vertical extent than radial extent. On the contrary, earlier stratified ZNF models (Davis et al., 2010) suggested that the models are converged up to a resolution of \(128/H\); however, recent studies (Bodo et al., 2011; Ryan et al., 2017) found that the models lose convergence at higher resolution.
The convergence problem is closely related to the magnetic energy generation process in the MRI-driven flow. For the ZNF (absence of net flux) models, an MRI-driven dynamo must act to overcome the diffusion and sustain the zero net flux in the accretion flow. Earlier ZNF simulations in unstratified (Shi et al., 2016) and stratified (Davis et al., 2010; Bodo et al., 2014; Ryan et al., 2017) shearing boxes found MRI turbulence can self-generate large-scale magnetic fields attaining quasi-stationarity and sustaining turbulence. Riols et al. (2013) suggested that the non-linear MRI does not behave like a linear instability; rather, it provides a pathway for saturation via a subcritical dynamo process. This leads to the question of what kind of dynamo can be sustained in the MRI-driven accretion flow, small-scale or large-scale? The lack of convergence in ZNF models was attributed to the low numerical Prandtl number (Fromang & Papaloizou, 2007; however, see Simon et al., 2009) and hence the inefficiency of small-scale dynamo to operate at small Prandtl number (Schekochihin et al., 2005; Bodo et al., 2011). However, it is unclear what happens when convergence is recovered in unstratified ZNF simulations with tall boxes (Shi et al., 2016).
Studying the MRI dynamo is also important for understanding the generation of coherent large-scale magnetic fields, which determine the level of transport (Johansen et al., 2009; Bai & Stone, 2013) and outflows in the accretion disc (von Rekowski et al., 2003; Stepanovs et al., 2014; Mattia & Fendt, 2022). The MRI, in principle, can generate magnetic fields coherent over several scale-heights (Dhang et al., 2023), which act locally as a mean field in the absence of any external flux, influencing convergence and the disc dynamics.
Generally, stratified models generate a more coherent large-scale field than the unstratified models (for a comparison, see Shi et al., 2016). Cyclic behaviour of azimuthally averaged magnetic fields (mean fields), popularly known as the butterfly diagram, is a typical feature observed in the stratified shearing box simulations (Brandenburg et al., 1995; Gressel, 2010; Bodo et al., 2014; Ryan et al., 2017; Gressel & Pessah, 2022). However, note that the presence of a strong magnetic net flux (Bai & Stone, 2013; Salvesen et al., 2016), convection (Hirose et al., 2014; Coleman et al., 2017) etc. can alter the periodicity in the butterfly diagram. Although the cyclic behaviour of mean fields can be explained by invoking the interplay between shear and helicity (Brandenburg & Donner, 1997; Gressel & Pessah, 2015), some features, such as the upward migration of the mean fields, still demand an explanation.
Several studies have attempted to understand the underlying mechanism of the MRI dynamo using different approaches. While some of the studies (Lesur & Ogilvie, 2008; Bai & Stone, 2013; Shi et al., 2016; Begelman & Armitage, 2023) invoked toy models to complete the generation cycles of radial and azimuthal fields, others (local: Brandenburg et al., 2008; Gressel, 2010; Shi et al., 2016; Gressel & Pessah, 2022; Mondal & Bhat, 2023, global: Dhang et al., 2020) used mean-field theory to investigate large-scale field generation in the MRI-driven turbulent accretion flow. Most of the studies characterising the turbulent dynamo coefficients within mean-field dynamo theory used the state-of-the-art "Test Field" (TF) method (Gressel, 2010; Gressel & Pessah, 2015), while a few used direct methods, such as linear regression (Shi et al., 2016) and singular value decomposition (SVD; Dhang et al., 2020), to calculate dynamo coefficients in post-processing, or statistical simulations to carry out a combined study of the large-scale dynamo and angular-momentum transport in accretion discs (Mondal & Bhat, 2023). In this work, we use a direct method, a variant of the cleaning algorithm (Högbom CLEAN method; Högbom, 1974) called 'Iterative Removal Of Sources' (IROS; Hammersley et al., 1992), mainly used in astronomical image reconstruction, to analyse the MRI dynamo in the mean-field dynamo paradigm. We modified the IROS method for our purposes (for details, see section 4; also see Bendre et al., 2023) and used it to determine the dynamo coefficients by post-processing the data obtained from the stratified ZNF shearing box simulation.
The paper is organised as follows. In section 2, we describe the details of the shearing box simulation, the basics of the mean-field closure used and the techniques of the IROS method. Section 3 describes the evolution of the MRI to a non-linear saturated state, the spatio-temporal variations of the mean magnetic fields and EMFs, and the periodicities present in different observables. The spatial profiles of the calculated turbulent dynamo coefficients, the reliability of the calculation method (assessed using both EMF reconstruction and a 1D dynamo model) and the contributions of each coefficient to the mean magnetic energy equation are described in section 4. In section 5, we discuss the plausible reasons behind the different periodicities present (in mean magnetic fields, EMFs and helicities), a comparison of our work with previous works, the possible importance of a generative helicity flux and the limitations of the averaging scheme and mean-field closure used in decoupling contributions from different dynamo coefficients. Finally, we summarise our key results in section 6.
## 2 Method
### Shearing-box simulation
We perform stratified zero net flux (ZNF) shearing box simulations to study the MRI driven dynamo in a geometrically thin disc (\(H/R\ll 1\)). To do that, we solve ideal MHD equations in a Keplerian shearing box given by
\[\frac{\partial\rho}{\partial t}+\nabla.\left(\rho\mathbf{v}\right)=0, \tag{1}\] \[\frac{\partial\rho\mathbf{v}}{\partial t}+\nabla.\left(\rho \mathbf{vv}-\mathbf{BB}\right)+\nabla P=\rho\mathbf{g}_{*}-2\Omega\hat{z} \times\rho\mathbf{v},\] (2) \[\frac{\partial\mathbf{B}}{\partial t}=\nabla\times\left(\mathbf{v }\times\mathbf{B}\right) \tag{3}\]
using the PLUTO code (Mignone et al., 2007) with \(x,~{}y,~{}z\) as the radial, azimuthal and vertical directions respectively. Here, \(\rho,~{}P,~{}\mathbf{v}\) and \(\mathbf{B}\) denote density, thermal pressure, velocity and magnetic fields, respectively. The terms \(\mathbf{g}_{*}=\Omega^{2}\left(2qx\hat{x}-z\hat{z}\right)\) and \(2\Omega\hat{z}\times\rho\mathbf{v}\) represent the tidal expansion of the effective gravity and the Coriolis force respectively with \(\Omega\) denoting orbital frequency. We use an isothermal equation of state
\[P=\rho c_{s}^{2}, \tag{4}\]
which makes the energy equation redundant. Additionally, we use constrained transport (Gardiner & Stone, 2005) to maintain divergence free condition
\[\nabla.\mathbf{B}=0 \tag{5}\]
for magnetic fields. We use the HLLD solver (Miyoshi & Kusano, 2005) with second-order slope-limited reconstruction. Second-order Runge-Kutta (RK2) is used for time integration with a CFL number of 0.33.
Also note that, although our shearing-box model lacks explicit dissipation, we refer to it as a direct numerical simulation (DNS).
We initialize an unmagnetized equilibrium solution with density and velocity given by
\[\rho =\rho_{0}\,\exp\left(-\frac{z^{2}}{2H^{2}}\right), \tag{6}\] \[\mathbf{v} =-q\ \Omega\ x\ \hat{y} \tag{7}\]
where \(q=1.5\) and \(\rho_{0}\) is the mid-plane (\(z=0\)) density and
\[H=\frac{c_{s}}{\Omega} \tag{8}\]
is the thermal scale height. We set \(\rho_{0}=c_{s}=\Omega=1\), so that \(H=1\). Unless stated otherwise, all the length and time-scales are expressed in units of \(H\) and \(\Omega^{-1}\) respectively. We initialize a ZNF magnetic field given by
\[\mathbf{B}=\sqrt{\frac{2}{\beta_{0}}}\ \sin\left(\frac{2\pi x}{L_{x}}\right)\ \hat{z} \tag{9}\]
with \(\beta_{0}\) defining the strength of the field and \(L_{x},\ L_{y},\ L_{z}\) denoting the size of the shearing-box.
Our computational domain extends from \(-L_{x}/2<x<L_{x}/2\), \(-L_{y}/2<y<L_{y}/2\) and \(-L_{z}/2<z<L_{z}/2\). It has been found in earlier studies that shearing box results depend on the domain size; larger boxes tend to capture the dynamo better than their smaller counterparts, while smaller boxes show a transition to anomalous behaviour (e.g. see Simon et al., 2012; Shi et al., 2016). To avoid these discrepancies, we choose a shearing box of size \(L_{x}\times L_{y}\times L_{z}=3H\times 12H\times 8H\) with a grid resolution \(N_{x}\times N_{y}\times N_{z}=96\times 192\times 256\), giving a resolution of \(32/H\) in the vertical direction. However, we must admit that there exists an issue with convergence in stratified ZNF models, as discussed in section 1. We leave the dependence of the MRI dynamo on numerical resolution as a topic for future investigation.
We use periodic and shearing-periodic (Hawley et al., 1995) boundary conditions at the \(y\) and \(x\) boundaries, respectively. Outflow boundary conditions are implemented at the vertical (\(z\)) boundaries. A gradient-free condition is maintained for scalars and the tangential components of vector fields at these boundaries. In addition, \(v_{z}\geqslant 0\) for \(z>0\) and \(v_{z}\leqslant 0\) for \(z<0\) is enforced to prevent mass inflow into the domain through the vertical boundaries.
### Mean field closure
Before describing the details of mean-field dynamo theory and the closure used, we define what is meant by 'mean' and 'fluctuation' in our work. We define mean magnetic fields (\(\bar{\mathbf{B}}\)) as the \(x-y\)-averaged values as follows
\[\bar{\mathbf{B}}(z,t)=\frac{1}{L_{x}L_{y}}\int_{-L_{x}/2}^{L_{x}/2}\int_{-L_{ y}/2}^{L_{y}/2}\mathbf{B}(x,y,z,t)\ dx\ dy. \tag{10}\]
Fluctuating magnetic fields are defined as
\[\mathbf{B}^{\prime}(x,y,z,t)=\mathbf{B}(x,y,z,t)-\bar{\mathbf{B}}(z,t). \tag{11}\]
Mean and fluctuations of the \(x-\) and \(z-\) components of the velocity are defined in the same way as those for magnetic fields, while the mean and fluctuation of \(y-\)component of velocity are defined as
\[\bar{v}_{y}(x,y,z,t)=-q\Omega x,\ \ v_{y}^{\prime}=v_{y}-\bar{v}_{y}. \tag{12}\]
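As a concrete illustration of these definitions, the short post-processing sketch below computes horizontal averages and fluctuations from a single snapshot. It assumes the fields are stored as NumPy arrays of shape \((3,N_{x},N_{y},N_{z})\), a layout chosen only for this example (not PLUTO's native output format); note that for \(v_{y}\) the background shear of equation 12, rather than the horizontal average, is to be subtracted.

```python
import numpy as np

def horizontal_mean(field):
    """x-y average of field[..., Nx, Ny, Nz]; returns a profile in z (equation 10)."""
    return field.mean(axis=(-3, -2))

def mean_and_fluctuation(field):
    """Split field[3, Nx, Ny, Nz] into its mean profile (3, Nz) and
    the fluctuating part field - mean (equation 11)."""
    mean = horizontal_mean(field)
    return mean, field - mean[:, None, None, :]

# Mean EMF of equation (14), given the fluctuating velocity and magnetic
# fields (both of shape (3, Nx, Ny, Nz)):
# emf_bar = horizontal_mean(np.cross(v_fluc, B_fluc, axis=0))   # shape (3, Nz)
```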
If we decompose the magnetic and velocity fields into mean and fluctuating parts and insert them into the magnetic induction equation, we obtain the mean-field equation
\[\frac{\partial\bar{\mathbf{B}}}{\partial t}=\nabla\times\left(\bar{\mathbf{ v}}\times\bar{\mathbf{B}}\right)+\nabla\times\bar{\mathcal{E}}; \tag{13}\]
where we assume that microscopic diffusivity is vanishingly small (ideal MHD limit). Here mean EMF
\[\bar{\mathcal{E}}=\overline{v^{\prime}\times B^{\prime}} \tag{14}\]
appears as a source term in equation 13. The crux of the mean-field dynamo theory is how to express mean EMF in terms of the mean magnetic fields. In general, the usual mean-field closure (Raedler, 1980; Brandenburg & Subramanian, 2005; Shukurov & Subramanian, 2021) is given by,
\[\bar{\mathcal{E}}_{i}(z)=\alpha_{ij}(z)\ \bar{B}_{j}(z)-\eta_{ij}(z)\ \bar{J}_{j}, \tag{15}\]
where we neglect spatial derivatives higher than first order and time derivatives of the mean magnetic fields; \(\alpha_{ij}\), \(\eta_{ij}\) are the turbulent dynamo coefficients which characterize the dynamo; and \(\bar{J}_{j}=\epsilon_{jil}\partial_{i}\bar{B}_{l}(z)\) is the current. Further, while calculating turbulent dynamo coefficients using direct methods (e.g. SVD: Bendre et al., 2020; Dhang et al., 2020; linear regression: Squire & Bhattacharjee, 2016; Shi et al., 2016), it is also assumed that \(\alpha_{ij}\), \(\eta_{ij}\) are constant in time. However, we find that in our simulation of MRI-driven accretion flow, the current helicity, which is potentially a primary component determining the \(\alpha_{ij}\), shows a reasonably periodic change over time with a time period half the dynamo period (for details, we refer the reader to section 3.3). This time-dependent feature of the current helicity leads us to consider a heuristic mean field closure defined as
\[\bar{\mathcal{E}}_{i}(z)=\left(\alpha_{ij}^{0}+\alpha_{ij}^{1}\cos(2\Omega_{ \mathrm{dyn}}t+\phi)\right)\bar{B}_{j}(z)-\eta_{ij}\bar{J}_{j} \tag{16}\]
to capture the time dependence of \(\alpha_{ij}\). Here \(\alpha_{ij}^{0}\) and \(\alpha_{ij}^{1}\) are the time-independent and time-dependent parts of \(\alpha_{ij}\) respectively, and \(\Omega_{\mathrm{dyn}}=2\pi f_{\mathrm{dyn}}=2\pi/T_{\mathrm{dyn}}\), with \(f_{\mathrm{dyn}}\) and \(T_{\mathrm{dyn}}\) the dynamo frequency and period respectively. Further, one expects \(\eta_{ij}\) to be dominated by a DC component, because the \(\eta_{ij}\) are generally determined by the turbulent intensity of the flow, not by helicities. Thus, for simplicity, we adopt a time-independent \(\eta_{ij}\).
### Dynamo coefficient extraction method - IROS
To extract the dynamo coefficients from the shearing box simulations described in the previous section, we solve equation 16, in a least-squares sense, for \(\alpha_{ij}^{0}\), \(\alpha_{ij}^{1}\) and \(\eta_{ij}\). However, since it is an underdetermined system of equations, we take advantage of the fact that these dynamo coefficients stay statistically unchanged during the quasi-stationary phase of evolution, and extract time series of length \(N\), \(\bar{\mathcal{E}}_{i}(z,t_{1}\ldots t_{N})\), \(\bar{B}_{i}(z,t_{1}\ldots t_{N})\) and \(\bar{J}_{i}(z,t_{1}\ldots t_{N})\), from the DNS (\(i\in\{x,y\}\)). With these time series, we rewrite equation 16 at any particular \(z=z^{\prime}\) as
\[\mathbf{y}(z^{\prime},t)=\mathbf{A}(z^{\prime},t)\,\mathbf{x}(z^{\prime}), \tag{17}\]
where the matrices \(\mathbf{y}\), \(\mathbf{A}\) and \(\mathbf{x}\) are defined as,
\[\mathbf{y}(z^{\prime},t)=\begin{bmatrix}\mathcal{E}_{x}(z^{\prime},t_{1})&\mathcal{E}_{y}(z^{\prime},t_{1})\\ \mathcal{E}_{x}(z^{\prime},t_{2})&\mathcal{E}_{y}(z^{\prime},t_{2})\\ \vdots&\vdots\\ \mathcal{E}_{x}(z^{\prime},t_{N})&\mathcal{E}_{y}(z^{\prime},t_{N})\end{bmatrix}\]
\[\mathbf{A}^{\intercal}(z^{\prime},t)=\begin{bmatrix}\bar{B}_{x}(z^{\prime},t_{1})&\bar{B}_{x}(z^{\prime},t_{2})&\ldots&\bar{B}_{x}(z^{\prime},t_{N})\\ \bar{B}_{y}(z^{\prime},t_{1})&\bar{B}_{y}(z^{\prime},t_{2})&\ldots&\bar{B}_{y}(z^{\prime},t_{N})\\ \mathcal{C}_{x}(z^{\prime},t_{1})&\mathcal{C}_{x}(z^{\prime},t_{2})&\ldots&\mathcal{C}_{x}(z^{\prime},t_{N})\\ \mathcal{C}_{y}(z^{\prime},t_{1})&\mathcal{C}_{y}(z^{\prime},t_{2})&\ldots&\mathcal{C}_{y}(z^{\prime},t_{N})\\ -\bar{J}_{x}(z^{\prime},t_{1})&-\bar{J}_{x}(z^{\prime},t_{2})&\ldots&-\bar{J}_{x}(z^{\prime},t_{N})\\ -\bar{J}_{y}(z^{\prime},t_{1})&-\bar{J}_{y}(z^{\prime},t_{2})&\ldots&-\bar{J}_{y}(z^{\prime},t_{N})\end{bmatrix}\]
\[\mathbf{x}(z^{\prime})=\begin{bmatrix}\alpha_{xx}^{0}(z^{\prime})&\alpha_{yx}^{0}(z^{\prime})\\ \alpha_{xy}^{0}(z^{\prime})&\alpha_{yy}^{0}(z^{\prime})\\ \alpha_{xx}^{1}(z^{\prime})&\alpha_{yx}^{1}(z^{\prime})\\ \alpha_{xy}^{1}(z^{\prime})&\alpha_{yy}^{1}(z^{\prime})\\ \eta_{xx}(z^{\prime})&\eta_{yx}(z^{\prime})\\ \eta_{xy}(z^{\prime})&\eta_{yy}(z^{\prime})\end{bmatrix}. \tag{18}\]
Here the terms \(\mathcal{C}_{i}(z^{\prime},t_{n})=\bar{B}_{i}(z^{\prime},t_{n})\,\cos\left(2\Omega_{\mathrm{dyn}}t_{n}+\phi\right)\) (\(\forall i\in\{x,y\}\)). For simplicity, we assume \(\phi\) to be zero. To then determine the dynamo coefficients (\(\mathbf{x}\)) we pseudo-invert equation 17. This task is complicated firstly by the fact that both components of the mean field and current have additive correlated noise, and secondly by the fact that the \(y\)-component of the mean field is typically much stronger than the \(x\)-component, due to the rotational shear (and by consequence the \(x\)-component of the current is much stronger than its \(y\)-component). Typical least-squares minimisation schemes in such cases tend to underestimate the dynamo coefficients associated with the \(x\)-component of the mean field and the \(y\)-component of the current (i.e. \(\alpha_{ix}^{0}\), \(\alpha_{ix}^{1}\) and \(\eta_{iy}\)). To circumvent these issues, we rely upon the IROS (Iterative Removal Of Sources) method (Hammersley et al., 1992) that we have recently adapted for such inversions in the dynamo context (Bendre et al., 2023). This method is based on the Högbom CLEAN algorithm used in radio astronomy, which iteratively locates and subtracts the strongest source to model the rest of the dirty image. It is particularly useful when the relative contribution of some of the beams to the final image is negligible. Such a situation is analogous to having only a few of the columns of \(\mathbf{A}\) (the beams) contribute substantially to \(\mathbf{y}\) (the image). A brief outline of the method is as follows.
Firstly, at any particular \(z=z^{\prime}\) we set all the dynamo coefficients, \(\alpha_{ij}^{0}(z^{\prime})\), \(\alpha_{ij}^{1}(z^{\prime})\) and \(\eta_{ij}(z^{\prime})\), to zero, i.e. we set \(\mathbf{x}(z^{\prime})=0\). Then, to derive their zeroth-order estimates, we fit every \(i^{\mathrm{th}}\) column of \(\mathbf{y}(z^{\prime},t)\) (say \(\mathbf{y}_{i}(z^{\prime},t)\)) against the individual columns of \(\mathbf{A}(z^{\prime})\) (denoted \(\mathbf{A}_{k}(z^{\prime})\)) separately as lines. The best among these linear fits is decided based on the least of the chi-square errors of the individual fits (\(\chi_{ik}^{2}(z^{\prime})=\sum_{t}(\mathbf{y}_{i}-\mathbf{A}_{k}\,\mathbf{x}_{ik})^{2}\)). The best-fitted dynamo coefficient is then updated by adding to it its zeroth-order estimate multiplied by a small factor (\(\epsilon<1\)), called the loop gain, while the other coefficients are kept constant. For example, if the chi-squared error associated with the line fit of \(\mathcal{E}_{x}(z^{\prime},t_{1}\ldots t_{N})\) versus \(\bar{B}_{y}(z^{\prime},t_{1}\ldots t_{N})\) (i.e. \(\chi_{12}^{2}(z^{\prime})\)) is the least, then \(\mathbf{x}_{2,1}(z^{\prime})\) (i.e. \(\alpha_{xy}^{0}\)) is updated by the slope multiplied by \(\epsilon\). Subsequently, the contribution to the EMF associated with the best-fitted coefficient, also multiplied by \(\epsilon\), is subtracted from that EMF component. For instance, using the same example, \(\epsilon\,\alpha_{xy}^{0}(z^{\prime})\,\bar{B}_{y}(z^{\prime},t_{1}\ldots t_{N})\) is subtracted from \(\mathcal{E}_{x}(z^{\prime},t_{1}\ldots t_{N})\). This residual EMF is then used as the actual EMF component, and the process is repeated a suitable number of times until either all the dynamo coefficients converge to their respective constant values or all the chi-squared errors become smaller than a predefined threshold. All the aforementioned steps are then repeated at every \(z=z^{\prime}\).
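A minimal single-height sketch of this iteration is given below (Python/NumPy). It is only illustrative: the line fits omit an intercept, the loop gain and stopping rule are simplified, and the full implementation follows Bendre et al. (2023).

```python
import numpy as np

def iros_fit(E, A, eps=0.1, n_loops=500):
    """E: (N, 2) time series of [E_x, E_y] at one height z'.
    A: (N, 6) columns [B_x, B_y, C_x, C_y, -J_x, -J_y] at the same height.
    Returns x: (6, 2) matrix of coefficients (alpha^0, alpha^1 and eta entries)."""
    x = np.zeros((A.shape[1], E.shape[1]))
    resid = E.copy()
    for _ in range(n_loops):
        for i in range(E.shape[1]):                 # each EMF component
            best = (None, np.inf, 0.0)              # (column, chi2, slope)
            for k in range(A.shape[1]):             # fit residual against each column as a line
                a = A[:, k]
                if a @ a == 0.0:
                    continue
                slope = (a @ resid[:, i]) / (a @ a)
                chi2 = np.sum((resid[:, i] - slope * a) ** 2)
                if chi2 < best[1]:
                    best = (k, chi2, slope)
            k, _, slope = best
            x[k, i] += eps * slope                  # update the best-fitted coefficient
            resid[:, i] -= eps * slope * A[:, k]    # subtract its contribution from the EMF
    return x
```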
We apply this method with \(\epsilon=0.1\) for five hundred refinement loops to the time series of EMFs, mean fields and currents obtained from the DNS data. While constructing these time series (from \(t=100\,\Omega^{-1}\) to \(300\,\Omega^{-1}\)), with data dumping interval \(\Delta t_{\mathrm{dump}}=0.2\,\Omega^{-1}\), we make sure that they correspond to the quasi-stationary phase of the magnetic field evolution.
The IROS method does not directly provide an estimate of the errors on the calculated coefficients. We therefore estimate a statistical error on the dynamo coefficients by considering five different realizations of the time series, constructed by taking every fifth data point. Specifically, the time series \((t_{1},t_{2},\ldots t_{N})\) (of all components of mean field, current and EMF) are split into \((t_{1},t_{6}\ldots)\), \((t_{2},t_{7}\ldots)\), \((t_{3},t_{8}\ldots)\), \((t_{4},t_{9}\ldots)\) and \((t_{5},t_{10}\ldots)\). We use these time series to calculate five sets of dynamo coefficients and take their standard deviations to represent the errors on the calculated coefficients.
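In code, the interleaved splitting can be sketched as follows (reusing the `iros_fit` helper from the previous sketch; an illustration, not the production pipeline).

```python
import numpy as np

def iros_with_errors(E, A, n_split=5, **kwargs):
    """Run the inversion on n_split interleaved realizations (t1,t6,...), (t2,t7,...), ...
    and return the mean coefficients and their standard deviation as the error estimate."""
    samples = [iros_fit(E[s::n_split], A[s::n_split], **kwargs) for s in range(n_split)]
    samples = np.array(samples)
    return samples.mean(axis=0), samples.std(axis=0)
```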
## 3 Results: Saturation of MRI, mean fields and EMFs
We now turn to the results of our shearing box simulation of MRI in a geometrically thin disc and investigate its dynamo action, in addition to discussing several important properties which illuminate the nature of the MRI dynamo. Most of our analysis of magnetic field generation focuses on the saturated state of MRI, when the disc is in the quasi-stationary phase.
Figure 1: Top panel: Time history of Reynolds (\(\alpha_{\mathrm{Rey}}\)) and Maxwell (\(\alpha_{\mathrm{Max}}\)) stresses. Bottom panel: time history of the volume-averaged mean (\(\bar{B}^{2}\)) and fluctuating (\(B^{\prime 2}\)) magnetic energies.
### Saturation of MRI
First, consider the time evolution of accretion stresses and magnetic energies. This will also allow us to determine the quasi-stationary phase of the MRI-driven turbulence. The top panel of Fig. 1 shows the time history of accretion stresses (Reynolds and Maxwell). The normalized Reynolds and Maxwell stresses are defined as
\[\alpha_{\rm Rey}=\frac{\langle\overline{\rho v_{x}^{\prime}v_{y}^{\prime}}\rangle}{\langle p_{g}\rangle}, \tag{19}\]
\[\alpha_{\rm Max}=\frac{\langle\bar{B}_{x}\bar{B}_{y}\rangle+\langle\overline{B_{x}^{\prime}B_{y}^{\prime}}\rangle}{\langle p_{g}\rangle}, \tag{20}\]
where the averages are taken over the whole volume. The Reynolds stress is due to the correlation of fluctuating velocity fields, while the Maxwell stress comprises a correlation between the fluctuating components as well as one between the mean components of the magnetic fields. Both stresses grow exponentially during the linear regime of MRI and eventually saturate around an average value when MRI enters the non-linear regime. In our simulation, we find the time-averaged (within the interval \(t=(100-300)\)\(\Omega^{-1}\)) values of the Reynolds and Maxwell stresses to be \(\langle\alpha_{\rm Rey}\rangle=0.0048\) and \(\langle\alpha_{\rm Max}\rangle=0.0167\) respectively. The ratio of Maxwell to Reynolds stress is \(\langle\alpha_{\rm Max}\rangle/\langle\alpha_{\rm Rey}\rangle=3.5\), close to \(4\), as predicted by Pessah et al. (2006) for \(q=1.5\) and similar to what is found in earlier numerical simulations (Nauman & Blackman, 2015; Gressel & Pessah, 2015).
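For reference, the volume-averaged stresses of equations 19 and 20 can be computed from a snapshot as in the sketch below (reusing the `planar_mean`/`fluctuation` helpers defined earlier; the array layout and use of NumPy are assumptions).

```python
import numpy as np

def accretion_stresses(rho, vx, vy, Bx, By, pg, x, q=1.5, Omega=1.0):
    """Normalized Reynolds and Maxwell stresses, following equations (19) and (20)."""
    vxp = fluctuation(vx)                      # fluctuating radial velocity
    vyp = vy + q * Omega * x[:, None, None]    # subtract the background shear (equation 12)
    Bxb, Byb = planar_mean(Bx), planar_mean(By)
    Bxp, Byp = fluctuation(Bx), fluctuation(By)

    alpha_rey = np.mean(rho * vxp * vyp) / np.mean(pg)
    alpha_max = (np.mean(Bxb * Byb) + np.mean(Bxp * Byp)) / np.mean(pg)
    return alpha_rey, alpha_max
```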
The bottom panel of Fig. 1 shows how the volume-averaged mean (\(\langle\bar{B}^{2}\rangle\)) and fluctuating (\(\langle B^{\prime 2}\rangle\)) magnetic energies evolve over time. Like the accretion stresses, the magnetic energies oscillate about an average value in the quasi-stationary phase after the initial exponentially growing phase. It is also worth noting that the mean part of the magnetic field shows a larger time variation than the fluctuating part. Importantly, the fluctuating magnetic field is stronger than the mean magnetic field; the implication of this will be discussed later in the paper.
We see in Fig. 1 that the accretion stresses and magnetic energies start saturating around \(t=40\)\(\Omega^{-1}\). However, to be conservative, we consider the simulation in the time range \(t=(100-300)\)\(\Omega^{-1}\) for the dynamo coefficient calculation in the quasi-stationary state.
### Evolution of mean fields and EMFs
The most basic diagnostic of the dynamo is the spatio-temporal variation of the mean magnetic fields, popularly known as the butterfly diagram (e.g. see the review by Brandenburg & Subramanian, 2005). Fig. 2 shows the butterfly diagrams for the mean magnetic fields \(\bar{B}_{x}\) and \(\bar{B}_{y}\) along with the mean EMFs \(\bar{\cal E}_{x}\) and \(\bar{\cal E}_{y}\). Here we note that the mean EMF acts as a source term in the mean magnetic field energy evolution equation. In particular, \(\bar{\cal E}_{y}\) is responsible for the generation of the poloidal field (here \(\bar{B}_{x}\)) from the toroidal one due to an \(\alpha\)-effect, which itself naturally emerges from the combined action of stratification and rotation (Krause & Raedler, 1980) in our stratified shearing box simulation. At an early stage of evolution (around \(t\approx 2\) orbital periods), both mean fields and EMFs show lateral stretches with changing sign in the vertical direction, which is clearly due to channel modes of MRI (Balbus & Hawley, 1992; Balbus & Hawley, 1998). During saturation, both mean fields and EMFs show a coherent vertical structure which changes sign in time with a definite period. We find that the magnetic field component \(\bar{B}_{y}\) and the EMF \(\bar{\cal E}_{x}\) show a very coherent spatio-temporal variation with a time period of \(\approx 9\) orbital periods (\(2\pi/\Omega\)), similar to earlier studies of the MRI dynamo (Brandenburg et al., 1995; Davis et al., 2010; Gressel, 2010; Gressel & Pessah, 2015; Ryan et al., 2017). This periodicity is only partially apparent in the butterfly diagram of \(\bar{B}_{x}\), and hardly apparent for \(\bar{\cal E}_{y}\). However, we note that periodicities exist in all components of mean fields and EMFs, as will become clear below (see Fig. 4).
### Evolution of kinetic and current helicities
The generation of large-scale magnetic fields by a dynamo action is often associated with helicity in the fluid velocity field. Assuming isotropic homogeneous turbulence, Krause & Raedler (1980) suggested a kinetic \(\alpha\)-effect defined by
\[\alpha_{\rm kin}=-\frac{\tau_{c}}{3}\ \overline{v^{\prime}.\nabla\times v^{ \prime}} \tag{21}\]
responsible for magnetic field generation; where \(\tau_{c}\) is the correlation time, and \(K_{\rm hel}=\overline{v^{\prime}.\nabla\times v^{\prime}}\) is the kinetic helicity. It is suggested that \(\alpha_{\rm kin}\), accounting for the effects of the helical velocity field, takes the role of the driver, while \(\alpha_{\rm mag}\) (Pouquet et al., 1976), defined by
\[\alpha_{\rm mag}^{\rm dyn}(z,t)=\frac{\tau_{c}}{3}\overline{v^{\prime}_{A}. \nabla\times v^{\prime}_{A}}, \tag{22}\]
is the non-linear response arising from the Lorentz force feedback, gradually increasing and ultimately quenching the kinetic \(\alpha\) (Blackman & Brandenburg, 2002; Subramanian, 2002). Here, \(v^{\prime}_{A}=B^{\prime}/\sqrt{\rho}\) is the fluctuating Alfvén velocity and \(C_{\rm hel}=\overline{v^{\prime}_{A}.\nabla\times v^{\prime}_{A}}\) is the current helicity. Ideally, the effective \(\alpha\)-effect, responsible for poloidal field generation, is expected to be \(\alpha_{\rm dyn}=\alpha_{\rm kin}+\alpha_{\rm mag}\).
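A sketch of how these profiles can be evaluated from the fluctuating fields is given below (uniform grid, `numpy.gradient` for the curl, and the `planar_mean` helper from earlier; this is illustrative rather than the actual analysis code).

```python
import numpy as np

def curl(fx, fy, fz, dx, dy, dz):
    """Curl of a vector field on a uniform grid, arrays indexed as f[ix, iy, iz]."""
    return (np.gradient(fz, dy, axis=1) - np.gradient(fy, dz, axis=2),
            np.gradient(fx, dz, axis=2) - np.gradient(fz, dx, axis=0),
            np.gradient(fy, dx, axis=0) - np.gradient(fx, dy, axis=1))

def alpha_kin_mag(vp, bp, rho, dx, dy, dz, tau_c=1.0):
    """Vertical profiles of alpha_kin (eq. 21) and alpha_mag (eq. 22).
    vp and bp are tuples of fluctuating (x, y, z) components; v_A' = B'/sqrt(rho)."""
    wx, wy, wz = curl(*vp, dx, dy, dz)
    k_hel = planar_mean(vp[0] * wx + vp[1] * wy + vp[2] * wz)
    va = tuple(b / np.sqrt(rho) for b in bp)
    jx, jy, jz = curl(*va, dx, dy, dz)
    c_hel = planar_mean(va[0] * jx + va[1] * jy + va[2] * jz)
    return -tau_c / 3.0 * k_hel, tau_c / 3.0 * c_hel
```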
Fig. 3 shows the spatio-temporal variation of \(\alpha_{\rm kin}\) and \(\alpha_{\rm mag}\). We assume \(\tau_{c}\) to be the same for both \(\alpha\)-s, with \(\tau_{c}=\Omega^{-1}\). The \(\alpha_{\rm mag}\) changes sign with a time period of \(\approx 5\) orbital periods (\(2\pi/\Omega\)), roughly half of the dynamo period with which the mean fields and EMFs change sign, while \(\alpha_{\rm kin}\) does not show any explicit periodicity. We postpone a detailed discussion of the periodicity of the helicities to section 3.5, where we discuss the periodicities associated with all important variables.
### Co-existence of small and large scale dynamos
Both the kinetic and magnetic \(\alpha\)-s are small close to the mid-plane, while this is not true of the fluctuating kinetic and magnetic energies (see Fig. 6). At the same time, the amplitudes of the helicities increase away from the mid-plane. These features suggest that small-scale and large-scale dynamos co-exist in the MRI-driven dynamo (Blackman & Tan, 2004; Gressel, 2010). The MRI-driven small-scale dynamo dominates magnetic field generation close to the disc mid-plane (where stratification is unimportant). In contrast, at larger heights where stratification becomes important, a helicity-driven large-scale dynamo governs the magnetic field generation (Dhang & Sharma, 2019; Dhang et al., 2020). However, it is to be noted that \(\alpha_{\rm mag}\) is larger than \(\alpha_{\rm kin}\) by one order of magnitude, and hence it is very likely that the effective \(\alpha\) will be predominantly due to \(\alpha_{\rm mag}\).
### Power spectra of mean fields, EMFs and helicities
The butterfly diagrams shown in the previous sections depict the apparent periodicities of mean fields, EMFs and helicities. To investigate these periodicities in greater detail, we look at the power spectrum defined by
\[{\cal P}_{q}(f)=\frac{1}{z_{2}-z_{1}}\int_{z_{1}}^{z_{2}}dz\ \left|\int\tilde{q}(z,t)e^{ift}dt \right|^{2} \tag{23}\]
where \(\tilde{q}(z,t)\) is any generic quantity. Here the spatial average is done over different heights, namely \(z=0-H\), \(z=H-2H\) and \(z=2H-3.5H\).
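A possible NumPy implementation of equation 23 is sketched below, treating \(f\) as an ordinary frequency (so that \(f_{\rm dyn}=1/T_{\rm dyn}\)) and replacing the \(z\)-integral by an average over the selected heights; this is an illustration, not the analysis script used for Fig. 4.

```python
import numpy as np

def power_spectrum(q, t, z, z1, z2):
    """Power spectrum of equation 23 for q(z, t) sampled on shape (nz, nt)
    with uniform time spacing; averaged over heights z1 <= z <= z2."""
    dt = t[1] - t[0]
    freqs = np.fft.rfftfreq(len(t), d=dt)       # ordinary frequencies f
    qhat = np.fft.rfft(q, axis=1) * dt          # approximate the time integral
    power = np.abs(qhat) ** 2
    mask = (z >= z1) & (z <= z2)
    return freqs, power[mask].mean(axis=0)
```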
Fig. 4 shows the power spectra of the mean fields \(\bar{B}_{x}\), \(\bar{B}_{y}\) (top panels), mean EMFs \(\bar{\mathcal{E}}_{x}\), \(\bar{\mathcal{E}}_{y}\) (middle panels) and helicities \(K_{\rm hel}\), \(C_{\rm hel}\) (bottom panels). It is noticeable that the power spectra of the mean fields and EMFs peak at the primary frequency \(f_{\rm dyn}=0.017\), which was also visible in the butterfly diagrams. In addition to the primary frequency, the power spectra also show the presence of higher harmonics (at \(3f_{\rm dyn}\)), which went unnoticed in earlier works on the MRI dynamo. Similarly, the power spectra of the current helicity \(C_{\rm hel}\) also show the presence of higher harmonics (at \(4f_{\rm dyn}\)) in addition to the primary frequency at \(2f_{\rm dyn}\). However, the kinetic helicity does not show any periodicity. The presence of a strong time variation in \(\alpha_{\rm mag}\) and its dominance over \(\alpha_{\rm kin}\) naturally leads to the expectation that the turbulent dynamo coefficients (\(\alpha\)-coefficients) should harbour a time-dependent part (\(\alpha_{ij}^{1}\)) along with the traditional time-independent part (\(\alpha_{ij}^{0}\)), as discussed in section 4.
## 4 Results: dynamo coefficients from IROS
We obtained the mean fields (\(\bar{B}_{x}\), \(\bar{B}_{y}\)) and EMFs (\(\bar{\mathcal{E}}_{x}\), \(\bar{\mathcal{E}}_{y}\)) from the shearing-box simulation and use a modified version of the IROS method (see section 2) to calculate the time-independent and time-dependent turbulent dynamo coefficients characterizing the MRI dynamo. However, we find that the \(x\)-\(y\)-averaging cannot remove all the signatures of the small-scale dynamo. The small-scale dynamo is expected to have a shorter correlation time, of order a few \(\Omega^{-1}\), and to contribute noise at the higher-frequency end compared to the large-scale dynamo. Therefore, we further smooth the mean fields and EMFs using a low-pass Fourier filter and remove contributions from frequencies \(f>f_{c}\). We consider three cases: (i) \(f_{c}=0.05\) (\(\approx 3f_{\rm dyn}\)), (ii) \(f_{c}=0.12\) (\(\approx 6f_{\rm dyn}\)) and (iii) \(f_{c}\rightarrow\infty\) (unfiltered) to assess the effects of the small-scale dynamo on the dynamo coefficient extraction.
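The low-pass filtering step can be sketched as follows (a simple hard cut in Fourier space; the actual filter used may differ in detail).

```python
import numpy as np

def lowpass(q, t, f_c):
    """Remove temporal Fourier modes with f > f_c from q(z, t) of shape (nz, nt)."""
    dt = t[1] - t[0]
    freqs = np.fft.rfftfreq(len(t), d=dt)
    qhat = np.fft.rfft(q, axis=1)
    qhat[:, freqs > f_c] = 0.0
    return np.fft.irfft(qhat, n=len(t), axis=1)

# e.g. By_filtered = lowpass(By_bar, t, f_c=0.05)
```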
Figure 3: Spatio-temporal variation of \(\alpha_{\rm kin}^{\rm dyn}(z,t)\) and \(\alpha_{\rm mag}^{\rm dyn}(z,t)\) assuming \(\tau_{c}=\Omega^{-1}\). Both the helicities are small close to the mid-plane and become larger at larger heights. The \(\alpha_{\rm mag}\) flips sign with a time period \(\approx 5\) orbital period (\(2\pi/\Omega\)), roughly half the dynamo period, while \(\alpha_{\rm kin}\) does not show any periodicity.
Figure 2: Spatio-temporal variation of mean magnetic fields, \(\bar{B}_{x}\) (top left panel), \(\bar{B}_{y}\) (bottom left panel), and mean EMFs \(\bar{\mathcal{E}}_{x}\) (top right panel) and \(\bar{\mathcal{E}}_{y}\) (bottom right panel). The mean magnetic field component \(\bar{B}_{y}\) and the x-component of the EMF \(\bar{\mathcal{E}}_{x}\) show a coherent change in space and time (with a time period of \(\approx 9\) orbital periods (\(2\pi/\Omega\))), while the spatio-temporal patterns in \(\bar{B}_{x}\) and \(\bar{\mathcal{E}}_{y}\) are less coherent.
### Time independent dynamo coefficients
Fig. 5 shows the vertical profiles of the time-independent dynamo coefficients \(\alpha^{0}_{ij}\) and \(\eta_{ij}\) for different values of \(f_{c}\). The four panels at the top illustrate the vertical profiles of the coefficients (\(\alpha^{0}_{xx},\ \alpha^{0}_{xy},\ \eta_{xx},\ \eta_{xy}\)) associated with the x-component of the EMF \(\bar{\mathcal{E}}_{x}\), while the four panels at the bottom show the profiles of those (\(\alpha^{0}_{yx},\ \alpha^{0}_{yy},\ \eta_{yx},\ \eta_{yy}\)) associated with the y-component of the EMF \(\bar{\mathcal{E}}_{y}\).
The 'coefficient of most interest' out of the calculated ones is \(\alpha^{0}_{yy}\), which plays a vital role in producing the poloidal field (here \(\bar{B}_{x}\)) out of the toroidal field (\(\bar{B}_{y}\)) (also see section 4.4). The coefficient \(\alpha^{0}_{yy}\) shows an anti-symmetric behaviour about the \(z=0\) plane, with a negative (positive) sign in the upper (lower) half-plane (for \(|z|<2\)). For \(|z|>2\), the sign of \(\alpha^{0}_{yy}\) tends to be positive (negative) in the upper (lower) half-plane. Earlier studies of the MRI dynamo in local (Brandenburg, 2008; Gressel, 2010; Gressel & Pessah, 2015) and global (Dhang et al., 2020) frameworks also found a similar trend in \(\alpha^{0}_{yy}\). However, it is to be noted that our study suggests a stronger negative \(\alpha^{0}_{yy}\) in the upper half-plane compared to that in the earlier studies. The negative sign in the upper half-plane is attributed to the buoyant rise of magnetic flux tubes under the combined action of magnetic buoyancy and shear (Brandenburg & Schmitt, 1998; Brandenburg & Subramanian, 2005; see also Tharakkal et al. (2023)). Brandenburg & Schmitt (1998) also suggested that a negative \(\alpha_{yy}\) is responsible for the upward propagation direction of dynamo waves seen in the butterfly diagrams of MRI-driven dynamo simulations (e.g. see Fig. 2). Another way of looking at the origin of the effective \(\alpha\) is to link it to the helicity flux, as envisaged by Vishniac (2015) and Gopalakrishnan & Subramanian (2023). We discuss this possibility in section 5.3.
Figure 4: Power spectra of mean fields \(\bar{B}_{x}\), \(\bar{B}_{y}\) (top panels), mean EMFs \(\bar{\mathcal{E}}_{x}\), \(\bar{\mathcal{E}}_{y}\) (middle panels) and helicities \(K_{\rm hel}\), \(C_{\rm hel}\) (bottom panels). Spatial averages are done over different heights: \(z=0-H\) (black lines), \(z=H-2H\) (green lines), and \(z=2H-3.5H\) (red lines). The zeroth frequency values are denoted by ‘asterisks’. Vertical dashed lines denote the dynamo frequency \(f_{\rm dyn}=0.017\) and its multiples.
The off-diagonal terms of the \(\alpha\)-coefficients are related to turbulent pumping. This effect is responsible for transporting large-scale magnetic fields from the turbulent region to the laminar region. We found \(\alpha_{xy}^{0}\) and \(\alpha_{yx}^{0}\) to be antisymmetric, with \(\alpha_{xy}^{0}>\alpha_{yx}^{0}\), unlike the previous studies (Brandenburg, 2008; Gressel & Pessah, 2015) which found \(\alpha_{yx}^{0}\approx\alpha_{xy}^{0}\). This results in a strong turbulent pumping \(\gamma_{z}=(\alpha_{yx}^{0}-\alpha_{xy}^{0})/2\), transporting large-scale magnetic fields from the disc to the corona, as shown in the top panel of Fig. 6. We also compare the relative importance of turbulent pumping (\(\gamma_{z}\)) and wind (\(\bar{v}_{z}\)) in advecting the magnetic field upward (in the upper half-plane) at different heights. The vertical profiles of \(\gamma_{z}\) and \(\bar{v}_{z}\) in the top panel of Fig. 6 show that at low heights (\(|z|<2.5\)), turbulent pumping dominates over the wind, while the effect of the wind becomes comparable to or larger than the pumping term at large scale heights (see also Fig. 11).
The theory of isotropic, kinematically forced turbulence predicts that \(\gamma_{z}\) should point in the direction of the negative gradient of the turbulent intensity (\(v^{\prime 2}\)) (Krause & Raedler, 1980), that is, in the negative z-direction (in the upper half-plane) in our simulation. This is opposite to what is found in Fig. 6. However, it is to be noted that MRI turbulence in a stratified medium is neither isotropic nor homogeneous. The minimal \(\tau\)-approximation (MTA) suggests that in a stratification- and rotation-induced anisotropic turbulent medium, including the quasi-linear back-reaction due to Lorentz forces,
\[\gamma_{z}^{\rm MTA}=-\frac{1}{6}\tau\nabla_{z}(\overline{v^{\prime 2}}-\overline{B^{\prime 2}})-\frac{1}{6}\tau^{2}\Omega\,\hat{z}\times\nabla_{z}(\overline{v^{\prime 2}}+\overline{B^{\prime 2}}), \tag{24}\]
where \(\tau\) is the correlation time and it is assumed that \(\rho=1\) (see equation (10.59) in Brandenburg & Subramanian, 2005). The last term in equation 24 vanishes because all the variables are functions of \(z\) alone. Therefore, equation 24, together with the bottom panel of Fig. 6, which illustrates the vertical profiles of \(\overline{v^{\prime 2}}\) and \(\overline{v_{A}^{\prime 2}}\), implies that the sign of the turbulent pumping obtained from the MTA agrees with that obtained from the extracted dynamo coefficients.
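With the last term of equation 24 dropped, the MTA estimate of the pumping reduces to a single vertical derivative of the profiles, e.g.

```python
import numpy as np

def gamma_z_mta(v2_prof, b2_prof, z, tau=1.0):
    """Turbulent pumping from equation 24 with the last term dropped and rho = 1:
    gamma_z = -(tau/6) d/dz (<v'^2> - <B'^2>), evaluated on vertical profiles."""
    return -(tau / 6.0) * np.gradient(v2_prof - b2_prof, z)
```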
We find the turbulent diffusion tensor \(\eta_{ij}\) to be anisotropic, with \(\eta_{xx}>\eta_{yy}\) and with a significant contribution from the off-diagonal components \(\eta_{xy}\) and \(\eta_{yx}\). The different values of the diagonal components of \(\eta_{ij}\) imply that the mean field components \(\bar{B}_{x}\) and \(\bar{B}_{y}\) are affected differently by vertical diffusion (also see section 4.4). It is worth mentioning that \(\eta_{yy}\approx 0\) for the \(f_{c}=0.05\) case, while it is slightly negative for the other two cases. This is somewhat different from the earlier studies (Gressel, 2010; Gressel & Pessah, 2015), which calculated dynamo coefficients using the TF method and found \(\eta_{xx}\approx\eta_{yy}>0\).
Figure 5: Vertical profiles of the time-independent turbulent dynamo coefficients (\(\alpha_{ij}^{0}\), \(\eta_{ij}\)) in the MRI simulation calculated using the IROS method. A low-pass Fourier filter with a cut-off frequency \(f_{c}\) removes the contribution from the small-scale dynamo. We used two values of \(f_{c}\): \(f_{c}=0.05\) and \(f_{c}=0.12\). The results are compared to the case when IROS is applied to the unfiltered data obtained from the DNS.
Out of the two off-diagonal terms of the diffusion tensor, \(\eta_{yx}\) is of particular interest. It has been suggested that a negative value of \(\eta_{yx}\) can generate poloidal fields by the shear-current effect (Squire & Bhattacharjee, 2016). However, we find \(\eta_{yx}\) to be always positive, ruling out the presence of a shear-current effect in our stratified MRI simulation.
### Time dependent dynamo coefficients
In the previous sections, we discussed the time-dependent nature of \(\alpha_{\rm mag}\). The effective \(\alpha\)-effect is expected to be determined by \(\alpha_{\rm mag}\), especially at larger scale heights where it is of larger amplitude. While the \(\alpha\)-tensor is expected to have a time-dependent part, the \(\eta\)-tensor is expected to have only a time-independent part, as it depends only on the turbulent intensity (see section 2). Fig. 7 shows the vertical profiles of the components of the time-dependent \(\alpha\)-tensor. We find that the \(\alpha_{ij}^{1}\)-s in the fiducial case (\(f_{c}=0.05\)) are relatively smaller than in the other two cases (\(f_{c}=0.12\) and unfiltered). It is also interesting to note that the coefficients \(\alpha_{xx}\) and \(\alpha_{yx}\), associated with \(\bar{B}_{x}\) in the mean field closure (equation 16), have a stronger time dependence than the coefficients (\(\alpha_{xy}\) and \(\alpha_{yy}\)) associated with \(\bar{B}_{y}\). Additionally, it is to be noted that the amplitudes of the time-dependent part of the \(\alpha\)-s are higher at larger scale heights. However, overall, the amplitudes of \(\alpha_{ij}^{1}\) are much smaller than those of \(\alpha_{ij}^{0}\), implying that the time-independent \(\alpha\)-s predominantly govern the dynamo action.
### Verification of method
To verify the reliability of the determined dynamo coefficients we reconstruct the EMFs using the calculated coefficients and run a 1D dynamo model.
#### 4.3.1 Reconstruction of EMFs
Fig. 8 shows butterfly diagrams of the EMFs (\(\bar{\cal E}_{x,f},~\bar{\cal E}_{y,f}\)) used to determine the turbulent dynamo coefficients and the EMFs (\(\bar{\cal E}_{x,r},~\bar{\cal E}_{y,r}\)) reconstructed using the calculated coefficients and mean fields for \(f_{c}=0.05\). Here it is to be noted that \(\bar{\cal E}_{x,f},~\bar{\cal E}_{y,f}\) are the smoothed EMFs obtained by applying the low-pass filter to the EMFs \(\bar{\cal E}_{x},~\bar{\cal E}_{y}\) from the DNS. We see a close match between the broad features, such as the dynamo cycle period, in the smoothed and reconstructed EMFs, indicating a good fit.
Further, we investigate the residual of the filtered and reconstructed EMFs, defined by
\[\delta\bar{\cal E}_{i}=\bar{\cal E}_{i,f}-\bar{\cal E}_{i,r},~~i\in\{x,y\}. \tag{25}\]
Fig. 9 shows the histograms of the normalised residuals \(\delta\bar{\cal E}_{x}/|\bar{\cal E}_{x}|\) and \(\delta\bar{\cal E}_{y}/|\bar{\cal E}_{y}|\) calculated within regions of different heights, namely between \(0-H\), \(H-2H\) and \(2H-3H\), for the \(f_{c}=0.05\) case. All the histograms peak close to zero. However, a Gaussian fit of the histograms shows that the mean of the distribution always deviates from zero. Additionally, a careful comparison of the histograms of \(\delta\bar{\cal E}_{x}/|\bar{\cal E}_{x}|\) and \(\delta\bar{\cal E}_{y}/|\bar{\cal E}_{y}|\) shows that the fit is better for \(\bar{\cal E}_{x}\) than for \(\bar{\cal E}_{y}\), especially at larger scale heights. A better-quality fit for \(\bar{\cal E}_{x}\) than for \(\bar{\cal E}_{y}\) is expected, as \(\bar{\cal E}_{x}\) obtained from the DNS shows a more regular, coherent space-time variation compared to \(\bar{\cal E}_{y}\).
#### 4.3.2 1D dynamo model
We additionally run a 1D dynamo model using the calculated dynamo coefficients and mean velocity field \(\bar{v}_{z}\). In particular we solve equation 13, or in component form
\[\frac{\partial\bar{B}_{x}}{\partial t} =\frac{\partial}{\partial z}\big{[}-(\bar{v}_{z}+\alpha_{yx}^{0} )\bar{B}_{x}-\alpha_{yy}^{0}\bar{B}_{y}+\eta_{yy}\frac{\partial\bar{B}_{x}}{ \partial z}-\eta_{yx}\frac{\partial\bar{B}_{y}}{\partial z}\big{]}\] \[\frac{\partial\bar{B}_{y}}{\partial t} =\frac{\partial}{\partial z}\big{[}-(\bar{v}_{z}-\alpha_{xy}^{0} )\bar{B}_{y}-\alpha_{xx}^{0}\bar{B}_{x}+\eta_{xx}\frac{\partial\bar{B}_{y}}{ \partial z}-\eta_{xy}\frac{\partial\bar{B}_{x}}{\partial z}\big{]}\] \[+q\Omega\,\bar{B}_{x}. \tag{26}\]
for \(\bar{B}_{x}\) and \(\bar{B}_{y}\), with \(\alpha_{ij}^{0}\) and \(\eta_{ij}\) obtained using the IROS method. We note that \(\bar{B}_{z}=0\) as a consequence of the zero net flux (ZNF) assumption in our model. The initial profiles of \(\bar{B}_{x}\) and \(\bar{B}_{y}\) are taken directly from the DNS at time \(t=100\Omega^{-1}\), roughly consistent with the beginning of the quasi-stationary phase in the DNS. The \(\bar{v}_{z}\) profile is taken as constant throughout the evolution and is also extracted from the direct simulations by averaging it over the quasi-stationary phase, over which it roughly stays constant. Additionally, for the profiles of the dynamo coefficients \(\alpha_{ij}^{0}(z)\) and \(\eta_{ij}(z)\), we first smooth them with a box filter and also cut them off above and below three scale heights before using them in the 1D dynamo model. We do this mainly to avoid instability at the boundaries, noting that these profiles vary sharply outside that range. Note that only the time-independent parts of the dynamo coefficients are used in the mean field equations, since the contributions of \(\alpha_{ij}^{1}\) are negligible compared to the time-independent part.
Furthermore, it must be noted that there is a contribution to the diffusion from the numerical grid. We make a rough estimate of this numerical diffusion as \(\eta_{0}=v^{\prime}_{\rm rms}\Delta x\), where we consider the smallest among the relevant velocities (\(v^{\prime}_{\rm rms},~c_{s},~v_{A}\)) in the problem.
Figure 6: Top panel: profiles of turbulent pumping (\(\gamma_{z}\)) and mean vertical outflow (\(\bar{v}_{z}\)). They act in the same direction, transporting mean fields vertically outward. Bottom panel: vertical profiles of the average fluctuating velocity (\(\overline{v^{\prime 2}}\)) and fluctuating Alfvén speed \(\overline{v^{\prime 2}_{A}}=\overline{B^{\prime 2}}/\bar{\rho}\). The minimal \(\tau\) approximation and the profiles of \(\overline{v^{\prime 2}}\), \(\overline{v^{\prime 2}_{A}}\) suggest the same sign of \(\gamma_{z}\) as calculated using IROS.
Figure 8: Left panels: Comparison between the x-component of the EMF \(\bar{\mathcal{E}}_{x,f}\) used to determine the turbulent dynamo coefficients, and the EMF \(\bar{\mathcal{E}}_{x,r}\) reconstructed using the turbulent dynamo coefficients. Right panels: Same as the left panels, but for the y-component of the EMF.
Figure 7: Vertical profiles of the time-dependent turbulent dynamo coefficients (\(\alpha^{1}_{ij}\)) in the MRI simulation calculated using the IROS method. A low-pass Fourier filter with a cut-off frequency \(f_{c}\) is used to remove the contribution from the small-scale dynamo. We used two values of \(f_{c}\): \(f_{c}=0.05\) and \(f_{c}=0.12\). The results are compared to the case when IROS is applied to the unfiltered data obtained from the DNS.
We therefore add a correction term \(\eta_{0}\approx 10^{-3}\) (with \(\Delta x=1/32\) and \(v^{\prime}_{\rm rms}=0.1\)) to the diagonal components of the diffusivity tensor \(\eta_{ij}\) to account for the contribution of the mesh to the magnetic field diffusion. This also helps to stabilize the 1D dynamo solution.
With this setup, we solve the system of equations with a finite difference method over a staggered grid of resolution \(\Delta z=1/32\), the same as the \(z\) resolution of the DNS. The outcome of this analysis is presented in Fig. 10, where the top and bottom panels show the butterfly diagrams of \(\bar{B}_{x}\) and \(\bar{B}_{y}\) obtained using the 1D dynamo model, respectively. We find that both the x- and y-components of the mean fields flip sign regularly with a cycle of \(\approx 9\) orbital periods, similar to what is found in the DNS (see Fig. 2). Thus, applying the calculated coefficients to the 1D dynamo model successfully reproduces the broad features of the spatio-temporal variations of the mean magnetic fields.
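A stripped-down version of such a 1D solver is sketched below. It uses centred differences via `numpy.gradient` and explicit Euler time stepping rather than the staggered-grid scheme of the actual model, and the coefficient profiles are assumed to be supplied as arrays; it is meant only to make the structure of equation 26 explicit.

```python
import numpy as np

def step_1d_dynamo(Bx, By, z, c, vz, dt, q=1.5, Omega=1.0, eta0=1e-3):
    """One explicit Euler step of the 1D mean-field model, equation 26.
    c is a dict of vertical profiles: a_xx, a_xy, a_yx, a_yy (the alpha^0_ij)
    and e_xx, e_xy, e_yx, e_yy (the eta_ij); eta0 mimics the grid diffusion."""
    d = lambda f: np.gradient(f, z)
    Fx = (-(vz + c["a_yx"]) * Bx - c["a_yy"] * By
          + (c["e_yy"] + eta0) * d(Bx) - c["e_yx"] * d(By))
    Fy = (-(vz - c["a_xy"]) * By - c["a_xx"] * Bx
          + (c["e_xx"] + eta0) * d(By) - c["e_xy"] * d(Bx))
    Bx_new = Bx + dt * d(Fx)
    By_new = By + dt * (d(Fy) + q * Omega * Bx)
    return Bx_new, By_new
```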
### Mean magnetic energy equations
It is challenging to calculate dynamo coefficients uniquely in the presence of both shear and rotation (Brandenburg et al., 2008), as there are many unknowns (see also the discussion in section 5.4). Therefore, it is worth seeing how the different terms involving turbulent dynamo coefficients contribute to the mean magnetic energy equation, to make physical sense of them. The mean magnetic energy evolution equation is obtained by taking the dot product of the mean-field equation (equation 13) with the mean magnetic field \(\bar{\mathbf{B}}\) and is given by
\[\frac{\partial}{\partial t}\left(\frac{1}{2}\bar{B}_{x}^{2}\right)=\mathcal{T}_{B_{x},v_{z}}+\mathcal{T}_{\alpha_{yx}}+\mathcal{T}_{\alpha_{yy}}+\mathcal{T}_{\eta_{yx}}+\mathcal{T}_{\eta_{yy}}, \tag{27}\]
\[\frac{\partial}{\partial t}\left(\frac{1}{2}\bar{B}_{y}^{2}\right)=\mathcal{T}_{B_{y},v_{z}}+\mathcal{T}_{\alpha_{xy}}+\mathcal{T}_{\alpha_{xx}}+\mathcal{T}_{\eta_{xy}}+\mathcal{T}_{\eta_{xx}}+\mathcal{T}_{S}, \tag{28}\]
where
\[\begin{split}\mathcal{T}_{B_{x},v_{z}}&=-\frac{1}{2}\bar{B}_{x}\,\frac{\partial}{\partial z}\left(\bar{v}_{z}\bar{B}_{x}\right),\\ \mathcal{T}_{\alpha_{yx}}&=-\frac{1}{2}\bar{B}_{x}\,\frac{\partial}{\partial z}\left(\alpha_{yx}\bar{B}_{x}\right),\\ \mathcal{T}_{\alpha_{yy}}&=-\frac{1}{2}\bar{B}_{x}\,\frac{\partial}{\partial z}\left(\alpha_{yy}\bar{B}_{y}\right),\\ \mathcal{T}_{\eta_{yx}}&=-\frac{1}{2}\bar{B}_{x}\,\frac{\partial}{\partial z}\left(\eta_{yx}\frac{\partial}{\partial z}\bar{B}_{y}\right),\\ \mathcal{T}_{\eta_{yy}}&=\frac{1}{2}\bar{B}_{x}\,\frac{\partial}{\partial z}\left(\eta_{yy}\frac{\partial}{\partial z}\bar{B}_{x}\right),\\ \mathcal{T}_{B_{y},v_{z}}&=-\frac{1}{2}\bar{B}_{y}\,\frac{\partial}{\partial z}\left(\bar{v}_{z}\bar{B}_{y}\right),\\ \mathcal{T}_{\alpha_{xy}}&=\frac{1}{2}\bar{B}_{y}\,\frac{\partial}{\partial z}\left(\alpha_{xy}\bar{B}_{y}\right),\\ \mathcal{T}_{\alpha_{xx}}&=-\frac{1}{2}\bar{B}_{y}\,\frac{\partial}{\partial z}\left(\alpha_{xx}\bar{B}_{x}\right),\\ \mathcal{T}_{\eta_{xy}}&=-\frac{1}{2}\bar{B}_{y}\,\frac{\partial}{\partial z}\left(\eta_{xy}\frac{\partial}{\partial z}\bar{B}_{x}\right),\\ \mathcal{T}_{\eta_{xx}}&=\frac{1}{2}\bar{B}_{y}\,\frac{\partial}{\partial z}\left(\eta_{xx}\frac{\partial}{\partial z}\bar{B}_{y}\right),\\ \mathcal{T}_{S}&=\frac{1}{2}q\Omega\,\bar{B}_{x}\bar{B}_{y}.\end{split} \tag{29}\]
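Given the vertical profiles of the mean fields and coefficients at one instant, the terms entering equation 27 can be evaluated directly, e.g. (an illustrative sketch; the y-equation terms follow the same pattern):

```python
import numpy as np

def energy_terms_x(Bx, By, vz, a_yx, a_yy, e_yx, e_yy, z):
    """Terms of equation 27 (contributions to d/dt(Bx^2/2)) as vertical profiles,
    following the sign conventions of equation 29."""
    d = lambda f: np.gradient(f, z)
    return {
        "T_Bx_vz": -0.5 * Bx * d(vz * Bx),
        "T_a_yx":  -0.5 * Bx * d(a_yx * Bx),
        "T_a_yy":  -0.5 * Bx * d(a_yy * By),
        "T_e_yx":  -0.5 * Bx * d(e_yx * d(By)),
        "T_e_yy":   0.5 * Bx * d(e_yy * d(Bx)),
    }
```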
Fig. 11 shows the space-time plots of the different terms involving the mean flow (\(\bar{v}_{z}\)) and the turbulent dynamo coefficients (\(\alpha_{ij},~\eta_{ij}\)) in the mean magnetic energy evolution equations. The top six panels in Fig. 11 describe the terms in the x-component of the magnetic energy equation (equation 27), while the bottom seven panels illustrate the terms in the y-component of the magnetic energy equation (equation 28) at different heights and times.
Fig. 11 provides a fairly complicated picture of the generation-diffusion scenario of the mean magnetic fields.
Figure 10: Butterfly diagrams of the mean magnetic fields \(\bar{B}_{x}\) and \(\bar{B}_{y}\) obtained by running the 1D dynamo model. Both \(\bar{B}_{x}\) and \(\bar{B}_{y}\) flip sign regularly with a cycle of \(\approx 9\) orbital periods, similar to that found in the shearing box simulations (see Fig. 2).
Figure 9: Histograms of the residual EMFs, \(\delta\bar{\mathcal{E}}_{i}=\bar{\mathcal{E}}_{i,f}-\bar{\mathcal{E}}_{i,r}\), calculated within regions of different heights for the \(f_{c}=0.05\) case. We normalise \(\delta\bar{\mathcal{E}}_{i}\) with the absolute values of the corresponding EMFs at the respective points. The red dashed line shows a normal distribution fitted to the histogram.
Broadly speaking, the poloidal field (\(\bar{B}_{x}\)) is predominantly generated by an \(\alpha\)-effect (the term \(\mathcal{T}_{\alpha_{yy}}\) in Fig. 11). However, there is a significant contribution from \(\alpha_{yx}\) (the term \(\mathcal{T}_{\alpha_{yx}}\) in Fig. 11) to the generation of \(\bar{B}_{x}\) at larger scale heights. Toroidal field generation is mainly due to the presence of shear, here differential rotation (\(\mathcal{T}_{S}\) in Fig. 11), which converts poloidal fields into toroidal fields. However, it is worth noting that there is a small contribution from \(\alpha_{xx}\), generating a toroidal field out of the poloidal field by an \(\alpha\)-effect (as in an \(\alpha^{2}\)-\(\Omega\) dynamo). The dominance of the \(\alpha\)-effect in generating the poloidal field and that of the \(\Omega\)-effect (shear) in generating the toroidal field imply the presence of an \(\alpha-\Omega\) type dynamo in the MRI-driven geometrically thin accretion disc. This is similar to what has been found in the study of the dynamo in an MRI-driven geometrically thick accretion disc (Dhang et al., 2020), implying the universal action of an \(\alpha-\Omega\) dynamo in MRI-driven accretion flows.
Generally, it is expected that the diagonal components of the diffusion tensor, \(\eta_{yy}\) and \(\eta_{xx}\), are primarily responsible for the diffusion of \(\bar{B}_{x}\) and \(\bar{B}_{y}\) respectively. However, our simulation shows that winds carry mean fields out of the computational box and act as the sink in the mean magnetic energy evolution equation, rather than the \(\eta\)-s.
Figure 11: Contributions of the different terms involving the mean flow (\(\bar{v}_{z}\)) and turbulent dynamo coefficients (\(\alpha_{ij},\ \eta_{ij}\)) to the x- (top six panels) and y- (bottom seven panels) components of the mean magnetic energy evolution equation (equations 27 and 28). Poloidal field (\(\bar{B}_{x}\)) generation is primarily attributed to an \(\alpha\)-effect (the term \(\mathcal{T}_{\alpha_{yy}}\)), while shear (the term \(\mathcal{T}_{S}\)) dominates the toroidal field generation, thus implying an \(\alpha-\Omega\) type of dynamo. Winds carry mean fields out of the computational box and contribute largely as the sink term in the mean magnetic energy evolution equation.
## 5 Discussion
### Periodicities in the dynamo cycle
Investigations of the spatio-temporal variation of different variables in our stratified shearing box simulation show a diverse range of periodicities. We observed that the mean magnetic fields and EMFs oscillate with a primary frequency \(f_{\rm dyn}=0.017\) (equivalent to 9 orbital periods), similar to what was found in earlier studies (Brandenburg et al., 1995; Gammie, 1996; Davis et al., 2010; Gressel, 2010; Ryan et al., 2017). The primary frequency is determined by the effective dispersion relation of the \(\alpha\)-\(\Omega\) dynamo (Brandenburg & Subramanian, 2005; Gressel & Pessah, 2015), with the \(\alpha\) dominated by the time-independent (DC) value of \(\alpha_{yy}\). The plausible origin of this DC value of \(\alpha_{yy}\) is discussed below in section 5.3.
Additionally, we observed the presence of higher harmonics at \(3f_{\rm dyn}\), which went unnoticed in earlier MRI simulations (see section 3.5). Unlike the mean fields and EMFs, the current helicity shows periodicities at the frequencies \(2f_{\rm dyn}\) and \(4f_{\rm dyn}\). The presence of these frequencies in the mean EMFs, mean fields, and current helicities can be understood better if we follow the magnetic helicity density (\(h_{b}\)) evolution equation (e.g. see Blackman & Field (2000); Subramanian & Brandenburg (2006); Kleeorin & Rogachevskii (2022); Gopalakrishnan & Subramanian (2023)),
\[\frac{1}{2}\frac{\partial h_{b}}{\partial t}=-\bar{\mathcal{E}}\cdot\bar{\mathbf{B}}-\eta_{b}\,\mathcal{C}_{\rm hel}-\frac{1}{2}\nabla\cdot\mathcal{F}_{\mathcal{H}}, \tag{30}\]
where \(\mathcal{F}_{\mathcal{H}}\) is the helicity flux; the component of the EMF along the mean magnetic field generates mean magnetic and associated current helicities. Now consider the effect of the DC term in \(\alpha_{yy}\), which leads to a source \(\alpha_{yy}^{0}\bar{B}_{y}^{2}\) in equation 30. This leads to the generation of magnetic and current helicities at a primary frequency twice that of \(\bar{B}_{y}\), i.e. \(2f_{\rm dyn}\). This current helicity can now add to the \(\alpha\)-effect, which, combined with the mean field in the dynamo equation 13, can lead to secondary EMF and mean field components oscillating at \(3f_{\rm dyn}\), which in turn source helicity components at \(4f_{\rm dyn}\), and so on. These primary and secondary frequency components, limited by noise, are indeed seen in the analysis of our simulations.
### Dynamo coefficients, comparison with earlier studies
Earlier studies calculating turbulent dynamo coefficients from simulation data and the mean field closure (equation 15) in local (Brandenburg et al., 1995; Brandenburg, 2008; Gressel, 2010; Gressel & Pessah, 2015; Shi et al., 2016) and global (Flock et al., 2012; Hogg & Reynolds, 2018; Dhang & Sharma, 2019; Dhang et al., 2020) simulations of MRI-driven accretion discs used different methods. Earlier local (Brandenburg et al., 1995; Davis et al., 2010) and most of the global (Flock et al., 2012; Hogg & Reynolds, 2018) studies calculated only the 'coefficient of interest' \(\alpha_{\phi\phi}\) (\(\alpha\)-effect), neglecting the contributions of the other terms in the mean-field closure. Many of the local studies (Brandenburg, 2008; Gressel, 2010; Gressel & Pessah, 2015) used the linear Test Field (TF) method during run-time to calculate all the coefficients. A few local (e.g. Shi et al., 2016; Zier & Springel, 2022; Wissing et al., 2022; the current work) and global (Dhang et al., 2020) studies used direct methods to quantify the dynamo coefficients. However, it is important to note that while several authors used a linear regression method assuming a few constraints on the diffusion coefficients (namely, \(\eta_{xx}=\eta_{yy}\)), we use the IROS method without any constraints on the coefficients.
Like most of the earlier local and global studies, we find a negative \(\alpha_{yy}\) close to the mid-plane in the upper half-plane. However, direct methods seem to capture the negative sign better than the TF method, as can be seen by comparing the \(\alpha_{yy}\) profiles in our work (also in Shi et al. (2016)) and in Gressel (2010). Additionally, we find stronger turbulent pumping (compared to that from the TF method), transporting large-scale magnetic fields from the disc to the corona, similar to that found in global MRI-dynamo studies (Dhang et al., 2020).
Additionally, for the first time, we have calculated the time-dependent part of \(\alpha_{ij}\), inspired by the periodic behaviour of \(\alpha_{\rm mag}\). However, we found that the amplitudes of the time-dependent part of the \(\alpha\)-s (\(\alpha_{ij}^{1}\)) are much smaller than those of the time-independent \(\alpha\)-s (\(\alpha_{ij}^{0}\)). Therefore, we suspect that the time-independent \(\alpha\)-s predominantly govern the dynamo action.
The diffusivity coefficients \(\eta_{ij}\) in our work are found to be quite different from those in the earlier local studies (Brandenburg, 2008; Gressel, 2010; Gressel & Pessah, 2015; Shi et al., 2016), with \(\eta_{xx}\neq\eta_{yy}\) and \(\eta_{yy}\approx 0\). Several earlier studies (Shi et al., 2016; Zier & Springel, 2022) found \(\eta_{yx}<0\) in their unstratified and stratified MRI simulations after imposing a few constraints on the coefficients (e.g. \(\eta_{yy}=\eta_{xx}\), \(\eta_{xy}=0\), etc.), and they proposed the shear-current effect (Raedler, 1980; Rogachevskii & Kleeorin, 2004; Squire & Bhattacharjee, 2016) as generating poloidal fields in addition to the \(\alpha\)-effect. Recently, Mondal & Bhat (2023) carried out statistical simulations of MRI in an unstratified ZNF shearing box and found \(\eta_{yx}<0\), proposing the 'rotation-shear-current effect' and the 'rotation-shear-vorticity effect' as responsible for generating the radial and vertical magnetic fields, respectively. However, like some other studies (TF: Brandenburg, 2008; Gressel, 2010; Gressel & Pessah, 2015; SPH: Wissing et al., 2022), we find \(\eta_{yx}\geq 0\), unless we impose a constraint on \(\eta_{yy}\) being a positive fraction of \(\eta_{xx}\). If we assume \(\eta_{yy}=f_{\eta}\,\eta_{xx}\) while calculating the coefficients, we find that the negativity of \(\eta_{yx}\) is an increasing function of the factor \(f_{\eta}\) (see Appendix A). However, we find that the quality of the fit is compromised slightly and the histograms of the residuals of the filtered (input) and reconstructed EMFs get broader (with higher standard deviation) with the assumption \(\eta_{yy}=f_{\eta}\,\eta_{xx}\). We refer the reader to Appendix A for details.
### Helicity flux and the DC \(\alpha-\)effect
In order to understand the DC value (time-independent) of the \(\alpha\)-effect, we take the time average of equation 30. The term \(\partial h_{b}/\partial t\) averages to zero, and one gets the well-known constraint (Blackman, 2016; Shukurov and Subramanian, 2021)
\[\langle\bar{\mathcal{E}}\cdot\bar{\mathbf{B}}\rangle=-\eta_{b}\langle\mathcal{C}_{\rm hel}\rangle-\frac{1}{2}\nabla\cdot\langle\mathcal{F}_{\mathcal{H}}\rangle, \tag{31}\]
where \(\langle\rangle\) indicates a time average. This shows that in the absence of helicity fluxes, the average EMF parallel to the mean field, responsible for the generation of the poloidal from the toroidal mean field, is resistively (or catastrophically) quenched. Of the several helicity fluxes discussed in the literature, the generative helicity fluxes as envisaged in Vishniac (2015) and in Gopalakrishnan & Subramanian (2023) can source the DC component of \(\bar{\mathcal{E}}\cdot\bar{\mathbf{B}}\) without the pre-existence of any mean field or initial helicities. Using equation (17) of Gopalakrishnan & Subramanian (2023), with mean vorticity \(\Omega(2-q)\hat{z}\) and noting that \(\alpha_{yy}\bar{B}_{y}^{2}\) dominates \(\bar{\mathcal{E}}\cdot\bar{\mathbf{B}}\), we estimate
\[\begin{split}\left(\alpha_{yy}^{0}\right)_{h_{c}}\approx-\frac{ \Omega\tau^{2}}{4\langle\vec{B}_{y}^{2}\rangle}\Bigg{[}\left(C_{1}v_{A}^{ \prime 2}+C_{3}v^{\prime 2}+C_{4}\frac{\lambda^{2}}{\tau^{2}}\right)\frac{\partial b^{ \prime 2}}{\partial z}\\ +C_{2}b^{\prime 2}\frac{\partial v^{\prime 2}}{\partial z}\Bigg{]},\end{split} \tag{32}\]
where \((C_{1},C_{2},C_{3},C_{4})=(7/45,-203/5400,403/8100,-1/6)\) and we have taken \(q=3/2\). Adopting estimates for the correlation
time \(\tau\sim\Omega^{-1}\) and correlation length \(\lambda\sim H/2\), and using the vertical profiles of various physical variables from the simulation, we calculate the vertical profile of \(\left(\alpha_{yy}^{0}\right)_{h_{c}}\) due to the generative helicity flux. This is shown as a solid line in Fig. 12; for comparison, we also show \(10\,\alpha_{yy}^{0}\) (for the \(f_{c}=0.05\) case) from the IROS inversion. It is encouraging that the \((\alpha_{yy}^{0})_{h_{c}}\) predicted by the generative helicity flux is negative in the upper half-plane and has a qualitatively similar vertical profile to that determined from the IROS inversion. The amplitude, however, is larger, which perhaps indicates the importance of the neglected diffusive and advective helicity fluxes, which act as sink terms in equation 31.
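The estimate of equation 32 can be evaluated from the simulation profiles as in the sketch below (with the illustrative choices \(\tau=\Omega^{-1}\) and \(\lambda=H/2\) as stated above).

```python
import numpy as np

def alpha_yy_from_helicity_flux(va2, v2, b2, By2, z, tau=1.0, lam=0.5, Omega=1.0):
    """(alpha^0_yy)_hc from the generative helicity flux, equation 32, using vertical
    profiles of <v_A'^2>, <v'^2>, <b'^2> and <B_y^2>."""
    C1, C2, C3, C4 = 7/45, -203/5400, 403/8100, -1/6
    d = lambda f: np.gradient(f, z)
    bracket = (C1 * va2 + C3 * v2 + C4 * lam**2 / tau**2) * d(b2) + C2 * b2 * d(v2)
    return -(Omega * tau**2) / (4.0 * By2) * bracket
```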
### Vanishing \(\eta_{yy}\), missing information?
In section 4.4 we pointed out that the wind carries away the mean magnetic field and acts as the effective sink of its energy. However, the poloidal field is also expected to be diffused by \(\eta_{yy}\), and a positive \(\eta_{yy}\) is required for diffusion. Instead, we find a vanishingly small (in some regions even negative) \(\eta_{yy}\), which leads us to two possible explanations: either it is impossible to recover \(\eta_{yy}\) with the direct methods, or there is an incompleteness in the closure we used to retrieve the coefficients. Here we discuss both possibilities.
It is clear from equation 16 that the turbulent diffusion coefficients are associated with the currents, which are calculated by taking the \(z\)-derivative of the mean magnetic field components. Taking the derivative makes the currents noisy, especially \(\bar{J}_{y}\), as it involves a derivative of \(\bar{B}_{x}\), which is fairly incoherent over space and time, as can be seen from the butterfly diagram of \(\bar{B}_{x}\) (Fig. 2). Additionally, note that the \(y\)-component of the EMF is also noisy. Thus, the coefficients associated with \(\bar{J}_{y}\) and \(\bar{\mathcal{E}}_{y}\) turn out to be error-prone and difficult to calculate. This pattern has also been noticed by earlier works (Squire & Bhattacharjee, 2016) which used direct methods other than the IROS method used in the current work.
In general, mean EMF can be expressed in terms of symmetric, anti-symmetric tensors and mean fields as follows,
\[\bar{\mathcal{E}}=\bar{\alpha}\circ\bar{\mathbf{B}}+\bar{\gamma}\times\bar{ \mathbf{B}}-\bar{\eta}\circ\left(\nabla\times\bar{\mathbf{B}}\right)-\bar{ \delta}\times\left(\nabla\times\bar{\mathbf{B}}\right)-\bar{\kappa}\circ \left(\nabla\bar{\mathbf{B}}\right)^{\mathrm{sym}}, \tag{33}\]
where we neglect the higher than first-order spatial derivatives and time derivatives of mean fields (Raedler, 1980; Brandenburg & Subramanian, 2005; Schrinner et al., 2007; Simard et al., 2016). If we define mean fields and EMFs as the \(x-y\)-averaged quantities, then mean field closure reduces to equation 15. The symmetrised coefficients in equation 33 and non-symmetrised coefficients in equation 15 are related as
\[\tilde{\alpha}_{xx} =\alpha_{xx}, \tag{34}\] \[\tilde{\alpha}_{yy} =\alpha_{yy},\] \[\tilde{\gamma}_{z} =\frac{1}{2}\left(\alpha_{yx}-\alpha_{xy}\right),\] \[\tilde{\eta}_{xx}+\tilde{\kappa}_{xyz} =\eta_{xx},\] \[\tilde{\eta}_{yy}-\tilde{\kappa}_{yxz} =\eta_{yy},\] \[\tilde{\delta}_{z} =\frac{1}{2}\left(\eta_{xy}-\eta_{yx}\right)+\frac{1}{2}\left( \tilde{\kappa}_{xxx}+\tilde{\kappa}_{yyz}\right).\]
Therefore, it is evident from equation 34 that it is impossible to decouple a few of the coefficients (those in the last three identities), as there are more unknown coefficients than independent variables (\(\bar{\mathbf{B}},\ \bar{\mathcal{E}}\)), and the actual diffusion coefficients (\(\tilde{\eta}_{ij}\)) might be different from the calculated ones (\(\eta_{ij}\)).
## 6 Summary
We carried out stratified zero net flux (ZNF) simulations of MRI and characterised the MRI-driven dynamo using the language of mean field dynamo theory. The turbulent dynamo coefficients in the mean-field closure are calculated using the mean magnetic fields and EMFs obtained from the shearing box simulation. For this purpose, we used a cleaning (or inversion) algorithm, namely IROS, adapted to extract the dynamo coefficients. We verified the reliability of extracted coefficients by reconstructing the EMFs and reproducing the cyclic pattern in mean magnetic fields by running a 1D dynamo model. Here we list the key findings of our work:
* We find that the mean fields and EMFs oscillate with a primary frequency \(f_{\mathrm{dyn}}=0.017\) (\(\approx 9\) orbital periods). Additionally, they have higher harmonics at \(3f_{\mathrm{dyn}}\). The current helicity \(\alpha_{\mathrm{mag}}\) has two frequencies, \(2f_{\mathrm{dyn}}\) and \(4f_{\mathrm{dyn}}\). These frequencies can be understood from the effective dispersion relation of the mean-field dynamo and the helicity density evolution equation, respectively (for details, see section 5.1).
* Our unbiased inversion and subsequent analysis show that an \(\alpha\)-effect (\(\alpha_{yy}\)) is predominantly responsible for the generation of the poloidal field (here \(\bar{B}_{x}\)) from the toroidal field (\(\bar{B}_{y}\)). The differential rotation creates the toroidal field from the poloidal field, completing the cycle; this indicates that an \(\alpha-\Omega\)-type dynamo operates in the MRI-driven accretion disc.
* We find encouraging evidence that the effective DC \(\alpha-\)effect can be due to a generative helicity flux (section 5.3).
* We find that a strong wind (\(\bar{v}_{z}\)) and turbulent pumping (\(\gamma_{z}\)) carry mean fields away from the mid-plane. Interestingly, they act as the principal sink terms in the mean magnetic energy evolution equation instead of the turbulent diffusivity terms.
* The unbiased inversion finds an almost vanishing \(\eta_{yy}\), while \(\eta_{xx}\) and \(\eta_{yx}\) are positive. However, \(\eta_{yx}\) and \(\eta_{yy}\) are strongly correlated; if one imposes an arbitrary prior that \(\eta_{yy}=f_{\eta}\,\eta_{xx}\), then one finds an increasingly negative \(\eta_{yx}\) for increasing \(f_{\eta}\).
* We point out that defining mean fields by planar averaging can necessarily introduce degeneracy in determining all the turbulent dynamo coefficients uniquely. This may have important consequences for the physical interpretation of the dynamo coefficients.
Figure 12: Vertical profiles of \(\alpha_{yy}^{0}\) (for the \(f_{c}=0.05\) case) obtained from the IROS inversion and of \((\alpha_{yy}^{0})_{h_{c}}\), expected from the helicity flux.
## Acknowledgements
We thank Prateek Sharma, Oliver Gressel and Dipankar Bhattacharya for valuable discussions on numerical set-up, dynamo and IROS. All the simulations are run using the Computing facility at IUCAA.
## Data Availability
The data underlying this article will be shared on reasonable request to the corresponding author.
## Appendix A Dynamo coefficients with constraints on \(\eta_{yy}\)
Diffusivities are challenging to calculate in any direct method (SVD, linear regression, IROS), as they involve the spatial derivatives of the mean fields. In particular, we find that \(\eta_{yy}\) and \(\eta_{xy}\) are noisy, as they are related to spatial derivatives of \(\bar{B}_{x}\), which is itself quite noisy (e.g. see the butterfly diagram in Fig. 2). Some earlier studies (Squire & Bhattacharjee, 2016; Shi et al., 2016) put constraints on the calculated \(\eta\)-s to try to alleviate this issue. For example, Shi et al. (2016) imposed the constraint \(\eta_{yy}=\eta_{xx}\) in their shearing box simulation of MRI and found a negative \(\eta_{yx}\), implying the presence of a shear-current effect.
We have, on the other hand, performed an unbiased inversion, as it is not clear whether such constraints are actually obeyed by MRI-driven turbulence. Nevertheless, for completeness, we explore here a more general constraint on \(\eta_{yy}\), given by \(\eta_{yy}=f_{\eta}\,\eta_{xx}\), and recalculate only those coefficients that appear in the mean-field closure for \(\tilde{\mathcal{E}}_{y}\), as those related to \(\tilde{\mathcal{E}}_{x}\) remain unaffected. Fig. 11 shows the vertical profiles of \(\alpha_{yx}^{0}\), \(\alpha_{yy}^{0}\), \(\eta_{yx}\) and \(\eta_{yy}\) for different values of \(f_{\eta}\) and for \(f_{c}=0.05\). The coefficients \(\alpha_{ij}\) remain almost unaffected, while the \(\eta_{ij}\) change significantly with \(f_{\eta}\). There is a clear trend: the more positive \(\eta_{yy}\) (i.e. the larger the imposed \(f_{\eta}\)), the more negative \(\eta_{yx}\) becomes. This implies a clear correlation between \(\eta_{yx}\) and \(\eta_{yy}\).
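To make the constrained inversion concrete, the following is a minimal sketch (not the IROS algorithm itself) of how the prior \(\eta_{yy}=f_{\eta}\,\eta_{xx}\) can be imposed in an ordinary least-squares fit of the closure for \(\tilde{\mathcal{E}}_{y}\); the array names and the exact form of the closure terms are assumptions for illustration.

```python
# Minimal sketch: impose eta_yy = f_eta * eta_xx in a least-squares fit of the
# mean-field closure for EMF_y.  Assumed (hypothetical) inputs, sampled in time
# at one height z:
#   emf_y, bx, by, jx, jy : 1-D numpy arrays of the planar-averaged EMF_y,
#                           B_x, B_y and the terms multiplied by eta;
#   eta_xx : scalar obtained beforehand from the unconstrained EMF_x fit.
import numpy as np

def fit_emf_y_constrained(emf_y, bx, by, jx, jy, eta_xx, f_eta):
    """Return (alpha_yx, alpha_yy, eta_yx, eta_yy) under eta_yy = f_eta*eta_xx.

    Assumed closure: EMF_y = alpha_yx*Bx + alpha_yy*By - eta_yx*Jx - eta_yy*Jy.
    The constrained term is moved to the left-hand side so only three
    coefficients remain free in the least-squares problem.
    """
    eta_yy = f_eta * eta_xx
    target = emf_y + eta_yy * jy             # absorb the fixed eta_yy term
    design = np.column_stack([bx, by, -jx])  # free coefficients multiply these
    coeffs, *_ = np.linalg.lstsq(design, target, rcond=None)
    alpha_yx, alpha_yy, eta_yx = coeffs
    return alpha_yx, alpha_yy, eta_yx, eta_yy

# Scanning f_eta, as in Fig. 11, exposes the eta_yx / eta_yy degeneracy:
# for f_eta in (0.0, 0.5, 1.0):
#     print(f_eta, fit_emf_y_constrained(emf_y, bx, by, jx, jy, eta_xx, f_eta))
```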
Further, we investigate the histograms of the residual EMFs \(\delta\tilde{\mathcal{E}}_{i}\) to check the goodness of the fits. The \(x\)-components of the residual EMFs remain unaffected, as expected, while the histograms for \(\delta\tilde{\mathcal{E}}_{y}\) become slightly broader as \(f_{\eta}\) increases. This implies that imposing constraints on \(\eta_{yy}\) compromises the quality of the fits, though not greatly, because the \(\alpha_{ij}\), not the \(\eta_{ij}\), are the dominant contributors to the EMF fits.
|
2306.09200 | ChessGPT: Bridging Policy Learning and Language Modeling | When solving decision-making tasks, humans typically depend on information
from two key sources: (1) Historical policy data, which provides interaction
replay from the environment, and (2) Analytical insights in natural language
form, exposing the invaluable thought process or strategic considerations.
Despite this, the majority of preceding research focuses on only one source:
they either use historical replay exclusively to directly learn policy or value
functions, or engaged in language model training utilizing mere language
corpus. In this paper, we argue that a powerful autonomous agent should cover
both sources. Thus, we propose ChessGPT, a GPT model bridging policy learning
and language modeling by integrating data from these two sources in Chess
games. Specifically, we build a large-scale game and language dataset related
to chess. Leveraging the dataset, we showcase two model examples ChessCLIP and
ChessGPT, integrating policy learning and language modeling. Finally, we
propose a full evaluation framework for evaluating language model's chess
ability. Experimental results validate our model and dataset's effectiveness.
We open source our code, model, and dataset at
https://github.com/waterhorse1/ChessGPT. | Xidong Feng, Yicheng Luo, Ziyan Wang, Hongrui Tang, Mengyue Yang, Kun Shao, David Mguni, Yali Du, Jun Wang | 2023-06-15T15:35:31Z | http://arxiv.org/abs/2306.09200v2 | # ChessGPT: Bridging Policy Learning and Language Modeling
###### Abstract
When solving decision-making tasks, humans typically depend on information from two key sources: (1) Historical policy data, which provides interaction replay from the environment, and (2) Analytical insights in natural language form, exposing the invaluable thought process or strategic considerations. Despite this, the majority of preceding research focuses on only one source: they either use historical replay exclusively to directly learn policy or value functions, or engage in language model training using only a language corpus. In this paper, we argue that a powerful autonomous agent should cover both sources. Thus, we propose **ChessGPT**, a GPT model bridging policy learning and language modeling by integrating data from these two sources in Chess games. Specifically, we build a large-scale game and language dataset related to chess. Leveraging the dataset, we showcase two model examples **ChessCLIP** and **ChessGPT**, integrating policy learning and language modeling. Finally, we propose a full evaluation framework for evaluating a language model's chess ability. Experimental results validate the effectiveness of our models and dataset. We open source our code, model, and dataset at [https://github.com/waterhorse1/ChessGPT](https://github.com/waterhorse1/ChessGPT).
## 1 Introduction
In recent years, large language models (LLMs) based on transformer architectures [52] have showcased remarkable capabilities far exceeding their original design as simple language modeling tools. This was especially notable following the advent of ChatGPT [34]. Stemming from causal language modeling, a plethora of recent studies have concentrated on developing efficient and powerful LLM base models [15; 6; 50; 5; 48], supervised fine-tuned models [47; 12; 3; 22] and models leveraging Reinforcement Learning from Human Feedback (RLHF) [21; 56; 46].
Concurrently, there has been a growing trend in employing Large Language Models (LLMs) as foundational elements for decision-making systems. These systems either depend on the expressive capacity of transformer architectures to execute imitation learning, thereby modeling complex behaviors [11; 20; 4], or they harness the common knowledge embedded within LLMs to facilitate the policy learning process [54; 16; 14; 2]. However, the dynamic interplay between policy learning and language modeling has been scarcely addressed. Human decision-making typically involves both: we draw upon historical policy interaction to refine our policy and also employ our thoughts for strategic consideration, mostly in natural language form. Based on this logic, we argue that the study of natural language understanding and policy learning should not be isolated. To advance exploration in this realm, we choose one classic game: **Chess**, as a practical testbed for initial steps in this direction.
Chess, one of the oldest and most universally played board games, presents an ideal testbed due to the wealth of both policy data and language data. In terms of policy data, it is reported that over ten million games are played daily on Chess.com, the most frequented online chess platform. Regarding language data, a myriad of chess-related knowledge is readily accessible online in various forms and mediums, ranging from game analysis, books, puzzles, and news, to online tutorials, wikis, and even YouTube videos. Building on these resources, we have constructed a comprehensive pipeline dedicated to research on chess policy learning and language modeling. Specifically, we provide:
**Datasets** We curated a large-scale game and language dataset for chess. Our dataset comprises numerous game data from online chess databases recording how humans and chess engines game replay. It also includes a language dataset that encapsulates chess knowledge in a natural language format, as well as a mixed game-language dataset, which offers the most straightforward interrelated data including articles, discussion, or commentary (language) on specific chess game replay (game).
**Models** We introduce two models, ChessCLIP and ChessGPT, leveraging our datasets. These models showcase the potential for AI to learn from a mixture of replay data and language knowledge.
**Evaluations** We design an extensive set of tasks to evaluate our models' abilities from three distinct perspectives: modeling ability, to gauge the model's proficiency in tracking game state; value judgement ability, measuring the model's capacity for value assessment and chess knowledge; and policy ability, to test the model's capability in decision-making. Our experimental results confirm that our models consistently outperform other LLM baselines in all evaluation tasks.
Our work primarily pursues two objectives. Firstly, we construct the whole pipeline on chess as an initial step in promoting research on the interaction/interplay between policy learning and language learning, as well as on the potential of language as a tool for action and understanding. Secondly, our efforts have yielded valuable by-products: the ChessGPT/CLIP models. These models possess practical applicability - they could potentially serve as effective Chess AI assistants for humans.
## 2 Related work
The pursuit of creating artificial intelligence capable of playing chess can be traced back to the very beginning of the history of computer science [51]. Chess engines today achieve superhuman-level performance by utilizing human knowledge [9] or self-play [42]. Recently, there has been increasing interest in improving the interpretability [29] of these systems and their alignment with human behavior [30], beyond strong performance alone. A chess engine that aligns with human behavior may unlock many exciting opportunities; for example, it can be used as a personalized tutor for chess beginners [30]. Some research efforts have also concentrated on employing LLMs to learn policies in chess [32; 45]. However, these studies mainly center on small-scale datasets or limited training.
There has been increasing interest in leveraging Internet-scale knowledge for creating agents capable of generalizing across many tasks and capabilities [54; 16]. For example, MineDojo [16] introduced a framework on Minecraft for understanding how to enable artificial agents to learn in an open-ended environment. We care more about the interplay between language modeling and policy learning.
## 3 A large-scale game and language dataset for chess
We introduce a large-scale game & language dataset by collecting all chess-related materials from the Internet. Our dataset can be mainly divided into four categories: (1) The Game dataset, encompassing online chess match replay data involving worldwide human players and chess engines of varying skill levels. (2) The Language dataset, principally recording chess-associated knowledge, analyses, discussions, and news in the form of natural language (3) The Mixed Game-Language dataset, incorporating both game data and human natural language elements (such as game analysis or comments) in alignment. (4) The instruction-tuning and conversation dataset, consisting of instruction data and conversation data related to chess. We include comprehensive descriptions, examples and the procedure of data collection and pre-processing in appendix C.
### Game dataset
Game replay data provide the most direct method for both humans and machines to grasp the play mechanics of Chess. In chess, these data are commonly stored in the Portable Game Notation (PGN) format which is a standard plain text format as illustrated in fig. 1. A PGN starts with some headers that include metadata about the game. These headers include information such as the name of players, the Elo ratings, the opening play, and the game outcome. The headers are followed by a move text section that records the moves played by the two players in turn. The moves may be further annotated with comments enclosed in braces.
Previous work [30] uses the moves recorded in PGNs for policy learning. The moves are interpreted as actions in a Markov Decision Process and the state position can be reconstructed by loading the PGN into a chess engine. However, PGNs may contain additional useful information beyond the individual moves made. For example, the Elo ratings in the headers may inform us about the relative strength of the players. Additional information included in the comments of the move text section can also be useful - some of the moves are annotated with evaluations generated by computer chess programs that predict the current advantage of the players. These additional annotations may be useful from a reinforcement learning perspective, e.g., for value function learning. For this reason, we curated the game dataset with all of this information intact to facilitate policy learning.
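As an illustration of how such PGN records can be consumed, here is a minimal sketch using the python-chess library [37]; the sample PGN and its tag values are invented purely for demonstration.

```python
# Minimal sketch: extract headers, moves and move comments from a PGN record.
import io
import chess.pgn

pgn_text = """[Event "Example"]
[White "Player A"]
[Black "Player B"]
[WhiteElo "1850"]
[BlackElo "1790"]
[Result "1-0"]

1. e4 c5 {The Sicilian Defense.} 2. Nf3 d6 1-0
"""

game = chess.pgn.read_game(io.StringIO(pgn_text))
white_elo = int(game.headers.get("WhiteElo", 0))   # player strength metadata

board = game.board()
for node in game.mainline():                       # walk the move text section
    move = node.move
    comment = node.comment                         # annotations / engine evals live here
    print(board.fullmove_number, board.san(move), "|", comment)
    board.push(move)                               # reconstruct the position move by move
```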
**Lichess dataset** We collect five months of online game data from the Lichess database [28], culminating in a total of 18 million game replay records for online game players.
**Pro-player dataset** In the Lichess dataset, the majority of player Elo-ratings range between 1000 and 2000. To diversify our game dataset with more skilled matches, we also incorporated an additional 440,000 game records from 245 professional chess players. These professionals typically hold notably higher Elo ratings within the range of 2000 to 2800.
**CCRL** Chess engines like StockFish and LeelaZero have attained a proficiency level far beyond what any human player can currently reach. Considering this, we additionally incorporate the _Computer Chess Rating Lists_ (CCRL) [10], which is a dataset of chess games played by computer chess engines. The CCRL dataset comprises a considerable collection of chess games, specifically 1.6 million, all of which are played by computer chess engines and stored in PGN format. The Elo-ratings of chess engines fall in the range of 2800-3700.
**Chess puzzles** A chess puzzle represents a particular chessboard configuration, designed to present a distinct challenge or objective for the solver. Chess puzzles often require players to find the best move or sequence of moves to achieve a specific goal, such as checkmating the opponent's king, or finding a tactical combination. In our game dataset, we integrate 3.3M puzzles sourced from the Lichess puzzle dataset. Each puzzle within this collection is annotated with its rating, theme and solution.
**Chess modeling dataset** We observe that most chess rule descriptions are conveyed in natural language, posing a challenge for machine learning models, which statistically require a large volume of training data to accurately learn the chess rules [40]. To address this issue, we build a synthetic chess modeling dataset leveraging the python-chess library [37]. We collect chess game data from a one-month dump of the Lichess database, deliberately distinct from the month used in our own Lichess dataset. We design several modeling tasks, including converting PGN to FEN, converting UCI to FEN, and predicting legal moves, resulting in 1.9M data samples.
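A minimal sketch of how such synthetic modeling targets can be produced with python-chess is shown below; the exact prompt/target formats used in the released dataset are assumptions here.

```python
# Minimal sketch: generate "moves -> FEN" and "legal moves" targets.
import chess

def modeling_targets(uci_moves):
    """Given a list of UCI moves, return the resulting FEN and the legal moves."""
    board = chess.Board()
    for uci in uci_moves:
        board.push_uci(uci)
    fen = board.fen()                                 # full board state in FEN
    legal = sorted(m.uci() for m in board.legal_moves)
    return fen, legal

fen, legal = modeling_targets(["e2e4", "c7c5", "g1f3"])
print(fen)
print(legal[:5])
```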
### Language dataset
**Existing dataset** Numerous existing datasets comprise general internet crawl data from platforms like CommonCrawl or Wikipedia. We establish a filtering pipeline to extract only chess-related language corpus from pre-existing language corpus, including C4 [39], Pile [18], Oscar [33], Wikipedia [17] and RedPajama [48]. These datasets extend the scope of our language data beyond mere game-play.
Figure 1: Example of Chess replay in Portable Game Notation (PGN) format.
**Chess blogs** Numerous chess websites often publish insightful blogs, sharing their analyses and perspectives on various aspects of chess gameplay. Such blog data is incredibly valuable, as it encompasses game-specific analysis, forming a vital link between the concrete chess game data and its interpretation in natural language form. We manually select approximately 30 chess-related websites and scrape 60k blog articles.
**Chess books** Similar to chess blogs, chess books can provide long and detailed analysis of the game. We extract approximately 8k chess-related books from online library to enrich our language dataset.
**Chess forums** Chess forums serve as platforms for a large amount of chess-related dialogue and conversation involving a diverse range of users. These platforms encompass high-quality question-and-answer pairs, as seen on StackExchange, as well as more general discussions on various chess-related topics, commonly found in dedicated chess-specific forums. We mainly scrape chess forum data from 5 chess-specific forum platforms and StackExchange, using requests and playwright. This process results in a collection of 130K posts, representing a wealth of diverse views, queries, and discussions related to the world of chess.
### Mixed game-language dataset
**Annotated chess game** An annotated chess game is a chess game accompanied by written commentary and analysis. In an annotated game, each move made by the players is explained and evaluated, providing insights into the thought process, strategic considerations, and tactical ideas behind the moves. Here is an example of an annotated PGN with Sicilian Defense opening:
_1.e4 c5 [The game starts with the Sicilian Defense, one of the most popular and aggressive responses to 1.e4. Black aims to control the center and create imbalances early on.]_
These annotated games inherently maintain the correspondence between board state and human language, serving as an exceptionally high-quality data source to align a model with complex human intentions and judgements. We amass annotated games from seven sources, five of which are collected from the internet while the rest two are commercial datasets. In total, we collect 220k annotated games with 1.3M board-language pairs.
**Youtube transcripts** Drawing inspiration from MineDojo [16], a YouTube video can naturally serve as a mixed game-language dataset by aligning video clips with natural language transcripts based on timestamps. Rather than generating image-language pairs directly, we develop a pipeline that accurately applies OCR (Optical Character Recognition) to chessboard screenshots to generate FEN (Forsyth-Edwards Notation), a notation that describes the chess state in text format. We gather approximately 40k chess videos, resulting in 100M words of English transcripts and 3M board-language pairs, thus establishing a substantial mixed game-language dataset.
### Instruction-tuning & conversation dataset
Supervised fine-tuning is a crucial component to train large language model (LLM) to follow instructions [34]. In addition to the comprehensive chess materials mentioned before, we also collect instruction-tuning and conversation datasets which can be used to finetune the pre-trained LLM base model, thereby enhancing its instruction-following and dialogue capability.
**Instruction-tuning data from GPT-4** Inspired by Alpaca [47], we use the self-instruct technique [53] to generate high-quality, instruction-following data through GPT-4 [8]. Specifically, we manually construct 200 seed prompts for chess-related questions or instructions. These prompts serve as few-shot examples, guiding GPT-4 towards more coherent and relevant generation. Finally, we generate around 4k instruction-response pairs using this pipeline.
**Conversation data from Reddit** The instruction data collected from GPT-4 are mainly in a single-step form, which means only one round of question-answer pair is included. To mitigate this issue, we collect multi-step conversation data about chess on Reddit. Reddit allows users to interact by commenting on posts and responding to other comments, creating a nested structure of responses. This nested structure can be easily converted to a conversation tree by treating the comment's reply as a child node for that reply. A rich source of conversation data can then be acquired by navigating from the root node to each leaf node via every available path. In all, we choose 6 chess-related sub-reddits and collect 100k human conversations about chess.
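The conversion from a nested comment tree to multi-step conversations can be sketched as follows; the dictionary schema and the example thread are hypothetical and only illustrate the root-to-leaf traversal described above.

```python
# Minimal sketch: enumerate every root-to-leaf path in a nested comment tree,
# treating each path as one multi-step conversation.
def conversation_paths(node, prefix=None):
    prefix = (prefix or []) + [node["body"]]
    if not node.get("replies"):                 # a leaf comment ends a conversation
        yield prefix
        return
    for child in node["replies"]:
        yield from conversation_paths(child, prefix)

thread = {
    "body": "What is the idea behind the Sicilian Defense?",
    "replies": [
        {"body": "Black fights for the center asymmetrically with 1...c5.",
         "replies": [{"body": "Thanks, that clarifies the imbalance.", "replies": []}]},
        {"body": "It avoids symmetric e4 e5 structures.", "replies": []},
    ],
}
for convo in conversation_paths(thread):
    print(" -> ".join(convo))
```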
Large-scale pretraining
We will showcase two models - **ChessCLIP** and **ChessGPT** trained on the large-scale dataset.
### ChessCLIP
CLIP (Contrastive Language-Image Pre-Training) [38] is a neural network trained on a variety of modalities (e.g. image, text). By conducting contrastive learning on a large amount of paired data, CLIP bridges the image and language modalities, enabling the model to understand vision through language information and vice versa. Our mixed game-language dataset in section 3.3 has a similar paired structure, because each annotation is naturally paired with its preceding game trajectory. Based on this subset, we can train a **ChessCLIP** to bridge the policy and language modalities. Specifically, denoting the chessboard state \(S\) at timestep \(t\) as \(S_{t}\) and the annotation language as \(L_{t}\), the data pair at timestep \(T\) can be represented by \(\big((\{S_{t}\}_{t=T-k}^{T},a_{T}),L_{T}\big)\), where \(\{S_{t}\}_{t=T-k}^{T}\) is a stack of \(k\) history states and \(a_{T}\) is the last move.
We want to emphasize what ChessCLIP can do by aligning the policy modality and the language modality. First, ChessCLIP offers a similarity metric given a PGN and a text description. Just like large-scale image/text retrieval using CLIP, ChessCLIP can help users conduct PGN/text retrieval - searching for games based on text or searching for comments based on a specific game. In addition, because the action space is low-dimensional compared to the vision or language space (there exist only a few legal moves for a given chess state), we can directly run search algorithms that maximize the similarity to generate an action from a text description using ChessCLIP. For example, given a chessboard state and a text description, ChessCLIP can generate a move by iterating through all legal moves and picking the one with the largest similarity, as sketched below. By the same logic, ChessCLIP can generate move sequences (multiple actions) using greedy search or beam search. We refer the reader to appendix D.1.1 for more discussion.
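A minimal sketch of this legal-move search is given below; the two encoder functions are random stand-ins for the trained ChessCLIP towers, so only the search structure (score every legal move, take the argmax) reflects the method described above.

```python
# Minimal sketch: pick the legal move whose (board, action) embedding is most
# similar to a text description.  The encoders are hypothetical stand-ins.
import numpy as np
import chess

def encode_board_action(board, move):          # stand-in for the board/action encoder
    rng = np.random.default_rng(hash((board.fen(), move.uci())) % (2**32))
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)

def encode_text(text):                         # stand-in for the text encoder
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)

def best_move_for_description(board, description):
    text_vec = encode_text(description)
    scores = {m: float(encode_board_action(board, m) @ text_vec)
              for m in board.legal_moves}      # cosine similarity per legal move
    return max(scores, key=scores.get)

board = chess.Board()
print(best_move_for_description(board, "White grabs space in the center."))
```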
**Implementation details** We preprocess the annotated PGNs to produce board/text pairs, which we feed separately to the board and text encoders. In particular, for every move in the PGN, we extract the comments attached to the move as well as the board state. We encode the board positions and moves using the same scheme as Leela Chess Zero (lc0) [24], which is similar to the encoding used by AlphaZero [42] for positions and moves in chess. Concretely, the board positions are encoded as an \(\mathcal{R}^{8\times 8\times 112}\) feature map and the actions are encoded as an \(\mathcal{R}^{1858}\) vector. We instantiate a ChessCLIP model with a pair of encoders: a text encoder and a board/action encoder. For the text encoder, we only fine-tune the last two layers of the pretrained text encoder from the OpenAI CLIP model. For the board/action encoder, we use a ResNet [19] architecture that conditions on the action encoding via a modified FiLM layer [36]. Please refer to appendix D.1.1 for implementation details.
### ChessGPT
The Generative Pre-trained Transformer (GPT-3) [7] is an autoregressive language model that uses deep learning techniques to generate human-like text. GPT-3 is trained by causal language modeling, which aims to predict the next word in a sentence given all the previous words. Following the same logic, we train a GPT-like model using all the chess materials introduced in section 3. Unlike policy behavior data for robots [25] or video games [31], chess state and move data can be represented in a purely textual format. Thanks to this feature, we can directly treat chess as a text game, and the imitation learning objective for policy learning can be covered by causal language modeling over the game dataset provided in section 3.1.
**Implementation details** We follow common practice for training a domain-specific instruction-following LLM. First, we conduct base-model fine-tuning using the chess corpus introduced in sections 3.1, 3.2 and 3.3. Due to computational constraints, we choose to finetune the RedPajama-3B-base [48] model, an open-source replication of LLaMA [50]. The base model adopts the GPT-NeoX [6] architecture, a GPT-3 [7] variant with a few modifications such as rotary positional embeddings, parallel attention computation, and different initialization. This base fine-tuning yields our base model: **ChessGPT-Base**. After base fine-tuning, we conduct supervised fine-tuning on question/conversation responses using the data introduced in section 3.4 and general conversation data from OASST1 [22], Dolly2 [13], Alpaca-GPT4 [35], and Sharegpt [41],
forming our chat model: **ChessGPT-Chat**. We leave further RLHF (Reinforcement Learning from Human Feedback) training for future work. Refer to appendix D.1.2 for more details.
## 5 Evaluation and benchmark
In this section, we present a comparative analysis between ChessGPT, trained on our dataset, and other baseline LLMs. The purpose of our experiments is to assess the performance of ChessGPT along three primary dimensions: chess modeling ability, value judgement ability, and policy ability. The chess modeling ability focuses on the language model's proficiency in accurately tracking the game state and predicting valid moves. The value judgement ability measures the model's precision in evaluating a chess position, encompassing the identification of advantageous positions and the calculation of situation scores. Lastly, the policy ability gauges the model's aptitude for generating optimal moves from a given position. By thoroughly examining these sub-categories, we can comprehensively evaluate and contrast the efficacy of different models on chess-related tasks. We choose LLaMA-7B [50] and RedPajama-Base-3B [48] as baselines and compare them with ChessCLIP, ChessGPT-Base-3B, and ChessGPT-Chat-3B.
### Chess modeling ability
**Chess state tracking** We utilize Big-bench's State Tracking in Chess task [44; 49] to evaluate language models' ability to track the state of chess games encoded in UCI notation. The task involves predicting the legal ending squares given the game prefix and the starting square of the current move. For example, if the input UCI notation is "f2f4 d7d5 g1", the expected output would be \(["h3","f3"]\), as the chess piece on square \(g1\) can only move to those two positions. The task dataset includes real and synthetic games, divided into short, medium, and long categories based on move count. The evaluation measures correctness across all games using a specified output regex. Notably, ChessCLIP is unsuitable for modeling tasks, so we do not include it in this comparison.
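For reference, the ground-truth ending squares for a given UCI prefix can be constructed with python-chess, as in the following sketch.

```python
# Minimal sketch: given a UCI game prefix and the starting square of the
# current move, list the legal ending squares, e.g. "f2f4 d7d5 g1" -> ['f3', 'h3'].
import chess

def legal_ending_squares(uci_prefix):
    *moves, start_square = uci_prefix.split()
    board = chess.Board()
    for uci in moves:
        board.push_uci(uci)
    start = chess.parse_square(start_square)
    return sorted(chess.square_name(m.to_square)
                  for m in board.legal_moves if m.from_square == start)

print(legal_ending_squares("f2f4 d7d5 g1"))    # ['f3', 'h3']
```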
Table 1 presents a performance analysis of all models on the task. Our Base and Chat models consistently outperformed baselines in all tasks. This indicates their strong ability to track the state of chess games. However, the ChessGPT-Chat model exhibited slightly lower performance, suggesting a potential trade-off between language capabilities and state tracking. Nevertheless, the results underscore the effectiveness of our dataset-trained LLM models for chess state tracking.
**Board state tracking** We performed additional evaluations involving UCI to FEN and PGN to FEN conversions. In the UCI to FEN experiment, the target was replaced with FEN format, while in the PGN to FEN experiment, UCI was converted to PGN format as input and the target was replaced with FEN format. The similarity was measured using Levenshtein distance, which was normalized to a range of 0 to 1 [55]. These evaluations focused on assessing the model's capability to track the overall state of the chessboard by representing the state of each chess piece using FEN notation.
Table 2 illustrates the results of these evaluations. It is evident that, compared to tracking the state of an individual chess piece, tracking the entire chessboard state is more challenging. The similarity scores of the two baselines were consistently below \(10\%\), indicating a lack of global piece state tracking ability. However, ChessGPT achieves an average similarity score higher than \(90\%\). These results demonstrate that our dataset-trained model excels at capturing and reproducing the global piece state in both UCI to FEN and PGN to FEN conversions.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{4}{c}{LLM Models (\%)} \\ \cline{2-5} Tasks & LLAMA-7B & RedPajama-Base & ChessGPT-Base & ChessGPT-Chat \\ \hline Real Short & 29.5 \(\pm\) 1.4 & 23.2 \(\pm\) 1.3 & **99.5 \(\pm\) 0.2** & **98.5 \(\pm\) 0.4** \\ Real Med & 39.3 \(\pm\) 1.5 & 38.2 \(\pm\) 1.5 & **97.7 \(\pm\) 0.5** & **97.8 \(\pm\) 0.4** \\ Real Long & 53.0 \(\pm\) 1.6 & 51.9 \(\pm\) 1.6 & **98.1 \(\pm\) 0.4** & **97.6 \(\pm\) 0.4** \\ Syn Short & 31.3 \(\pm\) 1.4 & 24.9 \(\pm\) 1.3 & **94.2 \(\pm\) 0.7** & **92.3 \(\pm\) 0.8** \\ Syn Med & 39.9 \(\pm\) 1.6 & 37.7 \(\pm\) 1.5 & **94.6 \(\pm\) 0.7** & **88.9 \(\pm\) 1.0** \\ Syn Long & 45.8 \(\pm\) 1.5 & 42.2 \(\pm\) 1.5 & **92.8 \(\pm\) 0.8** & **85.1 \(\pm\) 1.1** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Bigbench State Tracking in Chess
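The normalized similarity used above for FEN comparison can be sketched as a Levenshtein edit distance mapped to \([0,1]\); the normalization by the longer string length is an assumption about the exact convention.

```python
# Minimal sketch: normalized edit-distance similarity between predicted and
# ground-truth FEN strings (1.0 means identical).
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def similarity(pred_fen, true_fen):
    if not pred_fen and not true_fen:
        return 1.0
    return 1.0 - levenshtein(pred_fen, true_fen) / max(len(pred_fen), len(true_fen))

print(similarity("rnbqkbnr/pppppppp/8/8", "rnbqkbnr/pppppppp/8/8"))  # 1.0
```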
### Value judgement ability
In this part, we evaluate the model's value judgement ability. Specifically, we assess the model from two perspectives: (1) its ability to align with the true value function given a chessboard state (true values are computed by chess engines at sufficient search depth) in the **State value multi-choice** evaluation, and (2) its ability to align with human judgement and human knowledge in the **Chess Annotation Multi-choice** and **Opening multi-choice** evaluations.
**State value multi-choice** Here we evaluate whether the model can determine which side holds the advantage for a given PGN. We construct an evaluation dataset consisting of \(3000\) game snippets and use Stockfish 15 at depth 18 to calculate the winning rate for the white pieces. By categorizing the winning rate into three intervals (\(0-33\%\) for black advantage, \(34-66\%\) for a balanced state, and \(67-100\%\) for white advantage), we construct the state-value multiple-choice task. During experiments, we discovered that appending a '{' suffix to the prompt can significantly enhance the performance of the base model. This is because '{' consistently serves as the opening symbol of an annotation in annotated PGNs. Consequently, we carried out our evaluation under two distinct prompt settings and report our results in terms of multi-choice grade in Table 3.
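The labelling procedure can be sketched as follows; the centipawn-to-win-rate mapping below is an assumed logistic curve rather than the exact Stockfish WDL model, and running it requires a local Stockfish binary.

```python
# Minimal sketch: turn an engine evaluation into one of the three advantage labels.
import math
import chess
import chess.engine

def white_win_rate(board, engine, depth=18):
    info = engine.analyse(board, chess.engine.Limit(depth=depth))
    cp = info["score"].white().score(mate_score=10000)   # centipawns from White's view
    return 1.0 / (1.0 + math.exp(-cp / 400.0))           # assumed logistic conversion

def label(win_rate):
    if win_rate <= 0.33:
        return "black advantage"
    if win_rate <= 0.66:
        return "balanced"
    return "white advantage"

# Usage (requires a Stockfish binary on PATH):
# engine = chess.engine.SimpleEngine.popen_uci("stockfish")
# board = chess.Board()
# print(label(white_win_rate(board, engine)))
# engine.quit()
```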
**Chess annotation multi-choice** The annotations within an annotated PGN can be viewed as a reflection of human evaluative judgement. To examine the degree to which the model's value aligns with human value, we extract 3k game-language pairs from the annotation dataset as the test set. By randomly selecting three annotations from the test set as incorrect options, we construct the chess annotation four-choice task. We report the multi-choice grade results over two prompts in Table 4.
**Opening multi-choice** A chess opening refers to the initial moves made by players at the beginning of a chess game. There are numerous chess openings, each with its own name, characteristics, and strategic goals. For example, the Sicilian defense: _1. e4 c5_ is one of the most popular and aggressive chess openings for Black. We use the Lichess opening dataset [27] including 3.5k opening PGNs and their corresponding names, to formulate two tasks: (1) PGN2Opening five-choice task, which aims at choosing the correct opening name for a given PGN, and reversely, (2) Opening2PGN five-choice task, aiming at choosing the correct PGN for a given opening name. We report the result in table 5.
In general, our trio of models surpasses the performance of two baseline language models across these four tasks in all settings. This result confirms that our models are more effectively aligned with both the true value function and human judgement/knowledge. Both ChessGPT-Base and ChessGPT-chat deliver outstanding performance in the state-value task and the opening task. Notably, ChessCLIP displays a surprisingly high level of proficiency in the annotation task and the opening task. This result reveals the model's capacity to extract human judgement and knowledge solely from annotations, even without training in any actual chess games.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & \multicolumn{5}{c}{Models (\%)} \\ \cline{2-6} Prompt Setting & LLAMA & RedPajama & ChessGPT-Base & ChessGPT-Chat & ChessCLIP \\ \hline W/O \{ suffix & 33.2 \(\pm\) 0.7 & 31.1 \(\pm\) 0.7 & **43.1 \(\pm\) 0.8** & **52.8 \(\pm\) 0.8** & N/A \\ With \{ suffix & 26.9 \(\pm\) 0.7 & 29.7 \(\pm\) 0.8 & **53.7 \(\pm\) 0.8** & **53.5 \(\pm\) 0.8** & **38.1 \(\pm\) 0.8** \\ \hline \hline \end{tabular}
\end{table}
Table 3: State value multi-choice
### Policy evaluation
**Checkmate in one** We incorporate the checkmate-in-one task from Big-Bench [44] into our evaluation methods. This task is designed to challenge the model's ability to identify a move in a given PGN that would result in a checkmate. By doing so, it measures the model's capacity to comprehend and apply the rules of chess. The model is essentially required to discern a move that not only places the opponent's king under attack but also ensures that the king cannot evade capture in the next move.
We also find that adding an instruction suffix such as _[Now white/black can checkmate in one]_ largely enhances the base model's performance. We report the results under the two prompt settings with two metrics (exact-string match, ESM, and multi-choice grade, MC) in Table 6. Our ChessGPT-Base and ChessGPT-Chat models show strong checkmate ability, surpassing the two LLM baselines by a large margin. ChessCLIP does not perform well on this task because the annotation dataset contains little data on checkmate-in-one positions.
**General policy** To assess the model's generalization ability, we introduce the Elo rating as a factor in the task, aiming to evaluate the model's capacity to parse the PGN and related keywords and generate an appropriate next move at the specified skill level. The model's selection of the next legal move is assigned a move score, normalized based on the win rate observed in the raw data. Table 7 presents the performance of different models in selecting the most suitable move for White. Notably, all models surpass the random policy (\(\approx 50\%\)), as the Elo ratings correspond to relatively high skill levels among human players.
Further analyzing the performance of different models across varying Elo Ratings is crucial for understanding the observed results. The minor variations in move scores for different Elo Rating scenarios in table 8 indicate that ChessGPT-Base may struggle to effectively incorporate Elo Rating information into its decision-making process. This could be due to the model's limited understanding of the nuanced characteristics associated with distinct Elo Ratings. The complexity of the task and the challenges in accurately accounting for diverse playing styles further contribute to the limited variations in move scores across different Elo Ratings. Consequently, neglecting this information can lead to the model learning an average policy for each Elo Rating, resulting in subpar overall performance. Similar findings were observed in the black chess test, and to further validate this viewpoint, we conducted input attention visualization. Refer to appendix D.1.2 for more details.
To clarify, the dataset we have presented encompasses a wide range of games and varying Elo ratings, as shown in Figure 1, which possesses the potential to effectively capture and generalize intricate patterns and policies associated with different Elo levels. However, the current training method might not sufficiently emphasize these nuanced features. This highlights a potential direction for future research, which involves enhancing the model's ability to better integrate and utilize metadata such as Elo Rating and other auxiliary data. By addressing these aspects, the model's overall generalization can be further improved.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & \multicolumn{5}{c}{Models (\%)} \\ \cline{2-6} Prompt Setting & LLAMA & RedPajama & ChessGPT-Base & ChessGPT-Chat & ChessCLIP \\ \hline W/O \{ suffix & 29.8 \(\pm\) 0.8 & 27.4 \(\pm\) 0.7 & **33.2 \(\pm\) 0.9** & **35.7 \(\pm\) 0.9** & N/A \\ With \{ suffix & 29.6 \(\pm\) 0.8 & 28.4 \(\pm\) 0.8 & 38.8 \(\pm\) 0.9 & 34.7 \(\pm\) 0.9 & **63.6 \(\pm\) 0.9** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Chess Annotation Multi-choice
\begin{table}
\begin{tabular}{c c} \hline \hline LLM Models & Move Score \\ \hline LLAMA & 55.1 \(\pm\) 1.1 \\ RedPajama & 56.4 \(\pm\) 0.9 \\ ChessGPT-Base & 59.6 \(\pm\) 1.0 \\ ChessGPT-Chat & 60.3 \(\pm\) 1.0 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Elo Rating 1700-2000
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & \multicolumn{5}{c}{Models (\%)} \\ \cline{2-6} Task & LLAMA & RedPajama & ChessGPT-Base & ChessGPT-Chat & ChessCLIP \\ \hline Opening2PGN & 43.0 \(\pm\) 0.9 & 26.5 \(\pm\) 0.8 & **92.2 \(\pm\) 0.5** & **94.7 \(\pm\) 0.4** & 73.0 \(\pm\) 0.8 \\ PGN2Opening & 20.0 \(\pm\) 0.7 & 20.7 \(\pm\) 0.7 & 49.3 \(\pm\) 0.9 & 55.8 \(\pm\) 0.9 & **80.5 \(\pm\) 0.7** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Opening2PGN and PGN2Opening
### Qualitative results
We also perform a qualitative comparison between our models (ChessGPT-Chat and ChessGPT-Base) and the baselines. We ask the language models a series of questions ranging from factual knowledge of chess to operational tasks related to chess. We found that ChessGPT-Base performed similarly to RedPajama: both models can sometimes produce factual answers to some of the questions, but they fail to generate coherent answers when asked to perform tasks such as providing commentary on chess moves or converting PGN notation to FEN. ChessGPT-Chat gives more factual answers and demonstrates better performance when prompted to generate analysis and perform other chess-related tasks. Refer to appendix E for the qualitative analysis.
## 6 Conclusion
In this paper, we introduce a new large-scale dataset and benchmark on chess to encourage the study of the interplay between historical policy data and natural language knowledge. We accompany our dataset with an evaluation framework for assessing language models' capability in chess. We showcase two models, **ChessCLIP** and **ChessGPT**, that demonstrate promising results for learning the interplay between language and action. Nevertheless, our results indicate that we are only beginning to understand how to bridge the gap between policy learning and language modeling, and we discuss future directions for our dataset in appendix F. We hope that our dataset and benchmark can make future research on policy and language alignment more accessible.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & \multicolumn{5}{c}{Models (\%)} \\ \cline{2-6} Setting & LLAMA & RedPajama & ChessGPT-Base & ChessGPT-Chat & ChessCLIP \\ \hline With suffix (ESM) & 1.6 \(\pm\) 0.2 & 0.0 \(\pm\) 0.0 & **71.4 \(\pm\) 0.7** & 56.8 \(\pm\) 0.8 & N/A \\ With suffix (MC) & 2.6 \(\pm\) 0.3 & 0.0 \(\pm\) 0.0 & **66.1 \(\pm\) 0.8** & 11.3 \(\pm\) 0.5 & 2.9 \(\pm\) 0.3 \\ W/O suffix (ESM) & 1.7 \(\pm\) 0.2 & 0.0 \(\pm\) 0.0 & 26.5 \(\pm\) 0.8 & **59.4 \(\pm\) 0.8** & N/A \\ W/O suffix (MC) & 2.2 \(\pm\) 0.3 & 0.0 \(\pm\) 0.0 & 13.6 \(\pm\) 0.6 & **15.4 \(\pm\) 0.6** & N/A \\ \hline \hline \end{tabular}
\end{table}
Table 6: Checkmate in One |
2306.06133 | The WARP Reactor Concept | The WARP Reactor Concept promises orders of magnitude increase of intense ion
beam energies and respective radiation yields at a fraction of the size and
cost over existing z-pinch class accelerators allowing the economically viable
study of new Relativistic High Energy Density Physics regimes for probing the
intersection between General Relativity and Quantum Field Theory along with
game-changing direct applications from rep-rated Magnetized Liner Inertial
Fusion devices for energy production and advanced propulsion to multi-pulse
compact flash x-ray/neutron radiography sources for assessing nuclear weapons
stockpile. An overview of the WARP Reactor Concept is presented. | Michael G. Anderson, James K. Walters, Enrique M. Anaya, Don A. Max, William A. Stygar, Anthony J. Link | 2023-06-08T18:41:22Z | http://arxiv.org/abs/2306.06133v1 | # The WARP Reactor Concept
###### Abstract
The WARP Reactor Concept promises orders of magnitude increase of intense ion beam energies and respective radiation yields at a fraction of the size and cost over existing z-pinch class accelerators allowing the economically viable study of new Relativistic High Energy Density Physics regimes for probing the intersection between General Relativity and Quantum Field Theory along with game-changing direct applications from rep-rated Magnetized Liner Inertial Fusion devices for energy production and advanced propulsion to multi-pulse compact flash x-ray/neutron radiography sources for assessing our Nation's aging nuclear weapons stockpile. An overview of the WARP Reactor Concept is presented.
_warp reactor, particle accelerator, dense plasma focus, z-pinch, relativistic high energy density physics, quantum gravity, magnetized liner inertial fusion, advanced propulsion, compact flash x-ray and neutron source radiography_
## I Introduction
The Wave Accelerated Ring Pinch or "WARP" Reactor [1,2] (patent pending), iso-view shown in Figure 1, is expected to solve key issues ranging from our present energy dependence on finite fossil fuels and its associated climate impact to our aging nuclear weapons stockpile. The WARP Reactor promises orders of magnitude increase of ultra-intense ion beam energies and respective high radiation yields at a fraction of the size and cost over other z-pinch class accelerators [3-5] allowing the economically viable and environmental friendly study of new Relativistic High Energy Density (RHED) Physics regimes for probing the intersection between General Relativity and Quantum Field Theory (i.e. Warp/Unruh/Casimir effects) [6-24] along with game-changing direct applications from rep-rated Magnetized Liner Inertial Fusion (MagLIF) devices for energy production and advanced propulsion to multi-pulse compact flash x-ray/neutron radiography sources [3-5,25] for assessing our Nation's aging nuclear weapons stockpile.
## II Novelty
The WARP Reactor concept is a novel, modular and compact pulsed power-driven radiation source intended for nuclear fusion energy production, advanced propulsion, accessing new RHED physics regimes and flash radiography/interrogation techniques. WARP utilizes state-of-the-art pulsed power modules to drive its "WARP Core," which consists primarily of two Dense Plasma Focuses (DPFs) [26,27] and two Ion Ring Marx Generators (IRMGs) fired directly at one another. The WARP Reactor's dramatic performance boost is achieved with this novel WARP Core, which injects two tubular dense plasma and ion beams from opposite ends of a double-barreled DPF head with embedded IRMG-driven reflex triodes [28]. The beams pass through magnetic cusps into an axial seed B-field to form co-rotating ion rings, which merge near the mid-plane of the device and are subsequently radially compressed and azimuthally accelerated up to 1000 times the initial ion beam energies during the axial magnetic flux compression phase driven by the DPF plasma liner implosion. Two DPF and IRMG heads are implemented to dramatically reduce the size and cost of the drivers and to increase ion ring capture efficiency. This arrangement also provides favorable magnetic field line curvature throughout the implosion process, due to higher-velocity shear-stabilized DPF plasma pinch flows near each gun muzzle, and greater tuning capability for properly timing the implosion and the ion beam generation, injection and compression of the two colliding and subsequently merged ion rings onto a solid or high energy density plasma target.
Fig. 1: Iso-view of The WARP Reactor
## III Strategic Importance
WARP directly aligns with the DOE and NNSA missions and core competencies as an economically viable and climate-friendly rep-rated MagLIF device for nuclear fusion energy production as well as a multi-pulse compact flash x-ray/neutron source for assessing our aging nuclear weapons stockpile. In addition to developing the next-generation pulsed power architectures, this Strategic Initiative will help to benchmark present high-performance computing, simulation, and data science with respect to more disruptive and imaginative ultra-intense plasma/beam configurations. Finally, WARP success would fortify LLNL's place at the forefront of the subsequent RHED physics and technology revolution.
## IV WARP Reactor Physics
The WARP Reactor conceptual design, models, simulations, and targeted performance characteristics for the various applications pull directly from standard fusion plasma, beam, accelerator and relativistic physics along with a modified Einstein Field Equation (EFE) [29] and respective Figures Of Merit (FOM) for assessing the validity of a recently proposed Naive Quantum Gravity (NQG) theory [30].
### _Ring Pinch and Acceleration Physics_
The physics behind charged particle ring radial compression and azimuthal acceleration [31-40] in the WARP Core is as follows: magnetic flux (\(\Phi_{z}\)) compression (i.e. conservation of the seed \(\Phi_{z}\) during the DPF z-pinch driven liner implosion: \(\Phi_{z}=B_{z}\pi r^{2}=\) constant) creates a "Magnetic Wave" (i.e. the seed \(B_{z}\) amplitude rapidly swells/increases, since flux conservation dictates that \(B_{z}\propto r^{-2}\)) which forces (i.e. \(\mathbf{F}=q\mathbf{V}\times\mathbf{B}\)) charged particle rings to radially compress (i.e. decrease in Larmor radius: \(r_{L}=\gamma mV_{\theta}/qB\)), and due to the conservation of canonical angular momentum (i.e. \(p_{\theta}^{2}/\gamma B_{z}=\) constant) and the adiabatic flux conservation condition (i.e. \(p_{\theta}^{2}/B_{z}=\) constant, or \(p_{\theta}\,r=\) constant), the charged particle ring azimuthal velocities scale as \(V_{\theta}\propto(r_{i}/r_{f})\) for \(\gamma=1\), where \(r_{i}\) and \(r_{f}\) are the initial and final ring radii. Since \(\Phi_{z}\) and \(p_{\theta}\,r\) are conserved in this device, the final charged particle ring energy is \(E_{r}=\frac{1}{2}\gamma Nm(V_{\theta})^{2}\sim E_{z}(r_{i}/r_{f})^{2}\). Finally, for relativistic charged particle motion in pulsed B-fields, \(\gamma\) varies due to \(\nabla\times\mathbf{E}=-(\partial\mathbf{B}/\partial t)\) and therefore \(E_{r}\sim E_{z}(B/B_{z})\).
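As a brief worked check of this scaling (a sketch assuming the non-relativistic limit \(\gamma\approx 1\); the compression ratio below is illustrative rather than a quoted design value):

\[\frac{E_{r}}{E_{z}}\sim\left(\frac{r_{i}}{r_{f}}\right)^{2}=\frac{B}{B_{z}},\qquad\frac{r_{i}}{r_{f}}\approx 32\;\Rightarrow\;\frac{E_{r}}{E_{z}}\approx 10^{3},\]

so a roughly 30-fold radial compression of the ring is sufficient to reach the factor of up to 1000 in azimuthal ion energy quoted in Section II.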
### _Fusion Plasma Physics_
The principal formulas used in the WARP Reactor conceptual models and simulations are the standard fusion plasma physics [41-45] equations in MKS units unless specifically identified otherwise (1)-(11) for: \(N_{n}-\) the number of fusion-generated neutrons; \(\beta-\) plasma to magnetic pressure; \(E_{f}\) - total fusion energy; \(E_{p}\) - plasma energy; \(E_{b}\) - bremsstrahlung and \(E_{s}-\)synchrotron radiation energy; \(G_{S}-\) scientific and \(G_{E}-\) engineering gains; \(E_{sale}\) - fusion energy for sale; \(\nu_{4}-\) Alfven velocity; \(\tau_{R}-\) magnetic reconnection time scales along with plasma and particle beam propagation modes (i.e. \(\beta\)\(>\) 1 for diamagnetic drift mode; \(\beta\)\(<<\) 1 for collective mode; \(\beta\)\(<<\) 1 with polarization E-field shorted for single particle mode).
\[N_{n}\)\(\sim n^{2}<\ \varpi>V\ \tau \tag{1}\]
\[\beta=\frac{n\ k\ T}{B^{2}/2\mu_{0}} \tag{2}\]
\[E_{f}=\ N_{n}E_{r} \tag{3}\]
\[E_{p}=\frac{3}{2}n\ V\ k\ T \tag{4}\]
\[E_{b}\sim 10^{-38}\,Z^{2}\,n^{2}\,T[\mathrm{eV}]^{0.5}\,V\,\tau \tag{5}\]
\[E_{s}\sim\frac{2Kq^{2}\ \gamma^{4}\ c}{3\ r^{2}}\ N_{e}\ \tau \tag{6}\]
\[G_{S}=\frac{E_{f}}{E_{p}} \tag{7}\]
\[G_{E}=\frac{E_{f}}{E_{T}} \tag{8}\]
\[E_{sale}=f[E_{f}-E_{p}-E_{b}-E_{s}] \tag{9}\]
\[\nu_{A}=\frac{B}{\sqrt{\mu_{0}\rho}} \tag{10}\]
\[\tau_{R}=\frac{L^{z}}{\delta v_{A}} \tag{11}\]
Where \(n\) is the plasma density; \(<\varpi\)\(>\) is the fusion reaction rate; \(V\) is the plasma/beam/ring volume; \(\tau\) is the confinement time; \(k\) is the Boltzmann constant; \(T\) is plasma temperature; \(B\) is the magnetic field; \(\mu_{0}\) is the vacuum permeability; \(E_{r}\) is the fusion energy per reaction; \(Z\) is the atomic number; \(K\) is the Coulomb constant; \(q\) is the charge; \(\gamma\) is the Lorentz factor; \(c\) is the speed of light in vacuum; \(N_{e}\) is the number of electrons; \(E_{T}\) is the total stored energy of reactor; \(f\) is the conversion efficiency; \(\rho\) is the mass density; \(L\) is the half-length of current sheet; and \(\delta\) is the current sheet half-thickness.
### _Relativistic Formulas, Modified EFE, NQG and FOM_
In addition to the standard relativistic formulas for the Lorentz factor (12), momentum (13) and energy (14), we also introduce a modified EFE (15) with a NQG addition (16) that may be accessible by the WARP Reactor for verification or invalidation of the theory along with relevant FOM such as spacetime curvature (17), gravitational potential (18) and frame-dragging effects (19).
\[\gamma=\frac{1}{\sqrt{1-\frac{v^{2}}{c^{2}}}} \tag{12}\]
\[\vec{p}=\gamma m\vec{\upsilon} \tag{13}\]
\[E=\gamma mc^{2} \tag{14}\]
\[G_{\mu\nu}=\frac{8\pi G}{c^{4}}\,(\mathrm{S}+\mathrm{A})\,T_{\mu\nu} \tag{15}\]
\[T_{\mu\nu}\to\mathrm{Re}\Big[\frac{\psi_{f}^{\ast}\,\hat{T}_{\mu\nu}\,\psi_{i}}{\langle f\,|\,i\rangle}\Big] \tag{16}\]
\[C_{\alpha\alpha}=(S+A)\frac{G\ M}{c^{2}\,V} \tag{17}\]
\[\Phi_{\alpha\alpha}=(S+A)\frac{G\ M}{c^{2}\,R} \tag{18}\]
\[\Omega_{\alpha\alpha}=(S+A)\frac{G\ I\ \omega}{c^{2}\,R^{3}} \tag{19}\]
\(G_{\mu\nu}\) - Einstein curvature tensor; \(T_{\mu\nu}\) - energy-momentum tensor; \(8\pi G/c^{4}\) - energy-momentum to curvature coupling constant in vacuum; \(G\) - Newton's gravitational constant; "S" - Sarfatti plasma metamaterial effects; "A" - Anderson Unruh/Casimir threshold effects; \(T_{\mu\nu}\) in (16) - Sutherland NQG addition; \(\hat{T}_{\mu\nu}\) - energy-momentum operator; \(\psi_{i}\) and \(\psi_{f}^{\ast}\) - initial and final (conjugate) wavefunctions, respectively; \(\langle f\,|\,i\rangle\) - final and initial boundary conditions; FOM: \(C_{\alpha\alpha}\) - spacetime curvature; \(\Phi_{\alpha\alpha}\) - gravitational potential; and \(\Omega_{\alpha\alpha}\) - frame-dragging effect; \(M\) - ring mass; \(R\) - ring radius; \(I\) - ring moment of inertia; \(\omega\) - ring angular velocity.
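The figures of merit (17)-(19) can likewise be expressed as simple functions; the \((S+A)\) factor is left as an explicit input, since its magnitude is precisely what the proposed experiments would constrain, and no numerical ring parameters are assumed.

```python
# Minimal sketch of the figures of merit (17)-(19) in MKS units.
G = 6.67430e-11      # Newton's gravitational constant [m^3 kg^-1 s^-2]
C = 2.99792458e8     # speed of light in vacuum [m/s]

def curvature_fom(s_plus_a, ring_mass, volume):                    # eq. (17)
    return s_plus_a * G * ring_mass / (C**2 * volume)

def potential_fom(s_plus_a, ring_mass, ring_radius):               # eq. (18)
    return s_plus_a * G * ring_mass / (C**2 * ring_radius)

def frame_dragging_fom(s_plus_a, moment_of_inertia, omega, ring_radius):  # eq. (19)
    return s_plus_a * G * moment_of_inertia * omega / (C**2 * ring_radius**3)
```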
## V WARP Reactor Technology
The WARP Reactor utilizes tried-and-true Shiva Star-like "TEMPEST" Marx Modules to drive its dual Dense Plasma Focus head and state-of-the-art Impedance-matched Marx Generators (or Linear Transformer Drivers for rapid rep-rate operation) to drive the dual charged particle beam-ring reflex triodes. Figures 2, 3 and 4 show a side-view end-view and cross-sectional view of the full-scale WARP Reactor, respectively. The WARP Reactor consists primarily of 40 TEMPEST modules, 2 Ion Ring Marx Generators and the central WARP Core.
### _TEMPEST Marx Modules_
Forty TEMPEST Marx modules drive the dual DPF heads. A cross-sectional view of a single TEMPEST Marx module is provided in Figure 5; each module is a more robust version of the Shiva Star design with upgraded 1.2MA railgap switches, a seismically-rated welded-frame capacitor assembly, and a modified HV output header for the flexible high-current coaxial cable connections. Each TEMPEST module primarily comprises four super-duty railgap switches, aluminum parallel-plate transmission lines, and twenty-four +/-60kV, 250kA high energy density capacitors, giving a stored energy of \(\sim\)260kJ per module and \(>\)10MJ for the 40-module TEMPEST system, which is capable of delivering \(\sim\)60MA to the DPF loads.
two back-ends of the dual Reflex Triodes through the IRMG Post-hole convolutes. Finally, the primary WARP Core and most novel central components consist of coaxial dual DPF (\(\sim\)13cm diameter) and IRMG heads with embedded Reflex Triodes (\(\sim\)10cm diameter).
## VI WARP Reactor Machine Metrics
Table I shows the major pulsed power parameters and the DPF plasma liner and ion beam/ring metrics, along with order-of-magnitude comparisons with one of the largest ion beam accelerators in recent times, PBFA II. The WARP-X prototype is a 1/10-scale version of the full-scale Reactor that can deliver up to 5MA in \(<\) 3us into the DPF plasma liner, whereas the 4-stage prototype version of the Ion Ring Marx Generators can generate initial ion beam energies and currents of up to 400keV and 200kA, respectively. The lower section of Table I provides a comparison of plasma liner/beam/ion ring parameters between PBFA II/Z [52, 53] and the WARP devices, with the 60MA WARP-R (Reactor) machine showing order-of-magnitude increases in ion beam energy (GeV-level) and current (20MA at implosion stagnation) with 25% acceleration efficiency.
dynamic interrogation applications and/or accessing new RHED physics regimes.
To aid in visualizing the WARP Core Operations we have created a movie with respective phases identified. Figures 13-17 show the movie's progression from FRAME 1: the DPF plasma lift-off and run-down phases followed by FRAME 2: the Ion Beams Generation and DPF run-in phases then FRAME 3: the Ion Rings Generation and Merging phases followed by FRAME 4: the DPFs and Ion Ring Implosion and Acceleration phases and finally FRAME 5: the WARP Fusion phase, respectively.
and ohmic heating; (4) adiabatic heating by compression; (5) fuel opacity and radiative loss; (6) B-flux compression with; (7) magnetized electron and ion thermal conduction losses; (8) end losses; (9) enhanced losses due to mixing; (10) D-D and D-T primary fusion reactions for fuel ratios; and (11) \(\alpha\)-particle fuel heating. However, in order to expedite comparisons between our multi-physics models/simulations and semi-analytical models, we have created a simplified version of their MagLIF model in Mathematica which also allows rapid scanning of WARP Reactor input parameter space for obtaining desired operation scenarios and higher performance regimes.
In addition to the multi-physics models and simulations, we have also created 3D Computer Aided Design (CAD) engineering models with associated Finite Element Analysis (FEA) for electrical and mechanical stresses, along with circuit models and simulations for the TEMPEST system, the IRMGs (both the single-pulse IMG and rep-rated LTD varieties) and the WARP Core dynamic loads, as shown in Figures 18 and 19, respectively.
Unfortunately, due to budget and computational constraints, a somewhat reduced simulation effort was performed for a mini-WARP Core in the Chicago particle-in-cell code in hybrid-kinetic mode, in which the electrons are treated as an inertia-less fluid using MHD equations for the electron response while the ions are treated as kinetic particles. Figure 20 shows R-Btheta (the radially enclosed current) and the corresponding plasma density at a specific time, both as functions of radial and axial position in centimeters. Figures 21-25 are snapshots of the simulation for the various phases, from the dual DPF initial plasma generation, run-down and run-in to the merging and final implosion/pinch phases.
Fig. 21: Mini-WARP Core dual-DPF plasma generation phase
Fig. 22: Mini-WARP Core dual-DPF plasma run-down phase
Fig. 23: Mini-WARP Core dual-DPF plasma run-in phase
Fig. 24: Mini-WARP Core dual-DPF plasma merging phase
Fig. 18: WARP Reactor CAD modeling, FEA simulations, circuit models and analysis for multi-pulse IRMG (LTD-version) operations
Fig. 20: Mini-WARP Core R-Btheta (top) and plasma density (bottom)
DPF plasmoid initial characteristics and propagation mode via the diamagnetic drift mechanism for this parameter regime. Table IV shows the ion beam propagates in the collective mode unless the polarization E-field is shorted by background electrons. Finally, Table V displays the Reflex Triode and initial ion beam parameters along with the final compression (millimeter scale) and accelerations (in m/s\({}^{2}\)). NOTE: of particular interest to Warp, Unruh, dynamic Casimir and QG effects, the electron ring mode of operation indicates submicron implosion radii and correspondingly extreme accelerations. Of central interest here are the enhancement of
energy-momentum to spacetime curvature coupling due to plasma metamaterial effects via the Sarfatti "S" field factor; the initiation of direct spacetime metric phase change generated by accelerating and imploding RHED charged particle rings beyond Unruh/Casimir thresholds via the Anderson "A" field factor and QG effects via Sutherland's NQG theory with predicted major results provided in Table 9.
## IX Conclusion
In conclusion, we envision the WARP Reactor as a more compact, modular and economically viable magneto-inertial fusion device for energy production (i.e. G \(>\) 19/pulse, nT \(\sim\) 5.2x10\({}^{21}\) keVs/m\({}^{3}\)) or advanced propulsion and/or a superradiant flash x-ray/neutron source for dynamic radiographic applications (i.e. x-ray/neutron yields per pulse \(>\) 3.5MJ/6.6x10\({}^{18}\), Avg. Luminance \(\sim\) 10\({}^{25}\) x-ray photons / s mm\({}^{2}\) mrad\({}^{2}\)), along with providing access to new relativistic high energy density physics regimes (i.e. multi-MA, GeV-level charged plasma/particle beam-target interactions at multi-TPa, Unruh, Dynamic Casimir & QG effects). The WARP devices would fulfill the immediate need for an intermediate-level machine for z-pinch, beam and pulsed-power flow studies along with the added benefit of recruiting the next generation of
RHED plasma and accelerator scientists, engineers and technicians. Finally, WARP would be an ideal platform for prototyping novel pulsed power architectures for continuous rep-rate nuclear fusion and radiographic movie operations thereby enabling us to continue our collaborations across the Department of Energy (DOE)/Department of Defense (DOD) complexes along with forging new university and private industry partners through our Cooperative Research and Development Agreement (CRADA) and Strategic Partnership (SPP) programs.
We leave you with the following Gedankenexperiment we call "A Twisted Compression of the Ehrenfest Paradox" along with three conjectures on how one might enhance energy-momentum to spacetime curvature coupling. Imagine you are one of a multitude of elementary charged particles which make up a rotating high-energy-density charge/current neutralized particle ring, embedded within an axial seed magnetic field, that is radially compressed toward zero radius and azimuthally accelerated to ultra-relativistic velocity during sufficient flux compression. According to you and a nearby inertial observer, what happens to the local spacetime surrounding said ring-vortex at the end of the implosion phase?
Finally, the three possible methods for enhancing energy-momentum to spacetime curvature coupling are as follows:
1. Multi-layer RHED plasma and charged particle ring confinement of THz radiation to create plasma/ring metamaterial effects.
2. Azimuthal acceleration beyond Unruh threshold of multi-pass RHED plasma and charged particle rings to generate Leidenfrost-like vortex layers which create spacetime phase transition.
Fig. 26: Mathematica models for proposed modified EFE and FOM with Sarfatti and Anderson fields
3. Implosion beyond Casimir threshold of RHED plasma and charged particle rings to generate internal negative energy density which also provides additional confinement mechanism to the \(>\)5kT magnetic fields in order to prevent plasma/ring metamaterial rapid disassembly.
## Acknowledgment
The authors would very much like to thank Keith LeChien, Stephen Sampayan and Nathan Meezan for collaboration, reviews, critical discussions and support. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
|
2310.12878 | Information Theoretical Approach to Detecting Quantum Gravitational
Corrections | One way to test quantum gravitational corrections is through black hole
physics. In this paper, We investigate the scales at which quantum
gravitational corrections can be detected in a black hole using information
theory. This is done by calculating the Kullback-Leibler divergence for the
probability distributions obtained from the Parikh-Wilczek formalism. We
observe that the quantum gravitational corrections increase the
Kullback-Leibler divergence as the mass of the black hole decreases, which is
expected as quantum gravitational corrections can be neglected for larger black
holes. However, we further observe that after a certain critical value, quantum
gravitational corrections tend to decrease again as the mass of the black hole
decreases. To understand the reason behind this behavior, we explicitly obtain
Fisher information about such quantum gravitational corrections and find that
it also increases as the mass decreases, but again, after a critical value, it
decreases. This is because at such a scale, quantum fluctuations dominate the
system and we lose information about the system. We obtain these results for
higher-dimensional black holes and observe this behavior for Kullback-Leibler
divergence and Fisher information depending on the dimensions of the black
hole. These results can quantify the scale dependence and dimension dependence
of the difficulty in detecting quantum gravitational corrections. | Behnam Pourhassan, Xiaoping Shi, Salman Sajad Wani, Saif-Al-Khawari, Farideh Kazemian, İzzet Sakallı, Naveed Ahmad Shah, Mir Faizal | 2023-10-19T16:32:53Z | http://arxiv.org/abs/2310.12878v2 | # Information Theoretical Approach to Detecting Quantum Gravitational Corrections
###### Abstract
One way to test quantum gravitational corrections is through black hole physics. In this paper, we investigate the scales at which quantum gravitational corrections can be detected in a black hole using information theory. This is done by calculating the Kullback-Leibler divergence for the probability distributions obtained from the Parikh-Wilczek formalism. We observe that the quantum gravitational corrections increase the Kullback-Leibler divergence as the mass of the black hole decreases, which is expected as quantum gravitational corrections can be neglected for larger black holes. However, we further observe that after a certain critical value, quantum gravitational corrections tend to decrease again as the mass of the black hole decreases. To understand the reason behind this behavior, we explicitly obtain Fisher information about such quantum gravitational corrections and find that it also increases as the mass decreases, but again, after a critical value, it decreases. This is because at such a scale, quantum fluctuations dominate the system and we lose information about the system. We obtain these results for higher-dimensional black holes and observe this behavior for Kullback-Leibler divergence and Fisher information depending on the dimensions of the black hole. These results can quantify the scale dependence and dimension dependence of the difficulty in detecting quantum gravitational corrections.
###### Contents
* I Introduction
* II Quantum Gravitational Corrected Geometry
* III Kullback-Leibler divergence
* IV Fisher Information
* V Stability at Quantum Scales
* VI Conclusion
* VII Acknowledgements
## I Introduction
Various different proposals for quantizing gravity lead to different theoretical modifications of low-energy quantum phenomena [1; 2; 3; 4; 5; 6]. Thus, it is important to detect effects produced by quantum gravity, and various tests have been proposed to detect such quantum gravitational effects [7; 8]. Among them, it is speculated that black hole physics can be used to test quantum gravitational effects [9; 10]. Although several tests for quantum gravity have been proposed, the scale at which quantum gravity effects are likely to be observed has not been rigorously discussed. To properly classify and quantify the dependence of quantum gravitational effects on the scale, we analyze the probability distribution of particles emitted from a black hole in the Parikh-Wilczek formalism [79; 80]. As the Parikh-Wilczek formalism considers the back reaction of particles emitted during black hole evaporation, it is sensitive to changes in the geometry due to quantum gravitational effects. This feature of the formalism thus allows the determination of the change in the probability distribution of the emitting particles due to these effects. To quantify the scale dependence of such quantum gravitational effects, we use information-theoretical techniques. We start by using the Kullback-Leibler divergence [73; 74], which measures the difference between two statistical probability distributions. While not a metric due to its asymmetry, Kullback-Leibler divergence effectively quantifies the deviation between two distributions. Thus, a higher Kullback-Leibler divergence would mean that the corrected probability distribution would have a higher deviation from the original probability distribution. This, in turn, would make it easier to experimentally detect the effects of such corrections. Accordingly, the Kullback-Leibler divergence can clearly quantify and classify the effects of quantum gravitational corrections. We observe that Kullback-Leibler divergence first increases as the mass of the black hole decreases, but beyond a certain critical value, it starts decreasing. We claim that this is due to the loss of information on very small scales. To quantify this claim, we use Fisher information, which effectively quantifies information about a parameter that can be obtained from a given distribution [85; 86]. We thus directly use Fisher information to analyze the information we obtain about quantum gravitational corrections. We demonstrate that Fisher information about quantum gravity also first increases and then decreases beyond a certain critical value. Moreover, we find that this critical value for the Kullback-Leibler divergence and Fisher information depends on dimensions. Our approach allows us to thoroughly evaluate the influence of quantum gravity on black holes across various scales and dimensions. This can be directly related to the possibility of detecting quantum gravitational corrections. We also note that quantum gravitational effects can occur at different scales in different dimensions due to the lowering of the Planck scale in higher dimensions [56; 57; 83; 84]. This observation has motivated the study of higher-dimensional Schwarzschild black holes [69; 70; 71; 72]. Thus, we analyze the effects of quantum gravitational corrections using such higher-dimensional Schwarzschild black holes. 
This allows us to explicitly investigate the effects of dimensions on quantum gravitational corrections, which can be determined by using an effective quantum corrected metric and by introducing the concept of a novel quantum mass [62]. In fact, this quantum mass becomes instrumental in analyzing changes in the probability distribution of emitted particles in the Parikh-Wilczek formalism [79; 80].
The phenomenological consequences of quantum gravitational corrections to black hole thermodynamics have been used to propose tests of quantum gravitational effects [11; 12; 13; 14], noting that the standard black hole thermodynamics is a semi-classical theory, which is obtained using quantum field theory in curved spacetime [15; 16; 17; 18]. In this approach, the thermodynamic properties of black holes are obtained by neglecting the quantum gravitational corrections. Using this equilibrium description, the entropy of black holes scales with the area of their event horizon, while their temperature scales with the surface gravity [17; 18]. This approximation holds true only for sufficiently large black holes. At these larger scales, the temperature is exceedingly low, allowing us to disregard thermal fluctuations and use equilibrium thermodynamics. However, as black holes reduce in size due to Hawking radiation [19; 20; 21], quantum gravitational corrections can no longer be neglected. Similarly, at such smaller scales, the temperature rises and thermal fluctuations cannot be dismissed. These fluctuations can be explored as perturbative corrections to equilibrium thermodynamics, leading to logarithmic corrections to the entropy of a black hole [22; 23; 25; 26; 27]. Furthermore, it is known that the geometry can be derived from thermodynamics using the Jacobson formalism [28]. Therefore, quantum fluctuations in geometry can be connected to thermal fluctuations in equilibrium thermodynamics using the Jacobson formalism [29]. This insight has spurred research into quantum gravitational corrections for various black hole scenarios [30; 31; 32; 33; 34; 35]. Additionally, the holographic principle [36; 37] has been used to analyze quantum gravitational corrections in black hole thermodynamics [38; 39; 40; 41; 42; 43; 44]. This was achieved by investigating the back reaction of the geometry using the finite \(N\) limit of boundary conformal field theory. These corrections correspond to \(\alpha^{\prime}\) corrections in the geometry, which, in turn, give rise to higher curvature corrections. These higher curvature corrections are known to introduce modifications to the standard thermodynamics of black holes [45; 46]. Various other approaches have also been used to study the effects of quantum gravity on black holes [38; 39; 40; 41; 42; 43; 44]. These corrections correspond only to perturbative quantum gravitational corrections to geometry, which are obtained from perturbative thermal corrections to equilibrium thermodynamics. These corrections occur at a scale so small that neglecting the perturbative corrections to equilibrium thermodynamics is not permissible. However, this scale is still much larger than the Planck scale, near which the full non-perturbative quantum gravitational corrections become significant, making it impossible to analyze the geometry
using only perturbative quantum gravitational corrections.
Non-perturbative quantum gravitational corrections to the thermodynamics of various black holes have been investigated [47; 48; 49], leading to the observation that such corrections can introduce non-trivial modifications to the thermodynamics of black holes. Non-perturbative quantum gravitational corrections to the thermodynamics of black holes have also been derived using string theoretical effects [51; 52; 53]. Additionally, general arguments based on the properties of conformal field theories have been used to obtain such corrections [50]. The influence of full non-perturbative quantum gravitational corrections on the thermodynamic behavior of small four-dimensional Schwarzschild black holes [58], Born-Infeld black holes [59], AdS black holes [60], Myers-Perry black holes [61], and a system of M2-M5 branes [62] have been investigated. These non-perturbative corrections introduce an exponential term to the black hole entropy, consequently modifying other thermodynamic quantities. Furthermore, these corrections play a crucial role in the short-distance stability analysis of quantum-scale black holes, affecting their heat capacity, which, in turn, determines their stability. This has significant implications for black hole evaporation, including the black hole information paradox [63; 64; 65; 66; 67; 68]. Thus, it becomes important to properly classify the effects of such corrections at various scales and in various dimensions, which we accomplish in this paper.
## II Quantum gravitational corrected geometry
In this section, we will analyze the effects of quantum gravitational corrections on the geometry of a higher-dimensional Schwarzschild black hole. We begin by reviewing the properties of a higher-dimensional black hole. The metric of a \(D\)-dimensional Schwarzschild black hole can be expressed as follows [75]:
\[ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}d\Omega_{D-2}^{2}, \tag{1}\]
where the metric function \(f(r)\) is given by,
\[f(r)=1-\frac{16\pi G_{D}M}{(D-2)\Omega_{D-2}r^{D-3}}. \tag{2}\]
Here, \(G_{D}\) represents the \(D\)-dimensional Newton's constant, and \(M\) represents the black hole mass. Additionally, \(d\Omega_{D-2}^{2}\) represents the metric of the \(D-2\)-dimensional unit sphere with an area of \(\Omega_{D-2}=2\pi^{\frac{D-1}{2}}/\Gamma\left(\frac{D-1}{2}\right)\). Using this metric, the radius of the horizon can be determined by the condition \(f(r=r_{0})=0\) and can be explicitly expressed as \(r_{0}=(16\pi G_{D}M/(D-2)\Omega_{D-2})^{\frac{1}{D-3}}\). The original equilibrium entropy for the black hole is given by \(S_{0}=\Omega_{D-2}r_{0}^{D-2}/4G_{D}\). Substituting this expression for the radius of the horizon into \(S_{0}\), the original equilibrium entropy of the black hole can be written as
\[S_{0}=\frac{1}{4G_{D}}\Omega_{D-2}\left(\frac{16\pi G_{D}M}{(D-2)\Omega_{D-2} }\right)^{\frac{D-2}{D-3}}. \tag{3}\]
Hawking temperature \(T_{0}\) of the \(D\)-dimensional Schwarzschild black hole is given by \(T_{0}=(D-3)/4\pi r_{0}\). It has been argued that non-perturbative quantum gravitational effects produce an exponential correction to the equilibrium entropy of the black hole, such that [50; 51; 52; 53; 54; 55]
\[S_{Q}=S_{0}+\eta e^{-S_{0}}, \tag{4}\]
where \(S_{Q}\) is the quantum gravitationally corrected entropy of the black hole and \(\eta\in[0,1]\) is a parameter.
It has been observed that the thermodynamic behavior of small black holes is sensitive to the value of \(\eta\)[58; 59; 60; 61]. In the limit where \(\eta=0\), we obtain the original equilibrium entropy, and in the limit where \(\eta=1\), we obtain the corrections predicted by string theory [51; 52; 53]. However, since the parameter governing quantum gravitational corrections to the black hole depends on the details of the approach [30; 31; 32; 33; 34; 35], we shall treat \(\eta\) as a general parameter and analyze the behavior of the black hole for various values of \(\eta\)[58; 59; 60; 61].
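As a concrete illustration of Eqs. (3) and (4), the following Python sketch evaluates the horizon radius, the equilibrium entropy, the Hawking temperature, and the exponentially corrected entropy of a \(D\)-dimensional Schwarzschild black hole. The Planck-unit convention (\(G_{D}=1\)), the sample masses, and the function names are illustrative choices of ours, not something fixed by the text.

```python
import math

def omega(D):
    """Area of the unit (D-2)-sphere: Omega_{D-2} = 2*pi^((D-1)/2) / Gamma((D-1)/2)."""
    return 2.0 * math.pi ** ((D - 1) / 2) / math.gamma((D - 1) / 2)

def horizon_radius(M, D, G=1.0):
    """r_0 = (16*pi*G_D*M / ((D-2)*Omega_{D-2}))^(1/(D-3)), obtained from f(r_0) = 0."""
    return (16.0 * math.pi * G * M / ((D - 2) * omega(D))) ** (1.0 / (D - 3))

def entropy(M, D, G=1.0):
    """Equilibrium entropy S_0 = Omega_{D-2} * r_0^(D-2) / (4*G_D), equivalent to Eq. (3)."""
    return omega(D) * horizon_radius(M, D, G) ** (D - 2) / (4.0 * G)

def temperature(M, D, G=1.0):
    """Hawking temperature T_0 = (D-3) / (4*pi*r_0)."""
    return (D - 3) / (4.0 * math.pi * horizon_radius(M, D, G))

def corrected_entropy(M, D, eta=1.0, G=1.0):
    """Exponentially corrected entropy S_Q = S_0 + eta*exp(-S_0), Eq. (4)."""
    S0 = entropy(M, D, G)
    return S0 + eta * math.exp(-S0)

if __name__ == "__main__":
    for D in (4, 5, 6, 11):
        for M in (0.5, 1.0, 5.0):  # sample masses in Planck units (illustrative only)
            print(f"D={D:2d} M={M:4.1f}  S_0={entropy(M, D):10.3f}  "
                  f"T_0={temperature(M, D):7.4f}  S_Q-S_0={corrected_entropy(M, D) - entropy(M, D):.3e}")
```

Running this for a few masses simply makes visible how quickly the correction term \(\eta e^{-S_{0}}\) becomes negligible once \(S_{0}\) exceeds a few units.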
We can now utilize this quantum gravitationally corrected entropy to analyze the effects of quantum gravitational corrections on the geometry of a \(D\)-dimensional Schwarzschild black hole. It is well-established that this geometry can be derived through thermodynamics in the Jacobson formalism [28]. Consequently, at short distances, quantum fluctuations in the geometry can be linked to thermal fluctuations in the thermodynamics of the black hole [29]. Therefore, it becomes possible to use corrections to the equilibrium entropy of a black hole to derive quantum gravitational corrections to its metric. This approach is designed to ensure that the quantum-corrected metric directly yields the quantum gravitational corrected entropy. In pursuit of this objective, we introduce a modified metric for a
Schwarzschild black hole, which naturally generates the corrected entropy in an exponential form. Thus, the original metric (1) undergoes modification to a new quantum-corrected effective metric, with \(f(r)_{Q}\) replacing the original \(f(r)\), such that
\[f(r)_{Q}=1-\frac{16\pi G_{D}\left(1-\eta e^{-\frac{\pi r_{0}^{D-2}}{G_{D}}} \right)^{D-3}M}{(D-2)\Omega_{D-2}r^{D-3}}. \tag{5}\]
The standard entropy obtained from this quantum-corrected metric is the quantum gravitational-corrected entropy of the black hole given in Eq. (4). This quantum-corrected metric can also be used to investigate the effects of quantum gravitational corrections on various thermodynamic quantities. Consequently, we observe that the temperature is modified due to quantum gravitational effects as \(T_{Q}=f^{\prime}(r_{0})_{Q}/4\pi=(D-3)/4\pi r_{0}(1-\eta e^{-S_{0}})\).
However, to analyze the importance of a quantum gravitationally corrected black hole, it is useful to define a novel quantum mass [62]. This will also become important for analyzing the impact of quantum gravitational corrections on the system. Such a quantum mass would reduce to the original mass when quantum gravitational corrections are neglected. Conversely, when these quantum gravitational corrections are taken into account, the quantum mass replaces the original mass in the metric. Using the expression for the quantum corrected metric given by Eq. 5, we can define a novel quantum mass for the \(D\)-dimensional Schwarzschild black hole as
\[M_{Q}=\left[1-\eta\exp\left(-\frac{\pi r_{0}^{D-2}}{G_{D}}\right)\right]^{D-3}M. \tag{6}\]
For large black holes, the effects of quantum gravitational corrections can be neglected, and the analysis can be performed using the quantum mass, which will coincide with the analysis conducted using the original mass of the black hole; deviation will occur only at quantum scales.
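A minimal sketch of Eqs. (5) and (6), under the same Planck-unit assumption (\(G_{D}=1\)) and with function names of our own choosing, is the following.

```python
import math

def quantum_mass(M, r0, D, eta=1.0, G=1.0):
    """Quantum mass M_Q = [1 - eta*exp(-pi*r_0^(D-2)/G_D)]^(D-3) * M, Eq. (6)."""
    return (1.0 - eta * math.exp(-math.pi * r0 ** (D - 2) / G)) ** (D - 3) * M

def f_quantum(r, M, r0, D, eta=1.0, G=1.0):
    """Quantum-corrected metric function of Eq. (5): the original f(r) with M replaced by M_Q."""
    omega = 2.0 * math.pi ** ((D - 1) / 2) / math.gamma((D - 1) / 2)
    return 1.0 - 16.0 * math.pi * G * quantum_mass(M, r0, D, eta, G) / ((D - 2) * omega * r ** (D - 3))

if __name__ == "__main__":
    D, M, G = 4, 1.0, 1.0
    r0 = 2.0 * G * M                              # horizon radius of the uncorrected D=4 solution
    print("M_Q / M  =", quantum_mass(M, r0, D) / M)
    print("f_Q(r_0) =", f_quantum(r0, M, r0, D))  # slightly positive: the corrected horizon lies inside r_0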
## III Kullback-Leibler divergence
In the previous sections, we analyzed the modifications to a higher-dimensional black hole from quantum gravitational corrections. Now, we quantify the deviations from the standard behavior due to such corrections. This will be done using Kullback-Leibler divergence [73; 74], which measures how much two probability distributions differ from each other. Even though Kullback-Leibler divergence is not symmetric, and hence not a metric, it does give a realistic estimation of how far a given probability distribution is from another probability distribution. It is possible to obtain the probability distribution of the particles emitted from a black hole during its evaporation using the Parikh-Wilczek formalism [79; 80], which considers the back reaction of the emitted particles on the black hole geometry and proposes that the entropy of the black hole can be expressed in terms of the probability distribution of the emitted particles. As the entropy of the black hole is modified by quantum gravitational corrections, this probability distribution in the Parikh-Wilczek formalism will also be modified. We will explicitly analyze such modifications to the probability distribution of the emitted particles, and use them to estimate the deviation of the behavior of a quantum-corrected black hole from the original black hole. For a higher-dimensional Schwarzschild black hole, we assume that \(p_{n}\) is the probability that the black hole evaporates by radiating \(n\) particles. We can write this probability as
\[p_{n}=\frac{\Omega_{n}}{\Omega_{total}}, \tag{7}\]
where \(\Omega_{n}\) is the number of microstates when the black hole evaporates into \(n\) particles, and \(\Omega_{total}=\sum_{1}^{\infty}\Omega_{n}\) is the total number of the possible microstates of the system. It has been demonstrated that for a Schwarzschild black hole of mass \(M\), \(p_{n}\) can be expressed in terms of \(M\)[80]. Although the expression was explicitly analyzed for a four-dimensional black hole, the arguments used to obtain this distribution [80] are very general and hold in any dimension. Thus, we can write this probability distribution for a higher-dimensional Schwarzschild black hole as
\[p_{n}=\frac{(4\pi G_{D}M^{2})^{n-1}}{(n-1)!}e^{-4\pi G_{D}M^{2}}. \tag{8}\]
The probability distribution \(p_{n}\) is normalized. As the black hole evaporates by emitting \(n\) particles, the probability that one among \(\Omega_{n}\) microstates occurs is \(q_{\alpha}\). Since it is assumed that all of the \(\Omega_{total}\) microstates occur with the same
probability, we can write \(q_{\alpha}=1/\Omega_{n}\). The total probability distribution for both the probabilities of the number of particles that a four-dimensional Schwarzschild black hole emits during its evaporation and the possible microstates has been obtained [80]. This can be easily generalized to higher dimensions, where this probability distribution can be written as
\[P_{n}(0)=q_{\alpha}\times p_{n}=\frac{1}{\Omega_{total}}=\frac{1}{\sum_{n=1}^{ \infty}\frac{(4\pi G_{D}M^{2})^{n-1}}{(n-1)!}e^{-4\pi G_{D}M^{2}}}=e^{-4\pi G_{ D}M^{2}}, \tag{3.3}\]
where the zero in \(P_{n}(0)\) denotes the absence of any quantum gravitational corrections. Here, we observe that this probability distribution depends on the mass of the black hole.
As the arguments are general, these also hold for a quantum-sized black hole, with the mass being replaced by the novel quantum mass of the black hole. This is because the quantum mass naturally reproduces the quantum-corrected entropy of the quantum-sized black hole. As the entropy obtained from the Parikh-Wilczek formalism [79; 80] has to be consistent with the entropy of the black hole obtained from standard methods, the quantum gravitationally corrected entropy produced by the Parikh-Wilczek formalism [79; 80] should also coincide with the quantum gravitationally corrected entropy produced using standard methods. Thus, we can also write the probability distribution for a quantum-corrected higher-dimensional Schwarzschild black hole, using the novel quantum mass \(M_{Q}=[1-\eta\exp(-\pi r_{0}^{D-2}/G_{D})]^{D-3}M\)
\[P_{n}(\eta)=\frac{1}{\sum_{n=1}^{\infty}\frac{(4\pi GM_{Q}^{2})^{n-1}}{(n-1)! }e^{-4\pi GM_{Q}^{2}}}=e^{-4\pi GM_{Q}^{2}}. \tag{3.4}\]
Since \(\eta\) is a very small constant, the equation for the modified mass can be approximated as \(M_{Q}=M(1+(3-D)\eta e^{-\pi r_{0}^{D-2}/G_{D}})\). It may be noted that \(P_{n}(\eta)|_{\eta=0}=P_{n}(0)\). We can use the Kullback-Leibler divergence [73; 74] to measure the deviations due to quantum gravitational corrections. The Kullback-Leibler divergence is given by
\[D_{KL}(P_{n}(\eta)||P_{n}(0)) = \sum_{n=1}^{\Omega_{total}}P_{n}(\eta)\log\frac{P_{n}(0)}{P_{n}( \eta)}=\sum_{n=1}^{\Omega_{total}}e^{-4\pi GM_{Q}^{2}}\log\frac{e^{-4\pi GM_{Q }^{2}}}{e^{-4\pi GM^{2}}} \tag{3.5}\] \[= e^{-4\pi M^{2}G\eta\alpha}\log\frac{e^{-4\pi GM_{Q}^{2}}}{e^{-4 \pi GM^{2}}}=e^{(4\pi GM^{2}\eta\alpha)}(4\pi GM^{2}\eta\alpha).\]
where \(\alpha=(D-3)e^{-\pi r_{0}^{D-2}/G}\).
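As a rough numerical illustration of Eq. (3.5), the sketch below evaluates this closed-form Kullback-Leibler divergence as a function of the mass for a few dimensions. It assumes Planck units (\(G=1\)) and \(\eta=1\), and the mass grid is an arbitrary illustrative choice of ours; scanning the grid locates the mass at which the divergence peaks, which is the behavior discussed next.

```python
import math

def alpha_factor(M, D, G=1.0):
    """alpha = (D-3)*exp(-pi*r_0^(D-2)/G), with r_0 the horizon radius of the black hole."""
    omega = 2.0 * math.pi ** ((D - 1) / 2) / math.gamma((D - 1) / 2)
    r0 = (16.0 * math.pi * G * M / ((D - 2) * omega)) ** (1.0 / (D - 3))
    return (D - 3) * math.exp(-math.pi * r0 ** (D - 2) / G)

def kl_divergence(M, D, eta=1.0, G=1.0):
    """Closed form of Eq. (3.5): D_KL = exp(4*pi*G*M^2*eta*alpha) * (4*pi*G*M^2*eta*alpha)."""
    x = 4.0 * math.pi * G * M ** 2 * eta * alpha_factor(M, D, G)
    return math.exp(x) * x

if __name__ == "__main__":
    masses = [0.05 * k for k in range(1, 61)]   # illustrative mass grid in Planck units
    for D in (4, 5, 6):
        m_peak = max(masses, key=lambda m: kl_divergence(m, D))
        print(f"D={D}: D_KL is largest near M ~ {m_peak:.2f} on this grid")
```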
It is interesting to note that for this case, although \(D_{KL}(P_{n}(\eta)||P_{n}(0))\) is well defined, \(D_{KL}(P_{n}(0)||P_{n}(\eta))\) is not. The reason is that for two distributions \(P(X)\) and \(Q(Y)\) to have a valid \(D_{KL}(P(X)||Q(Y))\), whenever \(Q(Y)=0\) we must also have \(P(X)=0\), which requires \(\Re_{X}\subseteq\Re_{Y}\). Here the variable of \(P_{n}(\eta)\) ranges over \([1,\Omega_{total}^{\prime}=e^{4\pi GM_{Q}^{2}}]\), while the variable of \(P_{n}(0)\) ranges over \([1,\Omega_{total}=e^{4\pi GM^{2}}]\). It is easy to see that \(M\geq M_{Q}\) for all positive values of \(\eta\); hence \(\Re_{n^{\prime}}\subseteq\Re_{n}\), and only \(D_{KL}(P_{n}(\eta)||P_{n}(0))\) is well defined.
We observe from Fig. 1 that the Kullback-Leibler divergence depends not only on the size of the black hole but also on its dimensions. As the dimensions increase, the Kullback-Leibler divergence becomes significant even for relatively larger masses. This implies that in higher dimensions, quantum gravitational effects can occur even at larger scales. It has already been pointed out that the Planck scale decreases as dimensions increase, and, consequently, quantum gravitational effects can become significant [83; 84]. Here, we have been able to rigorously demonstrate this using Kullback-Leibler divergence. However, what is rather unexpected is that the Kullback-Leibler divergence decreases again as the mass becomes lower than a certain critical value. Thus, it seems that there exists a certain critical scale for measuring quantum gravitational corrections, and this scale depends on the dimensions. Above this scale, the effect of these corrections becomes negligible, which is expected. However, going below this critical scale, the effect of the corrections also diminishes. The reason for such a reduction seems to be that below this scale, the quantum fluctuations dominate the system, and we lose information about the system. The loss of information corresponds to a loss of predictive power, and hence the Kullback-Leibler divergence between the original and corrected probability distributions vanishes. This also implies that there is a certain critical value at which we can test quantum gravity. We observe that as we increase the dimensions, the range of this value reduces, and this makes it harder to detect quantum gravitational corrections in extra dimensions.
## IV Fisher information
In the previous section, we analyzed the Kullback-Leibler divergence between the original and corrected probability distributions and observed that it reduces with masses lower than a certain critical value. We speculated that this was due to a loss of information. In this section, we will quantify this argument by using the concept of Fisher information. Fisher information measures the information about a parameter that can be obtained from a probability distribution [85; 86]. To analyze how much information can be obtained about quantum gravitational corrections from the probability distribution of particles emitted from a black hole during its evaporation, we obtain Fisher information of the parameter \(\eta\). As Fisher information is related to Kullback-Leibler divergence [87; 88], we measure Fisher information by first defining Kullback-Leibler divergence between two corrected probability distributions over different values of \(\eta\); namely: \(P_{n}(\eta)\) and \(P_{n}(\eta+\delta\eta)\). We measure the Fisher information associated with \(\eta\), as we move in the \(\eta\) space between \(\eta\) and \(\eta+\delta\eta\). In the limiting case, we will get our previous results back, when we set \(\eta=0\). Now we can use the Taylor expansion to expand \(P_{n}(\eta+\delta\eta)\), and find its relation to \(P_{n}(\eta)\)
\[P_{n}(\eta+\delta\eta)\approx P_{n}(\eta)+\delta\eta\frac{\partial P_{n}(\eta )}{\partial\eta}=P_{n}(\eta)+\delta P_{n}(\eta). \tag{10}\]
Here, we define the correction to the probability distribution from \(\eta\) to \(\eta+\delta\eta\) as \(\delta P_{n}(\eta)\). We can write the Kullback-Leibler divergence between the probability distributions at \(\eta\) and \(\eta+\delta\eta\) as
\[D_{KL}(P_{n}(\eta+\delta\eta)||P_{n}(\eta)) = \sum_{n=1}^{\Omega}(P_{n}(\eta+\delta\eta)\log\frac{P_{n}(\eta+ \delta\eta)}{P_{n}(\eta)}). \tag{11}\]
We write the argument of the logarithm as \((1+\delta P_{n}(\eta)/P_{n}(\eta))\), and thus obtain
\[D_{KL}(P_{n}(\eta+\delta\eta)||P_{n}(\eta)) = \sum_{n=1}^{\Omega}\left(P_{n}(\eta)+\delta P_{n}(\eta)\right)\log\frac{P_{n}(\eta)+\delta P_{n}(\eta)}{P_{n}(\eta)} \tag{12}\] \[= \sum_{n=1}^{\Omega}P_{n}(\eta)\log\left(1+\frac{\delta P_{n}(\eta)}{P_{n}(\eta)}\right)+\sum_{n=1}^{\Omega}\delta P_{n}(\eta)\log\left(1+\frac{\delta P_{n}(\eta)}{P_{n}(\eta)}\right).\]
We expand the logarithm term up to the second order in \(\delta P_{n}\), and neglect other higher order terms. Thus, using \(\log(1+\delta P_{n}(\eta)/P_{n}(\eta))\approx(\delta P_{n}(\eta)/P_{n}(\eta))-(( \delta P_{n}(\eta))^{2}/2P_{n}(\eta)^{2})\), we write
\[D_{KL}(P_{n}(\eta+\delta\eta)||P_{n}(\eta))\approx\sum_{n=1}^{\Omega}\delta P_{ n}(\eta)+\frac{1}{2}\sum_{n=1}^{\Omega}\left(P_{n}(\eta)\frac{(\delta P_{n}( \eta))^{2}}{P_{n}(\eta)^{2}}\right). \tag{10}\]
We can use the expression for the probability distributions, and obtain an explicit expression for the Kullback-Leibler divergence in terms of the mass of the black hole. Moreover, we express \(\delta P_{n}(\eta)=\delta\eta\partial P_{n}(\eta)/\partial\eta\), and \((\delta P_{n}(\eta))^{2}=(\delta\eta)^{2}(\partial P_{n}(\eta)/\partial\eta)^ {2}\). Putting these back in the expression above we can express the Kullback-Leibler divergence as
\[D_{KL}(P_{n}(\eta+\delta\eta)||P_{n}(\eta)) \approx \sum_{n=1}^{\Omega}\delta\eta\frac{\partial P_{n}(\eta)}{\partial \eta}+\sum\frac{1}{2}(\delta\eta)^{2}P_{n}(\eta)\Bigg{(}\frac{1}{P_{n}(\eta)} \frac{\partial P_{n}(\eta)}{\partial\eta}\Bigg{)}^{2} \tag{11}\] \[\approx \delta\eta\frac{\partial\sum_{n=1}^{\Omega}\exp{4\pi GM^{2}(1+ \eta\alpha)}}{\partial\eta}\] \[+\frac{1}{2}\delta\eta^{2}\sum_{n=1}^{\Omega}\exp{4\pi GM^{2}(1+ \eta\alpha)}\Big{(}\frac{\partial}{\partial\eta}(\log(\exp{4\pi GM^{2}(1+\eta \alpha)})\Big{)}\Big{)}^{2}.\]
Since the argument, say \(F(P_{n}(\eta))\), of the summations in the expressions above does not have an explicit \(n\) dependence, we can write \(\sum_{n=1}^{\Omega}F(P_{n}(\eta))=\Omega F(P_{n}(\eta))\). Thus, we can write
\[D_{KL}(P_{n}(\eta+\delta\eta)||P_{n}(\eta)) \approx (\delta\eta)^{2}\sum_{n=1}^{\Omega}\frac{1}{2}\exp\left(4\pi GM^{2}(1+\eta\alpha)\right)\left(\frac{\partial}{\partial\eta}\log\left(\exp\left(-4\pi GM^{2}(1+\eta\alpha)\right)\right)\right)^{2}. \tag{12}\]
The first term is zero, as \(\Omega=\exp{4\pi GM^{2}(1+\eta\alpha)}\) and so \(\sum_{n=1}^{\Omega}\exp{-4\pi GM^{2}(1+\eta\alpha)}=1\). Moreover, since the Kullback-Leibler divergence has a minimum at \(\eta=0\), its first derivative is zero at \(\eta=0\),
\[D_{KL}(P_{n}(\eta+\delta\eta)||P_{n}(\eta))=\frac{1}{2}(\delta\eta)^{2}{\bf E }\left(\left(\frac{\partial\log P(\eta)}{\partial\eta}\right)^{2}\right). \tag{13}\]
Here \({\bf E}\) denotes the expectation value. Thus, the Fisher Information \(FI\) can be obtained from Kullback-Leibler divergence as
\[FI(\eta)\approx\frac{\partial^{2}}{\partial^{2}\eta}D_{KL}(P_{n}(\eta+\delta \eta)||P_{n}(\eta))|_{\delta\eta=0}={\bf E}\Bigg{(}\frac{\partial\log P(\eta)} {\partial\eta}\Bigg{)}^{2}\Bigg{|}_{\delta\eta=0}. \tag{14}\]
Now using the explicit expression for the Kullback-Leibler divergence between a corrected value and an original value at \(\eta=0\), we can also write an explicit expression for the Fisher information as (with \(\alpha=(D-3)e^{-\pi r_{0}^{D-2}/G}\))
\[FI\approx(8\pi GM^{2}\alpha)^{2}. \tag{15}\]
We observe from Fig. 2 that the Fisher information about quantum gravity decreases for larger black holes; this is expected, as the quantum gravitational corrections have only negligible effects at larger scales. We also observe that, below a certain critical value, Fisher information decreases as the mass decreases further. This explains the decrease in the Kullback-Leibler divergence: the quantum fluctuations dominate at such a scale, and we lose all information about the system. Therefore, there seems to be a critical scale at which we have maximum Fisher information about the quantum gravitational corrections, and it is around this scale that the Kullback-Leibler divergence between the original and quantum gravitationally corrected probability distributions is also maximized. Using Fisher information, we could identify the reason for the decrease in Kullback-Leibler divergence below this scale. We also observe that, like the Kullback-Leibler divergence, the scale dependence of Fisher information also depends on the dimensions.
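A small script along the following lines, again assuming Planck units (\(G=1\)) and using the closed form of Eq. (15), can be used to locate the mass at which the Fisher information peaks in each dimension; the mass grid and the function name are illustrative assumptions on our part.

```python
import math

def fisher_information(M, D, G=1.0):
    """FI ~ (8*pi*G*M^2*alpha)^2, Eq. (15), with alpha = (D-3)*exp(-pi*r_0^(D-2)/G)."""
    omega = 2.0 * math.pi ** ((D - 1) / 2) / math.gamma((D - 1) / 2)
    r0 = (16.0 * math.pi * G * M / ((D - 2) * omega)) ** (1.0 / (D - 3))
    alpha = (D - 3) * math.exp(-math.pi * r0 ** (D - 2) / G)
    return (8.0 * math.pi * G * M ** 2 * alpha) ** 2

if __name__ == "__main__":
    masses = [0.02 * k for k in range(1, 251)]   # 0.02 .. 5.0 in Planck units (illustrative)
    for D in (4, 5, 6, 11):
        m_star = max(masses, key=lambda m: fisher_information(m, D))
        print(f"D={D:2d}: Fisher information peaks near M ~ {m_star:.2f} "
              f"(FI ~ {fisher_information(m_star, D):.3g})")
```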
## V Stability at quantum scales
We will observe that quantum gravitational corrections can change the stability at quantum scales. This can be done by first using the quantum corrected metric to explicitly calculate the corrections to other thermodynamic quantities.
Therefore, we can express the Helmholtz free energy of the black hole corrected for quantum gravitational effects as
\[F_{Q} = -\int S_{Q}dT_{Q}=\frac{\Omega_{D-2}}{16\pi G_{D}}r_{0}^{D-3}+\frac{ \eta(D-3)}{4\pi(D-2)}A\ \Gamma\left(\frac{-1}{D-2},A\ r_{0}^{D-2}\right), \tag{10}\]
where \(A\equiv(\Omega_{D-2}/4G_{D})^{\frac{1}{D-2}}\). We can use this expression for quantum gravitationally corrected Helmholtz free energy to obtain the expression for quantum gravitationally corrected internal energy
\[E_{Q}=F_{Q}+T_{Q}S_{Q} = \frac{\Omega_{D-2}}{16\pi G_{D}}r_{0}^{D-3}+\eta\frac{(D-3)}{4\pi (D-2)}A\Gamma(\frac{-1}{D-2},Ar_{0}^{D-2}) \tag{11}\] \[+\frac{(D-3)}{r_{0}(1-\eta e^{-S_{0}})}(S_{0}+\eta e^{-S_{0}}).\]
We observe that various thermodynamic quantities for a higher-dimensional black hole can also be directly obtained from the quantum gravitationally corrected metric. However, to analyze the stability of a quantum gravitationally corrected black hole, it is useful to define a novel quantum mass [62]. We can explicitly express the quantum gravitationally corrected specific heat in terms of the novel quantum mass as follows:
\[C_{Q} = \frac{-[S^{\prime}(M_{Q})_{Q}]^{2}}{S^{\prime\prime}(M_{Q})_{Q}}=-\frac{[S^{\prime}_{0}(1-\eta e^{-S_{0}})]^{2}}{S^{\prime\prime}_{0}(1-\eta e^{-S_{0}})+\eta(S^{\prime}_{0})^{2}e^{-S_{0}}} \tag{12}\] \[= -\frac{(D-2)S_{0}(1-\eta e^{-S_{0}})^{2}}{1+[(D-2)S_{0}-1]\eta e^{-S_{0}}}.\]
In equation 12, the numerator is clearly positive, which means that in order to have a positive specific heat, the denominator should be negative. Using the condition for thermodynamics stability, \(C_{Q}\geq 0\), we obtain \(1+[(D-2)S_{0}-1]\eta e^{-S_{0}}\leq 0.\) Now for example, for \(D=4\) with \(G_{4}=G\), we have, \(1+(8\pi GM^{2}-1)\eta e^{-4\pi GM^{2}}\leq 0\).
For a fixed mass of a black hole, we can find the lower limit for \(\eta\) for which the black hole is stable. This is illustrated in Fig. 3, where the lower limit of \(\eta\) is plotted with respect to \(r_{0}\). In Fig. 3, we observe three different regions. On the right, we plot the case with \((D-2)S_{0}>1\). The corrections are negative at large radii, and the correction terms vanish at large radii. In the center, we plot the case with \((D-2)S_{0}\approx 1\). Here the correction coefficient diverges. On the left, we plot the case with \((D-2)S_{0}<1\). The correction coefficient approaches \(1\) as the horizon radius decreases. The
Figure 2: \(FI\) as a function of \(M\), i.e., the size of the black hole, for \(D=4\), \(D=5\), \(D=6\), and \(D=11\).
third case agrees with the earlier observations about quantum gravitational corrections [50], so physically \(\eta\approx 1\). We know that the Schwarzschild black hole is unstable (at \(\eta=0\)), but in the presence of the exponential correction with the correction coefficient \(\eta\geq 1\), it may become stable at small radii. In the plots of Fig. 4, we can see the behavior of the specific heat in terms of \(r_{0}\), and we see that in the presence of the exponential correction, the Schwarzschild black hole is stable at small radii. Although we have only analyzed two cases, i.e., \(D=4\) (Fig. 4 (a)) and \(D=5\) (Fig. 4 (b)), we expect to find similar behavior in other dimensions.
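The stability condition can also be solved explicitly for the smallest admissible \(\eta\): from \(1+[(D-2)S_{0}-1]\eta e^{-S_{0}}\leq 0\) one gets \(\eta\geq e^{S_{0}}/\big(1-(D-2)S_{0}\big)\) whenever \((D-2)S_{0}<1\), while no positive \(\eta\) works otherwise. The sketch below evaluates this bound for \(D=4\) and a few horizon radii; the Planck-unit convention (\(G=1\)), the sample radii, and the function name are our own illustrative choices.

```python
import math

def eta_lower_bound(r0, D=4, G=1.0):
    """Smallest eta with 1 + [(D-2)*S_0 - 1]*eta*exp(-S_0) <= 0 (i.e. C_Q >= 0),
    where S_0 = Omega_{D-2}*r_0^(D-2)/(4*G).  Returns None when (D-2)*S_0 >= 1,
    in which case no positive eta makes the specific heat positive."""
    omega = 2.0 * math.pi ** ((D - 1) / 2) / math.gamma((D - 1) / 2)
    S0 = omega * r0 ** (D - 2) / (4.0 * G)
    if (D - 2) * S0 >= 1.0:
        return None
    return math.exp(S0) / (1.0 - (D - 2) * S0)

if __name__ == "__main__":
    for r0 in (0.05, 0.1, 0.2, 0.3, 0.5):        # horizon radii in Planck units (illustrative)
        bound = eta_lower_bound(r0)
        label = "no positive eta stabilizes the hole" if bound is None else f"eta_min = {bound:.3f}"
        print(f"r0 = {r0:4.2f}: {label}")
```

For small radii the bound approaches values close to \(1\), consistent with the third region of Fig. 3 discussed above.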
## VI Conclusion
In this paper, we analyze the effects produced by quantum gravitational corrections on a black hole. This was done by calculating the Kullback-Leibler divergence between the original probability distribution of the particles emitted during the evaporation of a black hole and the probability distribution corrected by quantum gravitational corrections. We observe that Kullback-Leibler divergence depends both on the scale and dimensions of the system, and increases as the mass decreases, as expected. However, after a critical value, it starts to decrease as the mass decreases. This is because after this critical point, the quantum fluctuations dominate, and we lose information about the system. This is explicitly demonstrated to be the case using Fisher information. We also observe that Fisher information about quantum gravitational corrections exhibits similar behavior: first increasing and then decreasing as the mass decreases. Fisher information first increases as quantum gravitational corrections become significant when the size of the black hole decreases. However, after a certain critical value, the quantum fluctuations dominate the system, and we again start losing information about quantum gravitational corrections. This explains why Fisher information first increases and then decreases as the size of the black hole decreases. This observation has direct implications for detecting quantum gravitational corrections. It is easiest to measure the effects produced by quantum gravity at the scale at which the Kullback-Leibler divergence is maximum. Similarly, as the range of this value for which the Kullback-Leibler divergence is high also decreases with increasing dimensions, it will be very hard to detect quantum gravitational effects in extra dimensions. This is further quantified by the amount of Fisher information about quantum gravitational corrections. Thus, we have been able to quantify the scale dependence and dimension dependence of the difficulty of detecting quantum gravity. We have also commented on the stability of these black holes at quantum scales.
## VII Acknowledgements
I.S. would like to acknowledge networking support of COST Action CA18108 - Quantum gravity phenomenology in the multi-messenger approach. He also thanks TUBITAK and SCOAP3 for their support.
Figure 3: The lower limit of \(\eta\) versus \(r_{0}\) to satisfy the stability condition (positive specific heat).
## Data Availability
There is no associated data in this paper.
|
2307.11766 | Three-way Decisions with Evaluative Linguistic Expressions | We propose a linguistic interpretation of three-way decisions, where the
regions of acceptance, rejection, and non-commitment are constructed by using
the so-called evaluative linguistic expressions, which are expressions of
natural language such as small, medium, very short, quite roughly strong,
extremely good, etc. Our results highlight new connections between two
different research areas: three-way decisions and the theory of evaluative
linguistic expressions. | Stefania Boffa, Davide Ciucci | 2023-07-15T14:45:33Z | http://arxiv.org/abs/2307.11766v1 | # Three-way Decisions with Evaluative Linguistic Expressions
###### Abstract
We propose a linguistic interpretation of three-way decisions, where the regions of acceptance, rejection, and non-commitment are constructed by using the so-called evaluative linguistic expressions, which are expressions of natural language such as small, medium, very short, quite roughly strong, extremely good, etc. Our results highlight new connections between two different research areas: three-way decisions and the theory of evaluative linguistic expressions.
_Keywords--_ Three-way decisions, Rough sets, Probabilistic rough sets, Evaluative linguistic expressions, Explainable Artificial Intelligence
## 1 Introduction
The theory of three-way decisions (TWD) divides a finite and non-empty universe into three disjoint sets, which are called positive, negative, and boundary regions. These regions respectively induce positive, negative, and boundary rules: a positive rule makes a decision of acceptance, a negative rule makes a decision of rejection, and a boundary rule makes an abstained or non-committed decision [1, 2]. The concept of three-way decisions was originally introduced in Rough Set Theory [1, 3] and until today, it has been widely studied and applied to many decision-making problems (see [4, 5, 6, 7] for some examples). Thus, several approaches have been proposed to generate the three regions; one of them is based on probabilistic rough sets [8, 9], which generalize classical rough sets and construct the three regions using a pair of thresholds and the notion of conditional probability (in this case, the regions are called probabilistic positive, negative, and boundary regions).
The contribution of this article is to provide a linguistic interpretation of the positive, negative, and boundary regions. So, we propose a three-way decision method based on the concept of _evaluative linguistic expressions_, which are expressions of natural language such as _small, medium, very short, quite roughly strong, extremely good, etc_. These are already considered in the majority of applications of fuzzy modelling. Since we use evaluative linguistic expressions to evaluate the size of sets, we focus on the expressions involving the adjectives _small_, _medium_, and _big_ that can be preceded by an adverb; examples are _very small_, _roughly medium_, and _extremely big_. Mathematically, an evaluative linguistic expression is modelled by a function \(Ev:[0,1]\rightarrow[0,1]\). The formal theory of evaluative linguistic expressions is introduced and explained in [10, 11, 12, 13].
The positive, negative, and boundary regions of a non-empty and finite universe \(U\) are defined here starting from a subset \(X\) of \(U\), an equivalence relation \(\mathcal{R}\) on \(U\) (i.e. \(\mathcal{R}\) is reflexive, symmetric and transitive), an evaluative linguistic expression \(Ev\), and
a pair of thresholds \((\alpha,\beta)\) with \(0\leq\beta<\alpha\leq 1\). Then, an object \(x\) belongs to the positive region when _the size of \([x]_{\mathcal{R}}\cap X\) evaluated w.r.t. \(Ev\) is at least \(\alpha\)_, where \([x]_{\mathcal{R}}\) is the equivalence class of \(x\) w.r.t. \(\mathcal{R}\). Analogously, \(x\) belongs to the negative region when _the size of \([x]_{\mathcal{R}}\cap X\) evaluated w.r.t. \(Ev\) is at most \(\beta\)_. Finally, the remaining elements form the boundary region. In order to obtain the three regions, the size of \(X\cap[x]_{\mathcal{R}}\) is quantified using a fuzzy measure [14, 15].
The role of evaluative linguistic expressions in the context of three-way decisions can be better understood with the following example.
**Example 1**.: _Suppose that the number of buses between the University of Buenos Aires and the rest of the city has to be increased from 7 am to 8 am. Thus, we intend to understand which city areas need buses the most, as resources are limited. Let us denote the areas of the city with \(A_{1},\ldots,A_{n}\) and map each area \(A_{i}\) with the set \(S_{A_{i}}\) made of all students of the university who live in \(A_{i}\). Thus, \(S_{A_{1}},\ldots,S_{A_{n}}\) can be seen as the equivalence classes w.r.t. the relation \(\mathcal{R}\) on the set of all students of the University of Buenos Aires living in the city: \(x\mathcal{R}y\) if and only if \(x\) and \(y\) live in the same area. Based on a survey, we also consider a set \(X\) made of all students that usually take a bus to the university in the time slot [7 am, 8 am]. We also choose \((\alpha,\beta)=(0.6,0.3)\) and \(Ev=extremely big\). We construct three regions in the following way. The positive region is the union of \(S_{A^{\prime}_{1}},\ldots,S_{A^{\prime}_{k}}\) (with \(\{A^{\prime}_{1},\ldots,A^{\prime}_{k}\}\subseteq\{A_{1},\ldots,A_{n}\}\)) such that the amount of students of \(S_{A^{\prime}_{i}}\) that take a bus from 7 am to 8 am is "extremely big" with a value of at least 0.6. Similarly, the negative region is the union of \(S_{A^{*}_{1}},\ldots,S_{A^{*}_{h}}\) (with \(\{A^{*}_{1},\ldots,A^{*}_{h}\}\subseteq\{A_{1},\ldots,A_{n}\}\)) such that the amount of students of \(S_{A^{*}_{i}}\) that take a bus from 7 am to 8 am is extremely big with a value of at most 0.3. All other students form the boundary region. The final decision is immediate: the buses are certainly increased for the areas \(A^{\prime}_{1},\ldots,A^{\prime}_{k}\), but not for \(A^{*}_{1},\ldots,A^{*}_{h}\). Furthermore, the decision is postponed for the remaining areas (that is, for each \(A_{i}\notin\{A^{\prime}_{1},\ldots,A^{\prime}_{k}\}\cup\{A^{*}_{1},\ldots,A^{*}_{h}\}\)). In order to make a decision in those areas, for example, we could take into account the workers (besides the students) that need a bus in the time slot [7 am, 8 am]._
The choice of \(Ev\) depends on the context where the three regions are used. Indeed, in the previous example, we have chosen _extremely big_ in order to select the areas where a large number of students catch the bus from 7 am to 8 am. However, if we focus on the inverse problem (namely we need to eliminate some existing bus rides), then we should identify the areas where there are fewer students taking the bus in the time slot [7 am, 8 am]. Therefore, in this case, the evaluative linguistic expression _extremely small_ is more appropriate to construct the three regions.
A significant contribution of this article is providing a linguistic and novel interpretation of the positive, negative, and boundary regions already determined with probabilistic rough sets. Consequently, the reasons for decisions of acceptance, rejection, and non-commitment can be explained in terms of expressions of natural language. Of course, the advantage is that non-technical users dealing with TWD models can better understand the reliability of the procedures related to the final decisions. This is in line with the scope of _Explainable Artificial Intelligence (XAI)_, which is a new approach to AI emphasizing the ability of machines to give sound motivations about their decisions and behaviour [16].
The article is organized as follows. The next section reviews some basic notions regarding probabilistic three-way decisions and the concept of evaluative linguistic expressions. Also, the notion of fuzzy measure is recalled. Section 3 presents a new model of three-way decisions based on the theory of evaluative linguistic expressions. As a consequence, a linguistic generalization of Pawlak rough sets is introduced. Finally, Section 4 connects the TWD models based on evaluative linguistic expressions and probabilistic rough sets. In particular, confining to the evaluative linguistic expressions modelled by increasing functions, we find the class of thresholds so that the corresponding probabilistic positive, negative, and boundary regions are equal to those
generated by a given evaluative linguistic expression.
## 2 Preliminaries
In the following, we consider a finite universe \(U\), a subset \(X\) of \(U\), and an equivalence relation \(\mathcal{R}\) on \(U\) (i.e. \(\mathcal{R}\) is reflexive, symmetric, and transitive). Moreover, we indicate the equivalence class of \(x\in U\) w.r.t. \(\mathcal{R}\) with \([x]_{\mathcal{R}}\).
### Three-way decisions with probabilistic rough sets
This subsection recalls the fundamental notions of three-way decisions based on probabilistic rough sets.
Viewing \(X\) and \([x]_{\mathcal{R}}\) as events of \(U\), the symbol \(Pr(X|[x]_{\mathcal{R}})\) denotes the _conditional probability_ of \(X\) given \([x]_{\mathcal{R}}\), i.e.
\[Pr(X|[x]_{\mathcal{R}})=\frac{|[x]_{\mathcal{R}}\cap X|}{|[x]_{\mathcal{R}}|}. \tag{1}\]
Then, three special subsets of \(U\) are determined by using (1) and a pair of thresholds as shown by the next definition.
**Definition 1**.: _Let \(\alpha,\beta\in[0,1]\) such that \(\beta<\alpha\), the \((\alpha,\beta)\)-probabilistic positive, negative and boundary regions are respectively the following:_
1. \(POS_{(\alpha,\beta)}(X)=\{x\in U\ |\ Pr(X|[x]_{\mathcal{R}})\geq\alpha\}\)_,_
2. \(NEG_{(\alpha,\beta)}(X)=\{x\in U\ |\ Pr(X|[x]_{\mathcal{R}})\leq\beta\}\)_,_
3. \(BND_{(\alpha,\beta)}(X)=\{x\in U\ |\ \beta<Pr(X|[x]_{\mathcal{R}})<\alpha\}\)_._
We put
\[\mathcal{T}_{(\alpha,\beta)}(X)=\{POS_{(\alpha,\beta)}(X),NEG_{(\alpha,\beta)} (X),BND_{(\alpha,\beta)}(X)\} \tag{2}\]
and we say that \(\mathcal{T}_{(\alpha,\beta)}(X)\) is a _tri-partition_ of \(U\) due to the following remark 1.
Footnote 1: By a tri-partition, we mean a partition of \(U\) made of three equivalence classes. On the other hand, \(\{POS_{(\alpha,\beta)}(X),NEG_{(\alpha,\beta)}(X),BND_{(\alpha,\beta)}(X)\}\) could collapse into a bi-partition or the whole universe when one or two of its sets are empty.
**Remark 1**.: _The three regions of of \(\mathcal{T}_{(\alpha,\beta)}(X)\) are mutually disjoint, i.e. \(A\cap B=\emptyset\) for each \(A,B\in\{POS_{(\alpha,\beta)}(X),NEG_{(\alpha,\beta)}(X),BND_{(\alpha,\beta)}( X)\}\) with \(A\neq B\), and they cover the universe \(U\), i.e._
\[POS_{(\alpha,\beta)}(X)\cup NEG_{(\alpha,\beta)}(X)\cup BND_{(\alpha,\beta)}( X)=U. \tag{3}\]
In the context of three-way decisions, the following rules are considered: let \(x\in U\),
* if \(x\in POS_{(\alpha,\beta)}(X)\), then \(x\) is accepted;
* if \(x\in NEG_{(\alpha,\beta)}(X)\), then \(x\) is rejected;
* if \(x\in BND_{(\alpha,\beta)}(X)\), then we abstain on \(x\).
The value \(Pr(X|[x]_{\mathcal{R}})\) represents the _accuracy_ or _confidence_ of the rules:
* the higher \(Pr(X|[x]_{\mathcal{R}})\) is, the more confident we are that \(x\in POS_{(\alpha,\beta)}(X)\) is correctly accepted,
* the lower \(Pr(X|[x]_{\mathcal{R}})\) is, the more confident we are that \(x\in NEG_{(\alpha,\beta)}(X)\) is correctly rejected.
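A compact Python sketch of Definition 1 and of the decision rules above is given below; the function and variable names, as well as the toy universe and target set, are our own illustrative choices rather than anything prescribed by the text.

```python
from collections import defaultdict

def three_way_regions(U, X, eq_class, alpha, beta):
    """(alpha, beta)-probabilistic positive/negative/boundary regions of Definition 1.
    U: iterable of objects; X: set; eq_class: maps each object to the label of its
    equivalence class under R; thresholds satisfy 0 <= beta < alpha <= 1."""
    blocks = defaultdict(set)
    for x in U:
        blocks[eq_class(x)].add(x)
    pos, neg, bnd = set(), set(), set()
    for x in U:
        block = blocks[eq_class(x)]
        pr = len(block & X) / len(block)   # conditional probability Pr(X | [x]_R), Eq. (1)
        if pr >= alpha:
            pos.add(x)                     # positive rule: accept
        elif pr <= beta:
            neg.add(x)                     # negative rule: reject
        else:
            bnd.add(x)                     # boundary rule: abstain
    return pos, neg, bnd

if __name__ == "__main__":
    U = set(range(12))
    X = {0, 1, 2, 3, 4, 5, 8}              # toy target set (illustrative)
    eq = lambda x: x // 4                  # equivalence classes {0..3}, {4..7}, {8..11}
    pos, neg, bnd = three_way_regions(U, X, eq, alpha=0.75, beta=0.25)
    print("accept :", sorted(pos))         # class {0..3}: Pr = 1.00 >= alpha
    print("abstain:", sorted(bnd))         # class {4..7}: Pr = 0.50, strictly between beta and alpha
    print("reject :", sorted(neg))         # class {8..11}: Pr = 0.25 <= beta
```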
Definition 1 is strictly related to the notion of probabilistic rough sets.
**Definition 2**.: _The \((\alpha,\beta)\)-probabilistic rough set of \(X\) is the pair_
\[(\mathcal{L}_{(\alpha,\beta)}(X),\mathcal{U}_{(\alpha,\beta)}(X)),\]
_where_
\[\mathcal{L}_{(\alpha,\beta)}(X)=POS_{(\alpha,\beta)}(X)\quad\text{and}\quad \mathcal{U}_{(\alpha,\beta)}(X)=POS_{(\alpha,\beta)}(X)\cup BND_{(\alpha,\beta )}(X)\text{,}\]
_which are respectively called \((\alpha,\beta)-\) lower and upper approximations of \(X\)._
**Remark 2**.: _When \(\alpha=1\) and \(\beta=0\), \((\mathcal{L}_{(\alpha,\beta)}(X),\mathcal{U}_{(\alpha,\beta)}(X))\) is the rough set \((\mathcal{L}(X),\mathcal{U}(X))\) of \(X\) defined by Pawlak in [3], namely_
\[(\mathcal{L}(X),\mathcal{U}(X))=(\{x\in U\ |\ [x]_{R}\subseteq X\},\{x\in U\ |\ [x]_{R}\cap X \neq\emptyset\}). \tag{4}\]
_The sets \(\mathcal{L}(X)\) and \(\mathcal{U}(X)\) are respectively called lower and upper approximations of \(X\) w.r.t. \(\mathcal{R}\)._
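For completeness, a minimal sketch of the Pawlak approximations of Eq. (4) follows; by Remark 2 they coincide with the \((\alpha,\beta)=(1,0)\) case of Definition 2. The names and the toy data are again illustrative choices of ours.

```python
from collections import defaultdict

def pawlak_approximations(U, X, eq_class):
    """Pawlak lower/upper approximations of Eq. (4), i.e. the (alpha, beta) = (1, 0) case."""
    blocks = defaultdict(set)
    for x in U:
        blocks[eq_class(x)].add(x)
    lower = {x for x in U if blocks[eq_class(x)] <= X}   # [x]_R contained in X
    upper = {x for x in U if blocks[eq_class(x)] & X}    # [x]_R meets X
    return lower, upper

if __name__ == "__main__":
    U, X = set(range(12)), {0, 1, 2, 3, 4, 5, 8}
    eq = lambda x: x // 4
    low, up = pawlak_approximations(U, X, eq)
    print("lower approximation:", sorted(low))   # only {0,1,2,3} is entirely inside X
    print("upper approximation:", sorted(up))    # every class intersects X, so the whole universe
```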
### Evaluative Linguistic Expressions
This subsection reviews concepts and results that are found in [10, 17] and it recalls the notion of fuzzy measure.
_Evaluative linguistic expressions_ are special expressions of natural language, which people commonly employ to evaluate, judge, and estimate, as well as in many other situations. Examples of evaluative linguistic expressions are _small, medium, big, about twenty-five, roughly one hundred, very short, more or less deep, not very tall, roughly warm or medium-hot_, etc. For convenience, we will often omit the adjective "linguistic" and use only the term "evaluative expressions". The simplest evaluative expressions are called _pure evaluative expressions_ and have the following structure:
\[\langle\text{linguistic hedge}\rangle\langle\text{TE-adjective}\rangle,\]
where
* a linguistic hedge is an adverbial modification such as _very, roughly, approximately, significantly_ and
* a TE-adjective is an adjective such as _good, medium_, _big_, _short_, etc. TE stands for _trichotomous evaluative_; indeed, TE-adjectives typically form pairs of antonyms like _small_ and _big_ completed by a middle member, which is _medium_ in the case of _small_ and _big_. Other examples are "_weak, medium-strong,_ and strong" and "_soft, medium-hard,_ and _hard_".
The _empty linguistic hedge_ is employed to deal with evaluative expressions made of only a TE-adjective; hence, _small_, _medium_, and _big_ are considered evaluative expressions. Other pure evaluative expressions are the fuzzy numbers like _about twenty-five_. Two or more pure evaluative expressions can be connected to form _negative evaluative expressions_ like _"NOT very small"_ and _compound evaluative expressions_ like _"very expensive AND extremely small"_ and _"very expensive OR extremely small"_.
The semantics of evaluative expressions is based on the essential concepts of _context_, _intension_, and _extension_.
* The _context_ is a state of the world at a given time and place in which an evaluative expression appears. Each context is represented by a linearly ordered scale, which is bounded by \(s\) and \(b\). Moreover, a context is given by a triple \(\omega=\langle s,m,b\rangle\), where \(s\) is the "most typical" small value, \(m\) is the "most typical" medium value, and \(b\) is the "most typical" big value. For example, suppose that evaluative expressions are used to evaluate the size of apartments. If we are thinking of apartments for one person, then we could choose \(\omega_{1}=\langle 40,70,100\rangle\) as context, which means that flats measuring \(40\ m^{2}\), \(70\ m^{2}\), and \(100\ m^{2}\) are
typically small, medium and big, respectively. On the other hand, when changing context and thinking of apartments for a family of 5 people, the context \(\omega_{5}=\langle 70,120,160\rangle\) is more appropriate.
* The _intension_ of an evaluative expression is a function mapping each context into a fuzzy set of a given universe. Taking up the previous example, we consider a universe made of four apartments \(\{a_{1},a_{2},a_{3},a_{4}\}\), then the intension of \(small\) is the map \(Int_{small}\) that assigns to the context \(\omega_{5}\) the fuzzy set \(A_{\omega_{5}}\) so that \(A_{\omega_{5}}(a_{i})\) is the degree to which \(a_{i}\)_is small in the context \(\omega_{5}\)_ (namely, \(a_{i}\)_is small for 5 people_).
* The _extension_ of an evaluative expression \(Ev\) is a fuzzy set determined by the intension of \(Ev\), given a context \(\omega\). Concerning the previous example, \(Int_{small}(\omega_{5})=A_{\omega_{5}}\) is an example of an extension of _small_.
In this article, we confine to the TE adjectives _small_, _medium_, and _big_ because we use evaluative expressions to evaluate the size of sets. So, let \(X\) be a subset of a universe \(U\), we will say that the size of \(X\) w.r.t. to the size of \(U\) is _very small_, _extremely big_, etc. Furthermore, we confine to the _standard context_, which is \(\langle 0,0.5,1\rangle\). Finally, since sizes are expressed by means of a fuzzy measure (by Example 2, the measure of the size of a set \(X\) is a value of \([0,1]\)), the extensions of our evaluative expressions are functions from \([0,1]\) to \([0,1]\), which have a specific formula. The extension of an evaluative expression like \(\langle linguist\)\(hedge\rangle\langle TE-adjective\rangle\) with \(TE-adjective\in\{small,medium,big\}\) is obtained by composing two functions, one models the linguistic hedge and the other models the TE-adjective. The function describing a linguistic hedge depends on three parameters, which are experimentally estimated (see [17] for more details).
In what follows, we provide the formula of \(\neg Sm:[0,1]\rightarrow[0,1]\), \(BiVe:[0,1]\rightarrow[0,1]\), and \(BiEx:[0,1]\rightarrow[0,1]\), which are the extensions of the evaluative expressions _not small_, _very big_, and _extremely big_, where the context \(\langle 0,0.5,1\rangle\) is fixed 2.
Footnote 2: We obtained the formulas of \(BiVe\) and \(BiEx\) using the function \(\nu_{a,b,c}(LH(\omega^{-1}))\) and Table 5.1 given in [17]. Concerning the formula of \(\neg Sm\), we considered that \(\neg Sm(x)=1-Sm(x)\). After that, we found the formula of \(Sm\) using the function \(\nu_{a,b,c}(RH(\omega^{-1}))\) and Table 5.1 of [17].
\[\neg Sm(x)=\begin{cases}1&\text{if }x\in[0.275,1],\\ 1-\dfrac{(0.275-x)^{2}}{0.02305}&\text{if }x\in(0.16,0.275)\\ \dfrac{(x-0.0745)^{2}}{0.01714}&\text{if }x\in(0.0745,0.16]\\ 0&\text{if }x\in[0,0.0745]\end{cases} \tag{5}\]
\[BiVe(x)=\begin{cases}1&\text{if }x\in[0.9575,1],\\ 1-\dfrac{(0.9575-x)^{2}}{0.00796}&\text{if }x\in[0.895,0.9575),\\ \dfrac{(x-0.83)^{2}}{0.00828}&\text{if }x\in(0.83,0.895),\\ 0&\text{if }x\in[0,0.83].\end{cases} \tag{6}\]
\[BiEx(x)=\begin{cases}1&\text{if }x\in[0.995,1],\\ 1-\dfrac{(0.995-x)^{2}}{0.00495}&\text{if }x\in[0.95,0.995),\\ \dfrac{(x-0.885)^{2}}{0.00715}&\text{if }x\in(0.885,0.95),\\ 0&\text{if }x\in[0,0.885].\end{cases} \tag{7}\]
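For readers who want to experiment with these extensions, the following Python sketch implements (5)-(7) directly; the function names are ours, and the numeric breakpoints are simply copied from the formulas above.

```python
def neg_sm(x: float) -> float:
    """Extension of "not small" in the standard context <0, 0.5, 1> (Eq. 5)."""
    if x >= 0.275:
        return 1.0
    if x > 0.16:
        return 1.0 - (0.275 - x) ** 2 / 0.02305
    if x > 0.0745:
        return (x - 0.0745) ** 2 / 0.01714
    return 0.0


def bi_ve(x: float) -> float:
    """Extension of "very big" (Eq. 6)."""
    if x >= 0.9575:
        return 1.0
    if x >= 0.895:
        return 1.0 - (0.9575 - x) ** 2 / 0.00796
    if x > 0.83:
        return (x - 0.83) ** 2 / 0.00828
    return 0.0


def bi_ex(x: float) -> float:
    """Extension of "extremely big" (Eq. 7)."""
    if x >= 0.995:
        return 1.0
    if x >= 0.95:
        return 1.0 - (0.995 - x) ** 2 / 0.00495
    if x > 0.885:
        return (x - 0.885) ** 2 / 0.00715
    return 0.0


print(round(neg_sm(0.14), 2), round(neg_sm(0.2), 2))  # ≈ 0.25 and ≈ 0.76 (the text rounds the latter to 0.75)
```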
**Remark 3**.: _The evaluative expressions \(\neg Sm\), \(BiVe\), and \(BiEx\) have a special role: they are respectively used to construct the formula of fuzzy quantifiers many, most, and almost all [18]._
A further class of linguistic expressions is \(\{\Delta_{t}:[0,1]\to\{0,1\}\ |\ t\in[0,1]\}\), where given \(t\in[0,1]\) the formula of \(\Delta_{t}\) is the following: let \(a\in[0,1]\),
\[\Delta_{t}(a)=\begin{cases}1&\text{if }a\geq t\\ 0&\text{otherwise.}\end{cases} \tag{8}\]
In the sequel, we need the notion of fuzzy measure [14, 15].
**Definition 3**.: _Let \(U\) be a finite universe, a mapping \(\varphi:2^{U}\to\mathbb{R}\) is called fuzzy measure if and only if_
1. \(\varphi(\emptyset)=0\)_;_
2. _if_ \(X\subseteq Y\) _then_ \(\varphi(X)\leq\varphi(Y)\)_, for each_ \(X,Y\subseteq U\) _(monotonicity)._
A fuzzy measure \(\varphi\) is called _normalized_ or _regular_ if \(\varphi(U)=1\).
In this paper, we focus on the normalized fuzzy measure given by the next example.
**Example 2**.: _Let \(U\) be a finite universe, the function \(f:2^{U}\to\mathbb{R}\) that assigns \(\dfrac{|Y|}{|U|}\) to each \(Y\subseteq U\) is a fuzzy measure._
_The value \(\dfrac{|Y|}{|U|}\) belongs to [0,1] and measures "how large \(Y\) is with respect to \(U\) in the scale \([0,1]\)"._
Let us observe that in Probability theory \(\dfrac{|Y|}{|U|}\) represents _"how likely the event \(Y\) is to occur"_.
## 3 Three-way decisions with linguistic expressions
This section proposes a novel model for three-way decisions, which is based on the concept of evaluative linguistic expressions previously described.
In the sequel, we use the symbol \(\mathcal{E}\) to denote the collection of the extensions of all evaluative expressions in the context \(\langle 0,0.5,1\rangle\).
Therefore, given \(Ev\in\mathcal{E}\), \(X\subseteq U\), and \(\alpha,\beta\in[0,1]\) with \(\beta<\alpha\), three regions of \(U\) are determined. In particular, the region of a given element \(x\in U\) is found by taking into account the following steps:
1. computing \(Ev\left(\dfrac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}|}\right)\), which is the evaluation of _the size of \(X\cap[x]_{\mathcal{R}}\) w.r.t. the size of \([x]_{\mathcal{R}}\)_ by using \(Ev\);
2. comparing \(Ev\left(\dfrac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}|}\right)\) with the thresholds \(\alpha\) and \(\beta\).
For example, regarding point 1, if \(Ev\) models the evaluative expression _"significantly big"_, then \(Ev\left(\dfrac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}|}\right)\) measures
_"how much the size of \(X\cap[x]_{\mathcal{R}}\) is significantly big w.r.t. the size of \([x]_{\mathcal{R}}\)"._
Equivalently, we are saying that
_"the size of the set of the elements of \([x]_{\mathcal{R}}\) that also belong to \(X\) is significantly big with the truth degree \(Ev\left(\dfrac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}|}\right)\)"._
Observe that \(\dfrac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}|}\) syntactically coincides with the conditional probability (see (1)), but here it has a different interpretation: it is the fuzzy measure specified by Example 2.
Formally, the three regions of \(U\) determined by an evaluative expression are given by the following definition.
**Definition 4**.: _Let \(Ev\in\mathcal{E}\), the \((\alpha,\beta)\)-linguistic positive, negative, and boundary regions induced by \(Ev\) are respectively the following:_
* \(POS^{Ev}_{(\alpha,\beta)}(X)=\bigg{\{}x\in U\ |\ Ev\left(\dfrac{|[x]_{\mathcal{R}} \cap X|}{|[x]_{\mathcal{R}}|}\right)\geq\alpha\bigg{\}}\)_;_
* \(NEG^{Ev}_{(\alpha,\beta)}(X)=\bigg{\{}x\in U\ |\ Ev\left(\dfrac{|[x]_{\mathcal{R}} \cap X|}{|[x]_{\mathcal{R}}|}\right)\leq\beta\bigg{\}}\)_;_
* \(BND^{Ev}_{(\alpha,\beta)}(X)=\bigg{\{}x\in U\ |\ \beta<Ev\left(\dfrac{|[x]_{ \mathcal{R}}\cap X|}{|[x]_{\mathcal{R}}|}\right)<\alpha\bigg{\}}\)_._
We put
\[\mathcal{T}^{Ev}_{(\alpha,\beta)}(X)=\{POS^{Ev}_{(\alpha,\beta)}(X),NEG^{Ev} _{(\alpha,\beta)}(X),BND^{Ev}_{(\alpha,\beta)}(X)\} \tag{9}\]
and we say that \(\mathcal{T}^{Ev}_{(\alpha,\beta)}(X)\) is a tri-partition of \(U\).
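A minimal Python sketch of this construction follows, assuming the partition \(\{[x]_{\mathcal{R}}\ |\ x\in U\}\) is given explicitly as a list of sets; the helper names (`ratio`, `linguistic_regions`) are ours.

```python
def ratio(block, X):
    """Fuzzy measure of Example 2: |X ∩ block| / |block|, with block = [x]_R."""
    return len(block & X) / len(block)


def linguistic_regions(partition, X, Ev, alpha, beta):
    """Tri-partition of Definition 4 for an extension Ev and thresholds beta < alpha."""
    pos, neg, bnd = set(), set(), set()
    for block in partition:
        degree = Ev(ratio(block, X))
        if degree >= alpha:
            pos |= block
        elif degree <= beta:
            neg |= block
        else:
            bnd |= block
    return pos, neg, bnd
```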
**Remark 4**.: _The three regions of \(\mathcal{T}^{Ev}_{(\alpha,\beta)}(X)\) are mutually disjoint, i.e. \(A\cap B=\emptyset\) for each \(A,B\in\{POS^{Ev}_{(\alpha,\beta)}(X),NEG^{Ev}_{(\alpha,\beta)}(X),BND^{Ev}_{( \alpha,\beta)}(X)\}\) with \(A\neq B\), and they cover the universe \(U\), i.e._
\[POS^{Ev}_{(\alpha,\beta)}(X)\cup NEG^{Ev}_{(\alpha,\beta)}(X)\cup BND^{Ev}_{( \alpha,\beta)}(X)=U. \tag{10}\]
**Remark 5**.: _Let us focus on the evaluative expressions not small, very big, extremely big, and utmost. The first three expressions are respectively modelled by (5), (6), and (7). According to Remark 3, \(\neg Sm\), \(BiVe\), and \(BiEx\) respectively appear in the formulas of the quantifiers many, most, and almost all. Moreover, as explained in [19] (see Lemma 4.5), considering that \(X\) and \([x]_{\mathcal{R}}\) are crisp sets, \(\neg Sm\left(\dfrac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}|}\right)\), \(BiVe\left(\dfrac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}|}\right)\), and \(BiEx\left(\dfrac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}|}\right)\) exactly coincide with the formulas of the quantifiers many, most, and almost all. Hence, they have the following meaning:_
* \(\neg Sm\left(\dfrac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}|}\right)\) _is the degree to which "_many _objects of_ \([x]_{\mathcal{R}}\) _are in_ \(X\)_",_
* \(BiVe\left(\dfrac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}|}\right)\) _is the degree to which "_most _objects of_ \([x]_{\mathcal{R}}\) _are in_ \(X\)_",_
* \(BiEx\left(\dfrac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}|}\right)\) _is the degree to which "_almost all _objects of_ \([x]_{\mathcal{R}}\) _are in_ \(X\)_"._
_Moreover, the function \(\Delta_{1}\) that is obtained by (8) and putting \(t=1\), models the evaluative expression utmost and corresponds to the quantifier all. Therefore, \(\Delta_{1}\left(\dfrac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}|}\right)\) is understood as the degree to which "all objects of \([x]_{\mathcal{R}}\) are in \(X\)"._
_Observe that here the universe of quantification coincides with \([x]_{\mathcal{R}}\), which is always non-empty, considering that \(\mathcal{R}\) is reflexive, hence \(\{x\}\subseteq[x]_{\mathcal{R}}\). In mathematical logic, the assumption expressing that the universe of quantification must be non-empty is called existential import (or presupposition) [20]._
**Remark 6**.: _Consider the evaluative expressions represented by (8). Let \(t\in[0,1]\), then \(\Delta_{t}\left(\dfrac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}|}\right)\) is the degree to which "the size of the set of elements of \([x]_{\mathcal{R}}\) that also belong to \(X\) is at least as large as \(t\) (in the scale [0,1])"._
### An illustrative example
In this subsection, we provide an example of how to use linguistic three-way decisions to provide recommendations based on users' profiles.
We consider a universe \(U=\{u_{1},\ldots,u_{32}\}\) made of users of online communities and the following equivalence relation \(\mathcal{R}\) on \(U\): let \(x,y\in U\), \(x\mathcal{R}y\) if and only if \(x\) and \(y\) belong to the same community. Therefore, \(\mathcal{R}\) corresponds to the partition \(\{C_{1},\ldots,C_{6}\}\) of \(U\), where \(C_{i}\) is the set of users of \(U\) belonging to the community \(i\). In particular, we suppose that
\[C_{1}=\{u_{1},\ldots,u_{5}\},\ C_{2}=\{u_{6},\ldots,u_{10}\},\ C_{3}=\{u_{11}, \ldots,u_{15}\},\]
\[C_{4}=\{u_{16},\ldots,u_{20}\},\ C_{5}=\{u_{21},\ldots,u_{25}\},\ \text{and}\ C_{6}=\{u_{26},\ldots,u_{32}\}.\]
We use the symbol \(X_{T}\) to denote the set of users of \(U\) interested in a specific topic \(T\).
For example,
\[X_{Sport}=\{u_{10},u_{11},u_{12},u_{18},u_{19},u_{20},u_{21},u_{22},u_{23},u_{24 },u_{26}\}\]
is the set of all users of \(U\) interested in the topic _Sport_.
Using three-way decisions based on evaluative expressions, we intend to select the most appropriate communities among \(C_{1},\ldots,C_{6}\) to which to propose news related to the topic \(T\).
If we choose \((\alpha,\beta)=(0.8,0.2)\) and the evaluative expression \(\neg Sm\) corresponding to the fuzzy quantifier _many_, then we decide to assign the news about the topic \(T\) to the communities of \(POS^{\neg Sm}_{(0.8,0.2)}(X_{T})\). Indeed, enough users of the communities in \(POS^{\neg Sm}_{(0.8,0.2)}(X_{T})\) are interested in \(T\): \(x\in POS^{\neg Sm}_{(0.8,0.2)}(X_{T})\) if and only if the degree to which
_"many users of the community of \(x\) are interested in \(T\)"_
is greater than or equal to \(0.8\).
In the sequel, we determine the communities to which to provide the news about _Sport_. To do this, we first compute the value \(\dfrac{|X_{Sport}\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}|}\) for each \(x\in U\):
\[\dfrac{|X_{Sport}\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}|}=\begin{cases}0& \text{if}\ x\in C_{1},\\ 0.14&\text{if}\ x\in C_{6},\\ 0.2&\text{if}\ x\in C_{2},\\ 0.4&\text{if}\ x\in C_{3},\\ 0.6&\text{if}\ x\in C_{4},\\ 0.8&\text{if}\ x\in C_{5}.\end{cases} \tag{11}\]
According to the definition of \(\neg Sm\) that is given by (5), we get \(\neg Sm(0)=0\), \(\neg Sm(0.14)=0.25\), \(\neg Sm(0.2)=0.75\) and \(\neg Sm(0.4)=\neg Sm(0.6)=\neg Sm(0.8)=1\).
Consequently,
\[\neg Sm\left(\dfrac{|X_{Sport}\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}|} \right)=\begin{cases}0&\text{if}\ x\in C_{1},\\ 0.25&\text{if}\ x\in C_{6},\\ 0.75&\text{if}\ x\in C_{2}\\ 1&\text{if}\ x\in C_{3}\cup C_{4}\cup C_{5}.\end{cases} \tag{12}\]
Then, the positive, negative and boundary regions induced by \((0.8,0.2)\) and \(\neg Sm\) are the following:
\[POS^{\neg Sm}_{(0.8,0.2)}(X_{Sport})=\left\{x\in U\ |\ \neg Sm\left(\dfrac{|X_{Sport} \cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}|}\right)\geq 0.8\right\}=C_{3} \cup C_{4}\cup C_{5},\]
\[NEG^{-Sm}_{(0.8,0.2)}(X_{Sport}) =\left\{x\in U\ |\ \neg Sm\left(\frac{|X_{Sport}\cap[x]_{ \mathcal{R}}|}{|[x]_{\mathcal{R}}|}\right)\leq 0.2\right\}=C_{1},\] \[BND^{-Sm}_{(0.8,0.2)}(X_{Sport}) =\left\{x\in U\ |\ 0.2<\neg Sm\left(\frac{|X_{Sport}\cap[x]_{ \mathcal{R}}|}{|[x]_{\mathcal{R}}|}\right)<0.8\right\}=C_{2}\cup C_{6}.\]
The three regions lead to the following decisions. Firstly, we choose to provide the news about sport to the communities \(C_{3}\), \(C_{4}\), and \(C_{5}\) that form the positive region, considering that _these contain many users interested in sport_ with a degree that we consider sufficiently high (\(\geq 0.8\)). Moreover, we require further analysis on the communities \(C_{2}\) and \(C_{6}\) forming the boundary region before providing news about sports. For example, we could evaluate the interests of their users in the future or once new users join them. Finally, we surely do not provide sports news to \(C_{1}\) because we think that not enough of its users are interested in sports topics; indeed, we consider the degree to which _many users of \(C_{1}\) are interested in sports_ low (\(\leq 0.2\)).
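For illustration only, the example above can be reproduced with the sketches introduced earlier (the variable names are ours):

```python
communities = [set(range(1, 6)), set(range(6, 11)), set(range(11, 16)),
               set(range(16, 21)), set(range(21, 26)), set(range(26, 33))]  # C1, ..., C6, users as indices
X_sport = {10, 11, 12, 18, 19, 20, 21, 22, 23, 24, 26}

pos, neg, bnd = linguistic_regions(communities, X_sport, neg_sm, alpha=0.8, beta=0.2)
# pos = C3 ∪ C4 ∪ C5, neg = C1, bnd = C2 ∪ C6, as computed above.
```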
### Linguistic Rough Sets
Definition 4 also leads to a novel generalization of Pawlak rough sets.
**Definition 5**.: _Let \(Ev\in\mathcal{E}\), the \((\alpha,\beta)\)-linguistic rough set of \(X\) determined by \(\mathcal{R}\) and \(Ev\) is the pair \((\mathcal{L}^{Ev}_{(\alpha,\beta)}(X),\mathcal{U}^{Ev}_{(\alpha,\beta)}(X)),\) where_
\[\mathcal{L}^{Ev}_{(\alpha,\beta)}(X)=\left\{x\in U\ |\ Ev\left( \frac{|[x]_{\mathcal{R}}\cap X|}{|[x]_{\mathcal{R}}|}\right)\geq\alpha\right\} \ \ \text{and}\] \[\mathcal{U}^{Ev}_{(\alpha,\beta)}(X)=\left\{x\in U\ |\ Ev\left( \frac{|[x]_{\mathcal{R}}\cap X|}{|[x]_{\mathcal{R}}|}\right)>\beta\right\}.\]
\(\mathcal{L}^{Ev}_{(\alpha,\beta)}(X)\) _and \(\mathcal{U}^{Ev}_{(\alpha,\beta)}(X)\) are respectively called \((\alpha,\beta)\)-linguistic lower and upper approximation of \(X\) determined by \(\mathcal{R}\) and \(Ev\)._
Equivalently, by Definition 4, we get
\[\mathcal{L}^{Ev}_{(\alpha,\beta)}(X)=POS^{Ev}_{(\alpha,\beta)}(X)\ \ \text{and}\ \ \mathcal{U}^{Ev}_{(\alpha,\beta)}(X)=POS^{Ev}_{(\alpha,\beta)}(X)\cup BND^{Ev}_ {(\alpha,\beta)}(X).\]
Let \(x\in U\), the value \(Ev\left(\frac{|[x]_{\mathcal{R}}\cap X|}{|[x]_{\mathcal{R}}|}\right)\) is viewed as the degree of confidence expressing _how much we can trust that \(x\) belongs to \(X\)._
The following is an illustrative example.
**Example 3**.: _Consider Example 3.1. In terms of generalized rough sets, \(X_{Sport}\) can be approximated by the \((0.8,0.2)\)-linguistic rough set_
\[(\mathcal{L}^{-Sm}_{(0.8,0.2)}(X_{Sport}),\mathcal{U}^{-Sm}_{(0.8,0.2)}(X_{Sport}))=(C_{3}\cup C_{4}\cup C_{5},C_{2}\cup C_{3}\cup C_{4}\cup C_{5}\cup C_{6})=\\ (\{u_{11},\ldots,u_{25}\},\{u_{6},\ldots,u_{32}\}).\]
_Each element \(x\in U\) is associated with the value \(\neg Sm\left(\frac{|X_{Sport}\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}|}\right)\), which is understood as the degree of confidence expressing how much we can trust that \(x\) is interested in sports._
## 4 Connection with TWD methods
In this section, we find a link between the TWD methods based on probabilistic rough sets and evaluative expressions. In particular, we fix a finite universe \(U\), a subset \(X\) of \(U\), an equivalence relation \(\mathcal{R}\) on \(U\), and a pair of thresholds \((\alpha,\beta)\) such that
\(0\leq\beta<\alpha\leq 1\), and we aim to determine, for each evaluative expression \(Ev\), the class of all pairs of thresholds like \((\alpha^{\prime},\beta^{\prime})\) so that \(\mathcal{T}_{(\alpha^{\prime},\beta^{\prime})}(X)\) coincides with \(\mathcal{T}_{(\alpha,\beta)}^{Ev}(X)\).
In this paper, we confine ourselves to the class \(\mathcal{E}^{+}\subset\mathcal{E}\), which is made of all extensions that are increasing functions, i.e. given \(Ev\in\mathcal{E}\), \(Ev\in\mathcal{E}^{+}\) if and only if "\(Ev(x)\leq Ev(y)\) for each \(x,y\in[0,1]\) such that \(x\leq y\)". Examples of evaluative expressions whose extension is an increasing function are _not small_, _very big_, and _extremely big_ (see (5), (6), and (7)). However, there exist evaluative expressions like _small_ that are represented by a decreasing function, and others like _medium_ that are represented by a non-monotonic function.
In order to obtain the results of this section, we need to define the values \(\alpha_{1}^{Ev}\), \(\alpha_{2}^{Ev}\), \(\beta_{1}^{Ev}\), and \(\beta_{2}^{Ev}\), which are associated with \(\mathcal{T}_{(\alpha,\beta)}^{Ev}(X)\), where \(Ev\in\mathcal{E}\).
**Definition 6**.: _Let \(Ev\in\mathcal{E}\). If \(POS_{(\alpha,\beta)}^{Ev}(X),NEG_{(\alpha,\beta)}^{Ev}(X)\), \(BND_{(\alpha,\beta)}^{Ev}(X)\neq\emptyset\), then we put_
* \(\alpha_{1}^{Ev}=\max\left\{\frac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}| }\ |\ x\in BND_{(\alpha,\beta)}^{Ev}(X)\right\}\)_,_
* \(\alpha_{2}^{Ev}=\min\left\{\frac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}| }\ |\ x\in POS_{(\alpha,\beta)}^{Ev}(X)\right\}\)_,_
* \(\beta_{1}^{Ev}=\max\left\{\frac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}| }\ |\ x\in NEG_{(\alpha,\beta)}^{Ev}(X)\right\},\)__
* \(\beta_{2}^{Ev}=\min\left\{\frac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}| }\ |\ x\in BND_{(\alpha,\beta)}^{Ev}(X)\right\}.\)__
**Example 4**.: _Consider the universe \(U\), its subset \(X_{Sport}\), and the pair of thresholds \((\alpha,\beta)\) that are defined by Example 3.1. Then, the corresponding positive, negative, and boundary regions are the following:_
\[POS_{(0.8,0.2)}^{-Sm}(X_{Sport})=C_{3}\cup C_{4}\cup C_{5},\ NEG_ {(0.8,0.2)}^{-Sm}(X_{Sport})=C_{1},\ \ \text{and}\] \[BND_{(0.8,0.2)}^{-Sm}(X_{Sport})=C_{2}\cup C_{6}.\]
_Hence, by (11), we get 3_
Footnote 3: Recall that the equivalence classes of \(\{[x]_{\mathcal{R}}\ |\ x\in U\}\) are the sets \(C_{1},C_{2},C_{3},C_{4},C_{5}\), and \(C_{6}\).
\[\left\{\frac{|X_{Sport}\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}| }\ |\ x\in POS_{(\alpha,\beta)}^{Ev}(X_{Sport})\right\}=\\ \left\{\frac{|X_{Sport}\cap C_{3}|}{|C_{3}|},\frac{|X_{Sport}\cap C _{4}|}{|C_{4}|},\frac{|X_{Sport}\cap C_{5}|}{|C_{5}|}\right\}=\{0.4,0.6,0.8\};\]
\[\left\{\frac{|X_{Sport}\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}| }\ |\ x\in NEG_{(\alpha,\beta)}^{Ev}(X_{Sport})\right\}\ \ =\ \ \left\{\frac{|X_{Sport}\cap C_{1}|}{|C_{1}|}\right\}\ \ =\ \ \{0\};\]
\[\left\{\frac{|X_{Sport}\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}| }\ |\ x\in BND_{(\alpha,\beta)}^{Ev}(X_{Sport})\right\}=\left\{\frac{|X_{Sport} \cap C_{2}|}{|C_{2}|},\frac{|X_{Sport}\cap C_{6}|}{|C_{6}|}\right\}\\ =\{0.2,0.14\}.\]
_Finally, by Definition 6, \(\alpha_{1}^{-Sm}=\max\{0.14,0.2\}=0.2\), \(\alpha_{2}^{-Sm}=\min\{0.4,0.6,0.8\}=0.4\), \(\beta_{1}^{-Sm}=\max\{0\}=0\), and \(\beta_{2}^{-Sm}=\min\{0.14,0.2\}\)_
\(=0.14\)_._
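Continuing the running sketch, the four thresholds of Definition 6 can be computed directly from the regions; again, the code below is only an illustration with our own helper names.

```python
def thresholds(partition, X, pos, neg, bnd):
    """alpha_1, alpha_2, beta_1, beta_2 of Definition 6 (each block lies in exactly one region)."""
    bnd_ratios = [ratio(b, X) for b in partition if b <= bnd]
    pos_ratios = [ratio(b, X) for b in partition if b <= pos]
    neg_ratios = [ratio(b, X) for b in partition if b <= neg]
    return max(bnd_ratios), min(pos_ratios), max(neg_ratios), min(bnd_ratios)


a1, a2, b1, b2 = thresholds(communities, X_sport, pos, neg, bnd)
# a1 = 0.2, a2 = 0.4, b1 = 0.0, b2 = 1/7 ≈ 0.14, matching Example 4.
```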
If \(Ev\) is an increasing function, namely \(Ev\in\mathcal{E}^{+}\), then we can order \(\beta_{1}^{Ev}\), \(\beta_{2}^{Ev}\), \(\alpha_{1}^{Ev}\), and \(\alpha_{2}^{Ev}\) as shown in the following proposition.
**Proposition 1**.: _Let \(Ev\in\mathcal{E}^{+}\). If \(POS^{Ev}_{(\alpha,\beta)}(X),NEG^{Ev}_{(\alpha,\beta)}(X)\), \(BND^{Ev}_{(\alpha,\beta)}(X)\neq\emptyset\), then \(0\leq\beta_{1}^{Ev}<\beta_{2}^{Ev}\leq\alpha_{1}^{Ev}<\alpha_{2}^{Ev}\leq 1\)._
Proof.: By Definition 6, it is trivial that \(0\leq\beta_{1}^{Ev},\beta_{2}^{Ev},\alpha_{1}^{Ev},\alpha_{2}^{Ev}\leq 1\).
\((\beta_{1}^{Ev}<\beta_{2}^{Ev})\). By Definition 6 ((iii) and (iv)), \(\beta_{1}^{Ev}=\frac{|X\cap[x_{1}]_{\mathcal{R}}|}{|[x_{1}]_{\mathcal{R}}|}\) with \(x_{1}\in NEG^{Ev}_{(\alpha,\beta)}(X)\)
\(\text{and}\ \beta_{2}^{Ev}=\frac{|X\cap[x_{2}]_{\mathcal{R}}|}{|[x_{2}]_{ \mathcal{R}}|}\) with \(x_{2}\in BND^{Ev}_{(\alpha,\beta)}(X)\). Then, by Definition 4 ((ii) and (iii)), \(Ev(\beta_{1}^{Ev})\leq\beta\) and \(\beta<Ev(\beta_{2}^{Ev})<\alpha\). Hence, \(Ev(\beta_{1}^{Ev})<Ev(\beta_{2}^{Ev})\). Thus, considering that \(Ev\) is an increasing function, we can conclude that \(\beta_{1}^{Ev}<\beta_{2}^{Ev}\).
\((\alpha_{1}^{Ev}<\alpha_{2}^{Ev})\). By Definition 6 ((i) and (ii)), \(\alpha_{1}^{Ev}=\frac{|X\cap[x_{1}]_{\mathcal{R}}|}{|[x_{1}]_{\mathcal{R}}|}\) with \(x_{1}\in BND^{Ev}_{(\alpha,\beta)}(X)\)
\(\text{and}\ \alpha_{2}^{Ev}=\frac{|X\cap[x_{2}]_{\mathcal{R}}|}{|[x_{2}]_{ \mathcal{R}}|}\) with \(x_{2}\in POS^{Ev}_{(\alpha,\beta)}(X)\). Thus, by Definition 4 ((iii) and (i)), \(\beta<Ev(\alpha_{1}^{Ev})<\alpha\) and \(Ev(\alpha_{2}^{Ev})\geq\alpha\). Thus, \(Ev(\alpha_{1}^{Ev})<Ev(\alpha_{2}^{Ev})\). Hence, considering that \(Ev\) is an increasing function, \(\alpha_{1}^{Ev}<\alpha_{2}^{Ev}\).
\((\beta_{2}^{Ev}\leq\alpha_{1}^{Ev})\). By Definition 6 ((i) and (iii)), \(\alpha_{1}^{Ev}=\frac{|X\cap[x_{1}]_{\mathcal{R}}|}{|[x_{1}]_{\mathcal{R}}|}\) with \(x_{1}\in BND^{Ev}_{(\alpha,\beta)}(X)\)
\(\text{and}\ \beta_{2}^{Ev}=\frac{|X\cap[x_{2}]_{\mathcal{R}}|}{|[x_{2}]_{ \mathcal{R}}|}\) with \(x_{2}\in BND^{Ev}_{(\alpha,\beta)}(X)\). Therefore, since \(x_{1},x_{2}\in BND^{Ev}_{(\alpha,\beta)}(X)\), \(\beta_{2}^{Ev}\leq\alpha_{1}^{Ev}\) clearly holds.
**Example 5**.: _In Example 4, we have shown that \(\alpha_{1}^{-Sm}=0.2\), \(\alpha_{2}^{-Sm}=0.4\), \(\beta_{1}^{-Sm}=0\), and \(\beta_{2}^{-Sm}=0.14\). Then, in agreement with Proposition 1, \(0\leq\beta_{1}^{-Sm}<\beta_{2}^{-Sm}\leq\alpha_{1}^{-Sm}<\alpha_{2}^{-Sm}\leq 1\) indeed holds, since \(0\leq 0<0.14\leq 0.2<0.4\leq 1\)._
The next theorems show that the three regions generated by \(Ev\in\mathcal{E}^{+}\) can also be obtained by using the probabilistic approach and changing the initial thresholds. We separately analyze the following cases: all three regions are non-empty (Theorem 1) and exactly one of the three regions is empty (Theorems 2-4). The remaining case where only one region is non-empty (namely, one of the three regions coincides with the universe) is omitted because it is not significant.
**Theorem 1**.: _Let \(Ev\in\mathcal{E}^{+}\) such that \(POS^{Ev}_{(\alpha,\beta)}(X),NEG^{Ev}_{(\alpha,\beta)}(X)\), \(BND^{Ev}_{(\alpha,\beta)}(X)\neq\emptyset\) and let \(\alpha^{\prime},\beta^{\prime}\in[0,1]\) with \(\beta^{\prime}<\alpha^{\prime}\). Then,_
\(\mathcal{T}^{Ev}_{(\alpha,\beta)}(X)=\mathcal{T}_{(\alpha^{\prime},\beta^{ \prime})}(X)\) _if and only if \(\alpha^{\prime}\in(\alpha_{1}^{Ev},\alpha_{2}^{Ev}]\) and \(\beta^{\prime}\in[\beta_{1}^{Ev},\beta_{2}^{Ev})\)._
**Remark 7**.: _Before proving Theorem 1, let us represent the intervals that contain \(\alpha^{\prime}\) and \(\beta^{\prime}\) (i.e. the values for generating \(\mathcal{T}^{Ev}_{(\alpha,\beta)}(X)\) with probabilistic rough sets) by Figure 1. By Definition 6, these intervals separate \(POS^{Ev}_{(\alpha,\beta)}(X)\) from \(BND^{Ev}_{(\alpha,\beta)}(X)\) and \(NEG^{Ev}_{(\alpha,\beta)}(X)\) from \(BND^{Ev}_{(\alpha,\beta)}(X)\). More precisely, the value \(\frac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}|}\) belongs to \([0,\beta_{1}^{Ev}]\) when \(x\in NEG^{Ev}_{(\alpha,\beta)}(X)\), to \([\beta_{2}^{Ev},\alpha_{1}^{Ev}]\) when \(x\in BND^{Ev}_{(\alpha,\beta)}(X)\), and to \([\alpha_{2}^{Ev},1]\) when \(x\in POS^{Ev}_{(\alpha,\beta)}(X)\)._
Proof.: (\(\Leftarrow\)). Let \(\alpha^{\prime}\in(\alpha_{1}^{Ev},\alpha_{2}^{Ev}]\) and let \(\beta^{\prime}\in[\beta_{1}^{Ev},\beta_{2}^{Ev})\), we need to prove that \(POS^{Ev}_{(\alpha,\beta)}(X)=POS_{(\alpha^{\prime},\beta^{\prime})}(X)\), \(NEG^{Ev}_{(\alpha,\beta)}(X)=NEG_{(\alpha^{\prime},\beta^{\prime})}(X)\), and \(BND^{Ev}_{(\alpha,\beta)}(X)=BND_{(\alpha^{\prime},\beta^{\prime})}(X)\).
\((POS^{Ev}_{(\alpha,\beta)}(X)=POS_{(\alpha^{\prime},\beta^{\prime})}(X)\)). Let \(\bar{x}\in POS^{Ev}_{(\alpha,\beta)}(X)\), then \(\frac{|X\cap[\bar{x}]_{\mathcal{R}}|}{|[\bar{x}]_{\mathcal{R}}|}\geq\alpha_{2}^ {Ev}\) from Definition 6 (ii). Moreover, \(\alpha^{\prime}\leq\alpha_{2}^{Ev}\) because \(\alpha^{\prime}\in(\alpha_{1}^{Ev},\alpha_{2}^{Ev}]\). Consequently, \(\frac{|X\cap[\bar{x}]_{\mathcal{R}}|}{|[\bar{x}]_{\mathcal{R}}|}\geq\alpha^{\prime}\). Finally, \(\bar{x}\in POS_{(\alpha^{\prime},\beta^{\prime})}(X)\) from Definition 1 (i). Let \(\bar{x}\in POS_{(\alpha^{\prime},\beta^{\prime})}(X)\), then \(\frac{|X\cap[\bar{x}]_{\mathcal{R}}|}{|[\bar{x}]_{\mathcal{R}}|}\geq\alpha^{\prime}\) from Definition 1 (i). By the previous inequality and \(\alpha^{\prime}>\alpha_{1}^{Ev}\), we get \(\frac{|X\cap[\bar{x}]_{\mathcal{R}}|}{|[\bar{x}]_{\mathcal{R}}|}>\alpha_{1} ^{Ev}\). Hence, considering that \(\alpha_{1}^{Ev}\) is the maximum of \(\left\{\frac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}|}\ |\ x\in BND^{Ev}_{(\alpha,\beta)}(X)\right\}\) (see Definition 6(i)), we are sure that \(\bar{x}\notin BND^{Ev}_{(\alpha,\beta)}(X)\). Furthermore, \(\frac{|X\cap[\bar{x}]_{\mathcal{R}}|}{|[\bar{x}]_{\mathcal{R}}|}>\alpha_{1}^{Ev}\) and \(\beta_{1}^{Ev}<\alpha_{1}^{Ev}\) (see Proposition 1) imply that \(\frac{|X\cap[\bar{x}]_{\mathcal{R}}|}{|[\bar{x}]_{\mathcal{R}}|}>\beta_{1}^{Ev}\). Thus, considering that \(\beta_{1}^{Ev}\) is the maximum of \(\left\{\frac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}|}\ |\ x\in NEG^{Ev}_{(\alpha,\beta)}(X)\right\}\) (see Definition 6(iii)), we have \(\bar{x}\notin NEG^{Ev}_{(\alpha,\beta)}(X)\). Ultimately, by (10), \(\bar{x}\in POS^{Ev}_{(\alpha,\beta)}(X)\).
\((BND^{Ev}_{(\alpha,\beta)}(X)=BND_{(\alpha^{\prime},\beta^{\prime})}(X)\)). Let \(\bar{x}\in BND^{Ev}_{(\alpha,\beta)}(X)\). By Definition 6 ((i) and (iv)), \(\beta_{2}^{Ev}\leq\frac{|X\cap[\bar{x}]_{\mathcal{R}}|}{|[\bar{x}]_{\mathcal{R }}|}\leq\alpha_{1}^{Ev}\). Moreover, by hypothesis, \(\beta^{\prime}<\beta_{2}^{Ev}\) and \(\alpha^{\prime}>\alpha_{1}^{Ev}\). Thus, we can conclude that \(\beta^{\prime}<\frac{|X\cap[\bar{x}]_{\mathcal{R}}|}{|[\bar{x}]_{\mathcal{R}}| }<\alpha^{\prime}\), namely \(\bar{x}\in BND_{(\alpha^{\prime},\beta^{\prime})}(X)\) from Definition 1 (iii). Let \(\bar{x}\in BND_{(\alpha^{\prime},\beta^{\prime})}(X)\), then \(\beta^{\prime}<\frac{|X\cap[\bar{x}]_{\mathcal{R}}|}{|[\bar{x}]_{\mathcal{R}}| }<\alpha^{\prime}\) from Definition 1 (iii). By hypothesis, \(\beta_{1}^{Ev}\leq\beta^{\prime}\) and \(\alpha^{\prime}\leq\alpha_{2}^{Ev}\). Hence, we know that \(\beta_{1}^{Ev}<\frac{|X\cap[\bar{x}]_{\mathcal{R}}|}{|[\bar{x}]_{\mathcal{R}}| }<\alpha_{2}^{Ev}\). By Definition 6 (iii), \(\beta_{1}^{Ev}<\frac{|X\cap[\bar{x}]_{\mathcal{R}}|}{|[\bar{x}]_{\mathcal{R}}|}\) implies that \(\bar{x}\notin NEG^{Ev}_{(\alpha,\beta)}(X)\). Furthermore, by Definition 6 (ii), \(\frac{|X\cap[\bar{x}]_{\mathcal{R}}|}{|[\bar{x}]_{\mathcal{R}}|}<\alpha_{2}^{Ev}\) implies that \(\bar{x}\notin POS^{Ev}_{(\alpha,\beta)}(X)\). Ultimately, by (10), \(\bar{x}\in BND^{Ev}_{(\alpha,\beta)}(X)\).
\((NEG^{Ev}_{(\alpha,\beta)}(X)=NEG_{(\alpha^{\prime},\beta^{\prime})}(X))\). We have previously shown that \(POS^{Ev}_{(\alpha,\beta)}(X)=POS_{(\alpha^{\prime},\beta^{\prime})}(X)\) and \(BND^{Ev}_{(\alpha,\beta)}(X)=BND_{(\alpha^{\prime},\beta^{\prime})}(X)\). So, by (3) and (10), it is clear that \(NEG^{Ev}_{(\alpha,\beta)}(X)=NEG_{(\alpha^{\prime},\beta^{\prime})}(X)\).
\((\Rightarrow)\). Let \(\mathcal{T}^{Ev}_{(\alpha,\beta)}(X)=\mathcal{T}_{(\alpha^{\prime},\beta^{ \prime})}(X)\), we intend to prove that \(\beta_{1}^{Ev}\leq\beta^{\prime}<\beta_{2}^{Ev}\) and \(\alpha_{1}^{Ev}<\alpha^{\prime}\leq\alpha_{2}^{Ev}\).
\((\alpha^{\prime}\leq\alpha_{2}^{Ev})\). Let \(x_{2}\in U\) such that \(\alpha_{2}^{Ev}=\dfrac{|X\cap[x_{2}]_{\mathcal{R}}|}{|[x_{2}]_{\mathcal{R}}|}\). By Definition 6 (ii), \(x_{2}\in POS^{Ev}_{(\alpha,\beta)}(X)\). Hence, \(\alpha^{\prime}>\alpha_{2}^{Ev}\) would mean that \(\dfrac{|X\cap[x_{2}]_{\mathcal{R}}|}{|[x_{2}]_{\mathcal{R}}|}<\alpha^{\prime}\). Thus, \(x_{2}\notin\)
\(POS_{(\alpha^{\prime},\beta^{\prime})}(X)\) from Definition 1 (i). This contradicts that \(POS_{(\alpha,\beta)}^{Ev}(X)=POS_{(\alpha^{\prime},\beta^{\prime})}(X)\). Thus, it must be true that \(\alpha^{\prime}\leq\alpha_{2}^{Ev}\).
\((\alpha_{1}^{Ev}<\alpha^{\prime})\)**.** Let \(x_{1}\in U\) such that \(\alpha_{1}^{Ev}=\dfrac{|X\cap[x_{1}]_{\mathcal{R}}|}{|[x_{1}]_{\mathcal{R}}|}\). By Definition 6 (i), \(x_{1}\in BND_{(\alpha,\beta)}^{Ev}(X)\). If \(\alpha_{1}^{Ev}\geq\alpha^{\prime}\), then \(\dfrac{|X\cap[x_{1}]_{\mathcal{R}}|}{|[x_{1}]_{\mathcal{R}}|}\geq\alpha^{\prime}\). So, \(x_{1}\in POS_{(\alpha^{\prime},\beta^{\prime})}(X)\) from Definition 1 (i). This contradicts that \(POS_{(\alpha,\beta)}^{Ev}(X)=POS_{(\alpha^{\prime},\beta^{\prime})}(X)\). Thus, it must be true that \(\alpha_{1}^{Ev}<\alpha^{\prime}\).
\((\beta_{1}^{Ev}\leq\beta^{\prime})\)**.** Let \(x_{1}\in U\) such that \(\beta_{1}^{Ev}=\dfrac{|X\cap[x_{1}]_{\mathcal{R}}|}{|[x_{1}]_{\mathcal{R}}|}\). By Definition 6(iii), \(x_{1}\in NEG_{(\alpha,\beta)}^{Ev}(X)\). If \(\beta_{1}^{Ev}>\beta^{\prime}\), then \(\dfrac{|X\cap[x_{1}]_{\mathcal{R}}|}{|[x_{1}]_{\mathcal{R}}|}>\beta^{\prime}\), which implies that \(x_{1}\notin NEG_{(\alpha^{\prime},\beta^{\prime})}(X)\) from Definition 1(ii). This contradicts that \(NEG_{(\alpha,\beta)}^{Ev}(X)=NEG_{(\alpha^{\prime},\beta^{\prime})}(X)\). Thus, it must be true that \(\beta_{1}^{Ev}\leq\beta^{\prime}\).
\((\beta^{\prime}<\beta_{2}^{Ev})\)**.** Let \(x_{2}\in U\) such that \(\beta_{2}^{Ev}=\dfrac{|X\cap[x_{2}]_{\mathcal{R}}|}{|[x_{2}]_{\mathcal{R}}|}\). By Definition 6(iv), \(x_{2}\in BND_{(\alpha,\beta)}^{Ev}(X)\). Also, if \(\beta^{\prime}\geq\beta_{2}^{Ev}\), then \(\dfrac{|X\cap[x_{2}]_{\mathcal{R}}|}{|[x_{2}]_{\mathcal{R}}|}\leq\beta^{\prime}\), which implies that \(x_{2}\in NEG_{(\alpha^{\prime},\beta^{\prime})}(X)\) from Definition 1 (ii). This contradicts that \(BND_{(\alpha,\beta)}^{Ev}(X)=BND_{(\alpha^{\prime},\beta^{\prime})}(X)\). Thus, it must be true that \(\beta^{\prime}<\beta_{2}^{Ev}\).
**Example 6**.: _Consider Example 3.1: \(\neg Sm\) is an increasing function and all the three regions of \(\mathcal{T}_{(0.8,0.2)}^{\neg Sm}(X_{Sport})\) are non-empty. In Example 4, we have found that \(\alpha_{1}^{\neg Sm}=0.2\), \(\alpha_{2}^{\neg Sm}=0.4\), \(\beta_{1}^{\neg Sm}=0\), and \(\beta_{2}^{\neg Sm}=0.14\). Therefore, according to Theorem 1, we get \(\mathcal{T}_{(0.8,0.2)}^{\neg Sm}(X_{Sport})=\mathcal{T}_{(\alpha^{\prime},\beta^{\prime})}(X_{Sport})\) for each \((\alpha^{\prime},\beta^{\prime})\) such that \(\alpha^{\prime}\in(0.2,0.4]\) and \(\beta^{\prime}\in[0,0.14)\)._
_For example, we can easily verify that \(\mathcal{T}_{(0.8,0.2)}^{\neg Sm}(X_{Sport})=\mathcal{T}_{(0.3,0.1)}(X_{Sport})\). Indeed, by (11) and by Definition 1,_
* \(POS_{(0.3,0.1)}(X_{Sport})=\left\{x\in U\ |\ \dfrac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{ \mathcal{R}}|}\geq 0.3\right\}=C_{3}\cup C_{4}\cup C_{5}\)_,_
* \(NEG_{(0.3,0.1)}(X_{Sport})=\left\{x\in U\ |\ \dfrac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{ \mathcal{R}}|}\leq 0.1\right\}=C_{1}\)_, and_
* \(BND_{(0.3,0.1)}(X_{Sport})=\left\{x\in U\ |\ 0.1<\dfrac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{ \mathcal{R}}|}<0.3\right\}=C_{2}\cup C_{6}\)_._
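The equality asserted by Theorem 1 can also be checked numerically with the earlier sketch, by using the ratio itself as the probabilistic evaluation of Definition 1:

```python
identity = lambda r: r  # probabilistic regions use the ratio directly
pos_p, neg_p, bnd_p = linguistic_regions(communities, X_sport, identity, alpha=0.3, beta=0.1)
assert (pos_p, neg_p, bnd_p) == (pos, neg, bnd)  # same tri-partition as with neg_sm and (0.8, 0.2)
```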
By Theorem 1, we can connect linguistic rough sets with classical rough sets. More precisely, the following corollary holds.
**Corollary 1**.: _Let \(Ev\in\mathcal{E}^{+}\) with \(POS_{(\alpha,\beta)}^{Ev}(X),NEG_{(\alpha,\beta)}^{Ev}(X)\), \(BND_{(\alpha,\beta)}^{Ev}(X)\neq\emptyset\). Then,_
\((\mathcal{L}_{(\alpha,\beta)}^{Ev}(X),\mathcal{U}_{(\alpha,\beta)}^{Ev}(X))=( \mathcal{L}(X),\mathcal{U}(X))\)4 _if and only if \(\beta_{1}^{Ev}=0\) and \(\alpha_{2}^{Ev}=1\)._
Footnote 4: Recall that \((\mathcal{L}_{(\alpha,\beta)}^{Ev}(X),\mathcal{U}_{(\alpha,\beta)}^{Ev}(X))\) is the linguistic rough set of \(X\) given by Definition 5 and \((\mathcal{L}(X),\mathcal{U}(X))\) is the classical rough set of \(X\) given by Eq. (4).
Proof.: \((\Rightarrow)\). Suppose that \((\mathcal{L}_{(\alpha,\beta)}^{Ev}(X),\mathcal{U}_{(\alpha,\beta)}^{Ev}(X))\) is the rough set of \(X\). Then, by (4), for each \(x\in U\), \(x\in POS_{(\alpha,\beta)}^{Ev}(X)\) if and only if \([x]_{\mathcal{R}}\subseteq X\). The latter means that \(\dfrac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}|}=1\) for each \(x\in POS_{(\alpha,\beta)}^{Ev}(X)\). Hence, by Definition 6 (ii), \(\alpha_{2}^{Ev}=1\).
By (4), let \(x\in U\), \(x\in POS^{Ev}_{(\alpha,\beta)}(X)\cup BND^{Ev}_{(\alpha,\beta)}(X)\) if and only if \([x]_{\mathcal{R}}\cap X\neq\emptyset\). Since \(POS^{Ev}_{(\alpha,\beta)}(X)\cup BND^{Ev}_{(\alpha,\beta)}(X)=U\setminus NEG^{ Ev}_{(\alpha,\beta)}(X)\), we know that \(\frac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}|}=0\) for each \(x\in NEG^{Ev}_{(\alpha,\beta)}(X)\). Finally, by Definition 6 (iii), \(\beta^{Ev}_{1}=0\).
(\(\Leftarrow\)). Suppose that \(\alpha^{Ev}_{2}=1\) and \(\beta^{Ev}_{1}=0\). Trivially, \(\alpha^{Ev}_{2}\in(\alpha^{Ev}_{1},\alpha^{Ev}_{2}]\) and \(\beta^{Ev}_{1}\in[\beta^{Ev}_{1},\beta^{Ev}_{2})\). Then, by Theorem 1, \((POS^{Ev}_{(\alpha,\beta)}(X)\), \(POS^{Ev}_{(\alpha,\beta)}(X)\cup BND^{Ev}_{(\alpha,\beta)}(X))=(POS_{(1,0)}(X), POS_{(1,0)}(X)\cup BND_{(1,0)}(X))\). Moreover, by Remark 2, \((\mathcal{L}_{(1,0)}(X),\mathcal{U}_{(1,0)}(X))=(POS_{(1,0)}(X),POS_{(1,0)}(X) \cup BND_{(1,0)}(X))\) coincides with the rough set \((\mathcal{L}(X),\mathcal{U}(X))\) given by (4).
**Example 7**.: _Consider the universe \(U=\{u_{1},\ldots,u_{20}\}\) and the evaluative expression_
Very big_, which is modelled by (6). We suppose that \(U\) is partitioned into three equivalence classes: \(C_{1}=\{u_{1},\ldots,u_{5}\}\), \(C_{2}=\{u_{6},\ldots,u_{10}\}\), and \(C_{3}=\{u_{11},\ldots,u_{20}\}\). If \(X=\{u_{6},\ldots,u_{19}\}\), we can easily verify that \((\mathcal{L}^{BiVe}_{(0.7,0.3)}(X),\mathcal{U}^{BiVe}_{(0.7,0.3)}(X))=(\mathcal{L}(X),\mathcal{U}(X))\). Indeed, we get_
\[\frac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}|}=\begin{cases}0&\text{if }x \in C_{1},\\ 0.9&\text{if }x\in C_{3},\\ 1&\text{if }x\in C_{2}.\end{cases}\text{ and }BiVe\left(\frac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{ \mathcal{R}}|}\right)=\begin{cases}0&\text{if }x\in C_{1},\\ 0.59&\text{if }x\in C_{3},\\ 1&\text{if }x\in C_{2}.\end{cases}\]
_Thus,_
\[POS^{BiVe}_{(0.7,0.3)}(X)=\{C_{i}\ |\ i\in\{1,2,3\}\text{ and }BiVe\left(\frac{|X\cap C_{i}|}{|C_{i}|}\right)\geq 0. 7\}=C_{2}\text{;}\]
\[NEG^{BiVe}_{(0.7,0.3)}(X)=\{C_{i}\ |\ i\in\{1,2,3\}\text{ and }BiVe\left(\frac{|X\cap C_{i}|}{|C_{i}|}\right)\leq 0. 3\}=C_{1}\text{;}\]
\[BND^{BiVe}_{(0.7,0.3)}(X)=\{C_{i}\ |\ i\in\{1,2,3\}\text{ and }0.3<BiVe\left(\frac{|X\cap C_{i}|}{|C_{i}|}\right)<0. 7\}=C_{3}.\]
_Also, by Definition 6, \(\beta^{BiVe}_{1}=0\), \(\alpha^{BiVe}_{2}=1\), and \(\alpha^{BiVe}_{1}=\beta^{BiVe}_{2}=0.9\). Since the hypothesis of the previous corollary is satisfied, we expect that \((\mathcal{L}(X),\mathcal{U}(X))=(\mathcal{L}^{BiVe}_{(0.7,0.3)}(X),\mathcal{U}^{BiVe}_{(0.7,0.3)}(X))=(C_{2},C_{2}\cup C_{3})\). We can immediately verify that this is true: \(\mathcal{L}(X)=C_{2}\) because \(C_{2}\) is the unique class among \(C_{1},C_{2}\), and \(C_{3}\) that is included in \(X\); moreover, \(\mathcal{U}(X)=C_{2}\cup C_{3}\) because \(X\cap C_{2}\neq\emptyset\) and \(X\cap C_{3}\neq\emptyset\), while \(X\cap C_{1}=\emptyset\)._
We are now going to deal with the cases where one of \(BND^{Ev}_{(\alpha,\beta)},POS^{Ev}_{(\alpha,\beta)}\), \(NEG^{Ev}_{(\alpha,\beta)}\) is empty.
**Theorem 2**.: _Let \(Ev\in\mathcal{E}^{+}\) such that \(BND^{Ev}_{(\alpha,\beta)}=\emptyset\) and \(POS^{Ev}_{(\alpha,\beta)},NEG^{Ev}_{(\alpha,\beta)}\neq\emptyset\). Let \(\alpha^{\prime},\beta^{\prime}\in[0,1]\) such that \(\beta^{\prime}<\alpha^{\prime}\). Then,_
\[\mathcal{T}^{Ev}_{(\alpha,\beta)}(X)=\mathcal{T}_{(\alpha^{\prime},\beta^{ \prime})}(X)\ \text{ if and only if }\ \beta^{Ev}_{1}\leq\beta^{\prime}<\alpha^{\prime}\leq\alpha^{Ev}_{2}.\]
Proof.: (\(\Leftarrow\)). Let \(\alpha^{\prime},\beta^{\prime}\in[0,1]\) such that \(\beta^{Ev}_{1}\leq\beta^{\prime}<\alpha^{\prime}\leq\alpha^{Ev}_{2}\), we need to prove that \(POS^{Ev}_{(\alpha,\beta)}(X)=POS_{(\alpha^{\prime},\beta^{\prime})}(X)\), \(NEG^{Ev}_{(\alpha,\beta)}(X)=NEG_{(\alpha^{\prime},\beta^{\prime})}(X)\), and \(BND^{Ev}_{(\alpha,\beta)}\) (\(X\)) = \(BND_{(\alpha^{\prime},\beta^{\prime})}(X)\).
\((POS^{Ev}_{(\alpha,\beta)}(X)=POS_{(\alpha^{\prime},\beta^{\prime})}(X))\). Let \(\bar{x}\in POS^{Ev}_{(\alpha,\beta)}(X)\). Then, \(\frac{|X\cap[\bar{x}]_{\mathcal{R}}|}{|[\bar{x}]_{\mathcal{R}}|}\geq\alpha^{Ev}_{2}\) from Definition 6 (ii). By hypothesis, \(\alpha^{\prime}\leq\alpha^{Ev}_{2}\). Finally, by the previous two inequalities, we obtain that \(\frac{|X\cap[\bar{x}]_{\mathcal{R}}|}{|[\bar{x}]_{\mathcal{R}}|}\geq\alpha^{\prime}\). Namely, \(\bar{x}\in POS_{(\alpha^{\prime},\beta^{\prime})}(X)\) from Definition 1 (i).
Let \(\bar{x}\in POS_{(\alpha^{\prime},\beta^{\prime})}(X)\). Then, \(\frac{|X\cap[\bar{x}]_{\mathcal{R}}|}{|[\bar{x}]_{\mathcal{R}}|}\geq\alpha^{\prime}\) by Definition 1 (i). Let \(x_{1}\in U\) such that \(\beta^{Ev}_{1}=\frac{|X\cap[x_{1}]_{\mathcal{R}}|}{|[x_{1}]_{\mathcal{R}}|}\). Moreover, by hypothesis \(\beta^{Ev}_{1}<\alpha^{\prime}\). So, by the
previous two inequalities, we get \(\dfrac{|X\cap[\bar{x}]_{\mathcal{R}}|}{|[\bar{x}]_{\mathcal{R}}|}>\beta_{1}^{Ev}\). Then, by Definition 6 (iii), \(\bar{x}\notin NEG_{(\alpha,\beta)}^{Ev}(X)\). Lastly, by (10) and by the hypothesis \(BND_{(\alpha,\beta)}^{Ev}(X)=\emptyset\), we can conclude that \(\bar{x}\in POS_{(\alpha,\beta)}^{Ev}(X)\).

\((NEG_{(\alpha,\beta)}^{Ev}(X)=NEG_{(\alpha^{\prime},\beta^{\prime})}(X))\). Let \(\bar{x}\in NEG_{(\alpha,\beta)}^{Ev}(X)\). Then, by Definition 6 (iii), \(\dfrac{|X\cap[\bar{x}]_{\mathcal{R}}|}{|[\bar{x}]_{\mathcal{R}}|}\leq\beta_{1}^{Ev}\). Additionally, we know that \(\beta_{1}^{Ev}\leq\beta^{\prime}\) from the hypothesis. Then, \(\dfrac{|X\cap[\bar{x}]_{\mathcal{R}}|}{|[\bar{x}]_{\mathcal{R}}|}\leq\beta^{\prime}\), namely \(\bar{x}\in NEG_{(\alpha^{\prime},\beta^{\prime})}(X)\) from Definition 1 (ii). Conversely, let \(\bar{x}\in NEG_{(\alpha^{\prime},\beta^{\prime})}(X)\), then \(\dfrac{|X\cap[\bar{x}]_{\mathcal{R}}|}{|[\bar{x}]_{\mathcal{R}}|}\leq\beta^{\prime}\) from Definition 1 (ii). By hypothesis, \(\beta^{\prime}<\alpha_{2}^{Ev}\). Thus, by Definition 6 (ii), \(\bar{x}\) cannot belong to \(POS_{(\alpha,\beta)}^{Ev}(X)\). So, by (10) and the hypothesis \(BND_{(\alpha,\beta)}^{Ev}(X)=\emptyset\), we can deduce that \(\bar{x}\in NEG_{(\alpha,\beta)}^{Ev}(X)\).

\((BND_{(\alpha,\beta)}^{Ev}(X)=BND_{(\alpha^{\prime},\beta^{\prime})}(X))\). This equality follows from \(POS_{(\alpha,\beta)}^{Ev}(X)=POS_{(\alpha^{\prime},\beta^{\prime})}(X)\) and \(NEG_{(\alpha,\beta)}^{Ev}(X)=NEG_{(\alpha^{\prime},\beta^{\prime})}(X)\), considering that the sets \(POS_{(\alpha,\beta)}^{Ev}(X)\), \(NEG_{(\alpha,\beta)}^{Ev}(X)\), and \(BND_{(\alpha,\beta)}^{Ev}(X)\) (as well as \(POS_{(\alpha^{\prime},\beta^{\prime})}(X)\), \(NEG_{(\alpha^{\prime},\beta^{\prime})}(X)\), and \(BND_{(\alpha^{\prime},\beta^{\prime})}(X)\)) cover the universe \(U\) (see (3) and (10)).

\((\Rightarrow)\). Let \(\mathcal{T}_{(\alpha,\beta)}^{Ev}(X)=\mathcal{T}_{(\alpha^{\prime},\beta^{\prime})}(X)\), we intend to prove that \(\beta_{1}^{Ev}\leq\beta^{\prime}\) and \(\alpha^{\prime}\leq\alpha_{2}^{Ev}\).

\((\beta_{1}^{Ev}\leq\beta^{\prime})\). Let \(x_{1}\in U\) such that \(\beta_{1}^{Ev}=\dfrac{|X\cap[x_{1}]_{\mathcal{R}}|}{|[x_{1}]_{\mathcal{R}}|}\). Of course, \(x_{1}\in NEG_{(\alpha,\beta)}^{Ev}(X)\) from Definition 6 (iii). It is clear that the inequality \(\beta_{1}^{Ev}>\beta^{\prime}\) leads to a contradiction:

* if \(\beta_{1}^{Ev}>\beta^{\prime}\), then \(x_{1}\notin NEG_{(\alpha^{\prime},\beta^{\prime})}(X)\) from Definition 1 (ii);
* but, this contradicts that \(NEG_{(\alpha,\beta)}^{Ev}(X)=NEG_{(\alpha^{\prime},\beta^{\prime})}(X)\).

So, \(\beta_{1}^{Ev}\leq\beta^{\prime}\) must hold.

\((\alpha^{\prime}\leq\alpha_{2}^{Ev})\). Let \(x_{2}\in U\) such that \(\alpha_{2}^{Ev}=\dfrac{|X\cap[x_{2}]_{\mathcal{R}}|}{|[x_{2}]_{\mathcal{R}}|}\). Then, \(x_{2}\in POS_{(\alpha,\beta)}^{Ev}(X)\) from Definition 6 (ii). It is clear that the inequality \(\alpha^{\prime}>\alpha_{2}^{Ev}\) leads to a contradiction:

* if \(\alpha^{\prime}>\alpha_{2}^{Ev}\), then \(x_{2}\notin POS_{(\alpha^{\prime},\beta^{\prime})}(X)\) from Definition 1 (i);
* but, this contradicts that \(POS_{(\alpha,\beta)}^{Ev}(X)=POS_{(\alpha^{\prime},\beta^{\prime})}(X)\).

Finally, \(\alpha^{\prime}\leq\alpha_{2}^{Ev}\) must hold.
Examples of evaluative expressions satisfying the hypothesis of Theorem 2 can be obtained from the class defined by (8). Indeed, let \(t\in[0,1]\), \(\Delta_{t}\) is trivially an increasing function (i.e. \(\Delta_{t}\in\mathcal{E}^{+}\)) and the boundary region determined by \(\Delta_{t}\) is always empty as shown by the following proposition. In addition, in Proposition 2, the formula of the three regions that are related to \(\Delta_{t}\) is rewritten so that the thresholds \(\alpha\) and \(\beta\) do not appear in it.
**Proposition 2**.: _Let \(t\in[0,1]\) and let \(\alpha,\beta\in[0,1]\) such that \(\beta<\alpha\), then_
1. \(POS_{(\alpha,\beta)}^{\Delta_{t}}(X)=\bigg{\{}x\in U\ |\ \dfrac{|X\cap[x]_{ \mathcal{R}}|}{|[x]_{\mathcal{R}}|}\geq t\bigg{\}}\)_;_
2. \(NEG_{(\alpha,\beta)}^{\Delta_{t}}(X)=\bigg{\{}x\in U\ |\ \dfrac{|X\cap[x]_{ \mathcal{R}}|}{|[x]_{\mathcal{R}}|}<t\bigg{\}}\)_;_
_._
3. \(BND^{\Delta_{t}}_{(\alpha,\beta)}(X)=\emptyset\)_._
Proof.: (a). Let \(\bar{x}\in U\). Thus, \(\bar{x}\in POS^{Ev}_{(\alpha,\beta)}(X)\) if and only if
\[\Delta_{t}\left(\frac{|X\cap[\bar{x}]_{\mathcal{R}}|}{|[\bar{x}]_{\mathcal{R}} |}\right)\geq\alpha \tag{13}\]
from Definition 4 (i).
By (8), the inequality (13) is true if and only if \(\frac{|X\cap[\bar{x}]_{\mathcal{R}}|}{|[\bar{x}]_{\mathcal{R}}|}\geq t\).
(b). Let \(\bar{x}\in U\). Then, \(\bar{x}\in NEG^{Ev}_{(\alpha,\beta)}(X)\) if and only if
\[\Delta_{t}\left(\frac{|X\cap[\bar{x}]_{\mathcal{R}}|}{|[\bar{x}]_{\mathcal{R} }|}\right)\leq\beta \tag{14}\]
from Definition 4 (ii).
By (8), the inequality (14) is true if and only if \(\frac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}|}<t\).
(c). Notice that \(\{x\in U\ |\ \frac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}|}\geq t\} \cup\{x\in U\ |\ \frac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}|}<t\}=U\). Moreover, we have proved that \(POS^{Ev}_{(\alpha,\beta)}(X)=\{x\in U\ |\ \frac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{ \mathcal{R}}|}\geq t\}\) and \(NEG^{Ev}_{(\alpha,\beta)}(X)=\{x\in U\ |\ \frac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{ \mathcal{R}}|}<t\}\). Hence, according to (10), \(BND^{Ev}_{(\alpha,\beta)}(X)\) must be empty.
**Example 8**.: _Let us focus on \(\mathcal{T}^{\Delta_{0.5}}_{(\alpha,\beta)}(X)\), where \(U\), \(X\), and \(\mathcal{R}\) are defined in Example 7. By Proposition 2, it is easy to verify that \(POS^{\Delta_{0.5}}_{(\alpha,\beta)}(X)=C_{2}\cup C_{3}\), \(NEG^{\Delta_{0.5}}_{(\alpha,\beta)}(X)=C_{1}\), and \(BND^{\Delta_{0.5}}_{(\alpha,\beta)}(X)=\emptyset\) for each \(\alpha,\beta\in[0,1]\) with \(\beta<\alpha\). Furthermore, according to Theorem 2, \(POS_{(\alpha^{\prime},\beta^{\prime})}(X)=C_{2}\cup C_{3}\), \(NEG_{(\alpha^{\prime},\beta^{\prime})}(X)=C_{1}\), and \(BND_{(\alpha^{\prime},\beta^{\prime})}(X)=\emptyset\), for each \(\alpha^{\prime},\beta^{\prime}\in[0,1]\) such that \(\beta_{1}^{\Delta_{0.5}}\leq\beta^{\prime}<\alpha^{\prime}\leq\alpha_{2}^{\Delta_{0.5}}\), where \(\beta_{1}^{\Delta_{0.5}}=0\) and \(\alpha_{2}^{\Delta_{0.5}}=0.9\). For example, if we choose \(\alpha^{\prime}=0.7\) and \(\beta^{\prime}=0.2\), we obtain \(POS_{(0.7,0.2)}(X)=C_{2}\cup C_{3}\), \(NEG_{(0.7,0.2)}(X)=C_{1}\), and \(BND_{(0.7,0.2)}(X)=\emptyset\)._
Now, let us suppose that the negative region is empty.
**Theorem 3**.: _Let \(Ev\in\mathcal{E}^{+}\) such that \(NEG^{Ev}_{(\alpha,\beta)}=\emptyset\) and \(POS^{Ev}_{(\alpha,\beta)},BND^{Ev}_{(\alpha,\beta)}\neq\emptyset\). Let \(\alpha^{\prime},\beta^{\prime}\in[0,1]\) such that \(\beta^{\prime}<\alpha^{\prime}\). Then,_
\[\mathcal{T}^{Ev}_{(\alpha,\beta)}(X)=\mathcal{T}_{(\alpha^{\prime},\beta^{ \prime})}(X)\ \ \text{if and only if}\ \ \beta^{\prime}\in[0,\beta^{Ev}_{2})\ \text{and}\ \alpha^{\prime}\in(\alpha^{Ev}_{1},\alpha^{Ev}_{2}].\]
Proof.: The proof is similar to that of Theorems 1 and 2. So, it is omitted.
**Example 9**.: _Let \(U=\{u_{1},\ldots,u_{30}\}\). We suppose that \(U\) is divided into the following equivalence classes: \(C_{1}=\{u_{1},\ldots,u_{5}\}\), \(C_{2}=\{u_{6},\ldots,u_{10}\}\), and \(C_{3}=\{u_{11},\ldots,u_{30}\}\)._
_Also, let \(X=\{u_{1},\ldots u_{28}\}\), we are interested in \(\mathcal{T}^{BiVe}_{(0.8,0.4)}(X)\). Then,_
\[\frac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}|}=\begin{cases}0.9&\text{ if }x\in C_{3},\\ 1&\text{ if }x\in C_{1}\cup C_{2}.\end{cases}\]
_and_
\[BiVe\left(\frac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}|}\right)= \begin{cases}0.58&\text{ if }x\in C_{3},\\ 1&\text{ if }x\in C_{1}\cup C_{2}.\end{cases}\]
_Thus, by Definition 4, \(POS^{BiVe}_{(0.8,0.4)}(X)=C_{1}\cup C_{2}\), \(BND^{BiVe}_{(0.8,0.4)}(X)=C_{3}\), and \(NEG^{BiVe}_{(0.8,0.4)}(X)=\emptyset\). Moreover, \(\beta^{BiVe}_{2}=0.9\), \(\alpha^{BiVe}_{1}=0.9\), and \(\alpha^{BiVe}_{2}=1\). So, according to Theorem 3, \(POS_{(\alpha^{\prime},\beta^{\prime})}(X)=C_{1}\cup C_{2}\), \(BND_{(\alpha^{\prime},\beta^{\prime})}(X)=C_{3}\), and \(NEG_{(\alpha^{\prime},\beta^{\prime})}(X)=\emptyset\) for each \(\beta^{\prime}\in[0,0.9)\) and \(\alpha^{\prime}\in(0.9,1]\). For example, if \(\alpha^{\prime}=0.95\) and \(\beta^{\prime}=0.8\), then \(POS_{(0.95,0.8)}(X)=C_{1}\cup C_{2}\), \(BND_{(0.95,0.8)}(X)=C_{3}\), and \(NEG_{(0.95,0.8)}(X)=\emptyset\)._
Finally, we consider the case of an empty positive region.
**Theorem 4**.: _Let \(Ev\in\mathcal{E}^{+}\) such that \(POS^{Ev}_{(\alpha,\beta)}=\emptyset\) and \(NEG^{Ev}_{(\alpha,\beta)},BND^{Ev}_{(\alpha,\beta)}\neq\emptyset\). Let \(\alpha^{\prime},\beta^{\prime}\in[0,1]\) such that \(\beta^{\prime}<\alpha^{\prime}\). Then,_
\[\mathcal{T}^{Ev}_{(\alpha,\beta)}(X)=\mathcal{T}_{(\alpha^{\prime},\beta^{ \prime})}(X)\ \ \text{if and only if}\ \ \beta^{\prime}\in[\beta^{Ev}_{1},\beta^{Ev}_{2})\text{ and }\alpha^{\prime}\in(\alpha^{Ev}_{1},1].\]
Proof.: The proof is similar to that of Theorems 1 and 2. So, it is omitted.
**Example 10**.: _Consider the universe \(U\) and the equivalence classes \(C_{1},C_{2}\), and \(C_{3}\), which are defined by Example 9. Let \(X=\{u_{1},u_{6},u_{11},\ldots u_{28}\}\), we focus on \(\mathcal{T}^{BiVe}_{(0.7,0.2)}(X)\). Then,_
\[\frac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}|}=\begin{cases}0.5&\text{ if }x\in C_{1}\cup C_{2},\\ 0.9&\text{ if }x\in C_{3}.\end{cases}\]
_and_
\[BiVe\left(\frac{|X\cap[x]_{\mathcal{R}}|}{|[x]_{\mathcal{R}}|}\right)=\begin{cases} 0.58&\text{ if }x\in C_{3},\\ 0&\text{ if }x\in C_{1}\cup C_{2}.\end{cases}\]
_By Definition 4, \(NEG^{BiVe}_{(0.7,0.2)}(X)=C_{1}\cup C_{2}\), \(BND^{BiVe}_{(0.7,0.2)}(X)=C_{3}\), and \(POS^{BiVe}_{(0.7,0.2)}(X)=\emptyset\). Also, \(\beta^{Ev}_{1}=0.5\), \(\beta^{Ev}_{2}=0.9\), \(\alpha^{Ev}_{1}=0.9\). Thus, according to Theorem 4, \(NEG_{(\alpha^{\prime},\beta^{\prime})}(X)=C_{1}\cup C_{2}\), \(BND_{(\alpha^{\prime},\beta^{\prime})}(X)=C_{3}\), and \(POS_{(\alpha^{\prime},\beta^{\prime})}(X)=\emptyset\) for each \(\beta^{\prime}\in[0.5,0.9)\) and \(\alpha^{\prime}\in(0.9,1]\). For example, let \((\alpha^{\prime},\beta^{\prime})=(0.95,0.6)\), we can easily verify that \(NEG_{(0.95,0.6)}(X)=C_{1}\cup C_{2}\), \(BND_{(0.95,0.6)}(X)=C_{3}\), and \(POS_{(0.95,0.6)}(X)=\emptyset\)._
## 5 Conclusions and future directions
This work proposes a novel model for three-way decisions based on the concept of evaluative linguistic expressions. Thus, a new way is provided to divide the initial universe into three regions with the corresponding decision rules. Moreover, our results allow decision-makers to give a linguistic interpretation to the regions already obtained using the probabilistic approach. Let us indicate some possible directions to continue this work. Firstly, we need to extend the results of Section 4 to the evaluative expressions that are not necessarily represented by increasing functions. Then, we want to deepen the study of linguistic regions by comparing our methods with those presented in [21]. In addition, we intend to understand how the decisions about the elements change when using different evaluative expressions. Finally, we could analyze the logical relations between the linguistic regions determined by a given evaluative expression and investigate their consequences in terms of decisions by constructing a hexagon of opposition.
|
2303.12940 | Cryptocurrency wallets: assessment and security | Digital wallet as a software program or a digital device allows users to
conduct various transactions. Hot and cold digital wallets are considered as
two types of this wallet. Digital wallets need an online connection fall into
the first group, whereas digital wallets can operate without internet
connection belong to the second group. Prior to buying a digital wallet, it is
important to define for what purpose it will be utilized. The ease with which a
mobile phone transaction may be completed in a couple of seconds and the speed
with which transactions are executed are reflection of efficiency. One of the
most important elements of digital wallets is data organization. Digital
wallets are significantly less expensive than classic methods of transaction,
which entails various charges and fees. Constantly, demand for their usage is
growing due to speed, security, and the ability to conduct transactions between
two users without the need of a third party. As the popularity of digital
currency wallets grows, the number of security concerns impacting them
increases significantly. The current status of digital wallets on the market,
as well as the options for an efficient solution for obtaining and utilizing
digital wallets. Finally, the digital wallets' security and future improvement
prospects are discussed in this chapter. | Ehsan Nowroozi, Seyedsadra Seyedshoari, Yassine Mekdad, Erkay Savas, Mauro Conti | 2023-03-06T08:52:01Z | http://arxiv.org/abs/2303.12940v1 | # Cryptocurrency wallets: assessment and security
###### Abstract
Digital wallet as a software program or a digital device allows users to conduct various transactions. Hot and cold digital wallets are considered as two types of this wallet. Digital wallets need an online connection fall into the first group, whereas digital wallets can operate without internet connection belong to the second group. Prior to buying a digital wallet, it is important to define for what purpose it will be utilized. The ease with which a mobile phone transaction may be completed in a couple of seconds and the speed with which transactions are executed are reflection of efficiency. One of the most important elements of digital wallets is data organization. Digital wallets are significantly less expensive than classic methods of transaction, which entails various charges and fees. Constantly, demand for their usage is growing due to speed, security, and the ability to conduct transactions between two users without the need of a third party. As the popularity of digital currency wallets grows, the number of security concerns impacting them increases significantly. The current status of digital wallets on the market, as well as the options for an efficient solution for obtaining and utilizing digital wallets. Finally, the digital wallets' security and future improvement prospects are discussed in this chapter.
Keywords:Cryptocurrencies Transactions Digital wallet Security Cryptowallet Blockchain Cybersecurity
## 1 Introduction
Since bitcoin's introduction, the number of cryptocurrencies has increased to thousands, with a market valuation of over $1.72 trillion as of March 2022 [1]. Meanwhile, the percentage of people owning or using cryptocurrencies is growing significantly in many countries, as shown in Figure 1. A digital wallet, as a software application for cryptocurrencies, keeps private/public keys and properly operates on various blockchains, allowing users to transfer currencies to each other and monitor their currency balance, eliminating the need for a physical wallet [27]. Blockchain-based cryptocurrencies are built on the blockchain concept, which is a decentralized open database with entries that may be verified but not modified [17]. Various currencies can be stored, sent, and received using a digital wallet. Within the wallet, cryptocurrencies are not kept like real money. The blockchain captures and archives every transaction [18]. A wallet transaction involves sending currency between two addresses. The private key of the sender and the public key of the receiver are required for a transaction to take place [2]. Any quantity of coins owned by the sender can be transferred to the public key (or address) of the receiver. To verify that the transaction was initiated and performed by the sender, the sender digitally signs the transaction using its private key, as shown in Figure 2. The mainnet includes both the sender and the recipient, and it is where transactions take place. There is a separate network called the testnet that is utilized for testing; however, testnet coins have no actual worth. Users cannot transmit cryptocurrencies between the mainnet and the testnet since they are two independent networks. In principle, bitcoin wallet applications establish new addresses, securely keep private keys, and assist in automating transactions. Some wallets only accept one type of cryptocurrency (for instance, Bitcoin), whereas others such as Exodus and Jaxx support a wide variety of cryptocurrencies.
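As a toy illustration of the signing step described above (and sketched in Figure 2), the snippet below uses the third-party Python `ecdsa` package with the secp256k1 curve employed by Bitcoin; it is only a sketch of the general mechanism, not of how a real wallet derives, stores, or protects its keys.

```python
from ecdsa import SigningKey, SECP256k1  # pip install ecdsa

# The private (signing) key never leaves the wallet; the public (verifying) key is shared.
private_key = SigningKey.generate(curve=SECP256k1)
public_key = private_key.get_verifying_key()

transaction = b"send 0.5 BTC from <sender address> to <receiver address>"  # placeholder payload
signature = private_key.sign(transaction)

# Anyone holding the public key can check that the owner of the private key authorized it.
assert public_key.verify(signature, transaction)
```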
Cryptography is the field of study that deals with the above-mentioned keys. Cryptocurrency relies on two security approaches: symmetric and asymmetric. The first one uses a single secret key, whereas the second model contains public and private keys. Encryption and decryption in symmetric mode are simply done by utilizing traditional symmetric encryption techniques like the Data Encryption Standard (DES),
Figure 1: The percentage of people owning or using cryptocurrencies in different countries. (a) Shows the percentage in 2020, and (b) Shows the percentage in 2021 [3].
where the same key is used for encryption and decryption [10].
Asymmetric encryption, often used in cryptocurrency exchanges, is an encryption technique that employs two keys (a public and a private key) paired with distinct encryption and decryption methods. Although the sender must have a copy of the recipient's public key in order to transmit a coin, it should be assumed that the adversary has the exact same copy. In this case, the sender encrypts the message with the proper encryption mechanism, and the recipient of the message may decrypt it using its private key. The purpose of the asymmetric approach is to prevent an attacker from utilizing the public key to decrypt an encrypted communication [18][33].
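To make the asymmetric idea concrete, here is a hedged sketch using RSA-OAEP from the Python `cryptography` package; it only illustrates the generic public-key encryption/decryption flow described above, not the specific schemes used by any particular cryptocurrency.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes  # pip install cryptography

# The recipient generates a key pair and publishes the public key.
recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_public = recipient_private.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# The sender encrypts with the recipient's public key; only the private key can decrypt.
ciphertext = recipient_public.encrypt(b"confidential message", oaep)
assert recipient_private.decrypt(ciphertext, oaep) == b"confidential message"
```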
### Crypto wallets' Categories
Digital wallets, based on features like their online/offline working mode, can be divided into two categories: hot and cold wallets. Their main distinction is that a connection to the internet is required for hot wallets, whilst cold wallets do not need one. Users of a hot wallet typically use it for online purchases and, for that reason, should not allocate a large amount of money to it, whereas a cold wallet functions similarly to a bank vault for storing various digital assets. It's advisable to have both wallets, mostly for security purposes [18].
There are some different types of hot and cold wallets. Desktop wallets, hardware wallets, mobile wallets, online wallets, and paper wallets are all available. Hot wallets include multi-signature, desktop, mobile, and online wallets, whereas cold wallets include paper and hardware wallets. Each cryptocurrency wallet has its own level of safety and privacy to ensure that the private key is kept securely.
Figure 2: Blockchain public/private key cryptography.
Each kind of digital wallet, together with its advantages and disadvantages, is described as follows [18]:
**Mobile wallets:** Mobile wallets are more efficient and simpler to use than other kinds of crypto wallets since they can be accessed from anywhere via an internet connection. Despite the fact that new mobile wallets take advantage of the security mechanisms of smartphones, like ARM TrustZone, to protect users [15], they are susceptible to viruses and hacking. This method allows users to use the TOR network for increased security. TOR is a common anonymous communication network with a low rate of delay that allows users to connect to online resources without disclosing their network identity [29]. Another useful feature is the ability to scan QR codes. Mobile phones, on the other hand, could be considered unsafe equipment. As a result, if the phone gets compromised, the user's crypto tokens may be lost. They are also vulnerable to viruses, malware, and key loggers.
**Online wallets:** This form of wallet may be accessed through any web browser without the need to download or install any software. Since these wallets are vulnerable, users should not keep a large amount of cryptocurrency in them. Cryptocurrency transfers are conducted in a timely manner. It is advised to hold only a small amount of cryptocurrency in these wallets.
Several of these digital wallets are capable of holding multiple cryptocurrencies as well as transferring funds amongst them. They allow customers to use the TOR network for more confidentiality and privacy. However, a third party or centralized administration has complete control over the digital wallet. It is suggested to use a personal computer (PC) with the necessary security applications pre-installed in order to access an online wallet. Users are vulnerable to various online scams due to a lack of awareness of information technology (IT).
**Desktop wallets:** Desktop wallets are assumed to be more secure than mobile and online wallets; however, this might vary depending on the level of protection applied to the wallet. Although a desktop wallet can create addresses for receiving cryptocurrency offline, it requires an internet connection to send coins out, and transaction logs will not be refreshed without one [30]. Although these wallets are simple to use and keep private keys on the user's device, a machine connected to the internet becomes insecure and demands additional protection. Furthermore, frequent backups are required because the system may fail at any time, resulting in the loss of all data; alternatively, the user needs to export the related private key or seed phrase, so that the digital assets can be accessed on several devices [28].
**Multisignature wallets:** Depending on the level of protection, two or three private keys are required to access funds when conducting a transfer using a multi-signature wallet. This method is beneficial to businesses because it allows
them to delegate responsibility to many staff, who must all provide their own private key in order to access assets. BitGo is an instance of a multi-signature wallet, where the users store the first key, a trusted third party stores the second key, and the firm itself stores the third key. Transactions might be slow due to the number of signatures required. Multisignature relies on the signing of the transaction by additional devices or a third party.
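The m-of-n idea can be sketched as a toy check in a few lines; this is not BitGo's actual protocol (real multisignature wallets enforce the rule on-chain), and the key roles below are only illustrative. The sketch again assumes the third-party `ecdsa` package.

```python
# Toy 2-of-3 multi-signature approval check (illustrative only).
from ecdsa import BadSignatureError, SigningKey, SECP256k1

keys = [SigningKey.generate(curve=SECP256k1) for _ in range(3)]  # user, third party, firm
pubkeys = [k.get_verifying_key() for k in keys]
REQUIRED = 2                                                     # signatures needed

def count_valid_signatures(message, signatures):
    """Count how many of the three key holders produced a valid signature."""
    count = 0
    for pub, sig in zip(pubkeys, signatures):
        if sig is None:
            continue
        try:
            pub.verify(sig, message)
            count += 1
        except BadSignatureError:
            pass
    return count

tx = b"move funds to cold storage"
sigs = [keys[0].sign(tx), keys[1].sign(tx), None]   # only two parties sign
print("approved:", count_valid_signatures(tx, sigs) >= REQUIRED)
```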
**Paper wallets:** This is one of the most secure wallets available. They fall in the category of cold crypto wallets. A paper wallet, as the name implies, is a printed sheet of paper containing both private and public keys.
A QR code printed on the paper encodes the keys of the user and may be used for almost any kind of transaction. The user's principal concern should be keeping that paper safe; as a result, this wallet is among the safest. Paper wallets are kept in the physical wallet or pocket of the user without requiring a connection to a computer; however, transactions take longer to complete.
**Hardware wallets:** These wallets are specialized cryptographic devices for generating and storing private keys and authenticating transactions [24]. In most instances, they are safer wallets because transaction signing occurs on the hardware wallet and the private key never leaves the secure hardware wallet system, which prevents malware from stealing digital wallets [31]. A hardware wallet is commonly a USB flash device (Figure 3) with software installed and ready to use. Some of these devices contain a screen, allowing the user to conduct a transaction without the need for a computer. This kind of wallet provides the user with additional control over their cryptocurrency and is an appropriate option for the long-term storage of crypto assets. The majority of secured USB wallets have a screen. They are safer than all other sorts of digital wallets. They are, however, quite hard to obtain and are not suggested for novices.
A comparison of different wallets is provided in Table 1.
Users who intend to trade in several currencies may consider multi-currency wallets. Although Bitcoin is the most well-known currency, there is a large number of other cryptocurrencies on the market, each with its own infrastructure
Figure 3: Ledger Nano X, portable hardware wallet [4]
network [5].
\begin{table}
\begin{tabular}{|l|l|l|} \hline \hline
Wallet & Advantages & Disadvantages \\ \hline
Mobile wallets & Efficiency and simplicity of use; supports the TOR network; uses QR codes & Loss of crypto tokens if the phone is compromised; prone to key loggers, viruses, and malware; fully controlled by central authorities or a third party \\ \hline
Online wallets & Fast transactions; supports multiple cryptocurrencies and transfers between them & Fully controlled by central authorities or a third party; demands a personal computer with specific applications installed \\ \hline
Desktop wallets & Simplicity of use; stores the private key on the user's system & Susceptible and requiring more security; regular backups required \\ \hline
Multisignature wallets & Delegates responsibility to employees of a company & Slow transactions \\ \hline
Paper wallets & Kept in the user's pocket or physical wallet & Slow transactions \\ \hline
Hardware wallets & LCD screen on USB wallets & Hard to purchase; not suggested for beginners \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Comparison of different categories of cryptowallets
### Available Digital Wallets
Prior to deciding on a digital currency, it is important to keep in mind that cryptocurrency is outlawed or restricted in certain states and countries, while its usage and exchange are permitted in others. It is advisable to select a multi-currency wallet that supports several cryptocurrencies [28]. It is possible to lose money by selecting the incorrect wallet for a certain digital currency. Users should spend some time learning about the various types of cryptocurrency wallets and their functionality. In this section, some of the most common wallets are listed as follows: Exodus (online wallet), Coinpayments (online wallet), Ledger Nano S (hardware wallet), Jaxx (mobile wallet), and Ledger Blue (hardware wallet) [18].
**Exodus** is a web-based electronic wallet with a user-friendly interface, shown in Figure 4, a stylish design, and a reporting mechanism. When compared to other online wallets, Exodus provides comparable functionality, with some features being better than others. In this type of wallet, registration is free of charge, so anyone may submit the form and become the owner of a crypto wallet of this type. The cryptocurrency swap, where users can trade several crypto assets without incurring extra charges, is one of its best features. It is a fantastic place for inexperienced traders. Although it is an online wallet, it also behaves as an offline wallet, since the data is kept on the computer of the user when the wallet is generated [18].
**Coinpayments** is a digital wallet that can be accessed online. It became popular after proving that its wallet could hold at least 300 different digital currencies, as illustrated in Figure 5. The service only gets paid when a user finalizes a transaction using the wallet. Because this wallet accepts multiple currencies and is accepted by so many online retailers, it is feasible to shop online using it. The BitGo services have been integrated into this wallet to provide a higher level of security and transaction speed. Moreover, a safety function is
Figure 4: Exodus platform
added to keep the money of the users safe from criminals. This wallet allows users to store several currencies in the same place. In addition, a lot of online retailers utilize it for online shopping.
**Ledger Nano S** is a digitized USB wallet for cryptocurrencies introduced in 2016. Although hardware wallets are significantly more expensive than other digital wallets, they are a cost-effective investment with a variety of capabilities, such as enabling users to securely trade and monitor digital assets and supporting more than 1,100 cryptocurrencies and tokens [6].
The private key's backup and security are given special consideration. This gadget can be started without the need for a computer. It includes a little LCD screen on the front of the USB device so that users can use it easily, as shown in Figure 6. It makes it possible to move money from one account to another as well as to exchange cryptocurrencies. There are two sizes of Ledger Nano S: the larger device is 98 mm, while the smaller device is 60 mm. This wallet can hold a variety of famous cryptocurrencies. The user may keep an eye on current transactions and use the button to double-check them. Several security features are available, as well as the possibility to lock the wallet using a password. Regardless of how little the gadget is, it can be conveniently utilized by users.
**Ledger Blue** is also a hardware wallet designed by the same company. It outperforms the Ledger Nano S and adds plenty of additional features, which can be seen in Figure 7. This wallet is one of the most costly wallets on the market due to these qualities. To prohibit external access, the user can specify a code with 4 to 6 digits. The Ledger Blue wallet uses dual-chip technology and includes built-in software for digital currency safety. It is designed to be immune to harmful malware, which means attackers cannot easily hack it.
Figure 5: Coinpayments platform
**Jaxx Liberty** is a mobile and web digital wallet (illustrated in Figure 8) that may also be referred to as a desktop wallet, since it operates on both Windows and mobile platforms, and it allows the user to trade digital assets using third-party services like Changelly, as shown in Figure 9. Jaxx was intended to keep all digital assets safe from cybercriminals. New mobile wallets offer a variety of security measures if a user's phone is lost; in that case, they let users swap accounts. Jaxx is compatible with all main operating systems, such as Android, iOS, Windows, Linux, and macOS. The Jaxx enterprise is not able to view the user's digital currency since a private key is produced and saved on the computer of the user. In most cases, making a transaction with an online wallet requires a number of procedures. The Jaxx concept is focused on the Nada privacy model. Nada is responsible for protecting confidentiality and privacy.
Main features of discussed wallets are summarized in Table 2.
## 2 Overview of Digital Wallets' Security
The crypto wallets' security goals, including availability, integrity, and confidentiality, are compatible with most security standards. An adversary makes use of
Figure 6: Ledger Nano S device
Figure 7: Ledger Blue device [7]
vulnerabilities in wallet libraries to create a distinctive wallet fingerprint that is linked to the user's identity for further monitoring. Although Android and iOS provide a variety of tools for programmers and customers, some of these features can be abused by hackers to violate the security of the cryptocurrency framework that runs on the platform [32].
There are many data transmission functions in Bitcoin, the most prominent cryptocurrency. These features may pose a security risk, but Bitcoin has a very strong security system, implying that they should be used correctly. The security of the platform should be a priority when investing in online platforms. When purchasing a wallet for this digital money, two-factor verification is suggested, which is an authentication process that requires two assets of the user: something they know, such as login credentials, and something they have, such as a mobile phone to receive a one-time password (OTP) [9]. In comparison to a physical wallet, a smarter approach to storing money in the wallet is possible: only a small quantity of currency should be kept in the digital crypto wallet for daily use.
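As an illustration of the OTP factor, the minimal sketch below generates a time-based one-time password following RFC 6238 using only the Python standard library; the Base32 secret is a made-up example and is not tied to any particular wallet or service.

```python
# Time-based one-time password (TOTP) sketch per RFC 6238 (illustrative only).
import base64, hashlib, hmac, struct, time

def totp(shared_secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = int(time.time()) // period                  # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # changes every 30 seconds
```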
Figure 8: Jaxx Liberty platform
Figure 9: Changelly exchange platform
\begin{table}
\begin{tabular}{|l|l|} \hline \hline
Wallet & Main features \\ \hline
Exodus & **Safe:** information is saved on the user's system while the wallet is created. **Multi-currency:** supports diverse currencies in the same wallet. **Free registration:** this wallet can be obtained by simply filling out a form. \\ \hline
Coinpayments & **Safe:** the user's money is protected against theft. **Multi-currency:** supports diverse currencies in the same wallet. **Integrated with BitGo services:** increases the speed and security of transactions. **Common:** used by thousands of online shops. \\ \hline
Ledger Nano S & **User-friendly:** can be used comfortably. **Multi-currency:** supports diverse currencies in the same wallet. **Small screen:** current transactions can be monitored and confirmed by a button. **Backup and recovery:** fast restoring process if digital money is lost. **Safe:** provides many security features such as a password to lock the wallet. \\ \hline
Ledger Blue & **Pin code:** restricts external access using a 4 to 6 digit code. **Resistant to malicious software:** cannot be violated by malware. **Safe:** benefits from a dual-chip design and includes firmware for security. \\ \hline
Jaxx & **Acceptable:** can be implemented on any OS. **Easily operated:** does not need many steps to execute a transaction. **Full control:** the private key is stored and accessed only on the user's computer. \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Main features of some of the most common wallets
Backup wallets can help eliminate issues such as information theft or errors in the computer; however, this condition can only be met if the data has been encrypted. The security of data saved on the network is not completely guaranteed, and malicious software is able to infect any machine connected to the internet. Encryption of the data is therefore essential to eliminate any risk of being compromised, and it is a vital safety measure. Data should be kept in a multitude of places, like a backup wallet. Various places means not just cloud storage, but also physical devices like CDs, external hard disks, USB drives, and so on. Daily or frequent backups guarantee that the data is constantly updated. When it comes to digital wallets, encryption plays a critical role; thus, encrypting the user's cryptocurrency wallet is a quite effective method to protect the money saved within that wallet. When someone attempts to enter the digital wallet, a password is requested. The password should not be forgotten or lost, as this would result in the loss of funds. The distinction between conventional money and cryptocurrency is that a user who loses a banking password may obtain a new one, whereas in cryptocurrency and blockchain the user bears complete responsibility. It is critical to combine symbols, numbers, and letters to set up a secure password.
### Cryptocurrency Wallet's Backup
A backup wallet is simply another term for transferring money to another location or producing a replica [18]. There should be two wallets during the backup process: the primary wallet and the backup wallet, which works offline [25]. To keep cryptocurrencies safe, encrypted local backups of the funds can be saved in a hardware wallet that is not connected to the internet (the backup wallet). The backup and recovery procedure of existing hardware wallets is a significant problem, since most of them employ a word list (mnemonics) to create a duplicate of the private keys and restore them when required. These words must be written on paper and kept secure by the user [24]. This approach pushes the problem outside the wallet by converting the private key's seed from digital to physical form. Rezaeighaleh et al. [24] proposed a novel framework for a backup and recovery process implementing the Elliptic-Curve Diffie-Hellman (ECDH) algorithm, which can be used easily since users no longer need to write down and save the word list.
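The key-agreement step underlying such a scheme can be sketched as follows. This is not the protocol of Rezaeighaleh et al.; it only illustrates how a primary and a backup device could derive a common wrapping key via ECDH, and it assumes the third-party Python `cryptography` package.

```python
# ECDH key agreement between a primary wallet and a backup device (sketch only).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

primary = ec.generate_private_key(ec.SECP256R1())   # key pair inside the primary wallet
backup = ec.generate_private_key(ec.SECP256R1())    # key pair inside the backup device

# Each side combines its own private key with the other's public key
shared_primary = primary.exchange(ec.ECDH(), backup.public_key())
shared_backup = backup.exchange(ec.ECDH(), primary.public_key())
assert shared_primary == shared_backup               # identical secret on both devices

# Derive a symmetric key that could wrap the seed transferred to the backup wallet
wrap_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"wallet-backup").derive(shared_primary)
```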
### Cryptocurrency Wallet's Encryption
Encrypting confidential information and digital money has always been a robust and reliable method of protection. In cryptocurrency, hashing is a way of transforming large amounts of data into a short, fixed-length digest. It is used in the Bitcoin network for encoding the wallet's address, encoding transactions between two wallets, and confirming the balance in a specific wallet. The Bitcoin network employs a secure hash algorithm such as SHA-256. One of the most important features of this technique is that modification of one bit of the incoming data will totally alter the output. This is related to the avalanche effect, which also reflects the behavior of traditional cryptographic algorithms such as the Data Encryption Standard (DES) and the Advanced Encryption Standard (AES), in which a small variation in the input causes the entire output to alter considerably [22]. A slight modification in the plaintext may result in a large shift in the ciphertext when using symmetric ciphers; otherwise, an error will appear while decrypting the encrypted text [23]. Public key cryptography is a technique for proving the identity of a person by using a pair of cryptographic keys (a private key and a public key). A digital signature is created by combining both keys. The blockchain wallet, which connects with the blockchain network, stores these private keys, public keys, and blockchain addresses, and keeps track of the coins that can be transmitted via digital signature [32].
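The avalanche behavior of SHA-256 is easy to demonstrate with the Python standard library; the two payload strings below are made-up examples.

```python
# Avalanche effect for SHA-256: a tiny input change flips roughly half the output bits.
import hashlib

msg1 = b"pay 1.00 BTC to address X"
msg2 = b"pay 1.01 BTC to address X"                   # one-character change

h1 = int.from_bytes(hashlib.sha256(msg1).digest(), "big")
h2 = int.from_bytes(hashlib.sha256(msg2).digest(), "big")

flipped = bin(h1 ^ h2).count("1")
print(f"{flipped} of 256 output bits differ")          # typically close to 128
```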
Since possession of the private key entails complete control over the related cryptocurrency account, managing the private key is critical for security. Before being saved in the wallet, the private key must be encrypted, and when used, it must be decrypted back into plaintext. The plaintext of the private key in Ethereum, for instance, is a 256-bit binary integer that is usually displayed encoded as a hexadecimal number.
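A minimal sketch of this encrypt-before-storage step is shown below. It is not any specific wallet's implementation; it assumes the third-party `cryptography` package, and the password and hex key are placeholders.

```python
# Encrypting a private key with a password-derived key before storing it (sketch only).
import base64, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def _derive_key(password: bytes, salt: bytes) -> bytes:
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(password))

def wrap_private_key(private_key_hex: str, password: bytes):
    salt = os.urandom(16)
    token = Fernet(_derive_key(password, salt)).encrypt(private_key_hex.encode())
    return salt, token                      # store both; neither reveals the key alone

def unwrap_private_key(salt: bytes, token: bytes, password: bytes) -> str:
    return Fernet(_derive_key(password, salt)).decrypt(token).decode()

salt, token = wrap_private_key("ab" * 32, b"correct horse battery staple")
assert unwrap_private_key(salt, token, b"correct horse battery staple") == "ab" * 32
```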
### Cold Wallets as Another Solution
Cold wallets are another solution for storing and protecting data. These are hardware wallets that do not demand an online connection and use a USB stick to transfer transactions and keys [24]. Two computers share parts of the same digital wallet while signing transactions offline. The first computer must be disconnected from all networks, as it is the only one with the entire digital wallet and permission to sign transactions. The other computer is connected to the internet and holds the digital wallet, which can only be used to observe and prepare unsigned transactions. Only a few steps are required to complete a transaction (a minimal conceptual sketch of this workflow is given after the steps below):
* **Step 1**: A new transaction should be created on the computer with an internet connection and saved to a USB device.
* **Step 2**: The transaction must be signed on the computer that does not have an internet connection.
* **Step 3**: The signed transaction should be broadcast from the computer that is connected to the network.
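The sketch below is purely conceptual: no real wallet or blockchain API is used, the function names and recipient address are illustrative, and the only point is to show which artifacts cross the USB stick between the online and offline machines.

```python
# Conceptual offline-signing workflow (no real blockchain interaction).
from ecdsa import SigningKey, SECP256k1

def create_unsigned_tx_online(recipient: str, amount: float) -> bytes:
    # Step 1 (online machine): build the transaction and copy it to the USB device
    return f"{amount:.8f} -> {recipient}".encode()

def sign_tx_offline(unsigned_tx: bytes, cold_key: SigningKey) -> bytes:
    # Step 2 (air-gapped machine): the private key never leaves this computer
    return cold_key.sign(unsigned_tx)

def broadcast_online(unsigned_tx: bytes, signature: bytes) -> None:
    # Step 3 (online machine): submit the signed transaction to the network
    print("broadcasting", unsigned_tx, signature.hex()[:16], "...")

cold_key = SigningKey.generate(curve=SECP256k1)   # lives only on the offline machine
tx = create_unsigned_tx_online("recipient-address", 0.1)
sig = sign_tx_offline(tx, cold_key)
broadcast_online(tx, sig)
```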
### Cryptocurrency Wallets and QR Code
Ghaffar Khan et al. [21] employed QR codes for cross-verification across hot and cold wallets to keep digital currencies. Cold wallets are safer against cyber
attacks due to their offline nature; this approach acts as an additional protection layer for Bitcoin transactions [14]. All cryptocurrency investors should understand the differences between hot and cold wallets in order to ensure safe and secure digital money transactions. Online wallets can send the funds and distribute them in a network only after confirming the private key of the cold wallet and scanning the QR code.
The version of the digital wallet application must be upgraded on a regular basis, since every time the program is updated, the users receive vital security upgrades. Updates may provide new capabilities for crypto wallets, as well as the prevention of a number of issues of varying severity. Multiple signatures can be used in crypto wallets, requiring several confirmations before a transaction can be funded. This form of security may be employed in larger businesses, like banks, with staff who have access to the organization's coffers. The multi-signature feature is also available in some web wallets such as BitGo and Coinbase [5].
## 3 Cryptocurrency Wallets' Security Objectives
Cryptocurrency wallets have security goals that are similar to those of other security structures, including availability, integrity, and confidentiality [17].
**Availability:** The purpose of availability is to guarantee that the legal use of data is not inhibited, which means that the information must be usable and available while demanded by a valid authority. It's critical for wallet applications to make sure that keys can be produced, saved, and retrieved appropriately. In addition, transactions should be properly signed, transmitted, and accessed in response to user queries [17].
The wallets can become unavailable if any failure, overload, or attack occurs. Important features of availability are fail-safe behavior, reliability, scalability, fault tolerance, up-time, and recoverability. The system can be called fail-safe if an attack or failure has minimal impact, such as data loss. Reliability is the probability of operating as expected if no outside source attempts to interrupt the system. A scalable system allows for increasing the number of available resources without modifying the system architecture. A system can be considered fault-tolerant if it is able to continue operating properly even with a decreased level of functionality. Up-time refers to the period of time that the system is actively working and accessible to users. Finally, the term "recoverability" refers to the ability of a system to recover its data in an acceptable time frame in the event of a breakdown [12].
**Integrity:** Integrity refers to the ability to prohibit illegitimate entities from altering data in order to ensure its completeness and correctness. When it comes to blockchain wallets, ensuring the integrity of the private key is critical. The user will lose control of his/her account if the private key kept in the wallet gets
modified or deleted in an illegal way, resulting in the loss of the account's assets. Blockchain has employed cryptographic methods like hashes and signatures to verify that transaction data has not been changed before being transmitted to the blockchain. The integrity feature, on the other hand, is critical for a newly launched transaction: if the transaction's data has been altered before being signed by the user with the private key, the transaction will still be validated by the blockchain system, since it carries the signature of the legal owner. It is also possible to tamper with historical transactions once they have been retrieved from the blockchain system.
**Confidentiality:** The goal of confidentiality is to keep sensitive information safe from unwanted access. A digital currency account's private key grants complete control over the account and any digital assets held within it. As a result, the wallet's primary security feature is to guarantee that the private key is not accessible in an illegal manner. Because all the information is publicly available on the blockchain, transaction information is not assumed to be confidential.
## 4 Cryptocurrency Wallets' Adversary Model
Various sorts of digital money wallets have different adversary models like the application-oriented adversary model and physical access adversary model [25]. In this section, the adversary model for cryptocurrency wallets based on software has been discussed. The purpose of the adversary is to compromise the availability, integrity, or confidentiality of the wallet's data. This involves tampering with earlier transactions, preventing the initiation of new transactions, accessing the private key, manipulating newly launched transactions, refusing transaction information queries, etc [17].
The attacker lacks private information specific to the target wallet's owner, such as the list of wallet transaction passwords or the private key of the user's account. The attacker, however, has the ability to install and execute any program on the same system on which the wallet operates, and all the permissions requested by the installed program are assumed to be granted. Any option on the device where the wallet operates can be changed by the attacker. The attacker can also execute any program on the user's other devices that utilize the wallet. The wallet's communication can be listened to and modified by the adversary, even without access to the encrypted traffic's key. The servers connected to the wallets can be attacked by the adversary, but the blockchain network cannot be controlled by attackers.
The adversary approach described above is realistic since the users might be persuaded to install a new program and then provide it with all the necessary permissions. The program can imitate the appearance of a standard program. Furthermore, tactics like accessibility services, USB debugging, as well as other
smartphone functions might provide attackers with extra possibilities to exploit [17].
## 5 Vulnerabilities in Cryptocurrency Wallets
Transaction management and private key management are two of the most important functionalities of cryptocurrency wallets. Transaction management comprises sending and gathering tokens, as well as querying balances and transactions, while key management covers creating, saving, importing, and exporting a private key; however, if these capabilities are used incorrectly, attack points may be introduced into the attack surfaces. Furthermore, because an operating system (OS) hosts the digital wallet, an attacker might be able to exploit the OS's properties, posing a danger to the digital wallet's security [17].
The attack surface from the perspective of the cryptocurrency wallet and its underlying operating system is discussed in the following.
### Cryptocurrency Wallets' Attack Surface
**Transaction Management:** When a user intends to withdraw money from an account, the wallet creates a transaction and signs it using the user's private key. Then, it sends the signed transaction to the blockchain system for confirmation in order to complete the operation. When a user needs to receive funds, they must present the address of their account to the payer, which might specify the currency and amount.
Users can access the related account balances and account transaction logs using transaction records of the wallet application and balance inquiry services. This approach may need a connection to the server of the wallet devoted to the service, instead of a blockchain network, because certain blockchain systems do not support direct queries of this information.
When sending or receiving money, information about the transaction provided by the user or shown by the wallet might be altered, causing a security risk and potentially resulting in the user's money being moved to the account of the adversary. If the user's password input screen and keyboard are observed during the money transfer, the encoded password might be stolen, which violates confidentiality. Diao et al. [13] derived the unlock pattern of the user and the status of the foreground program without any authorization, revealing the severity of security weaknesses in the transmission procedure. If an intruder can disrupt the money transfer or the query of balances or transactions by blocking the link between the wallet and its server or the blockchain network, this poses a vulnerability to availability and may result in drastic actions, such as the user extracting the private key to regain administration of the account, resulting in further impairment. While looking up payments and account balances, an attacker also might deceive the users by falsifying the information transmitted between the wallet server and client, the data displayed on the wallet, or data kept
on the wallet's server. In this case, the wallet's integrity will be compromised, which results in a display of incorrect transaction registers or incorrect balances on the wallet, consequently deceiving the users [17].
**Key Management:** If the user has not created a cryptocurrency account, the wallet will randomly produce a pair of private and public keys for a new account on the local device. If the user already owns an account, they can import the account's private key into the wallet, which enables control of the account from the wallet. Then, the created or imported private key gets encrypted by the digital wallet using the user's encryption password. Users might lose full control of their account forever if they lose the private key, which leads to the loss of their funds. As a result, the private key should be exported regularly for backup purposes.
If the random seed employed for producing a private key can be anticipated or retrieved during the creation process, the created private key can potentially be compromised. If the saved private key gets decrypted or retrieved in plaintext during the storage process, it can be stolen and exploited, putting confidentiality at risk. Another way of violating confidentiality occurs when the attacker observes the input of the user and obtains the key while the user is manually typing it or copying and pasting it. Moreover, the wallet may show information related to the key on its screen while importing and exporting keys, so that an attacker could watch the data in order to obtain the key, endangering confidentiality. Furthermore, when the password for key encryption is configured, an attacker can obtain it by eavesdropping on the user's input, posing a danger to confidentiality. On the other hand, the account's integrity and availability may be at risk if a third party can manipulate or remove the saved key [17].
In the following section, some of the security threats against mobile wallets are discussed.
### Digital Wallet's Common Threats
**Inappropriate Usage of Platform:** Android and Apple iOS, for example, supply a set of functions of the host operating system. Abusing these services may cause security risks. All of the host system's services come with implementation rules, and breaking these instructions is the most typical manner of introducing a recognized threat; for instance, using app local storage instead of the iOS Keychain to store confidential information in iOS apps. The data stored in the app's local storage may be exposed to other parts of the program, but the data kept in the Keychain is protected from illegal access by the operating system [26].
**Unsafe Data Storage:** Unintended information disclosures and risky data storage fall under this category. If an attacker obtains access to the system, data saved locally in SQL databases and log files may be at risk. External storage of crucial data is recognized as unsafe and can be misused. The detection of
unintentional data leaks is not as easy as the detection of intentional leaks.
Data leakage might be caused by flaws in rooted devices, hardware, or frameworks. Data leakage vulnerabilities can be exploited in applications that lack sufficient monitoring of data leaks.
**Inadequate Cryptography:** Cryptographic functions are frequently used in programs that require encryption. Inadequate cryptography can be exploited by two sorts of threats such as weakness in the encryption process and damaged cryptography functionalities. The first is gaining access to confidential information by exploiting a flaw in the construction of the encryption/decryption procedure. The second risk derives from the use of compromised functions of cryptography.
**Reverse engineering:** In addition to data, reverse engineering targets encryption keys and hardcoded passwords. This approach entails extracting source code from a digital wallet as well as numerous resources from an APK file. These attacks can be accomplished only by hackers who have a deep knowledge of digital wallets [16].
**Public Wi-Fi:** Using public Wi-Fi to conduct digital wallet money transfers can allow third parties to disrupt communication and possibly the payment via MITMF, Wi-Fi sniffing, and DNS spoofing [20][11]. For instance, an attacker could steal the sensitive information of users who are connected to public Wi-Fi, such as in cafes.
**Social engineering:** Instead of breaching systems or employing practical hacking strategies, social engineering is a technique for gaining control over a computer or over users' information by exploiting human psychology. Attackers might sell the information on black markets or use it to make illegal payments. In addition, they can use the obtained information to impersonate their victims.
**Phishing attacks:** This kind of attack is one of the most frequent: a phishing link is a type of fraudulent access point that attackers exploit to obtain critical information and private data from users, such as credit card numbers, often delivered through fake lottery offers or SMS. In phishing attacks, attackers try to acquire the login information and personal information of the user, putting digital wallet accounts at risk of theft. For example, the Singapore Police Force (SPF) warned people about the growth of phishing attacks in recent months, having observed about 1,200 cases from December 2021 to January 2022. In most cases, victims were contacted via messaging applications like WhatsApp. During the conversation, they were asked to provide some private information in the belief that the caller was from one of the government agencies [8].
## 6 Conclusion
A cryptocurrency wallet is a software application or a hardware device that provides users the possibility to execute several kinds of transactions. Users aiming to buy a digital wallet should recognize their needs and objectives before choosing which type to obtain. Data organization as well as speed, security, and the possibility to execute transactions between two clients are pushing digital wallets into more demand. As these wallets become more popular, their security and safety become crucial [19]. In this study, we have seen that creating a backup of the private key and also encrypting the digital money using hash functions help diminish privacy and security threats as well as system errors. Employing QR codes for cross-verifying cold wallets is another technique for keeping digital currencies safe. The security of digital wallets has the same objectives as other security systems, including availability, integrity, and confidentiality. Moreover, the adversary model for cryptocurrency wallets has been discussed in this study, where the adversary or attacker aims to violate the security objectives of the digital wallets. Transaction management and key management, as the two principal features of crypto wallets, provide several functionalities such as sending and collecting tokens and creating and saving the private key. Exploiting these capabilities allows attackers to introduce vulnerabilities into blockchain-based wallets. It is critical to reinforce cryptocurrency wallets with the system's updated security standards, avoid infection of the application supply chain, and mitigate repackaging threats in order to ensure wallet security.
|
2307.16009 | Comparative $^{181}$Ta-NQR Study of Weyl Monopnictides TaAs and TaP:
Relevance of Weyl Fermion Excitations | Based on our first detailed $^{181}$Ta nuclear quadrupole resonance (NQR)
studies from 2017 on the Weyl semimetal TaP, we now extended our NQR studies to
another Ta-based monopnictide TaAs. In the present work, we have determined the
temperature-dependent $^{181}$Ta-NQR spectra, the spin-lattice relaxation time
$T_{1}$, and the spin-spin relaxation time $T_{2}$. We found the following
characteristic features that showed great contrast to what was found in TaP:
(1) The quadrupole coupling constant and asymmetry parameter of EFG, extracted
from three NQR frequencies, have a strong temperature dependence above $\sim$80
K that cannot be explained by the density functional theory calculation
incorporating the thermal expansion of the lattice. (2) The temperature
dependence of the spin-lattice relaxation rate, $1/T_{1} T$, shows a $T^{4}$
power law behavior above $\sim$30 K. This is a great contrast with the $1/T_{1}
T \propto T^{2}$ behavior found in TaP, which was ascribed to the magnetic
excitations at the Weyl nodes with a temperature-dependent orbital hyperfine
coupling. (3) Regarding the nuclear spin-spin interaction, we found the
spin-echo signal decays with the pulse separation simply by a Lorentzian
function in TaAs, but we have observed spin-echo modulations in TaP that is
most likely due to the indirect nuclear spin-spin coupling via virtually
excited Weyl fermions. From our experimental findings, we conclude that the
present NQR results do not show dominant contributions from Weyl fermion
excitations in TaAs. | Tetsuro Kubo, Hiroshi Yasuoka, Balázs Dóra, Deepa Kasinathan, Yurii Prots, Helge Rosner, Takuto Fujii, Marcus Schmidt, Michael Baenitz | 2023-07-29T15:37:35Z | http://arxiv.org/abs/2307.16009v1 | Comparative \({}^{181}\)Ta-NQR Study of Weyl Monopnictides TaAs and TaP: Relevance of Weyl Fermion Excitations
###### Abstract
Based on our first detailed \({}^{181}\)Ta nuclear quadrupole resonance (NQR) studies from 2017 on the Weyl semimetal TaP, we now extended our NQR studies to another Ta-based monopnictide TaAs. In the present work, we have determined the temperature-dependent \({}^{181}\)Ta-NQR spectra, the spin-lattice relaxation time \(T_{1}\), and the spin-spin relaxation time \(T_{2}\). We found the following characteristic features that showed great contrast to what was found in TaP: (1) The quadrupole coupling constant and asymmetry parameter of EFG, extracted from three NQR frequencies, have a strong temperature dependence above \(\sim\)80 K that cannot be explained by the density functional theory calculation incorporating the thermal expansion of the lattice. (2) The temperature dependence of the spin-lattice relaxation rate, \(1/T_{1}T\), shows a \(T^{4}\) power law behavior above \(\sim\)30 K. This is a great contrast with the \(1/T_{1}T\propto T^{2}\) behavior found in TaP, which was ascribed to the magnetic excitations at the Weyl nodes with a temperature-dependent orbital hyperfine coupling. (3) Regarding the nuclear spin-spin interaction, we found the spin-echo signal decays with the pulse separation simply by a Lorentzian function in TaAs, but we have observed spin-echo modulations in TaP that is most likely due to the indirect nuclear spin-spin coupling via virtually excited Weyl fermions. From our experimental findings, we conclude that the present NQR results do not show dominant contributions from Weyl fermion excitations in TaAs.
+
Footnote †: Present address: MIP Management- und IT-Beratung GmbH. Film- und Medienzentrum Konigsallee 49, 71638 Ludwigsburg, Germany
+
Footnote †: Present address: University of Hyogo, Graduate School of Material Science, Hyogo 678-1297, Japan
## I Introduction
It is well known by now that topological properties of materials open a new world in condensed matter physics. In particular, topological semimetals, such as Dirac, Weyl, or line-node semimetals, are gapless states of matter characterized by their nodal band structures and surface states [1]. Weyl semimetals are realized in systems without spatial-inversion or time-reversal symmetry, especially in the presence of strong spin-orbit coupling. Quite interesting new phenomena such as ultra-high mobility [2], surface Fermi arcs [3], and the chiral magnetic effect [4] are expected to emerge in these materials. Furthermore, topological semimetals host new types of quasiparticles, Dirac and Weyl fermions, whose excitations exhibit fascinating properties that have been the subject of many theoretical and experimental investigations. The first target materials in this field were the monopnictides \(TMPn\) (\(TM=\) Nb, Ta; \(Pn=\) P, As). Visualization of the nodal structure in topological semimetals has been realized by ARPES (angle-resolved photoemission spectroscopy) through the topologically protected surface states [5]. Indirectly, the large negative magnetoresistance [6], optical and resistivity measurements [7], and chiral anomalies [8] are believed to be associated with these quasiparticles. In addition to surface-sensitive probes like ARPES and electron spin resonance (ESR), microscopic measurements that enable us to study the static and dynamical properties of the quasiparticles in the bulk are highly desirable. Along this line, we succeeded for the first time in exploring the Weyl fermion excitations in TaP through the temperature dependence of the nuclear quadrupole resonance (NQR) relaxation rate, \(1/T_{1}T\) [9]. There, in addition to the \(T^{4}\) power law of \(1/T_{1}T\) associated with the linear dispersion of the Weyl nodes near the Fermi level, we pointed out the importance of fluctuations in Dirac/Weyl-type orbital currents for the relaxation channel through the characteristic temperature dependence of the orbital hyperfine interaction [10]. This scenario is explicitly supported by theory, and the overall temperature dependence \(1/T_{1}T\propto T^{2}\) has been properly interpreted [11].
In this paper we present an extended study of the sister compound, TaAs, using the same \({}^{181}\)Ta-NQR technique. TaAs has been claimed to be a typical example of the Weyl nodal semimetal from band structure calculations. Regarding the nodal structure, both compounds have Weyl points near the Fermi level, \(E_{\rm F}\), 14 meV below \(E_{\rm F}\) for the W2 Weyl points in TaAs [12], while 13 meV above \(E_{\rm F}\) in TaP [13].
In the following, we will first briefly describe the experimental technique, then discuss the temperature dependence of NQR parameters, \(\nu_{\rm Q}\) and \(\eta\). This will be followed by the temperature dependence of nuclear magnetic relaxation time, \(T_{1}\), and a comparison of spin-echo
decay curves of TaAs and TaP, along with their interpretations.
## II Experimental
Basically, we followed the experimental procedure of previous \({}^{181}\)Ta-NQR experiments and analysis [9]. Here, we briefly describe the essence of them.
Samples used in the present NQR experiments were prepared by the chemical transport reaction (CTR) method. Starting from microcrystalline powder synthesized by reacting 3N (99.9%) tantalum and 6N (99.9999%) arsenic, single crystals of TaAs were grown in a temperature gradient from 900 \({}^{\circ}\)C (source) to 1000 \({}^{\circ}\)C (sink), with a transport agent (iodine) concentration of 13 mg/cm\({}^{3}\). The crystals obtained by the CTR were characterized by electron-probe microanalysis and powder X-ray diffraction (XRD) to confirm the single-phase, tetragonal \(I4_{1}md\) (#109) structure.
Temperature-dependent powder XRD was performed at the beamline ID22 at the European Synchrotron Radiation Facility (ESRF) in Grenoble in a temperature range between 80 and 300 K with a wavelength \(\lambda=0.39997\) Å.
The NQR experiments were mostly carried out with high-quality polycrystals prepared by powdering several single crystals. The NQR spectra and the nuclear magnetic relaxation times were measured using a standard pulsed (spin-echo) NMR apparatus (Apollo, TecMag). The \({}^{181}\)Ta-NQR spectra were taken using the frequency sweep method under zero applied magnetic field. In order to avoid any artificial broadening, fast Fourier-transformed (FFT) spin-echo signals were summed across the spectrum (FFT-summation), or the real part of spin-echoes was integrated after appropriate phase adjustments.
The quadrupole Hamiltonian can be written, using a set of principal axes [14], as
\[\mathcal{H}_{\mathrm{Q}}=\frac{e^{2}qQ}{4I(2I-1)}\left[3I_{Z}^{2}-I(I+1)+\frac{\eta}{2}\left(I_{+}^{2}+I_{-}^{2}\right)\right], \tag{1}\]
where \(eq\) is the largest component of the electric field gradient (EFG) tensor, \(V_{ZZ}\), and \(eQ\) the nuclear quadrupole moment. The EFG tensor is generally defined as \(|V_{XX}|\leq|V_{YY}|\leq|V_{ZZ}|\) with the asymmetry parameter, \(\eta\equiv(V_{XX}-V_{YY})/V_{ZZ}\). The quadrupole-split nuclear energy levels, \(E_{m}\), and the resultant transition frequencies can be readily calculated numerically by diagonalizing Eq. (1). For \(\eta=0\), the energy levels can simply be expressed as,
\[E_{m}=\frac{1}{6}h\nu_{\mathrm{Q}}\left[3m^{2}-I(I+1)\right],\ \nu_{\mathrm{Q}}=\frac{3e^{2}qQ}{2I(2I-1)h}, \tag{2}\]
where \(\nu_{\mathrm{Q}}\) is the quadrupole coupling constant. The NQR occurs for the transition between two levels \(m\) and \(m+1\), and the resonance condition can be written as \(f_{\mathrm{Q}}=\nu_{\mathrm{Q}}(2|m|+1)/2\). Hence, three NQR lines for \({}^{181}\)Ta with \(I=7/2\) are expected at \(1\nu_{\mathrm{Q}}\), \(2\nu_{\mathrm{Q}}\), and \(3\nu_{\mathrm{Q}}\) with equal spacing. For the finite \(\eta\) values, the calculated NQR frequencies with \(\nu_{\mathrm{Q}}=1\) MHz for respective transitions are shown in Fig. 1.
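For illustration, the \(\eta\) dependence of the three transition frequencies can be reproduced numerically by diagonalizing Eq. (1); the short sketch below is not the code used in this work, and the printed example simply uses \(\nu_{\mathrm{Q}}=1\) MHz as in Fig. 1 together with the TaAs asymmetry parameter quoted below.

```python
# Sketch: NQR frequencies for I = 7/2 from the quadrupole Hamiltonian of Eq. (1).
import numpy as np

I = 3.5                                    # 181Ta nuclear spin
m = np.arange(I, -I - 1, -1)               # magnetic quantum numbers 7/2 ... -7/2
Iz = np.diag(m)
# ladder operator I+: <m+1|I+|m> = sqrt(I(I+1) - m(m+1))
Ip = np.diag(np.sqrt(I * (I + 1) - m[1:] * (m[1:] + 1)), k=1)
Im = Ip.T

def nqr_frequencies(nu_q, eta):
    """Three NQR frequencies (same units as nu_q) for a given coupling and asymmetry."""
    # Eq. (1) divided by h, using e^2 qQ / (4I(2I-1)) = h * nu_q / 6
    H = (nu_q / 6.0) * (3 * Iz @ Iz - I * (I + 1) * np.eye(8)
                        + 0.5 * eta * (Ip @ Ip + Im @ Im))
    levels = np.sort(np.linalg.eigvalsh(H))[::2]   # each level is doubly degenerate
    return np.diff(levels)                          # 1nu_Q-, 2nu_Q-, 3nu_Q-like lines

print(nqr_frequencies(1.0, 0.0))     # -> [1. 2. 3.] for eta = 0
print(nqr_frequencies(1.0, 0.558))   # TaAs asymmetry parameter on the Fig. 1 scale
```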
Since the nuclear spin-lattice (longitudinal) relaxation time, \(T_{1}\), is extremely long in TaAs at low temperatures (typically several hundred seconds), we mostly employed the progressive saturation method [15] to measure the recovery of nuclear magnetization below 130 K, as was in the case of TaP [9]. Above 130 K, a conventional inversion recovery method is employed. At 130 K, both methods yield essentially the same \(T_{1}\) value.
The recovery of nuclear magnetization was fitted to the theoretical function for the magnetic relaxation in NQR lines for \({}^{181}\)Ta (\(I=7/2\)) nucleus with finite \(\eta=0.558\)[16],
\[M_{n}(t)=M_{0}\left[1-\left\{Q_{1}\exp(-3.03t/T_{1})+Q_{2}\exp(-8.260t/T_{1})+Q_{3}\exp(-17.074t/T_{1})\right\}\right], \tag{3}\]
where \(Q_{n}\) are constants depending on which NQR transition is excited. For the present \(T_{1}\) measurements, we typically used the \(2\nu_{\mathrm{Q}}\) line corresponding to the \(\pm 3/2\leftrightarrow\pm 5/2\) nuclear quadrupole transition. In this case, \(Q_{1}\), \(Q_{2}\), and \(Q_{3}\) are 0.076, 0.021, and 0.903, respectively.
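A hedged sketch of the corresponding least-squares fit is shown below; the time and magnetization arrays are synthetic placeholders rather than the measured recovery data.

```python
# Fit of the spin-echo recovery to the three-exponential form of Eq. (3), 2nu_Q line.
import numpy as np
from scipy.optimize import curve_fit

Q1, Q2, Q3 = 0.076, 0.021, 0.903              # coefficients for the 2nu_Q transition

def recovery(t, M0, T1):
    return M0 * (1 - (Q1 * np.exp(-3.03 * t / T1)
                      + Q2 * np.exp(-8.260 * t / T1)
                      + Q3 * np.exp(-17.074 * t / T1)))

# placeholder "data": a noisy synthetic curve with T1 = 50 s
t_data = np.logspace(-1, 3, 25)
M_data = recovery(t_data, 1.0, 50.0) * (1 + 0.02 * np.random.randn(t_data.size))

(M0_fit, T1_fit), _ = curve_fit(recovery, t_data, M_data, p0=(1.0, 10.0))
print(f"T1 = {T1_fit:.1f} s")
```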
Figure 2 shows the recovery of nuclear magnetization measured by the progressive saturation method at (a) 4.2 K and (b) 100 K for the \({}^{181}\)Ta-NQR \(2\nu_{\mathrm{Q}}\)-line in TaAs. Solid lines are the least-squares fitting to Eq. (3). For both temperatures, experimental data are perfectly fitted by the theoretical curve, verifying that the nuclear relaxation is governed by magnetic fluctuations as in the case of TaP [9].
Figure 1: (Color online) EFG asymmetry parameter, \(\eta\), dependence of the NQR frequencies for \(I=7/2\). Here, the quadrupole coupling parameter, \(\nu_{\mathrm{Q}}\), is set to 1 MHz. The crossing point between the \(1\nu_{\mathrm{Q}}\) and \(2\nu_{\mathrm{Q}}\) lines at \(\eta=0.5855\) is called the “magic eta”. The \(\eta\) values determined from the experimental NQR frequencies for TaAs (\(\eta=0.558\)) and TaP (\(\eta=0.423\)) are shown by dashed arrows.
For the measurements of the nuclear spin-spin (transverse) relaxation time, \(T_{2}\), we simply measured the spin-echo amplitude, \(E\), as a function of the time between the first exciting pulse and the second refocusing pulse. The repetition time between spin-echo sequences was taken to be sufficiently longer than \(T_{1}\) (typically 8–10 times longer than the \(T_{1}\) value) to avoid a saturation effect.
In order to extract the EFG theoretically, we performed band structure calculations using the density functional theory (DFT) solid-state code FPLO [17]. We used the Perdew-Wang parametrization of the local density approximation (LDA) for the exchange-correlation functional [18; 19]. The strong spin-orbit coupling in TaAs is taken into account by performing full-relativistic calculations, wherein the Dirac Hamiltonian with a general potential is solved. The quadrupole coupling \(\nu_{\rm Q}\) can be obtained by the calculated EFG at the Ta nuclear site which is defined as the second partial derivative of the electrostatic potential \(V(\vec{r})\) at the position of the nucleus \(V_{ij}=(\partial_{i}\partial_{j}V(0)-\delta_{ij}\Delta V(0)/3)\).
## III Experimental results and discussion
In this section, we present the static and dynamic properties revealed by the temperature dependence of the EFG parameters and nuclear magnetic relaxation times with phenomenological discussions.
### NQR spectra and their temperature dependence
A typical example of the \({}^{181}\)Ta-NQR spectra in TaAs measured at 4.2 K is shown in Fig. 3(a) for the three NQR transitions. From the lowest frequency we label the lines as the \(1\nu_{\rm Q}\) (\(\pm 1/2\leftrightarrow\pm 3/2\)), \(2\nu_{\rm Q}\) (\(\pm 3/2\leftrightarrow\pm 5/2\)), and \(3\nu_{\rm Q}\) (\(\pm 5/2\leftrightarrow\pm 7/2\)) lines. For comparison, we also depict a similar spectrum of TaP in Fig. 3(b). It is immediately seen that the frequency difference between the \(1\nu_{\rm Q}\) and \(2\nu_{\rm Q}\) lines is smaller for TaAs than for TaP. This means that the \(\eta\) value is larger for TaAs than for TaP, as indicated in Fig. 1. The EFG parameters, \(\nu_{\rm Q}\) and \(\eta\), were extracted in the same manner as described in Ref. [9] and are shown in Table 1, together with the values for TaP and NbP. For NbP, \(\nu_{\rm Q}\) and \(\eta\) are extracted from the single-crystal \({}^{93}\)Nb-NMR spectrum at \(\sim\)6.3 T, 4.2 K. The calculated values of \(\nu_{\rm Q}\) agree with the experimental values for all compounds within 4%.
The temperature dependence of NQR frequencies is plotted in Fig. 4. From the observed NQR frequencies we can extract \(\nu_{\rm Q}\) and \(\eta\) by diagonalizing Eq. (1) and those temperature dependences, \(\nu_{\rm Q}(T)\) and \(\eta(T)\), are shown in Fig. 5(a) and (b) for TaAs (open triangles) and TaP (open circles), respectively. In general, \(\nu_{\rm Q}\) is expected to decrease with increasing temperature due to the thermal expansion of the lattice, and is often discussed by an empirical formula, \(\nu_{\rm Q}(T)=\nu_{\rm Q0}(1-\alpha T^{3/2})\). Actually, \(\nu_{\rm Q}(T)\) is well fitted to this empirical formula below \(\sim\)100 K for both TaAs and TaP, but the experimental data fall more rapidly above \(\sim\)100 K.
In particular, for TaAs, we have performed the DFT calculation of the EFG parameters using the lattice parameters measured by synchrotron XRD at selected temperatures above 80 K, shown in Table 2. As the temperature decreases, the lattice parameters exhibit a slight, monotonous reduction while maintaining a constant \(c/a\) ratio, indicative of an isotropic contraction of the lattice.
Figure 3: (Color online) A \({}^{181}\)Ta-NQR spectrum in (a) TaAs in comparison with that in (b) TaP. Both spectra were taken at 4.2 K and curves are Lorentzian fit of the line profiles.
Figure 2: (Color online) The recovery curves of nuclear magnetization measured by the spin-echo amplitude for the \(2\nu_{\rm Q}\) line at (a) 4.2 and (b) 100 K. Solid lines are the least-squares fit to Eq. (3) for magnetic fluctuations. For details, see text.
The calculated \(\nu_{\rm Q}(T)\) and \(\eta\) are shown by stars in Fig. 5(a). As can be seen from the figure, the thermal expansion of the lattice cannot account for the temperature dependence of the observed NQR parameters.
It should be noted here that the fractional decrease of \(\nu_{\rm Q}(T)\) and \(\eta(T)\) is very similar in the two compounds, as shown by scaling the TaAs results (open diamonds) to TaP by factors of 0.95 and 0.76 for \(\nu_{\rm Q}(T)\) and \(\eta(T)\), respectively.
The EFG generally has contributions from the lattice symmetry and from the asymmetric charge distribution around the nucleus in question,
\[eq(T)=eq_{\rm lattice}(T)+eq_{\rm el}(T). \tag{4}\]
The first term was calculated directly from the measured thermal lattice expansion and agrees well with the experimental values below 80 K. On the other hand, the experimental data deviate from the first term, showing an almost linear dependence on temperature above 80 K. This fact suggests that the second term becomes dominant above 80 K, indicating that an unusual electronic contribution to the EFG is induced for a reason that is yet to be identified.
### Nuclear spin-lattice relaxation
The temperature dependence of the \({}^{181}\)Ta nuclear spin-lattice relaxation rate divided by \(T\), \(1/T_{1}T\), in TaAs is shown in Fig. 6 together with the previous data for TaP [9]. We also reproduce the \({}^{75}\)As-NQR data taken by Wang _et al._ [20]. There is a general tendency for \(1/T_{1}T\) to cross over from a high-temperature power-law-type relaxation process to a temperature-dependent Korringa-type process around \(T^{*}\sim 20\)–\(40\) K.
Quite generally, \(1/T_{1}T\) can be expressed by using the wave vector (\(q\)) and frequency (\(\omega\)) dependent magnetic susceptibility, \(\chi(q,\omega)\), characterizing the magnetic excitations in a system as,
\[\frac{1}{T_{1}T}=\frac{2\gamma_{\rm N}^{2}k_{\rm B}}{g^{2}\mu_{\rm B}^{2}}\sum _{q}A_{q}^{2}\frac{\chi_{\perp}^{\prime\prime}(q,\omega_{\rm N})}{\omega_{ \rm N}}, \tag{5}\]
where \(\gamma_{\rm N}\) is the nuclear gyromagnetic ratio, \(k_{\rm B}\) the Boltzmann constant, \(g\) the electron \(g\)-factor, \(\mu_{\rm B}\) the Bohr magneton, \(A_{q}\) the \(q\)-dependent hyperfine coupling constant, \(\chi_{\perp}^{\prime\prime}(q,\omega)\) the transverse component of imaginary part of
\begin{table}
\begin{tabular}{c c c c c} \hline \(T(K)\) & \(a\) (Å) & \(c\) (Å) & \(c/a\) & \(V\) (Å\({}^{3}\)) \\ \hline
300 & 3.43752 & 11.64762 & 3.38838 & 137.63 \\
260 & 3.43642 & 11.64374 & 3.38833 & 137.50 \\
220 & 3.43563 & 11.64089 & 3.38828 & 137.40 \\
180 & 3.43492 & 11.63826 & 3.38822 & 137.32 \\
140 & 3.43437 & 11.63627 & 3.38818 & 137.25 \\
100 & 3.43375 & 11.63397 & 3.38812 & 137.17 \\
80 & 3.43349 & 11.63299 & 3.38809 & 137.14 \\ \hline \end{tabular}
\end{table}
Table 2: Lattice parameters measured by synchrotron XRD for TaAs at selected temperatures above 80 K.
Figure 4: (Color online) Temperature dependence of the NQR peak frequencies in (a) TaAs in comparison with those in (b) TaP. Here \(1\nu_{\rm Q}\), \(2\nu_{\rm Q}\), and \(3\nu_{\rm Q}\) line correspond to \(\pm 1/2\leftrightarrow\pm 3/2\), \(\pm 3/2\leftrightarrow\pm 5/2\), and \(\pm 5/2\leftrightarrow\pm 7/2\) quadrupole transitions, respectively.
Figure 5: (Color online) Temperature dependences of quadrupole coupling parameter, \(\nu_{\rm Q}\), and asymmetry parameter of the EFG, \(\eta\), for TaAs and TaP are shown in panels (a) and (b), respectively. Both values were extracted from experimental data shown in Fig. 4 by diagonalizing Eq. (1). The dashed curves with open stars are DFT calculated values of \(\nu_{\rm Q}\) and \(\eta\) using the lattice parameters of respective temperatures shown in Table 2. The calculated \(\nu_{\rm Q}(T)\) follows the empirical form with \(\nu_{\rm Q}(T)=\nu_{\rm Q0}(1-\alpha T^{3/2})\), \(\nu_{\rm Q0}=20.24\) MHz, \(\alpha=3.92\times 10^{-7}\) K\({}^{-3/2}\).
\(\chi(q,\omega)\), and \(\omega_{\rm N}\) the NQR frequency. Since at present we do not have any plausible microscopic theory to calculate \(\chi(q,\omega)\) in multiband systems like TaAs, we have adopted the theoretical \(1/T_{1}T\) for non-interacting itinerant electrons based on the band structure calculation with random phase approximation (RPA). Also, since we do not have the information about \(A_{q}\), we cannot perform a quantitative analysis of \(1/T_{1}T\). So, we try to interpret the data qualitatively only using the shape of temperature dependence. In what follows we will discuss it by setting three cases: [Case-1] simple calculation from the density of states (DOS), [Case-2] in-gap states near the Fermi level, and [Case-3] Weyl fermion excitations.
#### iii.2.1 Case-1: Simple \(1/T_{1}T(T)\) from DOS
Here, we have simply adopted the theoretical \(1/T_{1}T\) for non-interacting itinerant electrons based on the band structure calculation with RPA. For such a system, \(1/T_{1}T\) may be expressed using the density of states near the Fermi level as [14; 21],
\[\frac{1}{T_{1}T}\propto\frac{A_{\rm hf}^{2}}{T}\int f(E-\mu_{\rm c})[1-f(E-\mu_ {\rm c})]D(E)^{2}{\rm d}E, \tag{6}\]
where \(f(E)\) is the Fermi distribution function, \(D(E)\) the energy-dependent DOS, and \(\mu_{\rm c}\) the temperature-dependent chemical potential. If \(A_{\rm hf}\) does not change with temperature and is set to one, the calculation of \(1/T_{1}T\) is straightforward from the calculated \(D(E)\) based on the band structure shown in Fig. 7(a) for TaAs and TaP. The calculated \(1/T_{1}T\) is shown by the curved lines in Fig. 7(b), together with the experimental \(1/T_{1}T\) data of TaAs and TaP. Clearly, aside from the absolute value of \(1/T_{1}T\), the experimental and calculated temperature dependences do not match each other. This shows that a simple Korringa-type relaxation process cannot account for the experimental results.
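As a rough numerical illustration of Eq. (6) (a sketch, not the analysis code used in this work), the integral can be evaluated directly once a density of states \(D(E)\) and a chemical potential \(\mu_{\rm c}\) are supplied; the DOS below is a simple placeholder rather than the DFT result of Fig. 7(a).

```python
import numpy as np

kB = 8.617e-5  # Boltzmann constant, eV/K

def fermi(E, mu, T):
    """Fermi-Dirac distribution f(E - mu) at temperature T (K); energies in eV."""
    return 1.0 / (np.exp((E - mu) / (kB * T)) + 1.0)

def inv_T1T(T, energies, dos, mu=0.0, A_hf=1.0):
    """Eq. (6): 1/T1T ~ (A_hf^2 / T) * integral of f(1-f) D(E)^2 dE (arbitrary units)."""
    f = fermi(energies, mu, T)
    return A_hf**2 / T * np.trapz(f * (1.0 - f) * dos**2, energies)

# Placeholder DOS: a small residual value plus a linear (semimetal-like) increase away from E = 0.
E = np.linspace(-0.2, 0.2, 4001)     # eV
D = 0.1 + np.abs(E)                  # arbitrary units
for T in (4, 10, 30, 100, 300):
    print(T, inv_T1T(T, E, D))
```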
#### iii.2.2 Case-2: \(1/T_{1}T\) from in-gap states
Simple \(D(E)\) calculations predict a fairly high energy scale for the excitation of the valence band. To reconcile this with the experimental data, here we assume the existence of rather narrow bands crossing the Fermi energy, shown in the inset of Fig. 8. Following the common phenomenological treatment of \(1/T_{1}T\) in the high-temperature region, an activated-type temperature dependence of \(1/T_{1}T\) is assumed. Including the low-temperature upturn, the data have been fitted to the following empirical form,
\[\frac{1}{T_{1}T}=\alpha T^{-\beta}+\left(\frac{1}{T_{1}T}\right)_{0}\exp\left( -\frac{\Delta}{k_{\rm B}T}\right), \tag{7}\]
Figure 6: (Color online) Temperature dependence of \(1/T_{1}T\) measured for the \(2\nu_{\rm Q}\) line in TaAs is shown by filled circles. For comparison, the similar data taken by \({}^{75}\)As-NQR are shown in cross squares [20]. Also, data in TaP are shown in open circles where the \(T^{2}\) temperature dependence of \(1/T_{1}T\) which is characteristic to the Weyl fermion excitations is seen above \(\sim\)30 K [9].
Figure 7: (Color online) (a) The calculated \(D(E)\) curves within \(\Delta E\sim\pm 1000\) K based on the band structure are shown for TaAs and TaP. (b) Temperature dependence of \(1/T_{1}T\) calculated from DOS (solid lines) are compared with the experimental data for TaAs (filled circles) and TaP (open circles). Here, the hyperfine coupling constant \(A_{\rm hf}\) is set to one and is assumed to be temperature-independent.
where the first term is associated with local-moment-type fluctuations of the in-gap state, and the second term is due to an activation process at high temperatures with energy \(\Delta\). The solid line is a least-squares fit of the data to Eq. (7). We found \(\alpha=8\times 10^{-4}\) sec. K and \(\beta=0.55\) for the first term, and \((1/T_{1}T)_{0}=0.18\) sec. K and \(\Delta/k_{\rm B}=283\) K (24.4 meV) for the second term. The energy scale of the activation process found in TaAs is nearly one order of magnitude larger than those observed in similar materials, SmB\({}_{6}\) (\(\Delta=4.3\) meV) [22], FeGa\({}_{3}\) (\(\Delta=1.1\) meV) [23], and PuB\({}_{4}\) (\(\Delta=1.8\) meV) [24]. There is also a common feature of \(1/T_{1}T\) in the low-temperature region, where the exponential decrease of \(1/T_{1}T\) with decreasing temperature crosses over to other excitations which give rise to an upturn of \(1/T_{1}T\). Within the present model, the low-temperature behavior must be due to local-moment-type fluctuations in the occupied narrow band. If this is the case, \(1/T_{1}T\) should be treated by the exchange-narrowing theory and \(\beta\) should be one. The fitted value, \(\beta=0.55\), may indicate that the assumed in-gap state is rather spatially extended, so that the exchange-narrowing theory may not be applicable. The origin of the in-gap state is not clear, but it may be associated with Anderson localization [25] or impurity states.
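A minimal sketch of such a least-squares fit to Eq. (7) is given below; the data array is synthetic, generated from the quoted best-fit parameters, and only illustrates how the fit can be set up with SciPy.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(T, alpha, beta, c0, delta):
    """Eq. (7): 1/T1T = alpha*T^-beta + (1/T1T)_0 * exp(-delta/T), with delta = Delta/kB in K."""
    return alpha * T**(-beta) + c0 * np.exp(-delta / T)

# Synthetic data (T in K) generated from the quoted parameters plus 5% noise.
T_data = np.array([4., 10., 20., 50., 100., 150., 200., 250., 300.])
y_data = model(T_data, 8e-4, 0.55, 0.18, 283.0) * (1 + 0.05 * np.random.randn(T_data.size))

p0 = [1e-3, 0.5, 0.1, 250.0]          # initial guesses: alpha, beta, (1/T1T)_0, Delta/kB
popt, pcov = curve_fit(model, T_data, y_data, p0=p0)
print("alpha, beta, (1/T1T)_0, Delta/kB =", popt)
```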
#### iii.2.3 Case-3: \(1/T_{1}T(T)\) from Weyl fermion excitations
The first successful observation of Weyl fermion excitations in the temperature dependence of \(1/T_{1}T\) was achieved in TaP, where a \(T^{2}\) power-law dependence was observed at high temperatures [9]. This \(T^{2}\) dependence was interpreted as arising from competing spin and orbital relaxation channels. The spin channel is due to the Weyl fermion excitations associated with the linear dispersion around the Weyl points and gives a \(T^{4}\) dependence of \(1/T_{1}T\). The orbital channel is the relaxation process associated with fluctuations of the orbital hyperfine field, which leads to a \(T^{-2}\) dependence [10; 11]. At high temperatures both contributions act comparably in TaP, and the resulting \(T^{2}\) dependence is associated with the excitation of Weyl points located 13 meV above the Fermi level. The same scenario was applied to the \(T^{2}\) dependence observed in \({}^{75}\)As-NQR measurements in TaAs [20]. However, our \({}^{181}\)Ta-NQR measurements in TaAs revealed a \(T^{4}\) dependence, meaning that the orbital contribution is negligibly small. Following the previous calculation for TaP, \(1/T_{1}T\) from the spin channel for TaAs has been calculated using,
\[\frac{1}{T_{1}T} = \alpha\left[4\mu(T)^{4}+8\pi^{2}\mu(T)^{2}\,T^{2}+(28\pi^{4}/15) \,T^{4}\right],\] \[\mu(T) = \frac{\mu(0)}{1+c[T/\mu(0)]^{2}}, \tag{8}\]
where \(\mu(T)\) is the temperature-dependent chemical potential [11] in units of K. We then obtain a reasonably good fit to the data with \(\alpha=6.14\times 10^{-13}\) sec.\({}^{-1}\) K\({}^{-5}\), \(\mu(0)=120\) K, and \(c=35\), as shown by the solid curve in Fig. 9. It should be noted that the deviation above \(\sim\)100 K may be due to a cutoff of the Weyl fermion excitations and a crossover toward the Korringa process. We also note that we currently have no explanation as to why the orbital relaxation channel is not visible, in contrast to the case of TaP.
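For reference, the spin-channel expression of Eq. (8) with the best-fit parameters quoted above can be evaluated as in the following sketch (this is not the original fitting code):

```python
import numpy as np

def mu_T(T, mu0=120.0, c=35.0):
    """Temperature-dependent chemical potential mu(T) in K, Eq. (8)."""
    return mu0 / (1.0 + c * (T / mu0)**2)

def inv_T1T_weyl(T, alpha=6.14e-13, mu0=120.0, c=35.0):
    """Spin-channel 1/T1T for Weyl fermion excitations, Eq. (8), in s^-1 K^-1."""
    mu = mu_T(T, mu0, c)
    return alpha * (4*mu**4 + 8*np.pi**2 * mu**2 * T**2 + (28*np.pi**4/15) * T**4)

T = np.array([5., 10., 30., 60., 100.])
print(inv_T1T_weyl(T))   # approaches the pure T^4 limit once T >> mu(T)
```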
Based on the given information, it is difficult to draw a definitive conclusion about the temperature dependence of \(1/T_{1}T\). While Case 2 appears to be the most plausible scenario, the lack of information regarding the hyperfine coupling constants of both spin and charge relaxation channels prevents us from making a conclusive statement.
Figure 8: (Color online) Fit of \(1/T_{1}T\) assuming a rectangular in-gap state (shown in the inset); a least-squares fit of the data to Eq. (7) is shown by the solid curve, with low-temperature power-law exponent \(\beta=0.55\) and high-temperature activation energy \(\Delta=283\) K (24.4 meV).
Figure 9: (Color online) Experimental temperature dependence of \(1/T_{1}T\) are fitted to theoretical Weyl fermion excitations by a solid curve in TaAs. The normalized temperature dependence of chemical potential, \(\mu(T)/\mu(0)\), is shown in the inset.
### Nuclear spin-spin relaxation
The nuclear spin-spin (transverse) relaxation time, \(T_{2}\), was obtained by measuring the spin-echo amplitude, \(E\), as a function of the time \(t\) between the first exciting pulse and the spin-echo position. The amplitude \(E(t)\) can generally be expressed as,
\[E(t)=E_{0}\mathrm{e}^{-\Delta^{2}t^{2}}[c_{0}+c_{1}\cos(Jt+\phi)\mathrm{e}^{-t /\tau_{\mathrm{c}}}], \tag{9}\]
where \(\Delta\) is the second moment of the direct nuclear spin-spin interaction, due primarily to the classical dipolar coupling, \(c_{0}\) and \(c_{1}\) are constants, and the cosine term is the oscillatory term due to the indirect coupling or the nuclear quadrupole coupling with characteristic decay constant \(\tau_{\mathrm{c}}\). The spin-echo modulation is well known for the case that the interaction is given by \(J(\vec{I}_{i}\cdot\vec{I}_{j})=J/2\,(I_{i+}I_{j-}+I_{i-}I_{j+})+JI_{iz}I_{jz}\). Here, the \(I_{iz}I_{jz}\) term is responsible for the oscillation because this term is invariant under the refocusing pulse (\(\pi\)-rotation in the rotating frame), giving rise to an oscillatory behavior of the spin-echo amplitude as a function of \(t\). Clear evidence for this oscillatory spin-echo decay has been documented for the nuclear quadrupole interaction [26; 27] and for the indirect nuclear spin-spin coupling via conduction electrons (the Ruderman-Kittel interaction) [28]. The direct nuclear spin-spin coupling includes the same term, but its coupling constant \(J\) is so small (the oscillation period would be several milliseconds at least) that one cannot see this effect except in special cases. It should be noted that the direct nuclear spin-spin interaction is easily detuned by inhomogeneous broadening (from the external field, sample inhomogeneity, or both), making the decay longer and exponential, \(E(t)=E_{0}\exp(-\alpha t)\).
The experimental spin-echo decay curves taken at the \(2\nu_{\mathrm{Q}}\) line of TaAs and TaP are shown in Fig. 10(a); the spin echo decays essentially exponentially for both compounds, but the oscillation is seen only in TaP. As shown in Fig. 10(b), the oscillatory part of the decay in TaP can be fitted to \(\Delta E(t)=c_{1}\cos(\omega_{\mathrm{p}}t+\phi)\exp(-t/\tau_{\mathrm{c}})\) with \(c_{1}=0.30\), \(\omega_{\mathrm{p}}/2\pi=3.58\,\mathrm{kHz}\), and \(\tau_{\mathrm{c}}=150\,\mu\mathrm{sec}\), where \(\omega_{\mathrm{p}}\) is the energy scale of the indirect coupling. If the oscillation is caused by an indirect nuclear spin-spin coupling via virtual excitation of Weyl fermions, as illustrated in Fig. 10(c), the absence of oscillation in TaAs indicates an absence of Weyl fermion excitations. This may be consistent with the \(1/T_{1}T\) behavior discussed in Case 2.
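The oscillatory part of the TaP spin-echo decay can be modeled as in the sketch below, using the functional form and parameter values quoted above; the data points are synthetic placeholders, not the measured echo amplitudes.

```python
import numpy as np
from scipy.optimize import curve_fit

def oscillatory_part(t, c1, omega_p, phi, tau_c):
    """Oscillatory component of Eq. (9): c1 * cos(omega_p*t + phi) * exp(-t/tau_c)."""
    return c1 * np.cos(omega_p * t + phi) * np.exp(-t / tau_c)

# Synthetic data built from the quoted TaP parameters (c1 = 0.30, omega_p/2pi = 3.58 kHz, tau_c = 150 us).
t = np.linspace(0.0, 600e-6, 61)                   # seconds
y = oscillatory_part(t, 0.30, 2*np.pi*3.58e3, 0.0, 150e-6) + 0.01*np.random.randn(t.size)

popt, _ = curve_fit(oscillatory_part, t, y, p0=[0.3, 2.0e4, 0.0, 1.0e-4])
print("c1, omega_p (rad/s), phi, tau_c (s) =", popt)
```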
## IV Concluding remarks
We presented an extended comparative microscopic study of one of the typical Weyl semimetals, TaAs, beyond previous work on TaP, utilizing the \({}^{181}\)Ta-NQR technique, and contrasted the experimental results between the two compounds. The NQR parameters, \(\nu_{\mathrm{Q}}\) and \(\eta\), are in good agreement with the ab initio calculations for both compounds. However, their temperature dependence above approximately \(100\,\mathrm{K}\) shows distinct characteristics, in the sense that \(\nu_{\mathrm{Q}}(T)\) deviates considerably from the values calculated using simple thermal expansion. This discrepancy is likely a manifestation of a change in the electronic structure above \(100\,\mathrm{K}\).
Likewise, the nuclear spin-lattice relaxation rate \(1/T_{1}T\) and the nuclear spin-echo decay show a strong contrast between TaP and TaAs. In TaP, \(1/T_{1}T\) is well described by Weyl fermion excitations with a temperature-dependent orbital hyperfine interaction. In TaAs, however, we observed a \(T^{4}\) power-law dependence of \(1/T_{1}T\), which could potentially be associated with the linear dispersion of the Weyl fermions within a certain temperature range. Despite this observation, we were unable to draw a conclusive picture for \(1/T_{1}T\): the lack of information regarding the hyperfine coupling constants of both the spin and charge relaxation channels prevents us from making a conclusive statement.
The work on TaAs shows that there are still many open questions in the field of Weyl semimetals, and this is even more true when trying to understand local measurement methods such as the \({}^{181}\)Ta-NQR. There is an urgent need to use more local methods like NQR and NMR but also muon spin spectroscopy (\(\mu\)SR) or ESR to fully understand electronic excitations near the Fermi level in detail.
## Acknowledgements
We thank G. Auffermann, U. Burkhardt, and V. Suss for help with the synthesis and characterization of the TaAs crystals. B. D. was supported by the Ministry of Culture and Innovation and the National Research,
Figure 10: (Color online) (a) Typical spin-echo decay for the \(2\nu_{\mathrm{Q}}\) lines in TaAs (filled circles) and TaP (open circles) at \(4.2\,\mathrm{K}\). A strong spin-echo modulation is observed in TaP, while in TaAs the spin echo decays exponentially without any modulation. The oscillatory part of the spin-echo decay in TaP (open circles) is shown in (b) with the data fit to Eq. (9) (solid curve). A cartoon of an indirect nuclear spin-spin coupling via virtual excitation of Weyl fermions is illustrated in (c).
Development and Innovation Office within the Quantum Information National Laboratory of Hungary (Grant No. 2022-2.1.1-NL-2022-00004) K134437, K142179 and by a grant of the Ministry of Research, Innovation and Digitization, CNCS/CCCDI-UEFISCDI, under projects number PN-III-P4-ID-PCE-2020-0277. We thank U. Nitzsche (IFW Dresden) for technical support. We thank the ESRF (ID22) for providing beamtime.
|
2307.07104 | GRB 221009A with an unconventional precursor: a typical two-stage
collapsar scenario? | As the brightest Gamma-Ray burst (GRB) ever detected, GRB 221009A may offer a
chance that reveals some interesting features which are hidden in those bursts
that are not so bright. There seems a very weak emission with a flux of
$10^{-8}\sim10^{-7}$ erg cm$^{-2}$ s$^{-1}$ between the first pulse ($T_0\sim
T_0+50$~s, $T_0$ is the trigger time) and the main burst (appears from
$T_0+180$ s). Thus the gap time between them is not really quiescent, and the
first pulse could be taken as an unconventional precursor, which may provide a
peculiar case study for the GRB-precursor phenomena. A two-stage collapsar
scenario is proposed as the most likely origin for this burst. In this model,
the jet for the precursor is produced during the initial core-collapse phase,
and should be weak enough not to disrupt the star when it breaks out of the
envelope, so that the fallback accretion process and the forming of the disk
could continue. We present an approach in which the duration and flux both
provide constraints on the luminosity ($L_{\rm j}$) and the Lorentz factor at
the breakout time ($\Gamma_{\rm b}$) of this weak jet. The estimated $L_{\rm
j}\lesssim 10^{49}$ erg s$^{-1}$ and $\Gamma_{\rm b}$ has an order of ten,
which are well consistent with the theoretical prediction. Besides, the weak
emission in the gap time could be interpreted as a MHD outflow due to a
magnetically driven wind during the period from the proto-neutron star phase to
forming the accretion disk in this scenario. | Xin-Ying Song, Shuang-Nan Zhang | 2023-07-14T00:49:46Z | http://arxiv.org/abs/2307.07104v4 | # GRB 221009A with an unconventional precursor: a typical two-stage collapsar scenario?
###### Abstract
As the brightest Gamma-Ray burst (GRB) ever detected, GRB 221009A may offer a chance that reveals some interesting features which are hidden in those bursts that are not so bright. There seems a very weak emission with a flux of \(10^{-8}\sim 10^{-7}\) erg cm\({}^{-2}\) s\({}^{-1}\) between the first pulse (\(T_{0}\sim T_{0}+50\) s, \(T_{0}\) is the trigger time) and the main burst (appears from \(T_{0}+180\) s). Thus the gap time between them is not really quiescent, and the first pulse could be taken as an unconventional precursor, which may provide a peculiar case study for the GRB-precursor phenomena. A two-stage collapsar scenario is proposed as the most likely origin for this burst. In this model, the jet for the precursor is produced during the initial core-collapse phase, and should be weak enough not to disrupt the star when it breaks out of the envelope, so that the fallback accretion process and the forming of the disk could continue. We present an approach in which the duration and flux both provide constraints on the luminosity (\(L_{\rm j}\)) and the Lorentz factor at the breakout time (\(\Gamma_{\rm b}\)) of this weak jet. The estimated \(L_{\rm j}\lesssim 10^{49}\) erg s\({}^{-1}\) and \(\Gamma_{\rm b}\) has an order of ten, which are well consistent with the theoretical prediction. Besides, the weak emission in the gap time could be interpreted as a MHD outflow due to a magnetically driven wind during the period from the proto-neutron star phase to forming the accretion disk in this scenario.
gamma-ray bursts:individual-radiation mechanisms: non-thermal +
Footnote †: journal: ApJ
## 1 Introduction
Precursors are common in bright, long GRBs (\(\sim 20\%\), e.g., Lazzati, 2005) and **the emission types and jet compositions of their precursors and main bursts are listed in Table 1. A quasi-thermal (QT) component can be observed in precursors, as shown in Types 2, 3 and 4, while most precursors are found to be non-thermal (NT) (e.g. Li & Mao, 2022).** There are several models or mechanisms for precursors of long bursts. Fireball-internal shock (IS) models (e.g., Meszaros & Rees, 2000; Ramirez-Ruiz et al., 2002; Wang & Meszaros, 2007) and jet-cocoon interaction (e.g., Nakar & Piran, 2017) both predict a precursor with a quasi-thermal (QT) component, as shown in Types 2, 3 and 4 in Table 1; the quiescent time for the former is estimated to be about 10 s. The jet-cocoon interaction mechanism and the 'two-stage' model (e.g., Cheng & Dai, 2001; Wang & Meszaros, 2007) both correspond to a collapsar scenario. In the 'two-stage' scenario, the precursor is from a weak jet which may be produced by a collapsed core (e.g., a LeBlanc & Wilson jet, LeBlanc & Wilson, 1970) or by a rotating proto-neutron star (PNS) during the initial core-collapse phase, while the quiescent time of \(\sim\)100 s is the timescale of fallback and of forming a proto-compact star with an accretion disk; the central engine of the main burst is a black hole (BH) or neutron star (NS). The process of the 'two-stage' model is shown in Figure 1. In the 'magnetar-switch' model (Bernardini et al., 2013), the precursor and the main burst both arise from accretion of matter onto the surface of the magnetar; the accretion process can be halted by the centrifugal drag exerted by the rotating magnetosphere onto the infalling matter, allowing for multiple precursors and very long quiescent times. Lipunov's works (e.g. Lipunov & Gorbovskoy, 2007; Lipunova et al., 2009) suggest a collapsing 'spinar', similar to the 'two-stage' model but without any accretion in the process. The origins of precursors are still under debate in some works (e.g. Lazzati, 2005; Burlon et al., 2009; Bi et al., 2018; Li & Mao, 2022), and precursor research is important for understanding the physical mechanisms of the GRB central engine.
In this analysis, the so-called precursor in GRB 221009A is not conventional, because the 'quiescent' time is not really quiescent. Note that in previous works (e.g. Burlon et al., 2009), a 'quiescent' time is defined as a time interval during which the background-subtracted light curve is consistent with zero. For GRB 221009A, there exist some weak emissions between the first pulse (\(T_{0}\sim T_{0}+50\) s, where \(T_{0}\) is the trigger time) and the main burst beginning at \(T_{0}+180\) s, as shown in Figure 2 (a) and (b). Nevertheless, we can still use the mechanisms proposed for precursors to interpret its origin.
GRB 221009A was detected by many missions, such as Fermi/GBM (GCN 31565, Lesage and Fermi Gamma-ray Burst Monitor Team, 2022), Fermi-LAT (GCN 32637, Bissaldi et al., 2022), Swift/BAT/XRT (GCN 32632, Dichiara et al., 2022), Konus-Wind (GCN 31604, Svinkin et al., 2022), Insight-HXMT (ATel 155660, Tan et al., 2022), HEBS (GCN 32751, Liu et al., 2022) and LHAASO (GCN 32677, Huang et al., 2022). For GRB 221009A, the prompt emission has a long duration \(\sim 1000\) s. The isotropic-equivalent radiated energy \(E_{\rm iso,\gamma}\sim 10^{55}\) erg has been reported in some works (e.g., Frederiks et al., 2023; An et al., 2023). Note that the jet of the main burst is highly collimated with a small opening angle \(\theta\sim 1.0^{\circ}\)(Ren et al., 2022; An et al., 2023), thus, the outflow has a total energy of \(f_{\rm b}E_{\rm iso}\sim 10^{51\sim 52}\) erg with \(f_{\rm b}\sim\theta^{2}/2\) and \(E_{\rm iso}=E_{\rm iso,\gamma}+E_{\rm iso,k}\), where \(E_{\rm iso,k}\) is the isotropic-equivalent kinematic energy and has an order of \(10^{55}\) erg. This is a typical released energy for a collapsar to form a BH or a magnetar (e.g., MacFadyen et al., 2001), though GRB 221009A is the brightest ever detected in terms of \(E_{\rm iso,\gamma}\).
The paper is organized as follows. In Section 2, we extract the observational properties of the first pulse (\(T_{0}\sim T_{0}+50\) s) and the followed weak emissions (\(T_{0}+50\) s \(\sim T_{0}+170\) s); in Section 3 several scenarios for the precursor and jet launching are discussed. In Section 4, a conclusion and summary are given based on the discussion.
## 2 The observational properties of the first pulse and weak emissions
Background (BG) estimation for the extremely long GRB 221009A is important. We use data from a nearby orbit as the BG for the GBM NaI 8 detector, and a polynomial of order 0-2 for the GBM BGO 0 detector above 385 keV. The details are given in Appendix A. As shown in Figure 2 (a) and (b), some weak emissions exist between the first pulse and the main burst, and are mainly from the lower energy band (\(\lesssim 100\) keV). The first pulse lasts \(\sim 50\) s. After a very weak emission with a duration of around 70 s, a long bump of 60 s comes before the main burst, as shown in Figure 2 (c).
### The first pulse from \(T_{0}\) to \(T_{0}+50\) s
Fits with the BAND, exponential cut-off power law (CPL), and power law (PL) functions1 are performed on the time-integrated spectrum from \(T_{0}\) to \(T_{0}\)+10 s, which contains 80% of the photons. A Markov Chain Monte Carlo (MCMC) fitting is performed to find the parameters with the maximum Poisson likelihood. The BAND model is determined to be the best model by the Bayesian information criterion (BIC, Wei et al., 2016), requiring \(\Delta\)BIC of at least 6. With the contribution from the HXMT/CsI detectors, which have a large effective area in the high-energy region (Song et al., 2022), the low-energy photon index (\(\alpha\)), the high-energy photon index (\(\beta\)) and the peak energy (\(E_{\rm p}\)) of the \(\nu F_{\nu}\) spectrum are determined to be \(\alpha=-1.55\pm 0.03\), \(\beta=-2.02\pm 0.02\) and \(E_{\rm p}=242.9\pm 113.0\) keV, as shown in Figure 2 (d). The low-energy photon index satisfies \(\alpha<-2/3\) (the so-called 'line of death', Preece et al., 1998), which is well consistent with the synchrotron mechanism.
Footnote 1: The formulae for spectral models, BAND, CPL, PL could be found in the Appendix in Song et al. (2022).
The constant cadence (CC, Burgess, 2014) method and the Bayesian blocks (BBlocks, Scargle et al., 2013) method with a false alarm probability \(p_{0}\)= 0.01 are used for binning in the time-resolved analysis. We also require a signal-to-noise ratio (S/N) \(\geq\)30 in at least one detector, so some adjacent bins are combined. The time bins are [0.0, 1.2], [1.2, 2.4], [2.4, 3.9], [3.9, 6.7], [6.7, 10] s. The evolution of \(\alpha\) is shown in Figure 2 (e). Generally, a double-tracking trend of \(\alpha\)-flux and \(E_{\rm p}\)-flux is observed in the first 10 s, which is well consistent with that of a one-zone synchrotron model (e.g., Uhm and Zhang, 2014; Li et al., 2019). Note that \(\alpha<-1\) for the first pulse implies that non-thermal (NT) emission is dominant. The internal-collision-induced magnetic reconnection and turbulence mechanism (ICMART, Zhang and Yan, 2011) is preferred as the one-zone synchrotron model for this mainly NT emission.
### The weak emissions between the first pulse and the main burst
The emission from \(T_{0}+50\) s to \(T_{0}+115\) s has S/N\(\sim 10\) in the NaI 8 detector. Therefore, it is difficult to characterize the shape of the spectrum. The observed flux is estimated to be \(\sim 10^{-8}\) erg cm\({}^{-2}\) s\({}^{-1}\) with the PL model, as shown in Figure 2 (f). The long bump from \(T_{0}+115\) s to \(T_{0}+172\) s is best described by a CPL function with \(\alpha=-1.00\pm 0.15\) and \(E_{\rm p}=78.5\pm 8.0\) keV. The flux is \(\sim 10^{-7}\) erg cm\({}^{-2}\) s\({}^{-1}\), as shown in Figure 2 (g).
## 3 The Possible Origin of the First Pulse
There are some common characteristics between the first pulse in GRB 221009A and a conventional precursor: it is much weaker than the main burst and separated from it by a gap time. Thus, several models or mechanisms for precursors could be used to interpret the origin of the first pulse as well. Fireball-internal shock (IS) models and jet-cocoon interaction are excluded first, because there seems to be no evident QT component in the emission of the precursor, as discussed in Section 2.1.
**Besides, the gap time between the precursor and the main burst is too long (\(\sim 100\) s) for the fireball-IS model. In the fireball-IS model, the gap time is contributed by three parts (e.g. Wang & Meszaros 2007). The first part (\(t_{1}\)) is the time that the rarefaction wave takes to arrive at the reverse shock. Once the jet head reaches the stellar surface, the pressure in front of the jet head decreases suddenly, and a rarefaction wave forms and propagates back into the shocked jet material at the speed of sound (\(c_{\rm s}=c/\sqrt{3}\)). The width of the shocked jet is less than the distance from the core to the envelope (\(r\)), thus \(t_{1}\lesssim r/c_{\rm s}\simeq 6r_{11}\) s. The second part (\(t_{2}\)) is the time for the unshocked jet to pass through the envelope, \(t_{2}=r/c=3r_{11}\) s. The internal shock dissipation occurs at about \(R_{d}\) as the beginning
\begin{table}
\begin{tabular}{l|c|c|c|c|c} \hline \hline \multicolumn{1}{c}{ Type No.} & 1 & 2 & 3 & 4 & 5 \\ \hline precursor & NT & QT & QT+NT & QT & NT \\ main burst(or following episodes) & NT & NT & NT & QT+NT & QT+NT \\ \hline \end{tabular}
\end{table}
Table 1: The emission types of long GRBs with precursors. QT: quasi-thermal; NT: non-thermal.
Figure 1: A scenario for the ‘two-stage’ model of precursors.
Figure 2: (a) The light curve and BG shape from the data of NaI 8. (b) The BG subtracted light curves in different energy bands of NaI 8. (c) The light curve from \(T_{0}\) to \(T_{0}+200\) s. (d) The spectrum of the first pulse. (e) The light curves from GBM NaI 8 detector and HXMT, \(\alpha\) and \(E_{\rm p}\) values of precursor. (f) The spectrum of \(T_{0}+50\) s to \(T_{0}+115\) s. (g) The spectrum of \(T_{0}+115\) s to \(T_{0}+172\) s.
Figure 2: Continued.
of the main burst, and the third part (\(t_{3}\)) is the delay of the main burst relative to the precursor, \(t_{3}=R_{d}/(2\Gamma^{2}c)=1.7R_{d,15}\Gamma_{2}^{-2}\) s. The gap time is the sum of \(t_{1}\), \(t_{2}\) and \(t_{3}\) and has an order of 10 s.**
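As a quick order-of-magnitude check of these three contributions (a sketch using the fiducial values quoted above, \(r_{11}\sim 1\), \(R_{d,15}\sim 1\), \(\Gamma_{2}\sim 1\)):

```python
# Order-of-magnitude check of the fireball-IS gap time with fiducial values from the text.
c = 3.0e10                    # speed of light, cm/s
r = 1.0e11                    # core-to-envelope distance, cm
R_d = 1.0e15                  # internal-shock dissipation radius, cm
Gamma = 100.0                 # bulk Lorentz factor of the main-burst jet

t1 = r / (c / 3**0.5)         # rarefaction wave crossing the shocked jet (upper limit), ~6 s
t2 = r / c                    # unshocked jet crossing the envelope, ~3 s
t3 = R_d / (2 * Gamma**2 * c) # delay of the main burst, ~1.7 s
print(t1, t2, t3, t1 + t2 + t3)   # total gap time of order 10 s
```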
It seems that the 'two-stage', 'magnetar-switch' and spinar models could all be consistent with the NT emission of the precursor and the long gap time. Here we define two quantities: (1) the Lorentz factor \(\Gamma_{\rm b}\) at the breakout time of the jet for the precursor, which is the Lorentz factor when the jet breaks out of the envelope and can be taken as the maximum Lorentz factor of the jet passing through the envelope;3 (2) the luminosity of the jet (\(L_{\rm j}\)) for the precursor. In the 'magnetar-switch' and spinar models, \(\Gamma_{\rm b}\) and \(L_{\rm j}\) are not especially constrained, while for the two-stage model, the jet for the precursor should be weak enough not to disrupt the star, so that the fallback accretion process and the forming of the disk could continue. In detail, a mild \(\Gamma_{\rm b}<100\) and a released energy \(\lesssim 10^{50}\) erg are both required (Wang & Meszaros, 2007). Therefore, the estimation of \(L_{\rm j}\) and \(\Gamma_{\rm b}\) for the first pulse is important. The 'two-stage' model can be excluded if a weak jet is not consistent with the data. The first pulse of GRB 221009A has a long duration of tens of seconds (80% of the photons are in \(\sim\)10 s). Thus, \(t_{b}\lesssim 10\) s, where \(t_{b}\) is the time taken by the jet head to move from the interior of the star to the surface.
Footnote 3: Note that \(\Gamma_{\rm b}\) may not be the maximum Lorentz factor of the jet (\(\Gamma\)), and \(\Gamma_{\rm b}<\Gamma\).
Assuming the jet acceleration is saturated, we have Equation (12) in Wang & Meszaros (2007) to describe the relation among \(\Gamma_{\rm b}\), \(L_{\rm j}\) and \(t_{\rm b}\),
\[\Gamma_{\rm b}\gtrsim 10r_{11}^{1/2}L_{\rm j,49}^{-1/4}t_{\rm b,10}^{-3/4}, \tag{1}\]
where \(r\sim 10^{11}\) cm is the distance from the core to the envelope; CGS4 units are used here. The flux in the first 10 s is \(\sim 2\times 10^{-6}\) erg cm\({}^{-2}\) s\({}^{-1}\) (\(L_{\rm iso,\gamma}\sim 10^{50}\) erg s\({}^{-1}\) with redshift \(z=0.151\) from de Ugarte Postigo et al. (2022); here \(L_{\rm iso,\gamma}\) denotes \(E_{\rm iso,\gamma}/T\), where \(T\) is the duration in the rest frame of the central engine, \(T=T_{\rm obs}/(1+z)\), with \(T_{\rm obs}\) the duration in the observer frame). Thus, the isotropic-equivalent luminosity \(L_{\rm iso}\) could be \(\gtrsim 10^{50}\sim 10^{51}\) erg s\({}^{-1}\) (\(L_{\rm iso}=L_{\rm iso,\gamma}/\epsilon_{\gamma}\) with radiative efficiency \(\epsilon_{\gamma}\sim 50\%-90\%\) for the ICMART mechanism, Zhang & Yan, 2011). Considering the opening angle (\(\theta_{\rm b}\)) at the breakout time \(\sim 1/\Gamma_{\rm b}\) and \(L_{\rm j}\sim L_{\rm iso}\theta_{\rm b}^{2}/2\), we have
Footnote 4: the convention \(Q=10^{n}Q_{\rm n}\) is adopted for CGS units.
\[\Gamma_{b}\sim(2L_{\rm j}/L_{\rm iso})^{-1/2}. \tag{2}\]
Here, we present an approach to constrain \(L_{\rm j}\) and \(\Gamma_{\rm b}\). By combining the above equation and constraint, the region of possible values of \(L_{\rm j}\) and \(\Gamma_{\rm b}\) is the overlap of the two, shown in dark blue in Figure 3. The range of \(L_{\rm j}\) corresponds to \(\lesssim 10^{48}\) erg s\({}^{-1}\), and \(\Gamma_{b}\) has an order of 10, which is well consistent with the weak jet assumption. Note that this estimation is approximate, working with the orders of magnitude of these quantities, e.g., \(r_{11}\sim 1\) and \(L_{\rm iso,51}\sim 1\); for specific values, the orders of magnitude of the results will not change much. If we use the full duration of the first pulse, \(t_{b}\lesssim 50\) s, the edge of the violet shadow in Figure 3, which denotes the lower limit of \(\Gamma_{\rm b}\), moves to lower values according to Constraint (1); \(L_{\rm iso,\gamma}\sim 2.5\times 10^{49}\) erg s\({}^{-1}\) is smaller, thus the possible \(\Gamma_{\rm b}\) and \(L_{\rm j}\) become much smaller than those with \(t_{b}\lesssim 10\) s.
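A minimal numerical sketch of this overlap argument, assuming the fiducial values quoted above (\(r_{11}\sim 1\), \(t_{\rm b,10}\sim 1\)) and taking \(L_{\rm iso,51}\sim 1\), is:

```python
import numpy as np

r11, tb10 = 1.0, 1.0        # r in units of 1e11 cm, t_b in units of 10 s
L_iso = 1.0e51              # isotropic-equivalent luminosity, erg/s (L_iso,51 ~ 1)

L_j = np.logspace(46, 50, 400)                                        # trial jet luminosities, erg/s
gamma_min = 10.0 * r11**0.5 * (L_j / 1e49)**(-0.25) * tb10**(-0.75)   # breakout condition, Eq. (1)
gamma_geo = (2.0 * L_j / L_iso)**(-0.5)                               # beaming relation, Eq. (2)

ok = gamma_geo >= gamma_min                      # overlap of the two constraints
print("max allowed L_j ~ %.1e erg/s" % L_j[ok].max())
print("Gamma_b at that boundary ~ %.0f" % gamma_geo[ok].min())
```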
For the unsaturated acceleration case, as already calculated in Equation (11) in Wang & Meszaros (2007) for \(t_{\rm b}\lesssim 10\) s, \(\Gamma_{\rm b}\gtrsim 10\). In this case, the upper limit of \(\theta_{\rm b}\lesssim 0.1\), so that \(L_{\rm j}\sim L_{\rm iso}\theta_{\rm b}^{2}/2\) is still weak with luminosity of \(10^{48}\sim 10^{49}\) erg s\({}^{-1}\), since \(L_{\rm iso}\sim 10^{50}-10^{51}\) erg s\({}^{-1}\).
Therefore, from the above discussion, the observed flux and duration of this pulse both constrain \(L_{\rm j}\lesssim 10^{49}\) erg s\({}^{-1}\) in this case. Otherwise, a shorter duration or larger luminosity would result in a larger \(\Gamma_{\rm b}\) or \(L_{\rm j}\), so that the assumption of a weak jet fails. Note that the estimated \(\theta_{\rm b}\) is not small and the weak jet from the initial collapse is an axial jet (e.g., LeBlanc & Wilson, 1970); the strong jet of the main burst, launched by, e.g., the Blandford-Znajek (BZ) mechanism (Blandford & Znajek, 1977), is also along the rotation axis. Thus, we could still see both the first pulse and the main burst, even though the latter is highly collimated with a much smaller opening angle.
**One question that arises is whether there could be jet-cocoon interaction while the jet passes through the envelope. If \(\Gamma_{\rm b}\) is large enough, the mixing between the two components (the jet material and the stellar material shocked by the expanding high-pressure cocoon) could be ignored. As simulated numerically in Nakar & Piran (2017), a relativistic cocoon may produce a short (a few seconds) extremely bright QT burst (with an observed temperature of 10-100 keV) as the precursor. Otherwise, if \(\Gamma_{\rm b}\) is small enough, partial mixing occurs (Nakar & Piran, 2017). As a result, optical/UV emission should appear with a temperature of \(\sim 10^{4}\) K, beyond the observed precursor phase; unfortunately, the optical/UV observations started much later and cannot offer more information on this. There is no evident thermal component observed in the precursor; furthermore, most of the emission in the precursor is NT at least. This might also be evidence that the jet responsible for the precursor is weak, with a mild Lorentz factor.**
Another constraint on the origin is the weak emission between the first pulse and the huge main burst. As estimated in Section 2.2, the flux of the weak emission has an order of \(10^{-8}\sim 10^{-7}\) erg cm\({}^{-2}\) s\({}^{-1}\), which is one to two orders of magnitude smaller than that of the first pulse (\(\sim 10^{-6}\) erg cm\({}^{-2}\) s\({}^{-1}\)). In a collapsar scenario of the 'two-stage' model, the 'quiescent' time is the timescale of the period from the PNS phase to the forming of an accretion disk. During this time, the newborn NS would launch a strong neutrino-driven wind, or a magnetically driven wind due to the differential rotation of the NS. A semi-analytical spindown formula (Siegel et al., 2014) for a magnetically driven wind gives a luminosity of
\[L\simeq 10^{48}B_{14}^{2}R_{6}^{3}P_{-4}^{-1}\mathrm{erg\,s^{-1}}, \tag{3}\]
where \(B\) is the surface magnetic field strength at the polar cap region, \(R\) is the radius of the NS, and \(P\) is the period. It seems that this spindown mechanism could produce an MHD outflow with a luminosity one to two orders of magnitude smaller than that of the first pulse (\(\sim 10^{49}\) erg s\({}^{-1}\)), if the values of \(B\), \(R\) and \(P\) are in reasonable ranges.
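For orientation, Eq. (3) can be evaluated directly; the parameter values in the example below are illustrative only, not values inferred from the data:

```python
def wind_luminosity(B14, R6, Pm4):
    """Eq. (3): L ~ 1e48 * B14^2 * R6^3 / P_-4 erg/s (B in 1e14 G, R in 1e6 cm, P in 1e-4 s)."""
    return 1.0e48 * B14**2 * R6**3 / Pm4

# Example: B ~ 1e14 G, R ~ 1.2e6 cm, P ~ 2e-4 s  ->  a few times 1e47 erg/s
print(wind_luminosity(B14=1.0, R6=1.2, Pm4=2.0))
```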
For the 'magnetar-switch' model, the longer the waiting time, the higher the stored energy available for the next emission episode. The 'quiescent' time for GRB 221009A is long, and the main burst is extremely bright, which seems consistent with the prediction of the 'magnetar-switch' model. There are three mechanisms of energy extraction for a magnetar as the central engine of a GRB: 1) spin-down controlled by magnetic dipole radiation, 2) extraction of the differential rotational energy of the NS through erupting magnetic bubbles, by winding up the poloidal magnetic field into a toroidal configuration (Kluzniak & Ruderman, 1998), and 3) accretion. **In the propeller mechanism, the accretion process can be halted by the centrifugal drag exerted by the rotating magnetosphere onto the infalling matter, and during the halted phase there should be no evident emission. The weak emission of 60 s before the main burst is not predicted in this model. If the 'magnetar-switch' works, it is necessary to interpret at least the long bump of \(60\) s before the main burst. If we assume it is not from the beginning of the re-accretion, it must be from the magnetic dipole radiation or the erupting magnetic bubbles. The former should exist during the whole burst and should not begin only at \(T_{0}+112\) s; the latter occurs in the hot PNS phase, and the released energy (\(\sim 10^{51}\) erg) seems too high for the bump. If it is the beginning of the re-accretion, it is not reasonable that it lasts \(\sim 60\) s with a very low luminosity (\(\sim 10^{49}\) erg s\({}^{-1}\), corresponding to \(\sim 10^{-7}\) erg cm\({}^{-2}\) s\({}^{-1}\) at the distance of this GRB) while being the next, high-energy, emission episode. Moreover, the magnetar-switch scenario offers a good explanation for those GRBs whose precursors have spectral and temporal properties similar to the main prompt emission, with smaller but comparable energetics (Bernardini et al., 2013), because the precursors and main bursts have the same origin in this model. It is significant that the energies released in the precursor and the main emission are not comparable for GRB 221009A. Therefore, considering these inconsistencies, the 'magnetar-switch' model may not be the best interpretation for GRB 221009A.**
In the spinar model scenario, the details of a weak jet corresponding to the precursor are neither predicted nor constrained. It also occurs in a collapsar scenario, thus we think the production of the long bump may be similar to that in the two-stage model. There is not enough information for us to rule it out or accept it. Table 2 summarizes the consistency between the mechanisms for GRB precursors and the observational properties of GRB 221009A. If a property can be interpreted or predicted by a mechanism, the corresponding entry in the table is filled with 'Yes'; otherwise, 'No' is filled for an inconsistency, and '?' for the case of no prediction. **For example, the weak jet is not predicted in the spinar model, thus the entry is filled with '?'.**
In general, from the analysis of the first pulse, or precursor, and the 'quiescent' time, we propose that the properties of the first pulse are well consistent with the precursor mechanism predicted by the 'two-stage' model in the collapsar scenario. Moreover, the first pulse differs from the traditional definition of a precursor because of the weak emission in the gap time.
## 4 Discussion and Summary
We present an approach to infer the possible ranges of \(\Gamma_{\rm b}\) and \(L_{\rm j}\) of the jet for the first pulse, using the constraints from its duration and flux in a collapsar scenario. This approach could also be used to investigate the origins of GRB precursors in future studies.
The first pulse of GRB 221009A is non-thermal, which is a difference between GRB 221009A and GRB 160625B (e.g., Zhang et al., 2018); the first precursor of the latter is dominated by a thermal component. In the scenario for GRB 160625B, the precursor occurs after the formation of the accretion disk. As a comparison, we consider that for GRB 221009A the precursor is from the weak jet produced by a rotating PNS during the initial core-collapse phase, rather than the initial prompt accretion phase, as shown in Figure 1. Considering the estimated luminosity (\(\lesssim 10^{49}\) erg s\({}^{-1}\)) and the duration of tens of seconds, the total energy (\(\sim 10^{50}\) erg) carried by the jet is well consistent with that predicted by, e.g., LeBlanc & Wilson (1970); Wheeler et al. (2000). In summary, the origin of the first pulse is discussed conservatively in this analysis, and a weak jet from the initial core-collapse phase in the 'two-stage' scenario is taken as the most likely origin, while the other possible origin of the precursor, the spinar model, is not ruled out.
As the brightest GRB ever detected, GRB 221009A may provide a case that reveals some interesting features which are hidden in those bursts that are not so bright. If the source of the burst had a high \(z\), or the observations were not so head-on, the weak emission in the gap time might be missed in the detection. In that case, the gap time seems quiescent and the first pulse should seem similar to the precursors detected before. However, in GRB 221009A, a weak emission during the gap time is observed which enriches the GRB-precursor phenomena, and is important for us to understand the physical mechanisms of the GRB central engine.
The authors acknowledge support from the National Program on Key Research and Development Project (2021YFA0718500). This work was partially supported by the International Partnership Program of the Chinese Academy of Sciences (Grant No. 113111KYSB20190020). The authors are very grateful for the GRB data from Fermi/GBM, HXMT, HEBS, and Konus-Wind. We are very grateful for the comments and suggestions of the anonymous referees. Dr. Xin-Ying Song thanks Dr. Ming-Yu Ge and Dr. Yuan You for suggestions on the background estimation.
## Appendix A About Background Estimation
The background (BG) estimation for the extremely long GRB 221009A is important. Figure 4 shows the event data from the nearby orbits, which helps establish the shape of the BG. The shift time (\(\sim\) 5720 s) is determined by minimizing the sum of squared differences between the two light curves from \(T_{0}-1000\) s to \(T_{0}\), from GRB 221009A and the nearby orbit. Figure 5 shows the light curves and event data from the nearby orbit used as BG in different energy bands. The BG can be well described by the data from nearby orbits for the NaI 7 and 8 detectors. However, as seen in Figures 4 (c) and 5 (c) for the BGO data, the nearby-orbit data are not very consistent with those from the trigger of GRB 221009A. Above 385 keV, we find that the GRB emission ends at \(\sim\)600 s. Therefore, we use a polynomial to describe \(T_{0}\) +[-1100, -5] s and [900, 2400] s, so that the peaking structure can be well described. The contribution below 385.2 keV in BGO 0 (channels 0-5) is ignored in the fitting procedure.
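A minimal sketch of this polynomial background treatment (with placeholder arrays, not the actual GBM data pipeline) is:

```python
import numpy as np

# Fit a low-order polynomial to the off-burst intervals and interpolate it across the burst.
t = np.arange(-1100.0, 2400.0, 1.0)                      # time relative to T0, s
rate = np.random.poisson(500, t.size).astype(float)      # stand-in for the >385 keV BGO count rate

off_burst = ((t >= -1100) & (t <= -5)) | ((t >= 900) & (t <= 2400))
coeffs = np.polyfit(t[off_burst], rate[off_burst], deg=2) # polynomial of order 0-2
background = np.polyval(coeffs, t)

net = rate - background                                   # background-subtracted light curve
```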
\begin{table}
\begin{tabular}{l|c|c|c|c|c} \hline \hline \multicolumn{1}{c}{ GRB 221009A/mechanisms for precursors} & fireball-IS & jet-cocoon interaction & two-stage & magnetar-switch & spinar \\ \hline Long gap time(\(\sim\) 100 s) & No & Yes & Yes & Yes & Yes \\ NT precursor & No & No & Yes & Yes & Yes \\ The jet corresponding to precursor is weak enough & No & No & Yes & No &? \\ The long bump (\(\sim\) 60 s, 10\({}^{-7}\) erg cm\({}^{-2}\) s\({}^{-1}\)) before the main burst & No & No & Yes & No &? \\ \hline \end{tabular}
\end{table}
Table 2: The consistency between the mechanisms for precursors and GRB 221009A.
Figure 4: The light curves of BGO, NaI 7 and NaI 8 from \(T_{0}-1100\) s to \(T_{0}+2500\) s and nearby orbits.
Figure 5: Continued. |
2305.09750 | ICDAR 2023 Competition on Hierarchical Text Detection and Recognition | We organize a competition on hierarchical text detection and recognition. The
competition is aimed to promote research into deep learning models and systems
that can jointly perform text detection and recognition and geometric layout
analysis. We present details of the proposed competition organization,
including tasks, datasets, evaluations, and schedule. During the competition
period (from January 2nd 2023 to April 1st 2023), at least 50 submissions from
more than 20 teams were made in the 2 proposed tasks. Considering the number of
teams and submissions, we conclude that the HierText competition has been
successfully held. In this report, we will also present the competition results
and insights from them. | Shangbang Long, Siyang Qin, Dmitry Panteleev, Alessandro Bissacco, Yasuhisa Fujii, Michalis Raptis | 2023-05-16T18:56:12Z | http://arxiv.org/abs/2305.09750v1 | # ICDAR 2023 Competition on Hierarchical Text Detection and Recognition
###### Abstract
We organize a competition on hierarchical text detection and recognition. The competition is aimed to promote research into deep learning models and systems that can jointly perform text detection and recognition and geometric layout analysis. We present details of the proposed competition organization, including tasks, datasets, evaluations, and schedule. During the competition period (from January 2nd 2023 to April 1st 2023), at least 50 submissions from more than 20 teams were made in the 2 proposed tasks. Considering the number of teams and submissions, we conclude that the HierText competition has been successfully held. In this report, we will also present the competition results and insights from them.
Keywords:OCR Text Detection and Recognition Layout Analysis.
## 1 Introduction
Text detection and recognition systems [11] and geometric layout analysis techniques [12, 13] have long been developed separately as independent tasks. Research on text detection and recognition [14, 15, 16, 17] has mainly focused on the domain of natural images and aimed at single level text spotting (mostly, word-level). Conversely, research on geometric layout analysis [12, 13, 18, 19], which is targeted at parsing text paragraphs and forming text clusters, has assumed document images as input and taken OCR results as fixed and given by independent systems. The synergy between the two tasks remains largely under-explored.
Recently, the Unified Detector work by Long et al. [20] shows that the unification of line-level detection of text and geometric layout analysis benefits both tasks significantly. StructuralLM [21] and LayoutLMv3 [27] show that text line grouping signals are beneficial to the downstream task of document understanding and are superior to word-level bounding box signals. These initial studies demonstrate that the unification of OCR and layout analysis, which we term as _Hierarchical Text Detection and Recognition (HTDR)_, can be mutually beneficial to OCR, layout analysis, and downstream tasks.
Given the promising potential benefits, we propose the **ICDAR 2023 Competition on Hierarchical Text Detection and Recognition**. In this competition, candidate systems are expected to perform the unified task of text detection and recognition and geometric layout analysis. Specifically, we define the
unified task as producing a hierarchical text representation, including word-level bounding boxes and text transcriptions, as well as line-level and paragraph-level clustering of these word-level text entities. We defer the rigorous definitions of word / line / paragraph later to the dataset section. Fig. 1 illustrates our notion of the unified task.
We believe this competition will have profound and long-term impact on the whole image-based text understanding field by unifying the efforts of text detection and recognition and geometric layout analysis, and furthering providing new signals for downstream tasks.
The competition started on January 2nd 2023, received more than 50 submissions in 2 tasks in total, and closed on April 1st 2023. This report provides details into the motivation, preparation, and results of the competition. We believe the success of this competition greatly promotes the development of this research field. Furthermore, the dataset except the test set annotation and evaluation script are made publicly available. The competition website1 remains open to submission and provides evaluation on the test set.
Footnote 1: [https://rrc.cvc.uab.es/?ch=18](https://rrc.cvc.uab.es/?ch=18)
## 2 Competition Protocols
### Dataset
The competition is based on the HierText dataset [20]. Images in HierText are collected from the Open Images v6 dataset [28], by first applying the _Google Cloud Platform (GCP) Text Detection API2_ and then filtering out inappropriate images, for example those with too little text or with non-English text. In total, 11639 images are obtained. In this competition, we follow the original split of
Figure 1: Illustration for the proposed unified task: **Hierarchical Text Detection and Recognition (HTDR)**. Given an input image, the unified model is expected to produce a hierarchical text representation, which resembles the form of a forest. Each tree in the forest represents one paragraph and has three layers, representing the clustering of words into lines and then paragraphs.
8281/1724/1634 for _train_, _validation_, _test_ sets. Images and annotations of the train and validation set are released publicly. The test set annotation is kept private and will remain so even after the end of the competition.
As noted in the original paper [20], we check the cross-dataset overlap rates with the two other OCR datasets that are based on Open Images. We find that 1.5% of the 11639 images we have are also in TextOCR [29] and 3.6% in Intel OCR [30]. Our splits ensure that our training images are not in the validation or test set of Text OCR and Intel OCR, and vice versa.
The images are annotated in a hierarchical way of _word_-to-_line_-to-_paragraph_, as shown in Fig. 2. _Words_ are defined as a sequence of textual characters not interrupted by _spaces_. _Lines_ are then defined as _space_-separated clusters of _words_ that are logically connected and aligned in spatial proximity. Finally, _paragraphs_ are composed of _lines_ that belong to the same semantic topic and are geometrically coherent. Fig. 3 illustrates some annotated samples. Words are annotated with polygons, with 4 vertices for straight text and more for curved text depending on the shape. Then, words are transcribed regardless of the scripts and languages, as long as they are legible. Note that we do not limit the character sets, so the annotation could contain case-sensitive characters, digits, punctuation, as well as non-Latin characters such as Cyrillic and Greek. After word-level annotation, we group words into lines and then group lines into paragraphs. In this way, we obtain a hierarchical annotation that resembles a forest structure of the text in an image.
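To make the forest structure concrete, the snippet below builds a toy word-to-line-to-paragraph annotation for one image as a nested Python dictionary; the field names are purely illustrative and do not reproduce the official release schema.

```python
# Illustrative only: one paragraph containing two lines; field names are hypothetical.
example_annotation = {
    "image_id": "example_0001",
    "paragraphs": [
        {
            "lines": [
                {"words": [
                    {"vertices": [[10, 10], [60, 10], [60, 30], [10, 30]], "text": "ICDAR"},
                    {"vertices": [[70, 10], [120, 10], [120, 30], [70, 30]], "text": "2023"},
                ]},
                {"words": [
                    {"vertices": [[10, 40], [150, 40], [150, 60], [10, 60]], "text": "HierText"},
                ]},
            ]
        }
    ],
}

num_words = sum(len(line["words"])
                for para in example_annotation["paragraphs"]
                for line in para["lines"])
print(num_words)   # 3
```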
Figure 2: Example of hierarchical annotation format of the dataset.
Figure 3: Illustration for the hierarchical annotation of text in images. From **left** to **right**: **word**, **line**, **paragraph** level annotations. Words (blue) are annotated with polygons. Lines (green) and paragraphs (yellow) are annotated as hierarchical clusters and visualized as polygons. Images are taken from the train split.
### Tasks
Our challenge consists of 2 competition tracks, **Hierarchical Text Detection** and **Word-Level End-to-End Text Detection and Recognition**. In the future, we plan to merge them into a single unified Hierarchical Text Spotting task that requires participants to give a unified representation of text with layout.
#### 2.2.1 Task 1: Hierarchical Text Detection
This task itself is formulated as a combination of 3 tasks: word detection, text line detection, and paragraph detection, where lines and paragraphs are represented as clusters of words hierarchically.
In this task, participants are provided with images and expected to produce the hierarchical text detection results. Specifically, the results are composed of **word-level bounding polygons** and **line and paragraph clusters** on top of words. The clusters are represented as forests, as in Fig. 1, where each paragraph is a tree and words are leaves. For this task, participants do not need to provide text recognition results.
Figure 4: Illustration of how hierarchical text detection can be evaluated as 3 instance segmentation sub-tasks. The coloring of each column indicates the instance segmentation for each sub-task.
As illustrated in Fig. 4, we evaluate this task as 3 instance segmentation sub-tasks for word, line, and paragraph respectively. For word level, each word is one instance. For line level, we take the union of each line's children words as one instance. For paragraph level, we aggregate each paragraph's children lines, and take that as one instance. With this formulation, all the 3 sub-tasks will be evaluated with the PQ metric [31] designed for instance segmentation, as specified in [20]:
\[PQ=\frac{\sum_{(p,g)\in TP}IoU(p,g)}{|TP|+\frac{1}{2}|FP|+\frac{1}{2}|FN|} \tag{1}\]
where \(TP,\ FP,\ FN\) represent true positives, false positives, and false negatives respectively. We use an IoU threshold of 0.5 to count true positives. Note that the PQ metric is mathematically equal to the product of the _Tightness_ score, defined as the average IoU of all TP pairs, and the _F1_ score, which is commonly used in previous OCR benchmarks. Previous OCR evaluation protocols only report F1 scores, which do not fully reflect detection quality. We argue that tightness is very important in evaluating hierarchical detection: it gives an accurate measurement of how well detections match ground-truths. For words, a detection needs to enclose all of its characters and not overlap with other words, so that the recognition can be correct; the tightness score penalizes missing characters and oversized boxes. Lines and paragraphs are represented as clusters of words and are evaluated as unions of masks, so wrong clustering of words is also reflected in their IoU scores. In this way, the PQ score accurately evaluates the hierarchical detection task.
Each submission has 3 PQ scores for word, line, and paragraph respectively. There are 3 rankings for these 3 sub-tasks respectively. For the final ranking of the whole task, we compute the final score as a harmonic mean of the 3 PQ scores (dubbed _H-PQ_) and rank accordingly.
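A simplified sketch of the per-level PQ computation and of the final H-PQ score is given below; it assumes a precomputed prediction-versus-ground-truth IoU matrix for one image and one level, and is not the official evaluation script.

```python
import numpy as np

def panoptic_quality(iou_matrix, iou_thresh=0.5):
    """PQ of Eq. (1) from a predictions-by-ground-truths IoU matrix (greedy one-to-one matching)."""
    iou = iou_matrix.copy()
    tp_ious = []
    while iou.size and iou.max() >= iou_thresh:
        i, j = np.unravel_index(iou.argmax(), iou.shape)
        tp_ious.append(iou[i, j])
        iou = np.delete(np.delete(iou, i, axis=0), j, axis=1)
    tp = len(tp_ious)
    fp = iou_matrix.shape[0] - tp        # unmatched predictions
    fn = iou_matrix.shape[1] - tp        # unmatched ground truths
    denom = tp + 0.5 * fp + 0.5 * fn
    return sum(tp_ious) / denom if denom else 0.0

def h_pq(pq_word, pq_line, pq_paragraph):
    """Final Task 1 score: harmonic mean of the three per-level PQ scores."""
    return 3.0 / (1.0 / pq_word + 1.0 / pq_line + 1.0 / pq_paragraph)

# Toy example: 2 predictions vs 2 ground truths at one level.
print(panoptic_quality(np.array([[0.9, 0.1], [0.2, 0.7]])))   # (0.9 + 0.7) / 2 = 0.8
print(h_pq(0.80, 0.75, 0.70))
```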
#### 2.2.2 Task 2: Word-Level End-to-End Text Detection and Recognition
For this task, images are provided and participants are expected to produce word-level text detection and recognition results, i.e. a set of word bounding polygons and transcriptions for each image. Line and paragraph clustering is not required. This is a challenging task, as the dataset has the densest images, with more than 100 words per image on average, 3 times as many as the second densest dataset, TextOCR [29]. It also features a large number of recognizable characters. In the training set alone, there are more than 960 different character classes, as shown in Fig. 5, while most previous OCR benchmarks limit the tasks to recognizing only digits and case-insensitive English characters. These factors make this task challenging.
For evaluation, we use the F1 measure, which is the harmonic mean of word-level precision and recall. A word result is considered a true positive if the IoU with the ground-truth polygon is greater than or equal to 0.5 and the transcription is the same as the ground-truth. The transcription comparison considers all characters
and is case-sensitive. Note that some words in the dataset are marked as illegible. Detections with high overlap with these words (IoU larger than 0.5) are removed in the evaluation process, and ground-truths marked as illegible do not count as false negatives even if they are not matched.
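A simplified sketch of this word-level end-to-end matching is shown below; it omits the handling of illegible (don't-care) regions described above, and the polygon IoU function is assumed to be supplied by the caller.

```python
def end_to_end_f1(pred, gt, iou_fn, iou_thresh=0.5):
    """Simplified Task 2 metric: a prediction counts as TP only if IoU >= 0.5 with an unmatched
    ground truth and the transcription matches exactly (case-sensitive).
    `pred` and `gt` are lists of (polygon, text); `iou_fn(poly_a, poly_b)` returns their IoU."""
    matched_gt = set()
    tp = 0
    for p_poly, p_text in pred:
        for k, (g_poly, g_text) in enumerate(gt):
            if k in matched_gt:
                continue
            if p_text == g_text and iou_fn(p_poly, g_poly) >= iou_thresh:
                matched_gt.add(k)
                tp += 1
                break
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gt) if gt else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
```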
### Evaluation and Competition Website
We host the competition on the widely recognized Robust Reading Competition (RRC) website3 and set up our own competition page. The RRC website has been the hub of scene text and document understanding research for a long time and hosted numerous prestigious competitions. It provides easy-to-use infrastructure to set up competition, tasks, and carry out evaluation. It also supports running the competition continuously, making it an ideal candidate.
Figure 5: Character set in the training split.
### Competition Schedule
We propose and execute the following competition schedule, in accordance with the conference timeline:
* **January 2nd, 2023**: Start of the competition; submissions of results were enabled on the website.
* **April 1st, 2023**: Deadline for competition submissions.
* **April 15th, 2023**: Announcement of results.
### Other Competition Rules
In addition to the aforementioned competition specifications, we also apply the following rules:
* **Regarding the usage of other publicly available datasets**: HierText is the only allowed annotated OCR dataset. However, participants are also allowed to do self-labeling on other public OCR datasets as long as they don't use their ground-truth labels. In other words, they can use the images of other public datasets, but not their labels. They can also use non-OCR datasets, whether labeled or not, to pretrain their models. We believe they are important techniques that can benefit this field.
* **Usage of synthetic datasets** Synthetic data has been an important part of OCR recently [22, 23, 24, 25, 26]. Participants can use any synthetic datasets, whether they are public or private, but are expected to reveal how they are synthesized and some basic statistics of the synthetic datasets if they are private.
* Participants should not use the validation split in training their models.
* Participants can make as many submissions as desired before the deadline, but we only archive the latest one submission of each participant in the final competition ranking.
### Organizer Profiles
Authors are all members of the OCR team at Google Research. In addition to academic publications, authors have years of experience in building industrial OCR systems that are accurate and efficient for a diversity of image types and computation platforms.
## 3 Competition Results
In total, the competition received 30 submissions in Task 1 and 20 submissions in Task 2. Note that we encourage participants to submit multiple entries using different methods, for example, to understand the effect of applying different techniques such as pretraining and synthetic data. To produce the final leaderboard in compliance with the ICDAR competition protocols, we only keep the latest submission from each participant. The final deduplicated competition results are summarized in Tab. 1 / Fig. 6 and Tab. 2 / Fig. 7. In total, the competition received 11 unique submissions in Task 1 and 7 in Task 2.
Figure 6: Figure for the results of task 1.
| **User** | **Method** | **Rank** | **PQ** | **F** | **P** | **R** | **T** |
|---|---|---|---|---|---|---|---|
| YunSu Kim | Upstage KR | 1 | 70.00 | 79.58 | 82.05 | 77.25 | 87.97 |
| DeepSE x Upstage | DeepSE End-to-End Text Detection and Recognition Model | 2 | 67.46 | 77.93 | 88.05 | 69.89 | 86.57 |
| ssm | Ensemble of three task-specific Clova DEER | 3 | 59.84 | 76.15 | 77.63 | 74.73 | 78.59 |
| Mike Ranzinger | NVTextSpotter | 4 | 63.57 | 74.10 | 80.94 | 68.34 | 85.78 |
| JiangQing | SCUT-HUAWEI | 5 | 58.12 | 73.41 | 74.38 | 72.46 | 79.17 |
| kuli\_cyd | DBNet++ and SATRN | 6 | 51.62 | 71.64 | 82.76 | 63.15 | 72.06 |
| LGS | leba | 7 | 44.87 | 54.30 | 68.37 | 45.03 | 82.64 |

Table 2: Results for Task 2. F/P/R/T/PQ stand for _F1-score_, _Precision_, _Recall_, _Tightness_, and _Panoptic Quality_ respectively. The submissions are ranked by the F1 score. We omit the % for all these numbers for simplicity.
Figure 7: Figure for the results of task 2.
### Submission Validation
In the final leaderboard, each participant is only allowed to have one submission. We validate each submission and examine the number of submissions from each team. If a team has more than one submission, we keep the latest one and remove the rest from the leaderboard. Note that these removed submissions will remain on the RRC portal for reference, since they also provide valuable insights into this research field. We adopt the following rules to determine the authorship of each submission:
* **user_id**: If two submissions have the same user_id field, it means they are submitted by the same RRC user account and thus should be from the same team.
* **method description**: Participants are asked to provide descriptive information of their submissions, including authors, method details, etc. If two submissions have identical or nearly identical author lists and method descriptions, we consider them to be from the same team.
### Task 1 Methodology
Task 1 in our competition, i.e. Hierarchical Text Detection, is a novel task in the research field. There are no existing methods that participants can refer to. Even the previous work Unified Detector [20] can only produce line and paragraph outputs but no word-level results. Among the 8 submissions in Task 1 which have disclosed their methods, we observed that 5 of them develop '_multi-head plus postprocessing_' systems. These methods treat words, lines, and paragraphs as generic objects, and train detection or segmentation models to localize these three levels of text entities in parallel with separate prediction branches for each level. In the post-processing step, they use IoU-based rules to build the hierarchy, i.e., assigning words to lines and lines to paragraphs. Most of the top-ranking solutions belong to this type of method. One submission (from the SCUT-HUAWEI team) adopts a cascade pipeline, first detecting words and then applying LayoutLMv3 [27] to cluster words into lines and paragraphs. The _Hierarchical Transformers for Text Detection_ method develops a unified detector similar to [20] for line detection and paragraph grouping and also a line-to-word detection model that produces bounding boxes for words. Here we briefly introduce the top 2 methods in this task:
**Upstage KR team** ranks 1st in Task 1, achieving an H-PQ metric of 76.85%. It beats the second place by almost 6% in the H-PQ metric. They implemented a two-step approach to hierarchical text detection. First, they performed multi-class semantic segmentation where the classes were word, line, and paragraph regions. Then, they used the predicted probability map to extract and organize these entities hierarchically. Specifically, an ensemble of UNets with ImageNet-pretrained EfficientNetB7 [9] / MiT-B4 [8] backbones was utilized to extract class masks. Connected components were identified in the predicted mask to separate words from each other, and likewise for lines and paragraphs. Then, a word was assigned as a child of a line if that line had the highest IoU with the word compared to all other lines; this process was similarly applied to lines and paragraphs. For training, they eroded target entities and dilated predicted entities, and ensured that target entities maintained a gap between them. They used the symmetric Lovász loss [10] and pre-trained their models on the SynthText dataset [25].
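The IoU-based parent assignment used by these '_multi-head plus postprocessing_' systems can be sketched as follows. This is an illustrative reconstruction, not any team's actual code; `iou_fn` is assumed to be an overlap measure such as the box IoU sketched earlier, and all names are hypothetical.

```python
from typing import Callable, Dict, List, Optional

def assign_children(children: List, parents: List,
                    iou_fn: Callable) -> Dict[int, Optional[int]]:
    """Assign each child region to the parent region with the highest IoU.
    Returns a mapping {child_index: parent_index or None}."""
    assignment: Dict[int, Optional[int]] = {}
    for ci, child in enumerate(children):
        best, best_iou = None, 0.0
        for pi, parent in enumerate(parents):
            score = iou_fn(child, parent)
            if score > best_iou:
                best, best_iou = pi, score
        assignment[ci] = best
    return assignment

# Building the hierarchy: words -> lines -> paragraphs
# word_to_line = assign_children(word_regions, line_regions, iou)
# line_to_paragraph = assign_children(line_regions, paragraph_regions, iou)
```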
**DeepSE X Upstage HK team** ranks 2nd on the leaderboard. They used DBNet [7] as the base scene text detector and leveraged the oCLIP [6] pretrained Swin Transformer-Base [5] model as the backbone to make direct predictions at the three different levels. Following DBNet, they employed Balanced Cross-Entropy for the binary map and L1 loss for the threshold map. The authors further fine-tuned the model with the Lovász loss [10] for finer localization.
### Task 2 Methodology
Task 2, i.e. Word-Level End-to-End Text Detection and Recognition, is a more widely studied task. Recent research [16, 2] focuses on building end-to-end trainable OCR models, as opposed to separately trained detection and recognition models. It is widely believed that end-to-end models enjoy shared feature extraction, which leads to better accuracy. However, the results of our competition say otherwise. The top 2 methods, by the **Upstage KR team** and the **DeepSE End-to-End Text Detection and Recognition Model team**, are both separately trained models. There are two end-to-end submissions; the **unified_model team** applies a deformable-attention-decoder-based text recognizer and ranks 3rd. Here we briefly introduce the top 2 methods in this task:
**Upstage KR team** uses the same task 1 method for detecting words. For word-level text recognition, they use the ParSeq [1] model but replace the visual feature extractor with SwinV2 [4]. The text recognizer is pretrained with synthetic data before fine-tuning it on the HierText dataset. They use an in-house synthetic data generator derived from the open source SynthTiger [26] to generate word images using English and Korean corpus. Notably, they generate 5M English/Korean word images with vertical layout, in addition to 10M English / Korean word images with horizontal layout. For the final submission, they use an ensemble of three text recognizers for strong and stable performance.
**DeepSE End-to-End Text Detection and Recognition Model team** also uses the ParSeq [1] model as their recognizer. They point out that, in order to make the data domain consistent between the training and inference stages, they run their detector on the training data and then crop words using the detected boxes. This step is important in adapting the training domain to the inference domain, and the trick effectively improves their model's performance.
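The domain-alignment trick above amounts to generating the recognizer's training crops from the detector's own boxes rather than from ground-truth boxes. A minimal sketch is given below; the box format, margin, and function name are assumptions for illustration only.

```python
from typing import List, Tuple
from PIL import Image

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)

def crop_detected_words(image_path: str, detected_boxes: List[Box],
                        margin: int = 2) -> List[Image.Image]:
    """Crop word images with the detector's boxes so that the recognizer is
    trained on the same box distribution it will see at inference time."""
    img = Image.open(image_path).convert("RGB")
    width, height = img.size
    crops = []
    for x0, y0, x1, y1 in detected_boxes:
        x0, y0 = max(0, int(x0) - margin), max(0, int(y0) - margin)
        x1, y1 = min(width, int(x1) + margin), min(height, int(y1) + margin)
        if x1 > x0 and y1 > y0:
            crops.append(img.crop((x0, y0, x1, y1)))
    return crops
```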
## 4 Discussion
In the Hierarchical Text Detection task, the original Unified Detector [20] can only achieve PQ scores of 48.21%, 62.23%, 53.60% on the words, lines, and paragraphs respectively. The H-PQ score for Unified Detector is only 54.08%, ranking
at 10th place if put in the competition leaderboard. The winning solution exceeds Unified Detector by more than 20%. These submissions greatly push the envelope of state-of-the-art hierarchical text detection methods. However, current methods are still not satisfactory. As shown in Fig. 6, for all methods, word PQ scores are much higher than line PQ scores, and line PQ scores are again much higher than paragraph PQ scores. This indicates that line- and paragraph-level detection is still more difficult than word detection. Additionally, Fig. 8 shows that layout analysis performance is only marginally correlated with word detection performance, especially when outliers are ignored. We believe there are still hidden challenges and opportunities for improvement in layout analysis. Furthermore, the winning solutions in our competition rely on post-processing, which can be complicated and error-prone; it is also important to improve end-to-end methods.
Task 2 of our challenge is a standard yet unique end-to-end detection and recognition task. While it inherits the basic setting of an end-to-end task, it is based on a diverse set of images with high word density, and it has an unlimited character set. For this task, we see that most of the submissions are two-stage methods, where the detection and recognition models are trained separately and there is no feature sharing. These two-stage methods achieve much better performance than the end-to-end submissions. This contrasts with the trend in research papers that favor end-to-end trainable approaches with feature sharing between the two stages. Therefore, we believe the HierText dataset can be a very useful benchmark in end-to-end OCR research. Another interesting observation for Task 2 is that, while most submissions achieve a tightness score of around 80%, the correlation between tightness scores and F1 scores is very low, with a correlation coefficient of 0.06. This could indicate that recognition is less sensitive to the accuracy of bounding boxes once it surpasses some threshold. It would mean that the mainstream training objective of maximizing bounding box IoU
Figure 8: Correlation between text levels. Each dot is a submission in the Task 1. **Left**: Correlation between word PQ and line PQ. **Right**: Correlation between word PQ and paragraph PQ.
might not be the optimal target. For example, a slightly oversized bounding box is better than a small one that might miss some characters. With that said, a precise bounding box is still useful in itself, as it indicates the quality of localization. Another potential reason is that bounding box annotation is not always accurate: it tends to be oversized because text is not strictly rectangular.
## 5 Conclusion
This paper summarizes the organization and results of the ICDAR 2023 Competition on Hierarchical Text Detection and Recognition. We share details of the competition motivation, dataset collection, competition organization, and result analysis. In total, we received 18 valid and unique competition entries, showing great interest from both the research community and industry. We keep the competition submission site open to promote research in this field. We also plan to extend and improve this competition, for example, by adding multilingual data.
|
2305.18885 | Criteria Tell You More than Ratings: Criteria Preference-Aware Light
Graph Convolution for Effective Multi-Criteria Recommendation | The multi-criteria (MC) recommender system, which leverages MC rating
information in a wide range of e-commerce areas, is ubiquitous nowadays.
Surprisingly, although graph neural networks (GNNs) have been widely applied to
develop various recommender systems due to GNN's high expressive capability in
learning graph representations, it has been still unexplored how to design MC
recommender systems with GNNs. In light of this, we make the first attempt
towards designing a GNN-aided MC recommender system. Specifically, rather than
straightforwardly adopting existing GNN-based recommendation methods, we devise
a novel criteria preference-aware light graph convolution CPA-LGC method, which
is capable of precisely capturing the criteria preference of users as well as
the collaborative signal in complex high-order connectivities. To this end, we
first construct an MC expansion graph that transforms user--item MC ratings
into an expanded bipartite graph to potentially learn from the collaborative
signal in MC ratings. Next, to strengthen the capability of criteria preference
awareness, CPA-LGC incorporates newly characterized embeddings, including
user-specific criteria-preference embeddings and item-specific criterion
embeddings, into our graph convolution model. Through comprehensive evaluations
using four real-world datasets, we demonstrate (a) the superiority over
benchmark MC recommendation methods and benchmark recommendation methods using
GNNs with tremendous gains, (b) the effectiveness of core components in
CPA-LGC, and (c) the computational efficiency. | Jin-Duk Park, Siqing Li, Xin Cao, Won-Yong Shin | 2023-05-30T09:27:36Z | http://arxiv.org/abs/2305.18885v4 | # Criteria Tell You More than Ratings:
###### Abstract.
The multi-criteria (MC) recommender system, which leverages MC rating information in a wide range of e-commerce areas, is ubiquitous nowadays. Surprisingly, although graph neural networks (GNNs) have been widely applied to develop various recommender systems due to GNN's high expressive capability in learning graph representations, it has been still unexplored how to design MC recommender systems with GNNs. In light of this, we make the first attempt towards designing a GNN-aided MC recommender system. Specifically, rather than straightforwardly adopting existing GNN-based recommendation methods, we devise a novel _criteria preference-aware_ light graph convolution (CPA-LGC) method, which is capable of precisely capturing the criteria preference of users as well as the collaborative signal in complex high-order connectivities. To this end, we first construct an _MC expansion graph_ that transforms user-item MC ratings into an expanded bipartite graph to potentially learn from the collaborative signal in MC ratings. Next, to strengthen the capability of criteria preference awareness, CPA-LGC incorporates newly characterized embeddings, including _user-specific criteria-preference embeddings_ and _item-specific criterion embeddings_, into our graph convolution model. Through comprehensive evaluations using four real-world datasets, we demonstrate (a) the superiority over benchmark MC recommendation methods and benchmark recommendation methods using GNNs with tremendous gains, (b) the effectiveness of core components in CPA-LGC, and (c) the computational efficiency.
Collaborative signal; criteria preference; graph neural network; light graph convolution; multi-criteria recommender system.
In this context, even with a number of studies on CF-based MC recommendation (Han et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019), a natural question arising is: "Is it beneficial to take advantage of GNNs for solving the MC recommendation problem in terms of both _effectiveness_ and _efficiency_?"
To answer this question, we make the first attempt towards designing a _lightweight_ GNN-aided MC recommender system. Rather than straightforwardly adopting existing GNN-based recommendation methods for MC recommendation, we devise _our own_ methodology, built upon new design principles and comprehensive empirical findings. To this end, we outline two design challenges that must be addressed when building a new GNN-based MC recommendation method:
* **Graph construction:** which graph type should be taken into account to explore the collaborative signal in MC ratings;
* **Criteria preference awareness:** how to maximally grasp the criteria preference of users through graph convolution.

**(Idea 1)** In designing recommender systems using only single ratings, it is natural to construct a bipartite graph by establishing edges based on user-item interactions as in (Han et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019). On the other hand, in MC recommender systems, rather than using one bipartite graph, the graph construction step accompanies non-straightforward design choices along with MC ratings. To harness the expressive capability of GNNs in learning representations, GNNs should be able to leverage the complex behavioral similarity in the high-order connectivities _across_ MC ratings. Modeling MC ratings as a multi-graph with multi-relations can be one possible option. However, GNNs designed for such heterogeneous graphs mostly require excessive computational costs or handcrafted metapaths (Han et al., 2017; Wang et al., 2019; Wang et al., 2019), which conflict with our design principle since we aim to build a lightweight model with few learnable parameters. As an alternative, we present an _MC expansion graph_ that transforms MC ratings into an expanded bipartite graph to potentially learn from the collaborative signal in MC ratings. Concretely, in the constructed graph, each item is expanded into different _criterion-item_ nodes. Figure 1(b) illustrates the MC expansion graph constructed by creating edges, corresponding to high ratings, i.e., the rating scores of 3-5, between a user node and _criterion-item_ nodes. Our graph construction enables multi-layer GNNs to effectively capture complex contextual semantics existing among MC ratings.
**(Idea 2)** We are interested in designing a GNN model that is capable of making full use of MC rating information based on the constructed MC expansion graph. Meanwhile, users tend to make decisions according to their preferences _w.r.t._ one or multiple aspects (criteria) of items (Wang et al., 2018; Wang et al., 2019). For example, in a hotel recommender system, some users may prefer a hotel based on its cleanliness while others may like the same hotel for its price, check-in service, or any other combinations of the distinct attributes of that hotel. In light of this, it is of paramount importance to be aware of the criteria preference of each user when we learn representations through graph convolution. As one of our main contributions, we propose a novel GNN architecture, _criteria preference-aware_ light graph convolution (CPA-LGC), which is capable of precisely capturing the criteria preference of users as well as the collaborative signal in complex high-order connectivities on the MC expansion graph at a fine-grained level. To reinforce the capability of criteria preference awareness, we newly characterize two embeddings, including _user-specific criteria-preference (UCP) embeddings and item-specific criteria (IC) embeddings_, and incorporate them into the graph convolution model. Then, CPA-LGC predicts the user preference by discovering the final representations of user nodes and criterion-item nodes that accommodate the two newly characterized embeddings.
To validate the effectiveness of CPA-LGC, we comprehensively conduct empirical evaluations using large-scale benchmark datasets (_e.g._, 23.0\(\times\), 5.5\(\times\), and 12.8\(\times\) the scale of the datasets used in Wang et al. (2019), Wang et al. (2019), and Wang et al. (2019) in terms of the number of overall ratings, respectively). Most importantly, experimental results demonstrate that our method significantly and consistently outperforms the best MC recommendation competitor and the best GNN-based recommendation competitor by up to 141.20% and 58.66%, respectively, in terms of the precision.
Our main contributions are summarized as follows:
* **Novel methodology:** We propose an MC recommendation method using a novel GNN architecture, named as CPA-LGC, that deliberately captures 1) the collaborative signal in complex high-order connectivities from constructing our MC expansion graph and 2) the criteria preference of users from accommodating two new embeddings (_i.e._, UCP embeddings and IC embeddings).
* **Analysis and evaluation:** We validate the rationality and effectiveness of CPA-LGC through extensive experiments on four real-world datasets. We demonstrate (a) the superiority over eleven state-of-the-art recommendation methods by a significant margin, (b) the impact of key hyperparameters, (c) the influence of each component in CPA-LGC, (d) the degree of over-smoothing alleviation, and (e) the computational efficiency with linear complexity in the number of ratings.
## 2. Problem Definition
In this section, we formally define the top-\(K\) MC recommendation, along with basic notations. Let \(u\in\mathcal{U}\) and \(i\in\mathcal{I}\) denote a user and an item, respectively, where \(\mathcal{U}\) and \(\mathcal{I}\) denote the sets of all users and all items, respectively. \(N_{u}\subset\mathcal{I}\) denotes a set of items interacted by user \(u\). Then, the top-\(K\) MC recommendation problem is defined as follows:
**Definition 1: (Top-\(K\) MC recommendation)** Given \(u\in\mathcal{U}\) and \(i\in\mathcal{I}\), and \(C+1\) user-item ratings \(\mathcal{R}_{0}\times\mathcal{R}_{1}\times\ldots\times\mathcal{R}_{C}\) including an overall rating \(\mathcal{R}_{0}\), the top-\(K\) MC recommendation aims to recommend top-\(K\) items that user \(u\in\mathcal{U}\) is most likely to prefer among his/her non-interacted items in \(\mathcal{I}\setminus N_{u}\)_w.r.t._ the _overall_ rating by using all \(C+1\) user-item MC ratings.
Figure 1. An illustration showing (a) four criteria ratings in a hotel domain, and (b) the corresponding MC expansion graph.
## 3. Proposed Method: CPA-LGC
In this section, we describe our methodology, which includes how to construct an MC expansion graph and how to learn criteria preference awareness alongside our proposed CPA-LGC method. Then, we present the optimization in CPA-LGC. Moreover, we provide the model analysis including the computational complexity of CPA-LGC and the relationship with R-GCN.
### Graph Construction
**Construction of an MC expansion graph.** A naive graph construction approach using MC rating information would be to construct \(C+1\) separate bipartite graphs based on MC ratings including overall ratings. However, in this case, complex contextual semantics existing among multiple user-item interactions cannot be captured via multi-layer GNNs. Figure 2(a) illustrates a rating instance of hotel \(i_{1}\) with three criteria ratings on the \(1\)-\(5\) rating scale, where two users \(u_{1}\) and \(u_{2}\) reveal a rather _complex_ behavioral similarity in that they express the same opinions in terms of the cleanliness, but not in terms of the price. Figure 2(b) visualizes three separate bipartite graphs, each of which is constructed by creating some edges, corresponding to high ratings (_i.e._, the rating scores of \(3\)-\(5\)), between users and items for each criterion. However, this naive graph construction fails to capture the complex behavioral similarity in the high-order connectivities _across_ MC ratings when multi-layer GNNs are employed independently on each graph. For example, the rating information of user \(u_{2}\) _w.r.t._ the price cannot be propagated to the other graphs where overall and cleanliness ratings are concerned, which limits the high expressive capability of GNNs for acquiring richer representations.
To overcome this inherent limitation, we design a new bipartite graph, namely an _MC expansion graph_, in which each item is expanded into \(C+1\) different _criterion-item_ nodes, as illustrated in Figure 2(c). If a user provided a high rating or had a positive interaction for a particular criterion, then an edge between the corresponding user and criterion-item nodes is created. Formally, given a set of criterion-item nodes _positively_ rated by user \(u\) _w.r.t._ criterion \(c\), denoted as \(\mathcal{N}^{c}_{u}\), the resulting MC expansion graph is denoted as \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}\) and \(\mathcal{E}=\{(u,i^{c})|u\in\mathcal{U},i^{c}\in\mathcal{N}^{c}_{u},c=0,1,\cdots,C\}\) are the sets of nodes and edges in the graph, respectively. In our setting, the union of all criterion-item node sets positively rated by user \(u\) is denoted as \(\mathcal{N}_{u}=\bigcup_{c=0}^{C}\mathcal{N}^{c}_{u}\). Note that the graph \(\mathcal{G}\) can be modeled as a _weighted_ graph so that MC rating information is leveraged more precisely. Our MC expansion graph construction enables us to exploit complex _high-order proximity_ among user nodes and criterion-item nodes with the aid of GNNs. In other words, by feeding the MC expansion graph into GNNs, it is possible to effectively capture the collaborative signal in complex high-order connectivities (_i.e._, complex contextual semantics existing among multiple user-item interactions). Figure 2(c) visualizes our MC expansion graph in which edges corresponding to the rating scores of \(3\)-\(5\) are created. If a \(3\)-layer GNN is applied to the graph, then we are capable of generating user/criterion-item representations that reflect high-order connectivity information. As an example, information about a complex behavioral similarity, such as "for hotel \(i_{1}\), two users \(u_{1}\) and \(u_{2}\) have the same preference for cleanliness, but different preferences for price", can be incorporated into the vector representation of target user \(u_{1}\) through graph convolution.
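To make the construction concrete, the following is a minimal sketch of building the (weighted) user-by-criterion-item interaction matrix underlying the MC expansion graph. The tuple format of the ratings, the positive-interaction threshold, and the emphasis weight on overall-rating edges are illustrative assumptions rather than the authors' exact implementation.

```python
import scipy.sparse as sp

def build_mc_expansion_graph(ratings, num_users, num_items, num_criteria,
                             threshold=3.0, alpha=1.5):
    """ratings: iterable of (user, item, criterion, score), where criterion 0
    denotes the overall rating. Each item i is expanded into `num_criteria`
    criterion-item nodes indexed as c * num_items + i. An edge is created only
    for positive interactions (score >= threshold); edges to criterion-0 nodes
    get weight `alpha`, all other edges get weight 1."""
    rows, cols, vals = [], [], []
    for user, item, criterion, score in ratings:
        if score >= threshold:
            rows.append(user)
            cols.append(criterion * num_items + item)
            vals.append(alpha if criterion == 0 else 1.0)
    shape = (num_users, num_criteria * num_items)
    return sp.coo_matrix((vals, (rows, cols)), shape=shape).tocsr()
```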
**Over-smoothing effect in the MC expansion graph**. While stacking multiple layers in GNNs is beneficial in capturing the high-order structural information, it may lead to the problem of over-smoothing where node representations converge to a certain value and thus become less distinguishable (Han et al., 2017; Wang et al., 2018). This holds particularly strong validity for nodes of a higher degree (Han et al., 2017; Wang et al., 2018), which exhibit a higher convergence rate _w.r.t._ the number of layers in GNNs (Han et al., 2017). In the MC expansion graph, the over-smoothing effect may be intensified due to an increased number of item neighbors for each user. To alleviate this problem, we design an additional module to be added to each GNN layer, which will be specified in the following subsection.
### Criteria Preference-Aware Architecture
In this subsection, we elaborate on the four key components of CPA-LGC. The schematic overview of CPA-LGC is illustrated in Figure 3.
#### 3.2.1. Layer-wise Over-smoothing Alleviation
As stated in Section 3.1, our MC expansion graph construction may potentially intensify over-smoothing in GNNs. To solve this issue, we employ a _layer-wise over-smoothing alleviation_ strategy. In our study, we adopt PairNorm (Wang et al., 2018) as a simple normalization technique such that all pairwise distances between node representations remain unchanged across layers. PairNorm is composed of centering and rescaling steps for each node \(v\in\mathcal{V}\) in \(\mathcal{G}\) and is expressed as follows:
\[\begin{split}\mathbf{m}^{(l)}_{v}&=\mathbf{e}^{(l)}_{v}-\frac{1}{|\mathcal{V}|}\sum_{i=1}^{|\mathcal{V}|}\mathbf{e}^{(l)}_{i}\\ \hat{\mathbf{e}}^{(l)}_{v}&=s\sqrt{|\mathcal{V}|}\frac{\mathbf{m}^{(l)}_{v}}{\sqrt{\|\mathbf{E}^{(l)}\|^{2}_{F}}},\end{split} \tag{1}\]
where \(\mathbf{e}^{(l)}_{v}\) is the representation of node \(v\) after the \(l\)-th layer propagation, which will be specified later; \(\mathbf{m}^{(l)}_{v}\) is the centered representation of node \(v\) after the \(l\)-th layer propagation; \(\hat{\mathbf{e}}^{(l)}_{v}\) is the output of the PairNorm operation \(f(\cdot)\), that is, \(\hat{\mathbf{e}}^{(l)}_{v}=f(\mathbf{e}^{(l)}_{v})\); \(\mathbf{E}^{(l)}\in\mathbb{R}^{|\mathcal{V}|\times d}\) is the node representation matrix after the \(l\)-th layer propagation given the \(d\)-dimensional latent representation vector of each node;
Figure 2. An example illustrating (a) a rating instance with three criteria ratings, (b) three graphs, each of which is constructed by ratings per criterion, and (c) our MC expansion graph. In (b) and (c), edges corresponding to the rating scores of \(3\)–\(5\) are created and newly-involved nodes in each GNN layer for target user \(u_{1}\) are marked with different colors.
\(\|\cdot\|_{F}\) is the Frobenius norm of a matrix; and \(s\) is the scaling hyperparameter that controls the total pairwise squared distance between node representations. In CPA-LGC, we use \(f(\cdot)\) after each GNN layer (see Figure 3), so as to prevent the over-smoothing that may be potentially intensified by the increased degree of each node in the MC expansion graph.
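A minimal PyTorch sketch of this layer-wise normalization is shown below. It follows the original PairNorm formulation, in which the rescaling uses the Frobenius norm of the centered representation matrix; the names are illustrative and the snippet is not the authors' implementation.

```python
import torch

def pair_norm(E: torch.Tensor, s: float = 1.0) -> torch.Tensor:
    """PairNorm f(.) over a node-representation matrix E of shape (|V|, d):
    center every representation, then rescale so that the total pairwise
    squared distance is controlled by the scaling hyperparameter s."""
    n = E.size(0)
    M = E - E.mean(dim=0, keepdim=True)                   # centering
    scale = s * (n ** 0.5) / M.norm(p="fro").clamp_min(1e-12)
    return scale * M                                       # rescaling
```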
#### 3.2.2. LGC for User/Criterion-Item Embeddings
It is known that _lightweight_ GCN-based models (see (Beng et al., 2017; Wang et al., 2018; Wang et al., 2018) and references therein), which simplify GCNs by removing feature transformations and/or nonlinear activations, are quite effective in achieving state-of-the-art performance for single-rating recommender systems. Since utilizing MC ratings inherently incurs higher computational overhead than the case of single ratings, it is also vital to adopt a lightweight model for designing MC recommender systems with GNNs while guaranteeing satisfactory performance. In light of this, we build a simple yet effective layer-wise LGC operation on the MC expansion graph, which is formulated as:
\[\begin{split}\mathbf{e}_{u}^{(l)}&=\sum_{i^{c}\in\mathcal{N}_{u}}\frac{w_{u,i^{c}}}{\sqrt{\sum_{i^{c}\in\mathcal{N}_{u}}w_{u,i^{c}}}\sqrt{\sum_{u^{\prime}\in\mathcal{N}_{i^{c}}}w_{u^{\prime},i^{c}}}}\mathbf{e}_{i^{c}}^{(l-1)}\\ \mathbf{e}_{i^{c}}^{(l)}&=\sum_{u\in\mathcal{N}_{i^{c}}}\frac{w_{u,i^{c}}}{\sqrt{\sum_{u\in\mathcal{N}_{i^{c}}}w_{u,i^{c}}}\sqrt{\sum_{i^{c}\in\mathcal{N}_{u}}w_{u,i^{c}}}}\mathbf{e}_{u}^{(l-1)},\end{split} \tag{2}\]
where \(\mathbf{e}_{u}^{(l)}\) and \(\mathbf{e}_{i^{c}}^{(l)}\) indicate the representations of user node \(u\) and criterion-item node \(i^{c}\), respectively, after the \(l\)-th layer propagation; \(w_{u,i^{c}}\) is the weight of the edge between \(u\) and \(i^{c}\); and the denominator is the symmetric normalization term. Note that \(\hat{\mathbf{e}}_{u}^{(l)}=f(\mathbf{e}_{u}^{(l)})\) and \(\hat{\mathbf{e}}_{i^{c}}^{(l)}=f(\mathbf{e}_{i^{c}}^{(l)})\), and that \(\mathbf{e}_{u}^{(0)}\) and \(\mathbf{e}_{i^{c}}^{(0)}\) are the ID embeddings of user node \(u\) and criterion-item node \(i^{c}\), respectively.
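The propagation in Eq. (2) is just a weighted, symmetrically normalized neighborhood aggregation and can be sketched compactly. The snippet below uses a dense adjacency matrix for clarity; a practical implementation would use sparse tensors, and all names are illustrative.

```python
import torch

def lgc_layer(adj: torch.Tensor, e_user: torch.Tensor, e_item: torch.Tensor):
    """One light-graph-convolution step on the MC expansion graph (Eq. (2)).
    adj: (num_users, num_criterion_items) matrix of edge weights w_{u,i^c}
    (zero where no edge exists); e_user / e_item: layer-(l-1) embeddings."""
    deg_u = adj.sum(dim=1).clamp_min(1e-12)   # sum of w over criterion-item neighbors
    deg_i = adj.sum(dim=0).clamp_min(1e-12)   # sum of w over user neighbors
    norm_adj = adj / (deg_u.sqrt().unsqueeze(1) * deg_i.sqrt().unsqueeze(0))
    e_user_next = norm_adj @ e_item           # aggregate criterion-item neighbors
    e_item_next = norm_adj.t() @ e_user       # aggregate user neighbors
    return e_user_next, e_item_next
```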
#### 3.2.3. LGC for UCP/IC Embeddings
As a core component of CPA-LGC, to precisely capture the criteria preference of users, we newly characterize two types of embeddings, _UCP embeddings_ and _IC embeddings_, and incorporate them into our graph convolution model. To generate these newly characterized representations, we formulate the layer-wise LGC operation on the MC expansion graph as follows:
\[\begin{split}\mathbf{p}_{u}^{(l)}&=\sum_{i^{c}\in\mathcal{N}_{u}}\frac{w_{u,i^{c}}}{\sqrt{\sum_{i^{c}\in\mathcal{N}_{u}}w_{u,i^{c}}}\sqrt{\sum_{u^{\prime}\in\mathcal{N}_{i^{c}}}w_{u^{\prime},i^{c}}}}\mathbf{p}_{i^{c}}^{(l-1)}\\ \mathbf{p}_{i^{c}}^{(l)}&=\sum_{u\in\mathcal{N}_{i^{c}}}\frac{w_{u,i^{c}}}{\sqrt{\sum_{u\in\mathcal{N}_{i^{c}}}w_{u,i^{c}}}\sqrt{\sum_{i^{c}\in\mathcal{N}_{u}}w_{u,i^{c}}}}\mathbf{p}_{u}^{(l-1)},\end{split} \tag{3}\]
where \(\mathbf{p}_{u}^{(l)}\) and \(\mathbf{p}_{i^{c}}^{(l)}\) are the UCP embedding of user node \(u\) and the IC embedding of criterion-item node \(i^{c}\), respectively, after the \(l\)-th layer propagation (see the right part of Figure 3); the denominator in Eq. (3) is the symmetric normalization term; and \(\hat{\mathbf{p}}_{u}^{(l)}=f(\mathbf{p}_{u}^{(l)})\) and \(\hat{\mathbf{p}}_{i^{c}}^{(l)}=f(\mathbf{p}_{i^{c}}^{(l)})\). For efficient memory management, we set the initial IC embeddings \(\mathbf{p}_{i^{c}}^{(0)}\) belonging to the same criterion \(c\) to be the same, generating \(C+1\) different initial embeddings that act as distinct labels without being clustered with each other. As depicted in Figure 3, we utilize the _stop-gradient_ operation in the feed-forward process of the IC embeddings \(\mathbf{p}_{i^{c}}^{(l)}\) to prevent back-propagation of gradients and unnecessarily excessive computation.
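A minimal sketch of this UCP/IC propagation is given below, reusing the normalized adjacency from the previous sketch. Detaching the IC embeddings before they are propagated is one straightforward reading of the stop-gradient described above; the snippet is illustrative, not the authors' implementation.

```python
import torch

def cpa_layer(norm_adj: torch.Tensor, p_user: torch.Tensor, p_item: torch.Tensor):
    """One propagation step for UCP / IC embeddings (Eq. (3)).
    norm_adj: the symmetrically normalized, weighted adjacency used in Eq. (2);
    p_user / p_item: layer-(l-1) UCP and IC embeddings."""
    p_user_next = norm_adj @ p_item.detach()   # stop-gradient on IC embeddings
    p_item_next = norm_adj.t() @ p_user
    return p_user_next, p_item_next
```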
Now, let us explain the interplay between the two embeddings \(\mathbf{p}_{u}^{(l)}\) and \(\mathbf{p}_{i^{c}}^{(l)}\) via graph convolution. Due to the fact that stacking multiple layers in GNNs results in an increased similarity of representations among connected nodes (Wang et al., 2018), a user node \(u\) connected to a number of different criterion-item nodes belonging to the _same_ criterion \(c\) will have its UCP embedding \(\mathbf{p}_{u}^{(l)}\) co-located with the corresponding IC embeddings in the embedding space. Figure 4 illustrates a motivating example where two users \(u_{1}\) and \(u_{2}\) in the given graph are connected to several criterion-item nodes for criterion 1 and criterion 3, respectively; through \(L\)-layer graph convolution, the UCP embeddings \(\mathbf{p}_{u_{1}}^{(L)}\) and \(\mathbf{p}_{u_{2}}^{(L)}\) are more closely located to the IC embeddings whose related criterion is 1 and 3,
Figure 4. A motivating example describing how user criteria preference can be captured via graph convolution. Here, the different IC embeddings are described with different colors and patterns.
Figure 3. The schematic overview of CPA-LGC.
respectively.2 By harnessing the UCP embeddings and IC embeddings as well as user/criterion-item embeddings in the prediction stage, we are capable of achieving higher accuracy of MC recommendation, which will be verified in Section 4.2.
Footnote 2: In general, IC embeddings \(\mathbf{p}_{i^{c}}^{(l)}\) are not necessarily the same for different \(i^{c}\)'s belonging to the same criterion \(c\).
#### 3.2.4. Layer Combination and Prediction
The layer combination operation is known to be effective in the sense of capturing different semantics for each layer and alleviating the potential over-smoothing problem (Kang et al., 2016; Wang et al., 2017; Wang et al., 2018). Thus, CPA-LGC leverages the layer combination (_i.e._, layer aggregation) to obtain the combined embeddings, while setting the importance of each layer-wise representation uniformly since such a setting leads to good performance in general (Kang et al., 2016). The combined representations after \(L\)-layer propagation are expressed as
\[\mathbf{e}_{u}^{*}=\frac{1}{L}\sum_{l=0}^{L}\hat{\mathbf{e}}_{u}^{(l)};\quad\mathbf{e}_{i^{c}}^{*}=\frac{1}{L}\sum_{l=0}^{L}\hat{\mathbf{e}}_{i^{c}}^{(l)};\quad\mathbf{p}_{u}^{*}=\frac{1}{L}\sum_{l=0}^{L}\hat{\mathbf{p}}_{u}^{(l)};\quad\mathbf{p}_{i^{c}}^{*}=\frac{1}{L}\sum_{l=0}^{L}\hat{\mathbf{p}}_{i^{c}}^{(l)}, \tag{4}\]
where \(\mathbf{e}_{u}^{*}\) and \(\mathbf{e}_{i^{c}}^{*}\) are the combined embeddings of user node \(u\) and criterion-item node \(i^{c}\), respectively; \(\mathbf{p}_{u}^{*}\) and \(\mathbf{p}_{i^{c}}^{*}\) are the combined UCP embedding of user node \(u\) and the combined IC embedding of criterion-item node \(i^{c}\), respectively.
Next, CPA-LGC predicts user \(u\)'s preference for target criterion-item node \(i^{c}\). To this end, we first form the final representations of a user and a criterion-item node as \(\hat{\mathbf{e}}_{u}^{*}+\hat{\mathbf{p}}_{u}^{*}\) and \(\hat{\mathbf{e}}_{i^{c}}^{*}+\hat{\mathbf{p}}_{i^{c}}^{*}\), respectively, where \(\hat{\mathbf{e}}_{u}^{*}=f(\mathbf{e}_{u}^{*})\), \(\hat{\mathbf{p}}_{u}^{*}=f(\mathbf{p}_{u}^{*})\), \(\hat{\mathbf{e}}_{i^{c}}^{*}=f(\mathbf{e}_{i^{c}}^{*})\), and \(\hat{\mathbf{p}}_{i^{c}}^{*}=f(\mathbf{p}_{i^{c}}^{*})\). Then, we compute the matching score \(\hat{y}_{u,i^{c}}\) between the final embedding of user \(u\) and the final embedding of criterion-item node \(i^{c}\) via the dot product as follows:
\[\hat{y}_{u,i^{c}}=\left(\hat{\mathbf{e}}_{u}^{*}+\hat{\mathbf{p}}_{u}^{*}\right)^{\top}\left(\hat{\mathbf{e}}_{i^{c}}^{*}+\hat{\mathbf{p}}_{i^{c}}^{*}\right). \tag{5}\]
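Putting Eqs. (4)-(5) together, scoring all user / criterion-item pairs is a layer average followed by a dot product. The sketch below assumes that each `*_layers` list already holds the PairNorm-ed per-layer embeddings and that `pair_norm_fn` is a function such as the PairNorm sketch above; it is an illustrative outline rather than the authors' code.

```python
import torch

def predict_scores(e_user_layers, e_item_layers, p_user_layers, p_item_layers,
                   pair_norm_fn):
    """Combine per-layer embeddings with uniform weights (Eq. (4)) and score
    every (user, criterion-item) pair with the dot product of Eq. (5).
    Each *_layers argument is a list of L+1 tensors, one per layer."""
    def combine(layers):
        L = max(len(layers) - 1, 1)
        return sum(layers) / L
    e_u, e_i = combine(e_user_layers), combine(e_item_layers)
    p_u, p_i = combine(p_user_layers), combine(p_item_layers)
    final_u = pair_norm_fn(e_u) + pair_norm_fn(p_u)   # \hat{e}_u^* + \hat{p}_u^*
    final_i = pair_norm_fn(e_i) + pair_norm_fn(p_i)   # \hat{e}_{i^c}^* + \hat{p}_{i^c}^*
    return final_u @ final_i.t()                      # scores \hat{y}[u, i^c]
```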
### Experimental Settings
**Datasets.** We conduct experiments on four real-world datasets, which are widely used in studies on MC recommendation (Han et al., 2017; Chen et al., 2018; Chen et al., 2019; Chen et al., 2020; Li et al., 2021): TripAdvisor (TA), Yahoo!Movie (YM), RateBeer (RB), and Yelp-2022 (YP). Here, Yahoo!Movie was obtained upon request from the authors of (Li et al., 2021), and the other three datasets are publicly available. It is noteworthy that we use relatively large-scale datasets in comparison to prior studies (Han et al., 2017; Chen et al., 2019; Chen et al., 2020; Li et al., 2021; Li et al., 2021). To ensure data quality, we keep only users and items having at least five interactions. Table 1 summarizes some statistics of the four datasets. We provide further details of the datasets in Appendix B.1.
**Competitors.** To comprehensively demonstrate the superiority of CPA-LGC, we compare against five MC recommendation methods (ExtendedSAE (Zhou et al., 2019), UBM (Wang et al., 2019), DMCF (Wang et al., 2019), AEMC (Zhou et al., 2019), and CFM (Han et al., 2017)) and six GNN-based recommendation methods (GC-MC (Chen et al., 2019), SpectralCF (Wang et al., 2019), NGCF (Wang et al., 2019), DGCF (Zhou et al., 2019), LightGCN (Chen et al., 2019), and LightGCN\({}_{\text{MC}}\)).
Specifically, for the five GNN-based recommendation methods except for LightGCN\({}_{\text{MC}}\), we use only overall ratings since those five were originally designed by leveraging single ratings. In our study, we additionally present a variant of LightGCN, dubbed LightGCN\({}_{\text{MC}}\), which applies LightGCN (Chen et al., 2019) to each of \(C\)+1 bipartite graphs constructed by the MC ratings and then concatenates the output representations of user/item nodes for the final prediction. The schematic overview of LightGCN\({}_{\text{MC}}\) is visualized in Figure 10 of Appendix B.2.
**Evaluation protocols.** We randomly select 70% of the interactions of each user for the training set, another 10% for the validation set, and the remaining 20% for the test set. To evaluate the accuracy of top-\(K\) MC recommendation, we use the precision, recall, and normalized discounted cumulative gain (NDCG) as performance metrics, where \(K\) is set to 5 and 10 by default. In the inference phase, we view user-item interactions in terms of the _overall_ rating in the test set as positive and evaluate how well each method can rank the items in the test set higher than all unobserved items. We report the average over 10 independent evaluations for each measure.
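For reference, the per-user ranking metrics used above can be computed with a few lines of Python; the protocol then averages these values over users and over the independent runs. The function names and the exact handling of edge cases are illustrative assumptions.

```python
import numpy as np

def ndcg_at_k(ranked_items, relevant_items, k=10):
    """NDCG@K for one user: ranked_items is the model's ranking over candidate
    items; relevant_items is the set of positive test items (overall rating)."""
    gains = [1.0 if item in relevant_items else 0.0 for item in ranked_items[:k]]
    dcg = sum(g / np.log2(rank + 2) for rank, g in enumerate(gains))
    idcg = sum(1.0 / np.log2(rank + 2)
               for rank in range(min(len(relevant_items), k)))
    return dcg / idcg if idcg > 0 else 0.0

def precision_recall_at_k(ranked_items, relevant_items, k=10):
    """Precision@K and Recall@K for one user."""
    hits = sum(1 for item in ranked_items[:k] if item in relevant_items)
    precision = hits / k
    recall = hits / len(relevant_items) if relevant_items else 0.0
    return precision, recall
```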
**Implementation details.** Unless otherwise stated, we set the dimensionality of the embedding, \(d\), to 64 for all models as in (Chen et al., 2019; Wang et al., 2019). The model parameters including UCP and IC embeddings in CPA-LGC are initialized with the Xavier method (Chen et al., 2019). We use the Adam optimizer (Kingma and Ba, 2014), where the mini-batch size is set to 1024. We use the best hyperparameters of competitors and CPA-LGC obtained by extensive grid search on the validation set in the following ranges: \(\{1e^{-4},5e^{-4},1e^{-3},5e^{-3},1e^{-2}\}\) for the learning rate; \(\{1e^{-5},1e^{-4},1e^{-3},1e^{-2}\}\) for the regularization strength \(\lambda\); and \(\{1,2,3,4,5\}\) for the number of GNN layers, \(L\), in the six GNN-based competitors and CPA-LGC. In consequence, we set the hyperparameters as follows: learning rate \(=1e^{-3}\); \(\lambda=1e^{-3}\); and \(L=1\) for YM and \(L=3\) for the other datasets. In CPA-LGC, to accentuate the importance of overall ratings, the edge weight associated with connections to criterion-item nodes for criterion 0 (_i.e._, \(w_{u,i^{0}}\) in Eqs. (2) and (3)) is set as \(\alpha\) while the edge weight for other existing edges is set as 1. The value of \(\alpha\) is searched in the range \(\{0.5,1,1.5,2,2.5\}\), and we set \(\alpha=1.5\) for all the datasets unless otherwise specified. In PairNorm, the scaling parameter \(s\) in Eq. (1) is set to 1. Additionally, we exclude \(f(\cdot)\) for the YP dataset as YP reveals the smallest \(\gamma\) (see Table 1), which represents the ratio of the number of MC ratings to the number of overall ratings, and a smaller value of \(\gamma\) would lead to a lower degree of over-smoothing. We implemented CPA-LGC based on Recbole (Zhou et al., 2019), an open-sourced recommendation library. All experiments are carried out with an Intel(R) Core(TM) i7-9700K CPU @ 3.60 GHz and an NVIDIA GeForce RTX 3080 GPU. Our source code is available at [https://github.com/jindeok/CPA-LGC-Recbole](https://github.com/jindeok/CPA-LGC-Recbole).
### Results and Analysis
In RQ1-RQ4, we provide experimental results on all datasets. For RQ5, we show here only the results on TA due to space limitations, since the results on other datasets showed similar tendencies to those on TA. We evaluate the performance in terms of the NDCG@10 in RQ3-RQ4. Additionally, we highlight the best and the second-best methods in each column of the following tables in bold and underline, respectively.
**RQ1: Comparison with five MC recommendation competitors.** We validate the superiority of CPA-LGC over five MC recommendation competitors through extensive experiments on the four datasets. Table 2 shows the results of all MC recommendation competitors and CPA-LGC. Our findings are as follows:
1. As expected, but to a surprising degree, CPA-LGC _significantly_ and _consistently_ outperforms all MC recommendation competitors on all datasets regardless of the metrics. Specifically, on TA, YM, RB, and YP, CPA-LGC outperforms the best competitors by large margins of up to 104.09%, 49.93%, 136.37%, and 100.00% in terms of Precision@5, respectively;
2. Unlike two-stage approaches (UBM, DMCF, and AEMC) that predict MC ratings excluding overall ratings and then integrate them to infer overall ratings, CFM is a collective matrix factorization method, which robustly shows better results. It means that jointly predicting the overall rating and other MC ratings via CF can be effective for the top-\(K\) MC recommendation;
3. Deep neural network-based methods (ExtendedSAE, DMCF, and AEMC) show satisfactory performance in some cases. Specifically, ExtendedSAE exhibits superb results among the competitors on YM, which is the smallest dataset in terms of the numbers of users and items. However, ExtendedSAE faces an out-of-memory (OOM) problem in the largest dataset, YP, due to its high input/output dimension calculated as the product of the numbers of users and items in the dataset, causing a significant space complexity;
4. Since the competitors do not use GNNs, they result in far inferior performance compared to CPA-LGC due to their
| **Dataset** | **# of users** | **# of items** | **# of overall ratings** | **# of MC ratings** | \(C\) | \(\gamma\) |
|---|---|---|---|---|---|---|
| **TA** | 4,265 | 6,275 | 34,383 | 202,859 | 7 | 5.9 |
| **YM** | 1,821 | 1,472 | 46,176 | 175,468 | 4 | 3.8 |
| **RB** | 4,017 | 3,422 | 159,755 | 607,067 | 4 | 3.8 |
| **YP** | 58,971 | 19,820 | 445,724 | 1,408,487 | 3 | 3.1 |

Table 1. Statistics of the four datasets used in our experiments. Here, \(\gamma\) denotes the ratio of the number of MC ratings to the number of overall ratings.
inability to explicitly reflect the complex high-order connectivity in the embedding learning process, which could lead to suboptimal representations [40].
**RQ2: Comparison with six GNN-based recommendation competitors.** We validate the superiority of CPA-LGC over six GNN-based recommendation competitors. Specifically, since there is no prior work exploring GNN-based MC recommendation, we use single ratings (_i.e._, overall ratings) for the existing five GNN-based recommendation methods (GC-MC, SpectralCF, NGCF, DGCF, and LightGCN), and we further implement a variant of LightGCN (LightGCN\({}_{\text{MC}}\)), which is designed for leveraging the MC ratings. Table 3 shows the results of all GNN-based recommendation competitors and CPA-LGC. Our findings are as follows:
1. Most importantly, our CPA-LGC also _significantly_ and _consistently_ outperforms other competing GNN-based methods on the four datasets, regardless of the metrics. Specifically,
\begin{table}
\begin{tabular}{c l l l l l l l l} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Metric} & \multicolumn{3}{c}{TA} & \multicolumn{3}{c}{YM} & \multicolumn{3}{c}{RB} & \multicolumn{3}{c}{YP} \\ & & \(K=5\) & \(K=10\) & \(K=5\) & \(K=10\) & \(K=5\) & \(K=10\) & \(K=5\) & \(K=10\) \\ \hline \multirow{3}{*}{ExtendedSafe} & \(Precision@K\) & 0.0012 & 0.0011 & 0.0675 & 0.0480 & 0.0210 & 0.0273 & OOM & OOM \\ & \(Recall@K\) & 0.0031 & 0.0092 & 0.0694 & 0.1000 & 0.0144 & 0.0374 & OOM & OOM \\ & \(NDCG@K\) & 0.0012 & 0.0043 & 0.1072 & 0.1154 & 0.0285 & 0.0435 & OOM & OOM \\ \hline \multirow{3}{*}{UBM} & \(Precision@K\) & 0.0158 & 0.0122 & 0.0160 & 0.0223 & 0.0250 & 0.0288 & 0.0137 & 0.0125 \\ & \(Recall@K\) & 0.0443 & 0.0533 & 0.0264 & 0.0294 & 0.0174 & 0.0355 & 0.0386 & 0.0713 \\ & \(NDCG@K\) & 0.0351 & 0.0346 & 0.0202 & 0.0245 & 0.0301 & 0.0440 & 0.0248 & 0.0341 \\ \hline \multirow{3}{*}{DMCF} & \(Precision@K\) & 0.0167 & 0.0137 & 0.0354 & 0.0242 & 0.0816 & 0.0721 & 0.0090 & 0.0075 \\ & \(Recall@K\) & 0.0174 & 0.0232 & 0.0333 & 0.0470 & 0.0493 & 0.0887 & 0.0102 & 0.0248 \\ & \(NDCG@K\) & 0.0115 & 0.0243 & 0.0541 & 0.0614 & 0.1104 & 0.1317 & 0.0304 & 0.0408 \\ \hline \multirow{3}{*}{AEMC} & \(Precision@K\) & 0.0156 & 0.0154 & 0.0358 & 0.0257 & 0.0997 & 0.0807 & 0.0093 & 0.0064 \\ & \(Recall@K\) & 0.0172 & 0.0251 & 0.0398 & 0.0540 & 0.0671 & 0.1090 & 0.0437 & 0.0671 \\ & \(NDCG@K\) & 0.0207 & 0.0241 & 0.0595 & 0.0693 & 0.1534 & 0.1780 & 0.0433 & 0.0544 \\ \hline \multirow{3}{*}{CFM} & \(Precision@K\) & 0.0220 & 0.0170 & 0.0420 & 0.0375 & 0.0739 & 0.0702 & 0.0180 & 0.0165 \\ & \(Recall@K\) & 0.0615 & 0.0740 & 0.0420 & 0.0613 & 0.1111 & 0.1997 & 0.0482 & 0.0891 \\ & \(NDCG@K\) & 0.0487 & 0.0480 & 0.0392 & 0.0583 & 0.1078 & 0.1391 & 0.0349 & 0.0492 \\ \hline \multirow{3}{*}{CPA-LGC} & \(Precision@K\) & **0.0449** & **0.0273** & **0.1012** & **0.0788** & **0.2177** & **0.1739** & **0.0360** & **0.0276** \\ & \(Recall@K\) & **0.0901** & **0.1053** & **0.1211** & **0.1725** & **0.1863** & **0.2745** & **0.0859** & **0.1286** \\ & \(NDCG@K\) & **0.0830** & **0.0880** & **0.1392** & **0.1532** & **0.2823** & **0.2892** & **0.0713** & **0.0859** \\ \hline \multirow{3}{*}{Gain} & \(Precision@K\) & +140.9\% & +0.65\% & +49.5\% & +49.5\% & +41.4\% & +136.3\% & +141.20 & +100.00 & +6.27.5 \\ & \(Recall@K\) & +46.50 \% & +42.30 \% & +74.50 \% & +72.50 \% & +67.69 \% & +37.64 \% & +78.22 \% & +44.33 \% \\ & \(NDCG@K\) & +70.43 \% & +83.33 \% & +29.85 \% & +32.76 \% & +88.96 \% & +107.91 \% & +64.67 \% & +57.905 \\ \hline \hline \end{tabular}
\end{table}
Table 2. Performance comparison among CPA-LGC and benchmark MC recommendation methods for the four benchmark datasets. Here, the best (\(X\)) and second-best (\(Y\)) performers are highlighted in bold and underline, respectively. The gain against the second-best performer is calculated by \(\frac{X-Y}{Y}\times 100\) (%).
\begin{table}
\begin{tabular}{c l l l l l l l l} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Metric} & \multicolumn{3}{c}{TA} & \multicolumn{3}{c}{TB} & \multicolumn{3}{c}{YP} \\ & & \(K=5\) & \(K=10\) & \(K=5\) & \(K=10\) & \(K=5\) & \(K=10\) & \(K=5\) & \(K=10\) & \(K=5\) & \(K=10\) \\ \hline \multirow{3}{*}{GC-MC} & \(Precision@K\) & 0.0060 & 0.0055 & 0.0603 & 0.0508 & 0.1543 & 0.1234 & 0.0208 & 0.0175 \\ & \(Recall@K\) & 0.0157 & 0.0284 & 0.0745 & 0.1246 & 0.1762 & 0.2547 & 0.0533 & 0.0895 \\ & \(NDCG@K\) & 0.0112 & 0.0159 & 0.0820 & 0.0798 & 0.2232 & 0.2354 & 0.0409 & 0.0530 \\ \hline \multirow{3}{*}{SpectraLCF} & \(Precision@K\) & 0.0015 & 0.0016 & 0.0594 & 0.0472 & 0.1655 & 0.1306 & 0.0086 & 0.0081 \\ & \(Recall@K\) & 0.0054 & 0.0111 & 0.0798 & 0.1202 & 0.1646 & 0.2424 & 0.0190 & 0.0351 \\ & \(NDCG@K\) & 0.0037 & 0.0058 & 0.0842 & 0.0963 & 0.2255 & 0.2339 & 0.0148 & 0.0204 \\ \hline \multirow{3}{*}{NGCF} & \(Precision@K\) & 0.0181 & 0.0119 & 0.0814 & 0.0609 & 0.1730 & 0.1380 & 0.0232 & 0.0188 \\ & \(Recall@K\) & 0.0475 & 0.0646 & 0.1010 & 0.1156 & 0.1777 & 0.2648 & 0.0600 & 0.0985 \\ & \(NDCG@K\) & 0.0393 & 0.0454 & 0.1139 & 0.1277 & 0.2372 & 0.2534 & 0.0471 & 0.0600 \\ \hline \multi
on TA, YM, RB, and YP, CPA-LGC outperforms the best competitors by up to 58.66%, 25.09%, 20.81%, and 34.83% in terms of the Precision@5, respectively;
2. Among the five competitors using single ratings, LightGCN mostly performs best (with the only exception on \(\mathrm{YM}\) in the case of \(K=5\)), as it tends to exhibit state-of-the-art performance across a wide range of recommendation settings [7; 15; 32; 44];
3. A substantial improvement of LightGCNMC over LightGCN is observed on all the datasets, which implies that even naively incorporating MC rating information into graph convolution models is already beneficial;
4. The accuracies of LightGCNMC are far below those of CPA-LGC. This means that capturing the collaborative signal on a separate bipartite graph constructed from each of the MC ratings does not make full use of the MC ratings as far as graph convolution is concerned;
5. Compared with the results in Table 2, the six GNN-based approaches are still effective against the five non-GNN methods, while mostly showing robust results over all datasets. This again validates our claim that it is beneficial to take advantage of GNNs for accurate top-\(K\) MC recommendations.
The above empirical results demonstrate the effectiveness of our MC expansion graph as well as our CPA-LGC that accommodates two new embeddings (_i.e._, UCP embeddings and IC embeddings) for precisely grasping the criteria preference of each user.
**RQ3: Hyperparameter sensitivity analysis.** In Figure 5, we investigate the impact of three key hyperparameters, including \(L\), \(d\), and \(\alpha\) in CPA-LGC, on the recommendation accuracy.
**(Effect of \(L\))** The number of GNN layers, \(L\), decides the degree of exploitation of high-order connectivity among user nodes and criterion-item nodes. From Figure 5a, except for YM, we observe that the recommendation accuracy in terms of the NDCG@10 steadily increases until \(L\) reaches 3 and then gradually decreases. This implies that multi-layer LGC is indeed effective in most cases but stacking too many layers may intensify over-smoothing, thereby leading to performance degradation. On the other hand, for the YM dataset, the performance tends to monotonically decrease with \(L\), which means that, due to the over-smoothing effect, it is recommended to use only direct neighbors in graph convolution.
**(Effect of \(d\))** As shown in Figure 5b, the effect of the dimensionality \(d\) of embeddings on the recommendation accuracy is generally observed to be positive for all the datasets. However, increasing \(d\) raises the computational cost and can cause overfitting [8]. The NDCG@10 slightly deteriorates when \(d>256\), manifesting the importance of choosing an appropriate \(d\) to improve the recommendation accuracy while maintaining the computational efficiency.
**(Effect of \(\alpha\))** The parameter \(\alpha\) controls the relative importance of overall ratings compared to MC ratings. From Figure 5c, it is observed that the highest NDCG@10 is achieved at \(\alpha=1.5\) regardless of datasets, but further increasing \(\alpha\) deteriorates the recommendation accuracy. This implies that overemphasizing overall ratings in LGC may dilute the information acquired from user-item interactions on other criteria and harm the model's performance.
**RQ4: Ablation study.** To analyze the contribution of each component in CPA-LGC, we conduct an ablation study in comparison with three variants depending on which sources are taken into account for designing the CPA-LGC architecture. The performance comparison among the four methods is presented in Table 4_w.r.t._ the NDCG@10 using four datasets.
* CPA-LGC: corresponds to the original CPA-LGC method without removing any components;
* CPA-LGC-MC: uses user embeddings \(\mathbf{e}_{u}^{(I)}\) and item embeddings \(\mathbf{e}_{b}^{(I)}\) for criterion 0 in the LGC operation based on the graph construction only with overall ratings;
* CPA-LGC-c: removes UCP embeddings \(\mathbf{p}_{u}^{(I)}\) and IC embeddings \(\mathbf{p}_{e}^{(I)}\) in CPA-LGC;
* CPA-LGC-f: removes the layer-wise over-smoothing alleviation operation \(f(\cdot)\).
Our observations are as follows:
1. The original CPA-LGC method always exhibits substantial gains over other variants, which demonstrates that each component plays a crucial role in the success of the proposed method;
2. The performance gap between CPA-LGC and CPA-LGC-MC tends to be much larger than that between CPA-LGC and the other variants, except for YP. This finding indicates that our MC expansion graph is most influential in achieving high recommendation accuracies by precisely capturing the collaborative signal in high-order connectivities between user nodes and criterion-item nodes;
3. In comparison with CPA-LGC-f, using \(f(\cdot)\) at each layer is shown to yield a positive contribution on the three datasets (TA, YM, and RB) but not on YP. Recall that \(\gamma\) in Table 1 is the ratio of the number of MC ratings to the number of overall ratings, which signifies the tendency of how much the degree of each node is increased by constructing the MC expansion graph. Since YP has the smallest \(\gamma\) (_i.e._, \(\gamma=3.1\)
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & **TA** & **YM** & **RB** & **YP** \\ \hline CPA-LGC & **0.088** & **0.153** & **0.289** & 0.068 \\ CPA-LGC-MC & 0.064 & 0.128 & 0.251 & 0.060 \\ CPA-LGC-c & 0.067 & 0.134 & 0.253 & 0.058 \\ CPA-LGC-f & 0.070 & 0.131 & 0.259 & **0.072** \\ \hline \hline \end{tabular}
\end{table}
Table 4. Performance comparison among CPA-LGC and its three variants in terms of the NDCG@10. Here, the best and second performers are highlighted in bold and underline, respectively.
Figure 5. The effect of three hyperparameters on the accuracies of CPA-LGC.
from Table 1) out of all the datasets, over-smoothing may not be severe on YP and thus using \(f(\cdot)\) is not beneficial in this case.
**RQ5: In-depth analysis of the smoothness.** To validate that over-smoothing can be mitigated by layer-wisely employing the PairNorm operation \(f(\cdot)\) in LGC, we analyze the distribution of the Euclidean distances between all node representations at each GNN layer. Figures 5(a) and 5(b) visualize the distributions of such distances when LGC is performed without and with \(f(\cdot)\) in the MC expansion graph, respectively, for the TA dataset. One can see that using \(f(\cdot)\) at each layer increases the average of the pairwise squared distances between node representations, thereby alleviating potential over-smoothing in the MC expansion graph.
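For reference, a minimal NumPy sketch of a PairNorm-style rescaling and of the pairwise-distance diagnostic visualized in Figure 6 is given below; the centering step and the scaling constant follow the original PairNorm formulation, which we assume here rather than the exact variant embedded in CPA-LGC, and the data are synthetic.

```python
import numpy as np

def pairnorm(x: np.ndarray, s: float = 1.0, eps: float = 1e-8) -> np.ndarray:
    """PairNorm-style rescaling of node representations.

    x: (n, d) matrix of node embeddings after a graph-convolution layer.
    Centers the embeddings and rescales them so that the mean squared row
    norm (hence the mean pairwise squared distance) stays roughly constant,
    counteracting over-smoothing across layers.
    """
    x_centered = x - x.mean(axis=0, keepdims=True)            # remove the common component
    mean_sq_norm = np.mean(np.sum(x_centered ** 2, axis=1))   # average squared row norm
    return s * x_centered / np.sqrt(mean_sq_norm + eps)

def mean_pairwise_sq_dist(x: np.ndarray) -> float:
    """Average squared Euclidean distance between all pairs of rows of x."""
    sq_norms = np.sum(x ** 2, axis=1)
    gram = x @ x.T
    d2 = sq_norms[:, None] + sq_norms[None, :] - 2.0 * gram
    n = x.shape[0]
    return float(d2.sum() / (n * n))

# toy check: heavy smoothing shrinks pairwise distances; PairNorm restores their scale
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 16))
x_smooth = 0.9 * x.mean(axis=0) + 0.1 * x     # crude stand-in for an over-smoothed layer output
print(mean_pairwise_sq_dist(x_smooth), mean_pairwise_sq_dist(pairnorm(x_smooth)))
```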
## 5. Related Work
In this section, we review some representative methods in two broader fields of research, including 1) MC recommender systems and 2) GNN-based recommender systems.
**MC recommender systems.** Efforts have consistently been made to incorporate MC rating information in order to improve the accuracy of recommendations. As an early attempt, a support vector regression-based approach (Han et al., 2017) was presented to determine the relative importance of the individual criteria ratings. MSVD (Krishnan et al., 2017) was developed by applying a multilinear singular value decomposition technique to capture implicit relations among users, items, and criteria. UBM (Wang et al., 2017) was proposed by using a utility function in such a way that the user expectations are learned by learning-to-rank methods. CFM (Mikolov et al., 2016) was designed by collectively using matrix factorization for MC rating matrices. DTTD (Chen et al., 2017) was developed by incorporating cross-domain knowledge along with side information. Moreover, due to the proliferation of deep learning, there has been a steady push to design DNN-based recommender systems. For example, ExtendedSAE (Wang et al., 2017) was proposed to capture the relationship between each user's MC and overall ratings using the stacked auto-encoder. LatentMC (Krishnan et al., 2017) was designed with variational auto-encoders to map user reviews into latent vectors, which constitute latent MC ratings. DMCF (Wang et al., 2017) was developed for predicting MC ratings with a DNN while the predicted ratings are aggregated by another DNN. AEMC (Wang et al., 2017) was proposed by deep autoencoders, which exploits the nontrivial, nonlinear, and hidden relations between users with regard to preferences for criteria. However, the aforementioned methods may 1) be unable to explicitly learn the high-order proximity between users and items (Mikolov et al., 2016; Krizhevsky et al., 2016; Krizhevsky et al., 2016; Krizhevsky et al., 2016; Krizhevsky et al., 2016; Krizhevsky et al., 2016; Krizhevsky et al., 2016), 2) lack scalability (Wang et al., 2017), or 3) rely on side information such as user reviews (Chen et al., 2017; Krizhevsky et al., 2016). Such limitations of the methods result in unsatisfactory recommendation performance and a lack of robustness to the varying availability of information.
**GNN-based recommender systems.** GNN-based recommendation has been actively studied accordingly to boost the performance of recommendations. As the first attempt to apply GCN to a recommendation system, GC-MC (Chen et al., 2017) was proposed by taking into account matrix completion for recommender systems from the point of view of link prediction on graphs. PinSage (Pingage, 2017) was developed by combining the random walk with graph convolution to perform a web-scale recommendation task. SpectralCF (Wang et al., 2017) was developed by performing the eigendecomposition on the adjacency matrix of a user-item bipartite graph, so as to discover possible connections between user-item pairs. DGCF (Krizhevsky et al., 2016) was introduced by separating user intent factors and generating disentangled representations. As one of the follow-up studies, by capturing the high-order collaborative signal existing in user-item interactions, NGCF (Wang et al., 2017) achieved superb performance compared to previous GNN-based approaches. However, extensive ablation studies in LightGCN (Chen et al., 2017) convinced that non-linear activation and feature transformation in NGCF are not effective in performing better recommendations; LightGCN has been shown to exhibit state-of-the-art performance in most general recommender systems by removing the two components from the GCN layers in NGCF. Yet, existing GNN-based approaches are limited to single rating recommendation scenarios and do not take into account the MC interactions between users and items.
## 6. Conclusions and Future Work
In this paper, we explored an open yet important problem of how to design MC recommender systems with the aid of GNNs. To tackle this challenge, we introduced CPA-LGC, a novel lightweight MC recommendation method that is capable of precisely capturing the criteria preference of users as well as the collaborative signal in MC ratings via LGC. Through extensive experiments on four MC recommendation datasets, we comprehensively demonstrated (a) the superiority of CPA-LGC over eleven benchmark methods, (b) the impact of tuning key hyperparameters in CPA-LGC, (c) the effectiveness of each component in CPA-LGC, (d) the degree of over-smoothing alleviation using the PairNorm operation, and (e) the computational efficiency with a linear scaling in \(|\mathcal{E}|\). Potential avenues of our future research include the design of a new GNN architecture that can learn edge weights as trainable parameters along with node representations to automatically learn each user's preference.
###### Acknowledgements.
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2021R1A2C3004345, No. RS-2023-00220762).
Figure 6. Distribution of the Euclidean distances between node representations at each GNN layer on TA for when LGC is performed (a) without \(f(\cdot)\) and (b) with \(f(\cdot)\). In (b), we only show the distribution of the distances in the range of \([0,2]\) due to space limitations. |
2302.13977 | High-order variational Lagrangian schemes for compressible fluids | We present high-order variational Lagrangian finite element methods for
compressible fluids using a discrete energetic variational approach. Our
spatial discretization is mass/momentum/energy conserving and entropy stable.
Fully implicit time stepping is used for the temporal discretization, which
allows for a much larger time step size for stability compared to explicit
methods, especially for low-Mach number flows and/or on highly distorted
meshes. Ample numerical results are presented to showcase the good performance
of our proposed scheme. | Guosheng Fu, Chun Liu | 2023-02-27T17:17:03Z | http://arxiv.org/abs/2302.13977v1 | # High-order variational Lagrangian schemes for compressible fluids
###### Abstract.
We present high-order variational Lagrangian finite element methods for compressible fluids using a discrete energetic variational approach. Our spatial discretization is mass/momentum/energy conserving and entropy stable. Fully implicit time stepping is used for the temporal discretization, which allows for a much larger time step size for stability compared to explicit methods, especially for low-Mach number flows and/or on highly distorted meshes. Ample numerical results are presented to showcase the good performance of our proposed scheme.
Key words and phrases: Discrete Energetic Variational Approach; Lagrangian Hydrodynamics; High-order finite elements; Entropy stability.

2020 Mathematics Subject Classification: 65N30, 65N12, 76S05, 76D07.

G. Fu's research is partially supported by NSF grant DMS-2012031. C. Liu's research is partially supported by NSF grants DMS-1950868, DMS-2153029 and DMS-2118181.
the thermodynamic variables, which turns out to be closely related to the high-order finite element scheme [10]. However, we use a completely different derivation. While most of the existing Lagrangian schemes are obtained by directly discretizing the underling PDE system, the starting point of our spatial discretization is an _energy dissipation law_. In particular, we present a class of variational Lagrangian schemes for compressible flows using a discrete Energetic variational approach, which is an analogue to the energetic variational approach (EnVarA) [19, 14] in a semi-discrete level.
For a given energy-dissipation law and the kinematic (transport) relation, the EnVarA [19, 14] provides a general framework to determine the dynamics of system in a unique and well-defined way, through two distinct variational processes: Least Action Principle (LAP) and Maximum Dissipation Principle (MDP). This approach is originated from pioneering work of Onsager [26, 27] and Rayleigh [33], and has been successfully applied to build up many mathematical models [19, 34, 11, 14]. Most of existing EnVarA literature focuses on the isothermal case where temperature variation is not allowed. Here we adopt the approach used in [20] to model a thermodynamic system with temperature variation using EnVarA. The obtained model is then discretized in space using the discrete EnVarA, which leads to a high-order variational Lagrangian finite element scheme. Main structures of the continuous model including mass/momentum/energy conservation and entropy stability are naturally preserved in the proposed spatial discretization. The resulting ODE system is further discretized in time using high-order implicit time integrators. Due to the special structure of the ODE system, a nonlinear system only for the velocity degrees of freedom (DOFs) needs to be solved in each time step. The allowed time step size for stability is drastically improved over explicit time stepping, especially in the low Mach number regime, at the expense of a nonlinear system solve. Ample numerical examples are used to show the good performance of the proposed scheme.
We summarize the main features of our scheme:
* Space-time high-order accuracy.
* Mass/momentum/energy conservation and entropy stability for the spatial discretization.
* Implicit time stepping allows for large time step size especially for low Mach number flows and/or on highly distorted meshes.
The rest of the paper is organized as follows. In Section 2, we present the ideal gas model using EnVarA. In Section 3, a variational Lagrangian scheme is constructed using discrete EnVarA. The implicit temporal discretization of the resulting ODE system from Section 3 is then presented in Section 4. Numerical examples are presented in Section 5, followed by a summary in Section 6. In the Appendix, we briefly discuss our approach to the compressible isothermal case where temperature is fixed.
## 2. The energetic variational approach for ideal gas
The EnVarA [14] is a tool to _derive_ the force balance equation starting from a total energy \(E^{\text{total}}\) and an energy dissipation rate functional \(\mathcal{D}\), where \(E^{\text{total}}\) is the sum of the kinetic energy \(\mathcal{K}=\int_{\Omega}\frac{1}{2}\rho|\mathbf{u}|^{2}\text{dx}\) and the Helmholtz free energy \(\mathcal{F}=\int_{\Omega}\psi(\rho,\theta)\text{dx}\), and
\[2\mathcal{D}=\int_{\Omega}\eta|\nabla_{s}\mathbf{u}|^{2}+(\xi-\frac{2}{3}\eta)| \nabla\cdot\mathbf{u}|^{2}\text{dx} \tag{1}\]
is the rate of energy dissipation with dynamic viscosity \(\eta\) and bulk viscosity \(\xi\), which may depend on both \(\rho\) and \(\theta\). Here \(\nabla_{s}\) denotes the symmetric gradient operator, \(\rho\) is the density, \(\theta\) is the absolute temperature, and \(\mathbf{u}\) is the fluid velocity.
Thermodynamics of ideal gases is well studied in the literature [1, 2, 3, 7, 8, 12, 22, 29]. In classical thermodynamics, the concept of free energy proves to be useful [1, 29]. The internal energy and pressure of an ideal gas [22, 2] have a linear relationship with temperature and the product of
temperature and density, respectively. Using this observation, Liu and Sulzbach [20] proposed a definition for the free energy density of an ideal gas:
\[\psi(\rho,\theta)=(c_{p}-c_{v})\theta\rho\log(\rho)-c_{v}\rho\theta\log(\theta), \tag{2}\]
which we utilize in our current work. This definition includes the specific heat at constant volume \(c_{v}\) and specific heat at constant pressure \(c_{p}\). Associated with the free energy density (2), we define the three thermodynamic variables, namely pressure \(p\), internal energy (per unit mass) \(e\), and entropy (per unit mass) \(s\):
\[p :=\psi_{\rho}\rho-\psi=(c_{p}-c_{v})\rho\theta, \tag{3a}\] \[e :=(\psi-\psi_{\theta}\theta)/\rho=c_{v}\theta,\] (3b) \[s :=-\psi_{\theta}/\rho=\log(\theta^{c_{v}}/\rho^{c_{p}-c_{v}})+c_{v}, \tag{3c}\]
which are easily verified to satisfy the famous Gibbs equation (by chain rule) [8]:
\[\theta\partial s=\partial e+p\partial(\frac{1}{\rho}), \tag{4}\]
where \(\partial\) represents any differentiation.
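As a quick sanity check, the relations (3) can be verified symbolically against (2) and (4); the following SymPy sketch (an illustration only, not part of the scheme) treats \(\rho\) and \(\theta\) as the independent variables.

```python
import sympy as sp

rho, theta, cp, cv = sp.symbols('rho theta c_p c_v', positive=True)

# free energy density of an ideal gas, eq. (2)
psi = (cp - cv) * theta * rho * sp.log(rho) - cv * rho * theta * sp.log(theta)

# thermodynamic variables, eq. (3)
p = sp.simplify(sp.diff(psi, rho) * rho - psi)                 # pressure
e = sp.simplify((psi - sp.diff(psi, theta) * theta) / rho)     # internal energy per unit mass
s = sp.simplify(-sp.diff(psi, theta) / rho)                    # entropy per unit mass

print(p)   # (c_p - c_v)*rho*theta
print(e)   # c_v*theta

# Gibbs equation (4): theta * ds = de + p * d(1/rho), checked for both partial derivatives
for var in (rho, theta):
    lhs = theta * sp.diff(s, var)
    rhs = sp.diff(e, var) + p * sp.diff(1 / rho, var)
    assert sp.simplify(lhs - rhs) == 0
print("Gibbs relation holds for the partials in rho and theta")
```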
In Lagrangian coordinates, we introduce the flow map: \(\mathbf{x}(\mathbf{X},t):\Omega^{0}\to\Omega^{t}\), which satisfies the trajectory equation
\[\frac{d}{dt}\mathbf{x}(\mathbf{X},t)=\mathbf{u}(\mathbf{x}(\mathbf{X},t),t), \tag{5}\]
with initial condition \(\mathbf{x}(\mathbf{X},0)=\mathbf{X}\). Since we are concerned with a conventional ideal gas, we postulate that the temperature \(\theta\) is kinematically transported along the trajectory: \(\frac{d}{dt}\theta=\theta_{t}+\mathbf{u}\cdot\nabla\theta\).
### Kinematics: mass conservation
Within a Lagrangian control volume \(V^{t}:=\{\mathbf{x}(\mathbf{X},t):\forall\mathbf{X}\in V^{0}\}\), mass does not change over time:
\[\frac{d}{dt}\int_{V^{t}}\rho\,\mathrm{d}\mathbf{x}=\int_{V^{0}}\frac{d}{dt}(\rho J )\,\mathrm{d}\mathbf{X}=0, \tag{6}\]
where \(J=\mathrm{Det}(\nabla_{X}\mathbf{x})\) is the Jacobian determinant. Since equality (6) is valid for any control volume \(V(t)\), there must hold
\[\frac{d}{dt}(\rho J)=0,\quad\text{ or }\quad\rho(\mathbf{x}(\mathbf{X},t),t)=\rho_{0}( \mathbf{X})/J(\mathbf{X},t), \tag{7}\]
where \(\rho_{0}:\Omega^{0}\to\mathbb{R}^{+}\) is the initial density. The equality (7) represents strong mass conservation. Writing mass conservation (7) back to the Eulerian coordinates, we have \(\rho_{t}+\nabla\cdot(\rho\mathbf{u})=0\), which is often referred to as the continuity equation. By abuse of notation, from it we can also get \(\delta\rho=-\nabla\cdot(\rho\delta\mathbf{x})\), where \(\delta\) represents the variational derivative. Such relation will be used and made clear in the derivations later in this paper.
### Force balance: LAP and MDP
Here we combine the Least Action Principle (LAP) and Maximum Dissipation Principle (MDP) [26, 27, 33] to derive the force balance equation. The action functional for the system is
\[\mathcal{A}=\int_{0}^{T}\left(\mathcal{K}-\mathcal{F}\right)\mathrm{dt}=\int_{0}^{T}\int_{\Omega^{t}}\left(\frac{1}{2}\rho|\mathbf{u}|^{2}-\psi(\rho,\theta)\right)\mathrm{d}\mathbf{x}\mathrm{dt}.\]
The LAP performs variation on the kinetic and free energies to derive the inertial and conservative forces:
\[\mathbf{f}_{\text{inertial}}=\frac{\delta\int_{0}^{T}\mathcal{K}\mathrm{dt}}{\delta\mathbf{x}},\quad\mathbf{f}_{\text{cons}}=\frac{\delta\int_{0}^{T}\mathcal{F}\mathrm{dt}}{\delta\mathbf{x}}.\]
Taking variation on the kinetic energy and using mass conservation (7), we get
\[\delta\int_{0}^{T}\mathcal{K}\mathrm{dt}= \delta\int_{0}^{T}\int_{\Omega^{0}}\frac{1}{2}\rho_{0}(\mathbf{X})|\mathbf{x}_{t}|^{2}\,\mathrm{dXdt}=\int_{0}^{T}\int_{\Omega^{0}}\rho_{0}(\mathbf{X})\mathbf{x}_{t}\cdot\delta\mathbf{x}_{t}\,\mathrm{dXdt}\] \[= -\int_{0}^{T}\int_{\Omega^{0}}\rho_{0}(\mathbf{X})\mathbf{x}_{tt}\cdot\delta\mathbf{x}\,\mathrm{dXdt}\] \[= -\int_{0}^{T}\int_{\Omega^{0}}\rho_{0}(\mathbf{X})(\mathbf{u}_{t}+\mathbf{u}\cdot\nabla\mathbf{u})\cdot\delta\mathbf{x}\,\mathrm{dXdt}\] \[= -\int_{0}^{T}\int_{\Omega^{t}}\rho(\mathbf{x},t)\dot{\mathbf{u}}\cdot\delta\mathbf{x}\,\mathrm{dxdt} \tag{8}\]
where we used the short-hand notation \(\dot{\mathbf{u}}:=\frac{d}{dt}\mathbf{u}(\mathbf{x}(\mathbf{X},t),t)\) for the material derivative. This implies that \(\mathbf{f}_{\text{inertial}}=-\rho\dot{\mathbf{u}}.\) Taking variation on the Helmholtz free energy \(\mathcal{F}\) (Hamilton's principle of virtual work) leads to
\[\delta\int_{0}^{T}\mathcal{F}\mathrm{dt} =\delta\int_{0}^{T}\int_{\Omega^{t}}\psi(\rho,\theta)\,\mathrm{dxdt}=\ \int_{0}^{T}\int_{\Omega^{t}}\psi_{\rho}\delta\rho+\psi_{\theta}\delta\theta\,\mathrm{dxdt}\] \[= \int_{0}^{T}\int_{\Omega^{t}}\left(-\psi_{\rho}\nabla\cdot(\rho\delta\mathbf{x})-\psi_{\theta}\nabla\theta\cdot\delta\mathbf{x}\right)\mathrm{dxdt}\] \[= \int_{0}^{T}\int_{\Omega^{t}}(\rho\nabla\psi_{\rho}-\psi_{\theta}\nabla\theta)\cdot\delta\mathbf{x}\,\mathrm{dxdt}\] \[= \int_{0}^{T}\int_{\Omega^{t}}\nabla p\cdot\delta\mathbf{x}\,\mathrm{dxdt} \tag{9}\]
where we used mass conservation and the kinematic transport of the temperature defined after formula (5) in the second row, and the definition of the pressure (3a) in the last row:
\[\nabla p=\nabla(\psi_{\rho}\rho-\psi)=\rho\nabla\psi_{\rho}+\underbrace{\psi _{\rho}\nabla\rho-\psi_{\rho}\nabla\rho}_{=0}-\psi_{\theta}\nabla\theta.\]
This gives the conservative force \(\mathbf{f}_{\text{cons}}=\nabla p.\)
The MDP performs variation on the energy dissipation rate (1) to get the dissipative force:
\[\mathbf{f}_{\text{diss}}=\frac{\delta\mathcal{D}}{\delta\mathbf{u}},\]
which implies
\[\mathbf{f}_{\text{diss}}=-\nabla\cdot\left(\eta\nabla_{s}\mathbf{u}+(\xi-\frac{2}{3}\eta)(\nabla\cdot\mathbf{u})\mathbf{I}\right),\]
where \(\mathbf{I}\) is the identity matrix.
Combining these, we get the force balance equation [14]
\[\frac{\delta\mathcal{A}}{\delta\mathbf{x}}=\frac{\delta\mathcal{D}}{\delta\mathbf{u}}, \tag{10}\]
which takes the following form
\[\rho\dot{\mathbf{u}}+\nabla p-\nabla\cdot\left(\eta\nabla_{s}\mathbf{u}+(\xi-\frac{2}{ 3}\eta)(\nabla\cdot\mathbf{u})\mathbf{I}\right)=0, \tag{11}\]
This is the usual momentum equation for the Navier-Stokes equations, with its natural weak formulation
\[\int_{\Omega^{t}}\left(\rho\dot{\mathbf{u}}\cdot\mathbf{w}-p\nabla\cdot\mathbf{w}+\eta \nabla_{s}\mathbf{u}\cdot\nabla_{s}\mathbf{w}+(\xi-\frac{2}{3}\eta)(\nabla\cdot\mathbf{u}) (\nabla\cdot\mathbf{w})\right)\mathrm{dx}=0, \tag{12}\]
for any test function \(\mathbf{w}\) with homogeneous boundary conditions.
### Internal energy and entropy equations
The Gibbs equation (4) naturally gives an update equation for the internal energy:
\[\rho\dot{e}=\rho\theta\dot{s}-p\rho\,\frac{d}{dt}\Big(\frac{1}{\rho}\Big)=\rho\theta\dot{s}-p\nabla\cdot\mathbf{u} \tag{13}\]
The rate of change of the entropy \(s\) can be contributed by the entropy flux \(\mathbf{j}\) and entropy production \(\Delta\):
\[\rho\dot{s}=\nabla\cdot\mathbf{j}+\Delta. \tag{14}\]
The second law of thermodynamics states that entropy production is non-negative: \(\Delta\geq 0\). We take the entropy flux \(\mathbf{j}\) to be given by the Duhem relation
\[\mathbf{j}\theta=\mathbf{q}, \tag{15}\]
with the heat flux
\[\mathbf{q}=\kappa\nabla\theta \tag{16}\]
given according to Fourier's law, in which \(\kappa\) is the heat conductivity. From here, the explicit expression of \(\Delta\) can be derived via conservation of total energy: there holds
\[0=\frac{d}{dt}\int_{\Omega}\left(\frac{1}{2}\rho|\mathbf{u}|^{2}+\rho e\right) \mathrm{dx}=\int_{\Omega}\rho\dot{\mathbf{u}}\cdot\mathbf{u}+\rho\dot{e}\mathrm{dx}. \tag{17}\]
Using equations (12) with test function \(\mathbf{w}=\mathbf{u}\), (13), and (14), we get:
\[0= \int_{\Omega}-(\eta|\nabla_{s}\mathbf{u}|^{2}+(\xi-\frac{2}{3}\eta)( \nabla\cdot\mathbf{u})^{2})+\theta\left(\nabla\cdot(\frac{\kappa\nabla\theta}{ \theta})+\Delta\right)\mathrm{dx}\] \[= \int_{\Omega}\left(-(\eta|\nabla_{s}\mathbf{u}|^{2}+(\xi-\frac{2}{3} \eta)(\nabla\cdot\mathbf{u})^{2})-\frac{\kappa|\nabla\theta|^{2}}{\theta}+\theta \Delta\right)\mathrm{dx},\]
where we applied the chain rule for the heat flux term and used the homogeneous boundary condition \(\nabla\theta\cdot\mathbf{n}=0\) on \(\partial\Omega\). This implies that we can take the entropy dissipation rate as
\[\Delta=\left(\eta|\nabla_{s}\mathbf{u}|^{2}+(\xi-\frac{2}{3}\eta)(\nabla\cdot\bm {u})^{2}+\frac{\kappa|\nabla\theta|^{2}}{\theta}\right)/\theta, \tag{18}\]
which satisfies the second law of thermodynamics as long as \(\theta>0\). This implies the following entropy equation:
\[\rho\theta\dot{s}=\nabla\cdot(\kappa\nabla\theta)+\eta|\nabla_{s}\mathbf{u}|^{2} +(\xi-\frac{2}{3}\eta)(\nabla\cdot\mathbf{u})^{2}. \tag{19}\]
Plugging (14) and (18) back to the internal energy equation (13), we obtain:
\[\rho\dot{e}=-p\nabla\cdot\mathbf{u}+\nabla\cdot(\kappa\nabla\theta)+\eta|\nabla_{ s}\mathbf{u}|^{2}+(\xi-\frac{2}{3}\eta)(\nabla\cdot\mathbf{u})^{2}. \tag{20}\]
By (3b), equation (20) equivalently gives the dynamics of temperature \(\theta\), which has the heat equation as the leading term.
### Summary
Combining the above results, we finally obtain the model equations:
\[\dot{\mathbf{x}} =\mathbf{u}, \tag{21a}\] \[\frac{d}{dt}(\rho J) =0,\] (21b) \[\rho\dot{\mathbf{u}} = -\nabla p+\nabla\cdot\left(\eta\nabla_{s}\mathbf{u}+(\xi-\frac{2}{3}\eta)(\nabla\cdot\mathbf{u})\mathbf{I}\right),\] (21c) \[\rho\dot{e} = -p\nabla\cdot\mathbf{u}+\nabla\cdot(\kappa\nabla\theta)+(\eta|\nabla_{s}\mathbf{u}|^{2}+(\xi-\frac{2}{3}\eta)(\nabla\cdot\mathbf{u})^{2}), \tag{21d}\]
where
\[e=c_{v}\theta,\quad p=(c_{p}-c_{v})\rho\theta. \tag{21e}\]
This is nothing but the compressible Navier-Stokes equations of an ideal gas [8]. Moreover, this system further satisfies the entropy equation (19). In the next section we derive a variational Lagrangian scheme for this system, where the spatial derivatives are evaluated by pulling back to the reference configuration (Lagrangian coordinates); see, e.g., [36, 6, 10].
## 3. A discrete energetic variational approach: spatial discretization
In this section, we construct a variational Lagrangian scheme based on a discrete EnVarA. For simplicity, we ignore heat conduction in the model, i.e. we take \(\kappa=0\) in this section.
### Notation and the finite element spaces
Our grid-based scheme starts with a conforming triangulation \(\mathcal{T}^{0}_{h}=\{T^{0}_{\ell}\}_{\ell=1}^{N_{T}}\) of the initial configuration \(\Omega^{0}\) with \(N_{T}\) elements, where we assume the element \(T^{0}_{\ell}:=\Phi^{0}_{T_{\ell}}(\widehat{T})\) is obtained from a polynomial mapping \(\Phi^{0}_{T_{\ell}}\) from the reference element \(\widehat{T}\), which, for simplicity, is a simplex or a hypercube.
We denote \(\mathscr{P}^{k}(\widehat{T})\) as the polynomial space of degree no greater than \(k\) if \(\widehat{T}\) is a reference simplex, or the tensor-product polynomial space of degree no greater than \(k\) in each direction if \(\widehat{T}\) is a reference hypercube, for \(k\geq 1\). The mapped polynomial space on a spatial physical element \(T^{0}\in\mathcal{T}^{0}_{h}\) is denoted as
\[\mathscr{P}^{k}(T^{0}):=\{\widehat{v}\circ(\Phi^{0}_{T})^{-1}:\;\forall\widehat{v}\in\mathscr{P}^{k}(\widehat{T})\}.\]
We denote \(\{\widehat{\boldsymbol{\xi}_{i}}\}_{i=1}^{N_{k}}\) as a set of quadrature points with positive weights \(\{\widehat{\omega}_{i}\}_{i=1}^{N_{k}}\) that is accurate for polynomials of degree up to \(2k+1\) on the reference element \(\widehat{T}\), i.e.,
\[\int_{\widehat{T}}\widehat{f}\,\mathrm{dx}=\sum_{i=1}^{N_{k}}\widehat{\omega} _{i}\widehat{f}(\widehat{\boldsymbol{\xi}_{i}}),\quad\forall\widehat{f}\in \mathcal{P}^{2k+1}(\widehat{T}). \tag{22}\]
Note that when \(\widehat{T}\) is a reference square, we simply use the Gauss-Legendre quadrature rule with \(N_{k}=(k+1)^{2}\), which is optimal. On the other hand, when \(\widehat{T}\) is a reference simplex, the optimal choice of quadrature rule is more complicated; see, e.g., [38, 37] and references cited therein. Table 1 lists the number \(N_{k}\), for \(0\leq k\leq 6\), of the symmetric quadrature rules provided in [38].
The integration points and weights on a physical element \(T^{0}_{\ell}\) are simply obtained via mapping: \(\{\boldsymbol{\xi}^{\ell}_{i}:=\Phi_{T^{0}_{\ell}}(\widehat{\boldsymbol{\xi} _{i}})\}_{i=1}^{N_{k}}\), and \(\{\omega^{\ell}_{i}:=|\nabla\Phi_{T^{0}_{\ell}}(\widehat{\boldsymbol{\xi}_{i }})|\widehat{\omega}_{i}\}_{i=1}^{N_{k}}\). To simplify the notation, we denote the set of physical integration points and weights
\[\Xi^{k}_{h} :=\{\boldsymbol{\xi}^{\ell}_{i}:\;\;1\leq i\leq N_{k},\,1\leq \ell\leq N_{T}\}, \tag{23a}\] \[\Omega^{k}_{h} :=\{\omega^{\ell}_{i}:\;\;1\leq i\leq N_{k},\,1\leq\ell\leq N_{T}\}, \tag{23b}\]
and denote \((\cdot,\cdot)_{h}\) as the discrete inner-product on the mesh \(\mathcal{T}^{0}_{h}\) using the quadrature points \(\Xi^{k}_{h}\) and weights \(\Omega^{k}_{h}\):
\[(\alpha,\beta)_{h}:=\sum_{\ell=1}^{N_{T}}\sum_{i=1}^{N_{k}}\alpha(\boldsymbol{ \xi}^{\ell}_{i})\beta(\boldsymbol{\xi}^{\ell}_{i})\omega^{\ell}_{i}.\]
\begin{table}
\begin{tabular}{c c c c c c c c} & \(k=0\) & \(k=1\) & \(k=2\) & \(k=3\) & \(k=4\) & \(k=5\) & \(k=6\) \\ \hline \(N_{k}\) on Triangle & 1 & 6 & 7 & 15 & 19 & 28 & 37 \\ \(N_{k}\) on Tetrahedron & 1 & 8 & 14 & 36 & 61 & 109 & 171 \\ \end{tabular}
\end{table}
Table 1. Number of quadrature points \(N_{k}\) for the quadrature rule on a simplex that is accurate up to degree \(2k+1\) for \(0\leq k\leq 6\).
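To make the mapped quadrature rule concrete, the following NumPy sketch builds the tensor-product Gauss–Legendre rule on the reference square, pushes it to a physical quadrilateral through a bilinear map, and evaluates the discrete inner product \((\alpha,\beta)_{h}\) restricted to that single element; the element vertices and the integrands are illustrative choices, not data from this paper.

```python
import numpy as np

k = 2                                                    # polynomial degree
pts1d, wts1d = np.polynomial.legendre.leggauss(k + 1)    # exact up to degree 2k+1 on [-1,1]

# tensor-product rule on the reference square [-1,1]^2
XI = np.array([(x, y) for x in pts1d for y in pts1d])            # (N_k, 2) reference points
W = np.array([wx * wy for wx in wts1d for wy in wts1d])          # (N_k,)  reference weights

# bilinear map Phi from the reference square to a physical quadrilateral (illustrative vertices)
verts = np.array([[0.0, 0.0], [1.0, 0.1], [1.2, 1.0], [-0.1, 0.9]])

def phi(xi):
    x, y = xi
    shape = 0.25 * np.array([(1-x)*(1-y), (1+x)*(1-y), (1+x)*(1+y), (1-x)*(1+y)])
    return shape @ verts

def jac_det(xi):
    x, y = xi
    dshape = 0.25 * np.array([[-(1-y), -(1-x)], [ (1-y), -(1+x)],
                              [ (1+y),  (1+x)], [-(1+y),  (1-x)]])   # d(shape)/d(xi)
    F = verts.T @ dshape                                 # 2x2 Jacobian of the element map
    return np.linalg.det(F)

# physical quadrature points and weights, as in (23)
xq = np.array([phi(xi) for xi in XI])
wq = np.array([abs(jac_det(xi)) * w for xi, w in zip(XI, W)])

# discrete inner product (alpha, beta)_h restricted to this element
alpha = lambda p: p[:, 0] ** 2
beta = lambda p: 1.0 + p[:, 1]
inner_h = np.sum(alpha(xq) * beta(xq) * wq)
print(inner_h, wq.sum())                                 # wq.sum() equals the element area
```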
We are now ready to present our continuous and discontinuous finite element spaces:
\[\mathbf{V}^{k}_{h} :=\{\mathbf{v}\in[H^{1}(\Omega^{0})]^{d}:\ \ \mathbf{v}|_{T^{0}_{\ell}}\in[ \mathscr{P}^{k}(T^{0}_{\ell})]^{d},\ \ \forall T^{0}_{\ell}\in\mathscr{T}^{0}_{h}\}, \tag{24}\] \[W^{k}_{h} :=\{w\in L^{2}(\Omega^{0}):\ \ w|_{T^{0}_{\ell}}\in W^{k}(T^{0}_{ \ell}),\ \ \forall T^{0}_{\ell}\in\mathscr{T}^{0}_{h}\}, \tag{25}\]
where the local space
\[W^{k}(T^{0}_{\ell}):=\mathscr{P}^{k}(T^{0}_{\ell})\oplus\delta W_{k}(T^{0}_{ \ell}),\]
is associated with the integration rule in (22) such that \(\dim W^{k}(T^{0}_{\ell})=N_{k}\), and the nodal conditions
\[\phi^{\ell}_{i}(\mathbf{\xi}^{\ell}_{j})=\delta_{ij},\quad\forall 1\leq j\leq N_{k}, \tag{26}\]
in which \(\delta_{ij}\) is the Kronecker delta function, determine a unique solution \(\phi^{\ell}_{i}\in W^{k}(T^{0}_{\ell})\).
This implies that \(\{\phi^{\ell}_{i}\}_{i=1}^{N_{k}}\) is a set of nodal bases for the space \(W^{k}(T^{0}_{\ell})\), i.e.,
When \(T^{0}_{\ell}\) is a mapped hypercube, we have \(N_{k}=(k+1)^{2}\), hence \(\delta W_{k}(T^{0}_{\ell})=\emptyset\) and \(W^{k}(T_{\ell})\) is simply the (mapped) tensor product polynomial space \(\mathscr{P}^{k}(T^{0}_{\ell})\). On the other hand, when \(T^{0}_{\ell}\) is a mapped simplex, we have \(\dim\delta W_{k}(T^{0}_{\ell})=N_{k}-\dim\mathscr{P}^{k}(T^{0}_{\ell})>0\) for \(k\geq 1\). However, we emphasize that the explicit expression of \(\delta W_{k}(T^{0}_{\ell})\) or the basis function \(\phi^{\ell}_{i}\) does not matter in our discretization, as only their nodal degrees of freedom (DOFs) on the quadrature nodes will enter into the numerical integration. Any function \(\alpha(\mathbf{X},t)\) in \(W^{k}_{h}\) (for fixed \(t\)) can be expressed as
\[\alpha(\mathbf{X},t)=\sum_{\ell=1}^{N_{T}}\sum_{i=1}^{N_{k}}\mathsf{a}^{\ell}_{i} (t)\phi^{\ell}_{i}(\mathbf{X}),\]
where \(\{\mathsf{a}^{\ell}_{i}(t)\}\) are the unknown coefficients. We refer to \(W^{k}_{h}\) as the (discontinuous) _integration rule space_, which _only_ contains the \(N_{T}\times N_{k}\) quadrature points and weights (23), and is easy to implement in practice.
We further denote a set of basis functions for \(\mathbf{V}^{k}_{h}\) as \(\{\mathbf{\varphi}_{i}\}_{i=1}^{N_{V}^{k}}\), where \(N_{V}^{k}\) is the dimension of \(\mathbf{V}^{k}_{h}\). We use the continuous space \(\mathbf{V}^{k}_{h}\) to approximate the flow map \(\mathbf{x}_{h}\) and velocity \(\mathbf{u}_{h}\), and the discontinuous space \(W^{k}_{h}\) to approximate the density \(\rho_{h}\), pressure \(p_{h}\), internal energy \(e_{h}\), temperature \(\theta_{h}\), and entropy \(s_{h}\). More specifically, we have
\[\mathbf{x}_{h}(\mathbf{X},t) =\ \sum_{i=1}^{N_{V}^{k}}\mathsf{x}_{i}(t)\mathbf{\varphi}_{i}(\mathbf{X}), \mathbf{u}_{h}(\mathbf{X},t) =\ \sum_{i=1}^{N_{V}^{k}}\mathsf{u}_{i}(t)\mathbf{\varphi}_{i}(\mathbf{X}), \tag{28a}\] \[\rho_{h}(\mathbf{X},t) =\ \sum_{\ell=1}^{N_{T}}\sum_{i=1}^{N_{k}}\rho^{\ell}_{i}(t)\phi^{ \ell}_{i}(\mathbf{X}),\ \ p_{h}(\mathbf{X},t) =\ \sum_{\ell=1}^{N_{T}}\sum_{i=1}^{N_{k}}\mathsf{p}^{\ell}_{i}(t) \phi^{\ell}_{i}(\mathbf{X}),\] (28b) \[e_{h}(\mathbf{X},t) =\ \sum_{\ell=1}^{N_{T}}\sum_{i=1}^{N_{k}}e^{\ell}_{i}(t)\phi^{ \ell}_{i}(\mathbf{X}),\ \ \ \theta_{h}(\mathbf{X},t) =\ \sum_{\ell=1}^{N_{T}}\sum_{i=1}^{N_{k}}\theta^{\ell}_{i}(t) \phi^{\ell}_{i}(\mathbf{X}),\] (28c) \[s_{h}(\mathbf{X},t) =\ \sum_{\ell=1}^{N_{T}}\sum_{i=1}^{N_{k}}\mathsf{s}^{\ell}_{i}(t) \phi^{\ell}_{i}(\mathbf{X}), \tag{28d}\]
where \(\mathsf{X}_{h}=[\mathsf{x}_{1},\cdots,\mathsf{x}_{N_{V}^{k}}]^{T}\), \(\mathsf{U}_{h}=[\mathsf{u}_{1},\cdots,\mathsf{u}_{N_{V}^{k}}]^{T}\), \(\mathsf{R}_{h}=[\rho^{1}_{1},\cdots,\rho^{N_{T}}_{N_{k}}]^{T}\), \(\mathsf{P}_{h}=[\mathsf{p}^{1}_{1},\cdots,\mathsf{p}^{N_{T}}_{N_{k}}]^{T}\), \(\mathsf{E}_{h}=[\mathsf{e}^{1}_{1},\cdots,\mathsf{e}^{N_{T}}_{N_{k}}]^{T}\), \(\Theta_{h}=[\theta^{1}_{1},\cdots,\theta^{N_{T}}_{N_{k}}]^{T}\), and \(\mathsf{S}_{h}=[\mathsf{s}^{1}_{1},\cdots,\mathsf{s}^{N_{T}}_{N_{k}}]^{T}\) are the time-dependent coefficient vectors for \(\mathbf{x}_{h}\), \(\mathbf{u}_{h}\), \(\rho_{h}\), \(p_{h}\), \(e_{h}\), \(\theta_{h}\), and \(s_{h}\), respectively. Note that by the thermodynamic relations (3), there are only two independent thermodynamic variables. Here we take density \(\rho\) and temperature \(\theta\) as the independent variables. The other variables will be updated through the discrete formulas (31) below.
### Trajectory equation and mass conservation
With the notation given in (28), the trajectory equation (5) simply implies that
\[\mathsf{X}^{\prime}_{h}(t)=\mathsf{U}_{h}(t). \tag{29}\]
We require mass conservation to be satisfied pointwise at the quadrature nodes level, specifically, (7) implies that
\[\rho^{\ell}_{i}(t)=\rho_{0}(\mathbf{\xi}^{\ell}_{i})/J_{h}(\mathbf{\xi}^{ \ell}_{i},t),\quad\forall 1\leq\ell\leq N_{T},\;1\leq i\leq N_{k}, \tag{30}\]
where the discrete Jacobian on the quadrature point \(\mathbf{\xi}^{\ell}_{i}\) is \(J_{h}(\mathbf{\xi}^{\ell}_{i},t):=\text{Det}(\nabla_{X}\mathbf{x}_{h}(\mathbf{\xi}^{\ell} _{i},t))\).
### Thermodynamic relations
We require the relations in (3) for the thermodynamic variables be satisfied on the quadrature points level, which implies
\[\mathsf{p}^{\ell}_{i} =(c_{p}-c_{v})\rho^{\ell}_{i}\theta^{\ell}_{i}, \tag{31a}\] \[\mathsf{e}^{\ell}_{i} =c_{v}\theta^{\ell}_{i},\] (31b) \[\mathsf{s}^{\ell}_{i} =c_{v}\log(\theta^{\ell}_{i})-(c_{p}-c_{v})\log(\rho^{\ell}_{i}) +c_{v}, \tag{31c}\]
for all \(1\leq i\leq N_{k}\) and \(1\leq\ell\leq N_{T}\). It is easy to see that the (pointwise) Gibbs equation (4) is satisfied:
\[\rho^{\ell}_{i}\theta^{\ell}_{i}(\mathsf{s}^{\ell}_{i})^{\prime} =\rho^{\ell}_{i}(\mathsf{e}^{\ell}_{i})^{\prime}-\frac{\mathsf{p}^{\ell}_{i} }{\rho^{\ell}_{i}}(\rho^{\ell}_{i})^{\prime}.\]
By the density definition (30) and Jacobi's formula, we have
\[(\rho^{\ell}_{i})^{\prime}=-\rho^{\ell}_{i}\nabla\cdot\mathbf{u}_{h}(\mathbf{\xi}^{ \ell}_{i}),\]
which implies that
\[\rho^{\ell}_{i}\theta^{\ell}_{i}(\mathsf{s}^{\ell}_{i})^{\prime} =\rho^{\ell}_{i}(\mathsf{e}^{\ell}_{i})^{\prime}+\mathsf{p}^{\ell}_{i}\nabla \cdot\mathbf{u}_{h}(\mathbf{\xi}^{\ell}_{i}). \tag{32}\]
We note that the above pointwise relations also hold for the classical low-order SGH schemes [35, 36, 5], where the thermodynamic variables are approximated via piecewise constants; they do not hold in general for the high-order scheme [10] due to the use of a different high-order thermodynamic finite element space.
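At the implementation level, the pointwise relations (30) and (31) amount to elementwise array operations over the quadrature nodes; a minimal NumPy sketch with illustrative values for \(\rho_{0}\), \(J_{h}\), and \(\theta_{h}\) is:

```python
import numpy as np

cp, cv = 2.5, 1.5                          # illustrative specific heats (gamma = 5/3)

# values at the quadrature points of the mesh (illustrative data)
rho0 = np.array([1.0, 1.0, 0.8, 0.8])      # initial density rho_0(xi)
Jh = np.array([1.1, 0.9, 1.2, 1.0])        # Jacobian determinant det(grad_X x_h)(xi)
theta = np.array([1.0, 1.2, 0.9, 1.1])     # temperature DOFs

rho = rho0 / Jh                            # strong mass conservation, eq. (30)
p = (cp - cv) * rho * theta                # eq. (31a)
e = cv * theta                             # eq. (31b)
s = cv * np.log(theta) - (cp - cv) * np.log(rho) + cv   # eq. (31c)

print(rho, p, e, s, sep="\n")
```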
### Discrete EnVarA and velocity equation
Instead of discretizing the force balance equation (21c), here we discretize the energy law and use the EnVarA to _derive_ the discrete force balance equation directly. We denote the discrete action functional
\[\mathcal{A}_{h}:=\int_{0}^{T}(\mathcal{K}_{h}-\mathcal{F}_{h})\text{dt}, \tag{33}\]
where the discrete kinetic energy \(\mathcal{K}_{h}\) and the discrete Helmholtz free energy \(\mathcal{F}_{h}\) are given as
\[\mathcal{K}_{h}=\frac{1}{2}(\rho_{h}J_{h}\mathbf{u}_{h},\mathbf{u}_{h})_{h}=\sum_{\ell=1}^{N_{T}}\sum_{i=1}^{N_{k}}\frac{1}{2}\rho_{0}(\mathbf{\xi}^{\ell}_{i})|\mathbf{u}_{h}(\mathbf{\xi}^{\ell}_{i},t)|^{2}\omega^{\ell}_{i},\]
and
\[\mathcal{F}_{h}=(\psi(\rho_{h},\theta_{h})J_{h},1)_{h}=\sum_{\ell =1}^{N_{T}}\sum_{i=1}^{N_{k}}\psi\left(\rho_{h}(\mathbf{\xi}^{\ell}_{i},t),\theta_ {h}(\mathbf{\xi}^{\ell}_{i},t)\right)J_{h}(\mathbf{\xi}^{\ell}_{i},t)\omega^{\ell}_{i}.\]
Moreover, the discrete dissipation rate is given as
\[\mathcal{D}_{h}:=\frac{1}{2}(\eta J_{h}\nabla_{s}\mathbf{u}_{h}, \nabla_{s}\mathbf{u}_{h})_{h}+\frac{1}{2}\left((\xi-\frac{2}{3}\eta)J_{h}\nabla \cdot\mathbf{u}_{h},\nabla\cdot\mathbf{u}_{h}\right)_{h}. \tag{34}\]
The discrete force balance equation (10) is then
\[\frac{\delta\mathcal{A}_{h}}{\delta\mathsf{x}_{j}}=\frac{\delta\mathcal{D}_{h}}{ \delta\mathsf{u}_{j}},\quad\forall 1\leq j\leq N_{V}^{k}. \tag{35}\]
Elementary calculation, using (29), (30) and Jacobi's formula, yields that
\[\frac{\delta\mathcal{A}_{h}}{\delta\mathsf{x}_{j}}=-(\rho_{0}\dot{\mathbf{u}}_{h}, \mathbf{\varphi}_{j})_{h}+(p_{h}J_{h},\nabla\cdot\mathbf{\varphi}_{j})_{h}, \tag{36}\]
where \(\dot{\mathbf{u}}_{h}=\sum_{i=1}^{N_{V}^{k}}\mathsf{u}_{i}(t)^{\prime}\mathbf{\varphi}_ {i}\), and the pressure \(p_{h}\in W_{h}^{k}\) satisfies
\[\mathsf{p}_{i}^{\ell}=\psi_{\rho}(\rho_{i}^{\ell},\theta_{i}^{\ell})\rho_{i}^{ \ell}-\psi(\rho_{i}^{\ell},\theta_{i}^{\ell})=(c_{p}-c_{v})\rho_{i}^{\ell} \theta_{i}^{\ell},\]
according to (31a). We also have
\[\frac{\delta\mathcal{D}_{h}}{\delta\mathsf{u}_{j}}=(\eta J_{h}\nabla_{s}\mathbf{u }_{h},\nabla_{s}\mathbf{\varphi}_{j})_{h}+((\xi-\frac{2}{3}\eta)J_{h}\nabla\cdot \mathbf{u}_{h},\nabla\cdot\mathbf{\varphi}_{j})_{h},\quad\forall 1\leq j\leq N_{V}^{k}. \tag{37}\]
Plugging (36) and (37) back to (35), and using the definition of the pressure, we get the semi-discrete force balance equation:
\[(\rho_{0}\dot{\mathbf{u}}_{h},\mathbf{\varphi}_{j})_{h}-\left((c_{p}-c_{v})\rho_{0} \theta_{h},\nabla\cdot\mathbf{\varphi}_{j}\right)_{h}+(\sigma_{h},\nabla\mathbf{ \varphi}_{j})_{h}=0,\quad\forall 1\leq j\leq N_{V}^{k}, \tag{38}\]
where \(\sigma_{h}\) is the viscous stress defined as
\[\sigma_{h}:=\eta J_{h}\nabla_{s}\mathbf{u}_{h}+(\xi-\frac{2}{3}\eta)J_{h}\nabla \cdot\mathbf{u}_{h}\mathbf{I}. \tag{39}\]
Here the evaluation of spatial derivative terms shall be pulled back to the initial configuration \(\Omega^{0}\). In particular,
\[\nabla\mathbf{u}_{h}=(F_{h})^{-1}\nabla_{X}\mathbf{u}_{h},\]
where \(F_{h}=\nabla_{X}\mathbf{x}_{h}\) is the deformation tensor. Equation (38) provides an ODE system for the velocity coefficient vector \(\mathsf{U}_{h}(t)\). Taking test function \(\mathbf{\varphi}_{j}\) as a constant vector, we immediately obtain global momentum conservation:
\[\frac{d}{dt}(\rho_{0}\mathbf{u}_{h},1)_{h}=(\rho_{0}\dot{\mathbf{u}}_{h},1)_{h}=0.\]
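Since all derivatives are taken on the fixed reference mesh, the geometric data needed at a quadrature point are the deformation tensor \(F_{h}\), its determinant, and the pulled-back velocity gradient. The NumPy sketch below illustrates this for one point with made-up reference gradients; it adopts the convention \((\nabla\mathbf{u})_{ij}=\partial u_{i}/\partial x_{j}\), so the pull-back appears as a right multiplication by \(F_{h}^{-1}\) (the placement of the inverse simply reflects this convention).

```python
import numpy as np

# reference-gradient data at one quadrature point (illustrative values):
# rows = vector components, columns = derivatives w.r.t. X_1, X_2
grad_X_x = np.array([[1.05, 0.02],        # nabla_X x_h  (deformation tensor F_h)
                     [0.01, 0.97]])
grad_X_u = np.array([[0.10, -0.03],       # nabla_X u_h
                     [0.05,  0.20]])

F = grad_X_x
J = np.linalg.det(F)                      # Jacobian determinant J_h
grad_u = grad_X_u @ np.linalg.inv(F)      # physical velocity gradient, (grad u)_{ij} = du_i/dx_j
grad_s_u = 0.5 * (grad_u + grad_u.T)      # symmetric gradient
div_u = np.trace(grad_u)                  # divergence of u

print(J, div_u)
print(grad_s_u)
```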
### Energy conservation and temperature equation
We use energy conservation to get an update equation for the temperature coefficient vector \(\Theta_{h}(t)\). In the absence of heat conduction (\(\kappa=0\)), the spatial discretization of the internal energy equation (20) leads to
\[(\rho_{h}\dot{e}_{h},J_{h}\phi_{j}^{\ell})_{h}=-(p_{h}J_{h}\nabla\cdot\mathbf{u}_{ h},\phi_{j}^{\ell})_{h}+(\sigma_{h}:\nabla\mathbf{u}_{h},\phi_{j}^{\ell})_{h}, \quad\forall 1\leq\ell\leq N_{T},\,1\leq j\leq N_{k}. \tag{40}\]
Equivalently, the equation (40) has the following pointwise form for the coefficient vector \(\mathsf{E}_{h}\):
\[\rho_{0}(\mathbf{\xi}_{j}^{\ell})(\mathsf{e}_{j}^{\ell})^{\prime}=-\mathsf{p}_{j}^ {\ell}J_{h}(\mathbf{\xi}_{j}^{\ell})(\nabla\cdot\mathbf{u}_{h})(\mathbf{\xi}_{j}^{\ell})+ \sigma_{h}(\mathbf{\xi}_{j}^{\ell}):\nabla\mathbf{u}_{h}(\mathbf{\xi}_{j}^{\ell}), \tag{41}\]
for all \(1\leq\ell\leq N_{T}\), \(1\leq j\leq N_{k}\). Plugging the relations (31a) and (31b) back into (41), we get the following ODE system for the coefficient vector \(\Theta_{h}(t)\):
\[c_{v}\rho_{0}(\mathbf{\xi}_{j}^{\ell})(\theta_{j}^{\ell})^{\prime}=-(c_{p}-c_{v}) \rho_{0}(\mathbf{\xi}_{j}^{\ell})(\nabla\cdot\mathbf{u}_{h})(\mathbf{\xi}_{j}^{\ell})\theta _{j}^{\ell}+\sigma_{h}(\mathbf{\xi}_{j}^{\ell}):\nabla\mathbf{u}_{h}(\mathbf{\xi}_{j}^{\ell}), \tag{42}\]
One key observation is that (42) is a _linear_ ODE system for \(\Theta_{h}\). Moreover, taking \(\mathbf{\varphi}_{j}=\mathbf{u}_{h}\) in (38) and \(\phi_{j}^{\ell}=1\) in (40) and adding, we obtain the total energy conservation:
\[\frac{d}{dt}\left(\frac{1}{2}\rho_{0}|\mathbf{u}_{h}|^{2}+\rho_{0}e_{h},1\right)_{h }=(\rho_{0}\dot{\mathbf{u}}_{h},\mathbf{u}_{h})_{h}+(\rho_{0}\dot{e}_{h},1)_{h}=0.\]
### The entropy equation
Combining the Gibbs equation (32) with the internal energy equation (41) and simplifying, we obtain the ODE system satisfied by the entropy:
\[J_{h}(\mathbf{\xi}_{j}^{\ell})\rho_{j}^{\ell}\theta_{j}^{\ell}(\mathsf{s}_{j}^{\ell})^{\prime}=\sigma_{h}(\mathbf{\xi}_{j}^{\ell}):\nabla\mathbf{u}_{h}(\mathbf{\xi}_{j}^{\ell}), \tag{43}\]
for all \(1\leq\ell\leq N_{T}\), \(1\leq j\leq N_{k}\). This implies that
\[\frac{d}{dt}(\rho_{0}s_{h},1)_{h}=(\rho_{0}\dot{s}_{h},1)_{h}=(\frac{\sigma_{h }:\nabla\mathbf{u}_{h}}{\theta_{h}},1)_{h}, \tag{44}\]
where positivity of the right hand side, i.e., semi-discrete entropy stability, is guaranteed as long as the temperature \(\Theta_{h}>0\). We remark that the entropy stability (44) is a direct consequence of our special choice of (nodal) thermodynamic finite element space (25) and (27).
### Artificial viscosity
To make the scheme robust even in the case of zero _physical viscosities_ with \(\eta=\xi=0\), we add artificial viscosity [35] to the system so that shocks can be dissipated. Specifically, we add to the stress term (39) an artificial stress tensor \(\sigma_{h}^{av}\) of the following form:
\[\sigma_{h}\leftarrow\sigma_{h}+\sigma_{h}^{av},\quad\text{ where }\sigma_{h}^{av}:=\mu_{av}J_{h}\nabla_{s}\mathbf{u}_{h}, \tag{45}\]
in which, following [10], the artificial viscosity coefficient \(\mu_{av}\) is:
\[\mu_{av}=\rho_{h}\left(q_{2}\ell_{s_{1}}^{2}|\Delta_{s_{1}}\mathbf{u}_{h}|+q_{1}\phi_{0}\phi_{1}\ell_{s_{1}}c_{s}\right) \tag{46}\]
where \(q_{1}\) and \(q_{2}\) are linear and quadratic scaling coefficients, \(c_{s}=\sqrt{\gamma p_{h}/\rho_{h}}\) is the speed of sound with \(\gamma=c_{p}/c_{v}\) being the adiabatic constant, \(\Delta_{s_{1}}\mathbf{u}_{h}:=s_{1}\cdot\nabla\mathbf{u}_{h}\cdot s_{1}\) is the directional measure of compression, and \(\ell_{s_{1}}=\ell_{0}|J_{h}s_{1}|\) is the directional length scale along the direction \(s_{1}\). The two linear switches are \(\phi_{0}=\frac{|\nabla\cdot\mathbf{u}_{h}|}{|\nabla\mathbf{u}_{h}|}\) and \(\phi_{1}=\begin{cases}1,&\text{if }\Delta_{s_{1}}\mathbf{u}_{h}<0,\\ 0,&\text{if }\Delta_{s_{1}}\mathbf{u}_{h}\geq 0.\end{cases}\) Here the direction \(s_{1}\) is the unit eigenvector of the symmetric tensor \(\nabla_{s}\mathbf{u}_{h}\) with the smallest eigenvalue \(\lambda_{1}\), i.e.,
\[(\nabla_{s}\mathbf{u}_{h})s_{1}=\lambda_{1}s_{1},\quad|s_{1}|=1,\text{ and }\lambda_{1}\text{ is the smallest eigenvalue}.\]
With this notation, we have \(\Delta_{s_{1}}\mathbf{u}_{h}=\lambda_{1}\). Moreover, \(\ell_{0}=h_{0}/k\) is the mesh size of the initial domain divided by the polynomial degree \(k\). We refer the interested reader to [10] and the references cited therein for more discussion on the choice of the artificial viscosity coefficient.
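A pointwise NumPy sketch of the coefficient (46) is given below. We read \(J_{h}s_{1}\) in the length scale as the deformation tensor acting on the compression direction, following [10], and use the switches \(\phi_{0},\phi_{1}\) defined above; the sample deformation tensor and velocity gradient are illustrative.

```python
import numpy as np

def artificial_viscosity(rho, p, F, grad_u, ell0, gamma=5.0 / 3.0, q1=0.5, q2=2.0):
    """Artificial viscosity coefficient mu_av of eq. (46) at one quadrature point.

    rho, p  : density and pressure at the point
    F       : deformation tensor (nabla_X x_h), used for the directional length scale
    grad_u  : physical velocity gradient, (grad u)_{ij} = du_i/dx_j
    ell0    : initial mesh size divided by the polynomial degree
    """
    grad_s_u = 0.5 * (grad_u + grad_u.T)
    eigvals, eigvecs = np.linalg.eigh(grad_s_u)        # eigenvalues in ascending order
    lam1, s1 = eigvals[0], eigvecs[:, 0]               # smallest eigenvalue and its unit eigenvector
    delta_s1_u = lam1                                  # directional measure of compression
    ell_s1 = ell0 * np.linalg.norm(F @ s1)             # directional length scale
    cs = np.sqrt(gamma * p / rho)                      # speed of sound
    div_u = np.trace(grad_u)
    phi0 = abs(div_u) / max(np.linalg.norm(grad_u), 1e-14)   # linear switch
    phi1 = 1.0 if delta_s1_u < 0.0 else 0.0                  # compression switch
    return rho * (q2 * ell_s1 ** 2 * abs(delta_s1_u) + q1 * phi0 * phi1 * ell_s1 * cs)

# illustrative point data
F = np.eye(2)
grad_u = np.array([[-0.8, 0.1], [0.1, 0.2]])
print(artificial_viscosity(rho=1.0, p=1.0, F=F, grad_u=grad_u, ell0=0.05))
```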
### Summary
The final form of the semi-discrete scheme is summarized in Algorithm 1 below. This spatial discretization is high-order, mass/momentum/energy conserving, and entropy stable.
* Find \(\mathbf{x}_{h},\mathbf{u}_{h}\in\mathbf{V}_{h}^{k}\), and \(\theta_{h}\in W_{h}^{k}\) such that the ODE system \[\mathsf{X}_{h}^{\prime}(t) =\mathsf{U}_{h}(t),\] \[(\rho_{0}\dot{\mathbf{u}}_{h},\mathbf{\varphi}_{h})_{h}-\left((c_{p}-c_{ v})\rho_{0}\theta_{h},\nabla\cdot\mathbf{\varphi}_{h}\right)_{h}+(\sigma_{h}, \nabla\mathbf{\varphi}_{h})_{h} =0,\quad\forall\mathbf{\varphi}_{h}\in\mathbf{V}_{h}^{k},\] \[(\theta_{j}^{\ell})^{\prime}+(\gamma-1)(\nabla\cdot\mathbf{u}_{h})( \mathbf{\xi}_{j}^{\ell})\theta_{j}^{\ell}-\frac{\sigma_{h}(\mathbf{\xi}_{j}^{\ell})}{c _{v}\rho_{0}(\mathbf{\xi}_{j}^{\ell})}:\nabla\mathbf{u}_{h}(\mathbf{\xi}_{j}^{\ell}) =0,\quad\forall 1\leq\ell\leq N_{T},1\leq j\leq N_{k},\] holds for the coefficient vectors \(\mathsf{X}_{h}\), \(\mathsf{U}_{h}\), and \(\Theta_{h}\), where the numerical stress \[\sigma_{h}=(\eta+\mu_{av})J_{h}\nabla_{s}\mathbf{u}_{h}+(\xi-\frac{2}{3}\eta)J_{h} \nabla\cdot\mathbf{u}_{h}\mathbf{I},\] in which the artificial viscosity \(\mu_{av}\) is given in (46). Here the notation (28) for the finite element approximations is used.
* The density approximation \(\rho_{h}\in W_{h}^{k}\) satisfies mass conservation (30), and the pressure, internal energy, and entropy approximations \(p_{h},e_{h},s_{h}\in W_{h}^{k}\) satisfy the thermodynamic relations (31).
## 4. Temporal discretization
In this section, we focus on the discretization of the ODE system in Algorithm 1. We use fully implicit time discretizations so that the fully discrete scheme is robust for all Mach numbers. We refer to [10, 30] for alternative conservative explicit schemes.
### First order energy dissipative scheme
Using implicit Euler for the time derivative terms, we arrive at the following first order scheme: Given data \(\mathbf{x}_{h}^{n-1},\mathbf{u}_{h}^{n-1}\in\mathbf{V}_{h}^{k}\) and \(\theta_{h}^{n-1}\in W_{h}^{k}\) at time \(t^{n-1}\), and time step size \(\delta t\), find solution \(\mathbf{x}_{h}^{n},\mathbf{u}_{h}^{n}\in\mathbf{V}_{h}^{k}\) and \(\theta_{h}^{n}\in W_{h}^{k}\) such that
\[\frac{\mathbf{x}_{h}^{n}-\mathbf{x}_{h}^{n-1}}{\delta t} =\mathbf{u}_{h}^{n}, \tag{47a}\] \[(\rho_{0}\frac{\mathbf{u}_{h}^{n}-\mathbf{u}_{h}^{n-1}}{\delta t},\mathbf{ \varphi}_{h})_{h}-((c_{p}-c_{v})\rho_{0}\theta_{h}^{n},\nabla\cdot\mathbf{\varphi} _{h})_{h}+(\sigma_{h}^{n},\nabla\mathbf{\varphi}_{h})_{h} =0,\quad\forall\varphi_{h}\in\mathbf{V}_{h}^{k},\] (47b) \[\frac{\theta_{j}^{\ell,n}-\theta_{j}^{\ell,n-1}}{\delta t}+( \gamma-1)\nabla\cdot\mathbf{u}_{h}^{n}(\mathbf{\xi}_{j}^{\ell})\theta_{j}^{\ell,n}- \frac{\sigma_{h}^{n}(\mathbf{\xi}_{j}^{\ell}):\nabla\mathbf{u}_{h}^{n}(\mathbf{\xi}_{j}^{ \ell})}{c_{v}\rho_{0}(\mathbf{\xi}_{j}^{\ell})} =0,\quad\forall j,\ell, \tag{47c}\]
where the stress
\[\sigma_{h}^{n}=(\eta+\mu_{av}^{n-1})J_{h}^{n}\nabla_{s}\mathbf{u}_{h}^{n}+(\xi- \frac{2}{3}\eta)J_{h}^{n}\nabla\cdot\mathbf{u}_{h}^{n}\mathbf{I},\]
in which the artificial viscosity coefficient \(\mu_{av}^{n}\) is evaluated at time \(t^{n}\), and the Jacobian determinant \(J_{h}^{n}=|\nabla_{X}\mathbf{x}_{h}^{n}|\). The above system can be solved by first expressing \(\mathbf{x}_{h}^{n}\) and \(\theta_{h}^{n}\) in terms of \(\mathbf{u}_{h}^{n}\) using (47a) and (47c):
\[\mathbf{x}_{h}^{n} =\mathbf{x}_{h}^{n-1}+\delta t\mathbf{u}_{h}^{n}, \tag{48a}\] \[\theta_{j}^{\ell,n} =\frac{\theta_{j}^{\ell,n-1}+\delta t\frac{\sigma_{h}^{n}(\mathbf{ \xi}_{i}^{\ell}):\nabla\mathbf{u}_{h}^{n}(\mathbf{\xi}_{i}^{\ell})}{c_{v}\rho_{0}(\mathbf{ \xi}_{i}^{\ell})}}{1+\delta t(\gamma-1)\nabla\cdot\mathbf{u}_{h}^{n}(\mathbf{\xi}_{i}^{ \ell})}, \tag{48b}\]
and then solve the nonlinear system for \(\mathbf{u}_{h}^{n}\) in (47b) using (48). We use Newton's method to solve this nonlinear system for the velocity DOFs.
For the scheme (47), positivity of density on the quadrature points is guaranteed as long as the Jacobian \(J_{h}^{n}>0\) on these quadrature points. And positivity of the temperature (hence positivity of pressure and internal energy) is satisfied as long as the denominator of right hand side of (48b) stays positive, i.e.,
\[1+\delta t(\gamma-1)\nabla\cdot\mathbf{u}_{h}^{n}(\mathbf{\xi}_{i}^{\ell})>0,\quad \forall i,\ell.\]
Moreover, strong mass conservation is satisfied due to (30), and global momentum conservation is satisfied by taking \(\mathbf{\varphi}_{j}\) to be a global constant in (47b). Finally, taking \(\mathbf{\varphi}_{j}=\mathbf{u}_{h}^{n}\) in (47b) and combining with (47c), we get
\[(\rho_{0}(\mathbf{u}_{h}^{n}-\mathbf{u}_{h}^{n-1}),\mathbf{u}_{h}^{n})_{h}+(\rho_{0}(e_{h} ^{n}-e_{h}^{n-1}),1)_{h}=0,\]
which implies that
\[\left(\rho_{0}(\frac{1}{2}|\mathbf{u}_{h}^{n}|^{2}+e_{h}^{n}),1\right)_{h}-\left( \rho_{0}(\frac{1}{2}|\mathbf{u}_{h}^{n-1}|^{2}+e_{h}^{n-1}),1\right)_{h}=-\left( \rho_{0}\frac{1}{2}|\mathbf{u}_{h}^{n}-\mathbf{u}_{h}^{n-1}|^{2},1\right)_{h}\leq 0\]
Hence, the total energy is _dissipated_ over time for the scheme (47).
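At the fully discrete level, the elimination (48b) and the positivity condition above reduce to simple pointwise operations on the quadrature-node arrays; a minimal NumPy sketch with illustrative data (the viscous production lumped into one array) is:

```python
import numpy as np

def update_temperature(theta_old, div_u_new, prod_new, rho0, dt, gamma=1.4, cv=1.0):
    """Pointwise temperature update of eq. (48b).

    theta_old : temperature at the quadrature points at t^{n-1}
    div_u_new : div(u_h^n) at the quadrature points
    prod_new  : viscous production sigma_h^n : grad(u_h^n) at the quadrature points
    """
    denom = 1.0 + dt * (gamma - 1.0) * div_u_new
    if np.any(denom <= 0.0):
        raise ValueError("time step too large: positivity of temperature not guaranteed")
    return (theta_old + dt * prod_new / (cv * rho0)) / denom

# illustrative data at four quadrature points
theta_old = np.array([1.0, 1.1, 0.9, 1.2])
div_u_new = np.array([-0.5, 0.2, 0.0, -1.0])
prod_new = np.array([0.01, 0.0, 0.02, 0.05])
rho0 = np.ones(4)
print(update_temperature(theta_old, div_u_new, prod_new, rho0, dt=0.1))
```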
### Second order energy-conservative scheme
Energy conservation can be recovered from the first order scheme (47) by applying a time filter, which is the same as the midpoint rule; see [4]. This time stepping algorithm has two steps and is recorded in Algorithm 2 for reference.
```
\(\bullet\) Apply the backward Euler scheme (47) with half time step \(\delta t/2\) to get approximations at the midpoint \(t^{n-\frac{1}{2}}=t^{n-1}+\delta t/2\), and denote the solutions as \(\mathbf{x}_{h}^{n-\frac{1}{2}}\), \(\mathbf{u}_{h}^{n-\frac{1}{2}}\), and \(\theta_{h}^{n-\frac{1}{2}}\). \(\bullet\) Apply a time filter (forward Euler) step to approximate the solutions at time \(t^{n}=t^{n-1}+\delta t\): \[\mathbf{x}_{h}^{n}=2\mathbf{x}_{h}^{n-\frac{1}{2}}-\mathbf{x}_{h}^{n-1},\ \ \mathbf{u}_{h}^{n}=2\mathbf{u}_{h}^{n-\frac{1}{2}}-\mathbf{u}_{h}^{n-1},\ \ \theta_{h}^{n}=2\theta_{h}^{n-\frac{1}{2}}-\theta_{h}^{n-1}.\] (49) \(\bullet\) The density, pressure, internal energy, and entropy approximations \(\rho_{h},p_{h},e_{h},s_{h}\in W_{h}^{k}\) are then recovered through (30) and (31).
```
**Algorithm 2** The midpoint rule with Backward Euler - Forward Euler implementation.
We notice that the BE step implies
\[(\rho_{0}(\mathbf{u}_{h}^{n-\frac{1}{2}}-\mathbf{u}_{h}^{n-1}),\mathbf{u}_{h}^{n-\frac{1}{ 2}})_{h}+(\rho_{0}(e_{h}^{n-\frac{1}{2}}-e_{h}^{n-1}),1)_{h}=0,\]
By the extrapolation relations (49), we have
\[(\mathbf{u}_{h}^{n-\frac{1}{2}}-\mathbf{u}_{h}^{n-1})\cdot\mathbf{u}_{h}^{n-\frac{1}{2}}=\frac{1}{4}(|\mathbf{u}_{h}^{n}|^{2}-|\mathbf{u}_{h}^{n-1}|^{2}),\ \text{and}\ e_{h}^{n-\frac{1}{2}}-e_{h}^{n-1}=\frac{1}{2}(e_{h}^{n}-e_{h}^{n-1}).\]
Combining these equations, we get total energy conservation for this two-step method:
\[\left(\rho_{0}(\frac{1}{2}|\mathbf{u}_{h}^{n}|^{2}+e_{h}^{n}),1\right)_{h}=\left( \rho_{0}(\frac{1}{2}|\mathbf{u}_{h}^{n-1}|^{2}+e_{h}^{n-1}),1\right)_{h}.\]
### High-order BDF schemes
It is natural to extend the implicit Euler scheme (47) to higher order by replacing the backward Euler time difference terms in (47) using higher-order backward difference formulas (BDFs). For completeness, we record these high-order schemes with uniform time stepping in Algorithm 3 below.
```
\(\bullet\) Given data \(\mathbf{x}_{h}^{n-j},\mathbf{u}_{h}^{n-j}\in\mathbf{V}_{h}^{k}\) and \(\theta_{h}^{n-j}\in W_{h}^{k}\) at time \(t^{n-j}=(n-j)\delta t\) for \(j=1,\cdots,m\), find solution \(\mathbf{x}_{h}^{n},\mathbf{u}_{h}^{n}\in\mathbf{V}_{h}^{k}\) and \(\theta_{h}^{n}\in W_{h}^{k}\) at time \(t^{n}=n\delta t\) such that \[D_{\delta t}^{m}(\mathbf{x}_{h}^{n}) =\mathbf{u}_{h}^{n},\] (50a) \[(\rho_{0}D_{\delta t}^{m}(\mathbf{u}_{h}^{n}),\mathbf{\varphi}_{h})_{h}-\left((c_{p}-c_{v})\rho_{0}\theta_{h}^{n},\nabla\cdot\mathbf{\varphi}_{h}\right)_{h}+(\sigma_{h}^{n},\nabla\mathbf{\varphi}_{h})_{h} =0,\quad\forall\mathbf{\varphi}_{h}\in\mathbf{V}_{h}^{k},\] (50b) \[D_{\delta t}^{m}(\theta_{j}^{\ell,n})+(\gamma-1)\nabla\cdot\mathbf{u}_{h}^{n}(\mathbf{\xi}_{j}^{\ell})\theta_{j}^{\ell,n}-\frac{\sigma_{h}^{n}(\mathbf{\xi}_{j}^{\ell}):\nabla\mathbf{u}_{h}^{n}(\mathbf{\xi}_{j}^{\ell})}{c_{v}\rho_{0}(\mathbf{\xi}_{j}^{\ell})} =0,\quad\forall j,\ell,\] (50c)
where \(D_{\delta t}^{m}(\alpha)\) is the approximation to the time derivative term \(\alpha(t)^{\prime}\) using BDF\([m]\), e.g.,
\[D_{\delta t}^{2}(\alpha^{n})=\frac{3\alpha^{n}-4\alpha^{n-1}+\alpha^{n-2}}{2 \,\delta t},\quad D_{\delta t}^{3}(\alpha^{n})=\frac{11\alpha^{n}-18\alpha^{n -1}+9\alpha^{n-2}-2\alpha^{n-3}}{6\,\delta t}.\]
\(\bullet\) The density, pressure, internal energy, and entropy approximations \(\rho_{h},p_{h},e_{h},s_{h}\in W_{h}^{k}\) are then recovered through (30) and (31).
```
**Algorithm 3** High-order BDF\([m]\) time stepping with uniform time step size \(\delta t\).
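For reference, the uniform-step BDF difference operator \(D_{\delta t}^{m}\) used above can be coded with the standard BDF coefficients; the small sketch below (orders 1-3, with a smooth scalar test) only illustrates the operator, not the full scheme.

```python
import numpy as np

# standard BDF coefficients: D^m(a^n) = (sum_j c_j * a^{n-j}) / (b * dt)
_BDF = {
    1: (np.array([1.0, -1.0]), 1.0),
    2: (np.array([3.0, -4.0, 1.0]), 2.0),
    3: (np.array([11.0, -18.0, 9.0, -2.0]), 6.0),
}

def bdf_derivative(history, dt, m):
    """Approximate the time derivative at t^n with BDF[m].

    history : sequence [a^n, a^{n-1}, ..., a^{n-m}] of arrays (newest first)
    """
    coeffs, denom = _BDF[m]
    return sum(c * a for c, a in zip(coeffs, history)) / (denom * dt)

# sanity check on a smooth scalar function a(t) = sin(t)
dt, tn = 1e-3, 0.7
hist = [np.sin(tn - j * dt) for j in range(4)]
print(bdf_derivative(hist, dt, 3), np.cos(tn))   # should agree to O(dt^3)
```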
## 5. Numerical results
We present numerical results in this section using the open-source finite-element software NGSolve [31], [https://ngsolve.org/](https://ngsolve.org/). For all the simulation results, we consider inviscid models with _zero_ physical viscosities.
We use a (variable time step size) BDF2 time stepping with high-order spatial discretizations in Algorithm 3 for all the examples, except for Example 5.1 where higher order BDF time steppings are also used to verify the space/time high-order accuracy of the proposed methods. For problems with shocks, we take \(q_{1}=0.5\) and \(q_{2}=2\) as the default choice of the artificial viscosity parameters in (46) unless otherwise stated.
We take the time step size as
\[\delta t=\min\{\text{CFL}\frac{h_{min}}{|\mathbf{u}_{h}|+c_{s}}\}, \tag{51}\]
where the length scale \(h_{min}=h_{0}\alpha_{0}/k\) with \(\alpha_{0}\) being the minimal singular value of the Jacobian matrix \(\nabla_{X}\mathbf{x}_{h}\). Here the default choice of the CFL constant is taken to be \(\text{CFL}=1\) unless otherwise stated. The automatic time-step control detailed in [10, 7.3] is also used for the examples with shocks. Newton's method is used to solve the nonlinear system for the velocity DOFs in each time step, where the average iteration counts for all cases are observed to be around 4-8. Most of the linearized systems in each Newton iteration are solved using the sparse Cholesky factorization, with the only exception of the low-Mach number cases in Example 5.6, where a direct PARDISO solver is used as the matrix failed to be positive definite therein.
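A direct evaluation of the time-step rule (51) over the quadrature points might look as follows; the arrays are illustrative, and the length scale is computed from the smallest singular value of \(\nabla_{X}\mathbf{x}_{h}\) as described above.

```python
import numpy as np

def cfl_time_step(grad_X_x, u, cs, h0, k, cfl=1.0):
    """Time step of eq. (51), taking the minimum over the quadrature points.

    grad_X_x : (N, d, d) deformation tensors at the quadrature points
    u        : (N, d)    velocities at the quadrature points
    cs       : (N,)      local speeds of sound
    h0, k    : initial mesh size and polynomial degree
    """
    # smallest singular value of the deformation tensor at each point
    alpha0 = np.linalg.svd(grad_X_x, compute_uv=False)[:, -1]
    h_min = h0 * alpha0 / k
    return cfl * np.min(h_min / (np.linalg.norm(u, axis=1) + cs))

# illustrative data at three quadrature points
grad_X_x = np.stack([np.eye(2), 0.8 * np.eye(2), np.array([[1.0, 0.3], [0.0, 1.0]])])
u = np.array([[0.1, 0.0], [0.5, 0.2], [0.0, -0.3]])
cs = np.array([1.2, 1.0, 1.1])
print(cfl_time_step(grad_X_x, u, cs, h0=0.05, k=3))
```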
### Accuracy test: 2D Taylor-Green Vortex
We consider the inviscid 2D Taylor-Green vortex problem proposed in [10] to check the high-order convergence of our proposed algorithm on deforming domains. Following [10], we take \(\gamma=5/3\) and add a source term
\[e_{\text{src}}(\mathbf{x},t)=\frac{3\pi}{8}(\cos(3\pi x)\cos(\pi y)-\cos(\pi x) \cos(3\pi y))\]
to the internal energy equation (20). The computational domain is a unit square with wall boundary conditions, and the initial conditions are taken such that the exact solutions are:
\[\rho(\mathbf{x},t) =1,\] \[\mathbf{u}(\mathbf{x},t) =(\sin(\pi x)\cos(\pi y),-\cos(\pi x)\sin(\pi y)),\] \[p(\mathbf{x},t) =\frac{1}{4}(\cos(2\pi x)+\cos(2\pi y))+1.\]
We turn off artificial viscosity in the numerical simulations, and perform mesh convergence studies for the \(L^{2}\)-errors of velocity and internal energy at final time \(t=0.5\) on a sequence of four consecutive uniform rectangular meshes with size \(2^{3+l}\times 2^{3+l}\) for \(l=0,1,2,3\). We consider the BDF\([m]\) scheme of Algorithm 3 with polynomials of degree \(m\) used for the spatial discretization, for \(m=1,2,3,4\). The time step size is taken to be \(\delta t=0.05/2^{l}\) for \(l=0,1,2,3\). The midpoint rule Algorithm 2 (with smaller time steps) is used to generate the starting values for the high-order BDF schemes. The history of convergence for the two \(L^{2}\) errors at \(t=0.5\) is recorded in Table 2. We observe \(m\)-th order convergence for both variables for all \(1\leq m\leq 4\); hence, high-order convergence in both space and time is achieved. This convergence rate is optimal for the BDF time stepping, but suboptimal by one order for the spatial discretization. We find the convergence behavior of the BDF\([m+1]\)-\(P^{m}\) scheme (not reported here for simplicity) to be similar to that of BDF\([m]\)-\(P^{m}\) for \(1\leq m\leq 3\), which suggests the one-order reduction of the spatial convergence rate is unavoidable for our scheme.
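For reference, the "order" columns in Table 2 follow from consecutive errors by the usual log-ratio formula; a short sketch, applied to the BDF2-\(P^{2}\) velocity errors from the table, is given below.

```python
import math

def observed_orders(errors, refinement_factor=2.0):
    """Estimated convergence order between consecutive uniform refinements."""
    return [math.log(e_coarse / e_fine) / math.log(refinement_factor)
            for e_coarse, e_fine in zip(errors[:-1], errors[1:])]

# Velocity errors of the m = 2 rows in Table 2:
print(observed_orders([1.077e-02, 3.250e-03, 7.933e-04, 1.964e-04]))
# -> approximately [1.73, 2.03, 2.01], matching the "order" column
```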
We remark that our spatial discretization can be made slightly more efficient by taking the polynomial degree one order lower for the thermodynamic variables than for the flow map approximation, while still maintaining a similar convergence behavior. However, since the major computational cost of our scheme lies in the nonlinear system solve in (47b), such an efficiency gain is not significant, and we do not investigate it further in this work.
### 1D Shock Tube
We consider a simple 1D Riemann problem, the Sod shock tube on the domain \(\Omega=[-5,5]\) with initial condition
\[(\rho,\mathbf{u},p)=(1,0,1)\text{ for }x\in[-5,0],\quad(\rho,\mathbf{u},p)=(0.125,0,0.1) \text{ for }x\in[0,5].\]
Here \(\gamma=1.4\). We apply the BDF2 scheme with polynomial degree \(k=4\) on an initial mesh with 20 uniform cells. The results at final time \(t=2.0\) on all \(20\times(4+1)=100\) quadrature points are shown in Figure 1, along with the deformed cells. The numerical approximation agrees with the exact Riemann solution quite well even on this coarse mesh, where the shock is resolved within 2 cells. As typical of Lagrangian schemes, the contact discontinuity is captured without dissipation. We also observe the "wall heating" phenomenon in the internal energy at the contact.
### 2D Sedov explosion
The Sedov explosion [32] models the expanding wave generated by an intense explosion in a perfect gas. It is a standard problem to test the ability of codes to preserve the radial symmetry of shocks. The domain is a square \(\Omega=[0,1.2]\times[0,1.2]\). The initial condition is set to have unit density and zero velocity, and also to have zero internal energy except at the left bottom corner cell \(T_{0}\), where it is a bilinear function whose value is \(\frac{0.2448\times 4}{\text{area}(T_{0})}\) on the left/bottom corner vertex and zero on the other three vertices. Hence the total initial internal energy is \(\int_{\Omega_{0}}\rho e\,\mathrm{d}X=0.2448\). Symmetry boundary conditions are imposed on the left and bottom boundaries, while free boundary conditions are used on the top and right boundaries. The analytic solution at time \(t=1\) gives a shock at radius \(r=1\) with a peak density of 6.
We apply the BDF2 scheme with polynomial degree \(k=4\) on initial uniform rectangular meshes of size \(N\times N\) with \(N=16\) and \(N=32\). The density field on the deformed meshes, together with the scatter plot of density versus radius \(r=\sqrt{x^{2}+y^{2}}\) on all quadrature points, is shown in Figure 2. The radial symmetry of the solution is preserved, and the numerical solution agrees quite well with the analytic solution in Figure 2(c).
\begin{table}
\begin{tabular}{c c|c c|c c} \hline \hline \(m\) & mesh & \(\|\mathbf{u}_{h}-\mathbf{u}\|_{\Omega}\) & order & \(\|e_{h}-e\|_{\Omega}\) & order \\ \hline \multirow{4}{*}{1} & \(8\times 8\) & 1.052e-01 & – & 1.337e-01 & – \\ & \(16\times 16\) & 4.131e-02 & 1.35 & 7.284e-02 & 0.88 \\ & \(32\times 32\) & 1.949e-02 & 1.08 & 3.697e-02 & 0.98 \\ & \(64\times 64\) & 9.710e-03 & 1.00 & 1.852e-02 & 1.00 \\ \hline \multirow{4}{*}{2} & \(8\times 8\) & 1.077e-02 & – & 1.263e-02 & – \\ & \(16\times 16\) & 3.250e-03 & 1.73 & 3.578e-03 & 1.82 \\ & \(32\times 32\) & 7.933e-04 & 2.03 & 8.764e-04 & 2.03 \\ & \(64\times 64\) & 1.964e-04 & 2.01 & 2.192e-04 & 2.00 \\ \hline \multirow{4}{*}{3} & \(8\times 8\) & 5.590e-03 & – & 2.947e-03 & – \\ & \(16\times 16\) & 5.809e-04 & 3.27 & 3.175e-04 & 3.21 \\ & \(32\times 32\) & 7.070e-05 & 3.04 & 4.181e-05 & 2.92 \\ & \(64\times 64\) & 8.766e-06 & 3.01 & 5.404e-06 & 2.95 \\ \hline \multirow{4}{*}{4} & \(8\times 8\) & 3.986e-03 & – & 2.510e-03 & – \\ & \(16\times 16\) & 1.361e-04 & 4.87 & 7.305e-05 & 5.10 \\ \cline{1-1} & \(32\times 32\) & 5.690e-06 & 4.58 & 2.928e-06 & 4.64 \\ \cline{1-1} & \(64\times 64\) & 4.755e-07 & 3.58 & 3.304e-07 & 3.15 \\ \hline \hline \end{tabular}
\end{table}
Table 2. Example 5.1: History of convergence for the BDF\([m]\)-\(P^{m}\) scheme for \(1\leq m\leq 4\).
### 2D Noh explosion
The Noh explosion problem [25] consists of an ideal gas with \(\gamma=5/3\), initial density \(\rho_{0}=1\), initial internal energy \(e_{0}=0\), and initial velocity \(\mathbf{u}_{0}=(-\frac{x}{\sqrt{x^{2}+y^{2}}},-\frac{y}{\sqrt{x^{2}+y^{2}}})\). The computational domain is \(\Omega=[0,1]\times[0,1]\). We use symmetry boundary conditions on left
Figure 1. Example 5.2: Results at final time \(t=0.2\) sampled on 100 quadrature points. The gray lines are the deformed cell boundaries.
Figure 2. Example 5.3: (a)-(b): Density contour on the deformed domain at final time \(t=1.0\) using BDF2 time stepping with polynomial degree \(k=4\) on rectangular meshes with size \(N\times N\). (c): Scatter plot of density vs. radius on all quadrature points at final time \(t=1\).
and bottom boundaries, and free boundary conditions on top and right boundaries. Similar to the previous case, this problem has a radial symmetry, and the analytic solution at time \(t=0.6\) gives a shock at radius \(r=0.2\) with a peak density of \(16\).
We apply the BDF2 scheme with polynomial degree \(k=4\) on initial uniform rectangular meshes of size \(N\times N\) with \(N=16\) and \(N=32\). For this problem, we observe that the default choice of artificial viscosity coefficients, \(q_{1}=0.5\) and \(q_{2}=2\), leads to quite large post-shock oscillations. So we increase these coefficients to \(q_{1}=1\) and \(q_{2}=4\) in the numerical experiments reported here. The radial symmetry of the solution is preserved, as seen in (a) and (b) of Figure 3, and the numerical solution agrees well with the analytic solution for \(0.1<r<0.3\) in Figure 3(c), although it is slightly oscillatory.
### Triple point problem
The triple point problem is a multimaterial test case proposed in [17]; see also [13]. The initial data are shown in Figure 4. The computational domain has a rectangular shape with a \(7\times 3\) edge ratio. It includes three materials at rest located in \(\Omega_{1}=[0,1]\times[0,3]\), \(\Omega_{2}=[1,7]\times[0,1.5]\), and \(\Omega_{3}=[1,7]\times[1.5,3]\), initially forming a T-junction. The high-pressure material in \(\Omega_{1}\) creates a shock wave moving to the right. Due to the different material properties in \(\Omega_{2}\) and \(\Omega_{3}\), the shock wave moves faster in \(\Omega_{3}\), which leads to vortex formation around the triple point. For pure Lagrangian methods, there is a limit to how long this problem can be run due to the vortex generation. Similar to [10], we run the simulation until time \(t=3.3\).
Figure 4. Example 5.5: Initial data for the triple point problem.
Figure 3. Example 5.4: (a)-(b): Density contour on the deformed domain at final time \(t=0.6\) using BDF2 time stepping with polynomial degree \(k=4\) on rectangular meshes with size \(N\times N\). Here the gray square is the initial domain. (c): Scatter plot of density vs. radius on all quadrature points at final time \(t=0.6\).
We apply the BDF2 scheme with polynomial degree \(k=4\) on initial uniform rectangular meshes of size \(28\times 12\) and \(56\times 24\). Here we reduce the artificial viscosity coefficients to \(q_{1}=0.25\) and \(q_{2}=1\), and take a larger CFL number with \(\text{CFL}=3\). A total of \(238\) time steps is used to drive the solution to the final time \(t=3.3\) on the fine mesh with \(56\times 24\) cells. We note that an explicit scheme would require about two orders of magnitude more time steps for a stable simulation. For example, the Laghos code freely available in the github repository [https://github.com/CEED/Laghos](https://github.com/CEED/Laghos), which implements the high-order Lagrangian finite element scheme in [10], requires about \(90,000\) time steps for \(k=4\) on the fine mesh with RK4 time stepping and CFL=1. We further note that the default choice of parameters also works for this problem; we make these modifications to illustrate that the high-order method remains robust with a larger time step size and a smaller artificial viscosity.
The density plots in log scale on the deformed meshes are shown in Figure 5. We observe that the material interfaces are sharply preserved, as is typical of Lagrangian schemes, and the shock locations are essentially the same, but the total amount of “roll-up” at the triple point increases as the mesh is refined.
### Gresho vortex
The Gresho vortex problem [15] is an example of a stationary, incompressible rotating flow around the origin in two spatial dimensions, where centrifugal forces are exactly balanced by pressure gradients. It was first applied to the compressible Euler equations in [18]. Here we use the low-Mach setup given in [23]. The domain is a unit square \(\Omega=(0,1)\times(0,1)\) with wall boundary conditions, \(\gamma=1.4\), and the initial conditions are
\[\rho_{0}=1,\quad\boldsymbol{u}_{0}=u_{\phi}\boldsymbol{e}_{\phi},\quad p_{0}=\frac{1}{\gamma M_{\max}^{2}}-\frac{1}{2}+\begin{cases}12.5r^{2}&\text{if }r<0.2,\\ 4\ln(5r)+4-20r+12.5r^{2}&\text{if }0.2\leq r\leq 0.4,\\ 4\ln 2-2&\text{if }r>0.4,\end{cases}\]
where the radius \(r=\sqrt{(x-0.5)^{2}+(y-0.5)^{2}}\), angular velocity \(u_{\phi}\) is
\[u_{\phi}(r)=\begin{cases}5r&\text{if }r<0.2,\\ 2-5r&\text{if }0.2\leq r\leq 0.4,\\ 0&\text{if }r>0.4,\end{cases}\]
the unit vector \(\boldsymbol{e}_{\phi}=(-(y-0.5)/r,(x-0.5)/r)\), and \(M_{\max}\) is the parameter used to adjust the maximum Mach number of the problem, where \(M(r)=\frac{|\boldsymbol{u}|}{\sqrt{\gamma p/\rho}}\) is the Mach number. The maximum Mach number of the initial condition is attained at \(r=0.2\), with \(M(0.2)=M_{\max}\).
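For concreteness, a minimal pointwise sketch of this initial state is given below; the function name and the clamping of \(r\) away from zero at the vortex center are our own illustrative choices.

```python
import numpy as np

def gresho_initial_state(x, y, M_max, gamma=1.4):
    """Initial (rho, u, p) of the low-Mach Gresho vortex at a point (x, y)."""
    r = np.sqrt((x - 0.5) ** 2 + (y - 0.5) ** 2)
    if r < 0.2:
        u_phi, p = 5.0 * r, 12.5 * r ** 2
    elif r <= 0.4:
        u_phi, p = 2.0 - 5.0 * r, 4.0 * np.log(5.0 * r) + 4.0 - 20.0 * r + 12.5 * r ** 2
    else:
        u_phi, p = 0.0, 4.0 * np.log(2.0) - 2.0
    p += 1.0 / (gamma * M_max ** 2) - 0.5
    e_phi = np.array([-(y - 0.5), (x - 0.5)]) / max(r, 1e-14)  # avoid division by zero at the center
    return 1.0, u_phi * e_phi, p
```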
We apply the BDF2 scheme with polynomial degree \(k=2\) on a \(40\times 40\) mesh, and \(k=4\) on a \(20\times 20\) mesh. The total number of velocity DOFs is the same for the two cases. Since the problem is smooth, we turn off the artificial viscosity. The period of one rotation for \(r=0.2\) is \(2\pi r=0.4\pi\). We run the simulation until the final time \(t=\frac{3}{4}\times 0.4\pi\) so that the internal flow has rotated \(\frac{3}{4}\times 180=135\) degrees. The default choice of time step size (51) is linearly proportional to the Mach number since
Figure 5. Example 5.5: Results at final time \(t=3.3\).
\(\max c_{s}\approx 1/M_{\max}\). We remove this Mach-number dependence of the time step size by multiplying the sound speed in (51) by the maximum Mach number, i.e.,
\[\delta t=\min\{\text{CFL}\frac{h_{min}}{|\mathbf{u}_{h}|+M_{\max}c_{s}}\}.\]
We use \(M_{\max}=0.1,0.01,0.001\), and take the CFL number to be CFL=0.25 for all cases. The relative Mach number \(M/M_{\max}\) at final time \(t=\frac{3}{4}\times 0.4\pi\) for all cases is shown in Figure 6. The first row of Figure 6 shows results for the \(k=2\) simulations, where we clearly observe a locking phenomenon as the Mach number decreases. On the other hand, the higher-order simulations with \(k=4\) lead to almost identical results for all three Mach numbers. This example illustrates the advantage of using a higher-order scheme over a low-order scheme in the low-Mach-number regime.
Moreover, the total number of time steps for \(k=4\) is between 900 and 1000 for all three cases. If an explicit scheme (e.g., RK3) were used to solve this problem, the total number of time steps would be about three orders of magnitude larger when \(M_{\max}=0.001\) due to the sound-speed-based CFL constraint. Finally, we note that the nonlinear system in each time step becomes harder to solve as the Mach number \(M_{\max}\) decreases. For example, Newton's method with a sparse Cholesky direct solver works for \(M_{\max}=0.1\) with CFL=1, but it fails for \(M_{\max}=0.001\), where we have to reduce CFL to 0.25 and replace the Cholesky solver by a PARDISO solver, which indicates that the linear system for \(M_{\max}=0.001\) in the Newton iteration is no longer positive definite. The linear system solver issue in the low-Mach-number regime will be investigated further in our future work.
### Shock-bubble interaction
This test case corresponds to the interaction of a shock wave with a cylindrical helium bubble surrounded by air at rest [28]. We use the same setup as in [13, Section 8.4]. The initial domain is a rectangular box \(\Omega_{0}=(0,L)\times(-H/2,H/2)=(0,0.650)\times(-0.089,0.089)\), which includes a circular bubble with center \((0.320,0)\) and radius \(r_{b}=0.025\). Initial data are shown in Figure 7(a). Wall boundary conditions are prescribed at each boundary except the right boundary, where we impose a piston-like boundary condition with inward velocity \((-124.824,0)\). The left-going shock wave hits the bubble at time \(t_{i}=668.153\times 10^{-6}\). The final time of the simulation is \(t_{f}=t_{i}+674\times 10^{-6}=1342.153\times 10^{-6}\), which corresponds to the time at which the experimental shadowgraph extracted from [16] is displayed in [28].
We use two unstructured triangular meshes that are fitted to the bubble boundary for this problem. The coarse mesh has 8324 triangular cells with mesh size \(h=H/32\), while the fine mesh has 33526 cells with mesh size \(h=H/64\); see Figure 7(b)-(c) for the zoomed-in view of the two meshes around the bubble. The BDF2 time stepping is used in combination with polynomial degrees \(k=2\) and \(k=4\) on these two meshes. The zoomed-in views around the deformed bubble at final time \(t_{f}\) are shown in Figure 8. We observe that the location and shape of the deformed bubble are similar for each simulation, with a better resolution being obtained on a finer mesh with a higher polynomial degree. These shapes are also qualitatively similar to the experimental Schlieren image in Figure 8(e) obtained from [16].
We display in Figure 9 the time evolution of the bubble at times \(t=800\times 10^{-6},1100\times 10^{-6},1342.153\times 10^{-6}\) for \(k=4\) on the two meshes. We note that the results obtained with both meshes are quite similar.
### Multimaterial implosion in cylindrical geometry
The aim of this example is to assess the capability of the implicit high-order Lagrangian scheme to handle a multi-mode implosion in cylindrical geometry. Here we consider a simple 1D multimaterial implosion problem on unstructured 2D triangular meshes. The problem consists of a low-density material with \(\rho_{1}=0.05\) in the radial range \(r\in[0,1]\) surrounded by a shell of high-density material with \(\rho_{2}=1.0\) in the radial range \(r\in(1.0,1.2]\). Both materials are initially at rest with pressure \(p=0.1\) and adiabatic index \(\gamma=5/3\). This problem was originally proposed in [13] with a time-dependent pressure source on the outer radial surface \(r=1.2\). Here we use the modification in [10] that applies a constant radial
velocity source of \(\mathbf{u}=-5(x,y)/1.2\) on the outer boundary, which drives a cylindrical shock wave inwards. Due to the 1D setup (symmetry), the material interface should be a function of radius only for all time. The movement of the interface radius against time is shown on the left panel of Figure 11. We clearly observe the deceleration of the interface starting around \(t=0.12\), and the so-called stagnation phase is reached around \(t=0.14\), where the radius attains its minimum value. The flow becomes Rayleigh-Taylor unstable after this time, as a small perturbation of the interface grows exponentially as a function of time, since the light fluid inside pushes against the heavy fluid outside after the stagnation phase. It is very challenging to preserve the interface
Figure 6. Example 5.6: Relative Mach number \(M/M_{\max}\) on the deformed domain at final time \(t=\frac{3}{4}\times 0.4\pi\) using BDF2 time stepping with polynomial degree \(k=2\) on a \(40\times 40\) mesh (top row) and \(k=4\) on a \(20\times 20\) mesh (bottom row), for \(M_{\max}=0.1,0.01,0.001\).
Figure 7. Example 5.7: Geometry setup and zoomed-in meshes around the bubble.
symmetry for a Lagrangian scheme on general unstructured meshes, especially at times past the stagnation phase.
Due to the symmetry of the problem, we take the computational domain to be a quarter circle \(\Omega=\{(x,y):x\geq 0,y\geq 0,x^{2}+y^{2}\leq 1.2^{2}\}\) with wall boundary conditions on the left and bottom boundaries. We apply the BDF2 scheme with polynomial degree \(k=2,4\) on two sets of unstructured triangular meshes. The coarse mesh has mesh size \(h=0.05\) and is used for \(k=4\), while the fine mesh is a uniform refinement of the coarse mesh and is used for \(k=2\); see Figure 10(d) for the coarse mesh and Figure 10(a) for the fine mesh. We take \(q_{1}=1\) and \(q_{2}=4\) as the artificial viscosity coefficients and set the CFL number to \(0.25\). The two simulations have the same number of velocity DOFs, and require a similar number of time steps to reach the final time \(t=0.16\). Density plots in log scale on the deformed meshes are shown in Figure 10 at times \(t=0.08\) and \(t=0.16\), along with the initial density at \(t=0\). We observe that the results at \(t=0.08\) are similar for both cases, where the radial symmetry of the material interface is preserved quite well. The result for \(k=4\) at \(t=0.16\) (which is past the stagnation phase) is better than that for \(k=2\) in terms of the interface symmetry. In Figure 11 we plot the time evolution of the average radius of the material interface, and the normalized standard deviation of this radius at different times as an indication of the symmetry error over time. We observe similar results for the average radius for both simulations, and a smaller symmetry error for the higher-order case.
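One possible way to compute the symmetry diagnostic reported in Figure 11 is sketched below, assuming the material interface is sampled at a set of points in the plane; the exact definition used for the figure may differ in detail.

```python
import numpy as np

def interface_symmetry(points):
    """Average radius and normalized standard deviation of interface sample points.

    points: array of shape (N, 2) with (x, y) coordinates on the material interface.
    """
    radii = np.linalg.norm(points, axis=1)
    r_avg = radii.mean()
    return r_avg, radii.std() / r_avg
```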
## 6. Conclusion
We presented a class of high-order variational Lagrangian schemes for compressible flow. The discrete EnVarA approach is used to derive the high-order spatial finite element discretization. Features of our spatial discretization include mass/momentum/energy conservation and entropy stability. Fully implicit time stepping is then applied to the resulting ODE system. Each time step requires a nonlinear system solve for the velocity DOFs only. Extensive numerical results are presented to support the good performance of the proposed scheme. We plan to extend this pure Lagrangian scheme to the arbitrary Lagrangian-Eulerian (ALE) framework, which has the potential to address the mesh distortion issue.
Here we briefly discuss the isothermal case [14] where the temperature does not change over time. In this case, the free energy (2) is a function of density only: \(\psi=\psi(\rho)\). Typical choices include \(\psi(\rho)=\alpha\rho^{\gamma}\) or \(\psi(\rho)=\alpha\rho\log(\rho)\).
Figure 8. Example 5.7: Zoom on the deformed bubble at final time \(t\)= 1342.153e-6. Here (e) is the Schlieren image from experimental data [16].
This model only has two thermodynamic variables: density \(\rho\) and pressure
\[p=\psi_{\rho}\rho-\psi.\]
The EnVarA derivation leads to the following model equations; see also [14]:
\[\dot{\mathbf{x}}=\mathbf{u},\qquad\frac{d}{dt}(\rho J)=0, \tag{52a}\] \[\rho\dot{\mathbf{u}}=-\nabla p+\nabla\cdot\left(\eta\nabla_{s}\mathbf{u}+(\xi-\frac{2}{3}\eta)(\nabla\cdot\mathbf{u})\mathbf{I}\right), \tag{52b}\]
where \(p=\psi_{\rho}\rho-\psi\).
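As a quick illustration of the pressure relation \(p=\psi_{\rho}\rho-\psi\), the following sketch evaluates it for the two typical free energies mentioned above: for \(\psi(\rho)=\alpha\rho^{\gamma}\) it reduces to \(p=\alpha(\gamma-1)\rho^{\gamma}\), and for \(\psi(\rho)=\alpha\rho\log\rho\) to \(p=\alpha\rho\). The helper name below is our own choice.

```python
import numpy as np

def pressure_isothermal(rho, psi, dpsi_drho):
    """p = psi_rho * rho - psi for a given free-energy density psi(rho)."""
    return dpsi_drho(rho) * rho - psi(rho)

a, gamma = 1.0, 5.0 / 3.0
# psi(rho) = a * rho**gamma  ->  p = a*(gamma-1)*rho**gamma
p_poly = pressure_isothermal(2.0, lambda r: a * r**gamma, lambda r: a * gamma * r**(gamma - 1.0))
# psi(rho) = a * rho*log(rho)  ->  p = a*rho
p_log = pressure_isothermal(2.0, lambda r: a * r * np.log(r), lambda r: a * (np.log(r) + 1.0))
```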
The spatial discretization for the model (52) is summarized below:
Figure 9. Example 5.7: Density contour in the vicinity of the bubble at various times.
**Algorithm 4.** Find \(\mathbf{x}_{h},\mathbf{u}_{h}\in\mathbf{V}_{h}^{k}\) such that
\[\dot{\mathbf{x}}_{h} =\mathbf{u}_{h},\] (53a) \[(\rho_{0}\dot{\mathbf{u}}_{h},\mathbf{\varphi}_{h})_{h}-(p_{h},\nabla\cdot\mathbf{\varphi}_{h})_{h}+(\sigma_{h},\nabla\mathbf{\varphi}_{h})_{h} =0,\quad\forall\varphi_{h}\in\mathbf{V}_{h}^{k},\] (53b) where the pressure \[p_{h}\in W_{h}^{k}\] is a function of the density approximation with \[p_{i}^{\ell}=\psi_{\rho}(\rho_{i}^{\ell})\rho_{i}^{\ell}-\psi(\rho_{i}^{\ell}),\] in which \[\rho_{h}\in W_{h}^{k}\] satisfies mass conservation (30), and the stress \[\sigma_{h}=(\eta+\mu_{av})J_{h}\nabla_{s}\mathbf{u}_{h}+(\xi-\frac{2}{3}\eta)J_{h}\nabla\cdot\mathbf{u}_{h}\mathbf{I},\] in which the artificial viscosity coefficient \(\mu_{av}\) is given in (46).
Figure 11. Example 5.8: Time evolution of the average radius of the material interface (left) and the normalized standard deviation of the interface radius over time, indicating the symmetry error (right).
Figure 10. Example 5.8: Density in log scale on the deformed meshes for the multimaterial implosion problem at times \(t=0\), \(t=0.08\), and \(t=0.16\), using \(k=2\) on the fine mesh and \(k=4\) on the coarse mesh.
It is clear that the spatial discretization in Algorithm 4 is mass and momentum conservative. Next we prove that Algorithm 4 is also entropy stable. Using a derivation similar to that in (9) and (36), together with the definition of density and pressure, we obtain
\[\frac{d}{dt}(\psi(\rho_{h})J_{h},1)_{h}=-(p_{h}J_{h},\nabla\cdot\mathbf{u}_{h})_{h}.\]
Taking the test function \(\mathbf{\varphi}_{h}=\mathbf{u}_{h}\) in (53b) and using the above relation, we get
\[\frac{d}{dt}\underbrace{\left(\frac{1}{2}\rho_{0}|\mathbf{u}_{h}|^{2}+\psi(\rho_{ h})J_{h},1\right)_{h}}_{\text{total entropy}}=-(\sigma_{h},\nabla\mathbf{u}_{h})\leq 0.\]
The ODE system in Algorithm 4 can be discretized using the implicit schemes discussed in Section 4. Here we present a variational implicit time discretization similar to the backward Euler scheme in Section 4.1. Given data \(\mathbf{x}_{h}^{n-1},\mathbf{u}_{h}^{n-1}\in\mathbf{V}_{h}^{k}\) at time \(t^{n-1}\) and time step size \(\delta t\), define the following discrete energy functional for the flow map \(\mathbf{x}_{h}\):
\[E_{h}(\mathbf{x}_{h}):=\left(\frac{\rho_{0}|\mathbf{x}_{h}-\mathbf{x}_{h}^{n-1}-\delta t \mathbf{u}_{h}^{n-1}|^{2}}{2\delta t^{2}}+\psi\left(\rho_{0}/J_{h}\right)J_{h},1 \right)_{h}+\frac{1}{2}(\Delta_{h},1)_{h}, \tag{54}\]
where the dissipation
\[\Delta_{h}:=(\eta+\mu_{av}^{n-1})J_{h}|\nabla_{s}\frac{\mathbf{x}_{h}-\mathbf{x}_{h}^ {n-1}}{\delta t}|^{2}+(\xi-\frac{2}{3}\eta)J_{h}|\nabla\cdot\frac{\mathbf{x}_{h}- \mathbf{x}_{h}^{n-1}}{\delta t}|^{2},\]
with artificial viscosity \(\mu_{av}^{n-1}\) explicitly evaluated at time level \(t^{n-1}\), and \(J_{h}=|\nabla_{X}\mathbf{x}_{h}|\). The flow map at next time level is obtained by solving the following minimization problem:
\[\mathbf{x}_{h}^{n}:=\operatorname{argmin}_{\mathbf{x}_{h}\in V_{h}^{k},J_{h}>0}E_{h}( \mathbf{x}_{h}). \tag{55}\]
Assuming piecewise constant viscosity coefficients \(\eta,\xi\), and \(\mu_{av}^{n-1}\), the Euler-Lagrange equation for this minimization problem is simply the backward Euler scheme in (47) applied to the ODE system (53). We note that such an energy minimization interpretation is not available for the non-isothermal case discussed in Section 4 due to the temperature equation (47c).
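To illustrate the minimization viewpoint (54)-(55), here is a deliberately simplified 1D toy sketch in Python: a piecewise-linear flow map with lumped masses, the inviscid case (\(\eta=\xi=\mu_{av}=0\)), an illustrative free energy \(\psi(\rho)=\rho^{2}\), and a generic optimizer. It only conveys the idea of one backward Euler step as an energy minimization; it is not the actual finite element implementation, and the positivity constraint \(J_{h}>0\) is not enforced here.

```python
import numpy as np
from scipy.optimize import minimize

X = np.linspace(0.0, 1.0, 21)          # reference (Lagrangian) coordinates of the nodes
rho0 = np.ones_like(X)                  # initial density
x_prev = X.copy()                       # flow map at t^{n-1}
u_prev = 0.1 * np.sin(np.pi * X)        # velocity at t^{n-1}
dt = 0.01
dX = X[1] - X[0]

def energy(x):
    # kinetic part: rho0 |x - x_prev - dt*u_prev|^2 / (2 dt^2), lumped over nodes
    kinetic = np.sum(rho0 * (x - x_prev - dt * u_prev) ** 2) * dX / (2.0 * dt ** 2)
    # free-energy part: psi(rho0/J) * J with J = dx/dX on each cell and psi(rho) = rho^2
    J = np.diff(x) / dX
    rho = 0.5 * (rho0[:-1] + rho0[1:]) / J
    internal = np.sum(rho ** 2 * J) * dX
    return kinetic + internal

res = minimize(energy, x_prev + dt * u_prev, method="L-BFGS-B")
x_new = res.x
u_new = (x_new - x_prev) / dt           # velocity update, cf. (47a)
```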
|
2304.09479 | DiFaReli: Diffusion Face Relighting | We present a novel approach to single-view face relighting in the wild.
Handling non-diffuse effects, such as global illumination or cast shadows, has
long been a challenge in face relighting. Prior work often assumes Lambertian
surfaces, simplified lighting models or involves estimating 3D shape, albedo,
or a shadow map. This estimation, however, is error-prone and requires many
training examples with lighting ground truth to generalize well. Our work
bypasses the need for accurate estimation of intrinsic components and can be
trained solely on 2D images without any light stage data, multi-view images, or
lighting ground truth. Our key idea is to leverage a conditional diffusion
implicit model (DDIM) for decoding a disentangled light encoding along with
other encodings related to 3D shape and facial identity inferred from
off-the-shelf estimators. We also propose a novel conditioning technique that
eases the modeling of the complex interaction between light and geometry by
using a rendered shading reference to spatially modulate the DDIM. We achieve
state-of-the-art performance on standard benchmark Multi-PIE and can
photorealistically relight in-the-wild images. Please visit our page:
https://diffusion-face-relighting.github.io | Puntawat Ponglertnapakorn, Nontawat Tritrong, Supasorn Suwajanakorn | 2023-04-19T08:03:20Z | http://arxiv.org/abs/2304.09479v3 | # DiFaReli: Diffusion Face Relighting
###### Abstract
We present a novel approach to single-view face relighting in the wild. Handling non-diffuse effects, such as global illumination or cast shadows, has long been a challenge in face relighting. Prior work often assumes Lambertian surfaces, simplified lighting models or involves estimating 3D shape, albedo, or a shadow map. This estimation, however, is error-prone and requires many training examples with lighting ground truth to generalize well. Our work bypasses the need for accurate estimation of intrinsic components and can be trained solely on 2D images without any light stage data, multi-view images, or lighting ground truth. Our key idea is to leverage a conditional diffusion implicit model (DDIM) for decoding a disentangled light encoding along with other encodings related to 3D shape and facial identity inferred from off-the-shelf estimators. We also propose a novel conditioning technique that eases the modeling of the complex interaction between light and geometry by using a rendered shading reference to spatially modulate the DDIM. We achieve state-of-the-art performance on standard benchmark Multi-PIE and can photorealistically relight in-the-wild images. Please visit our page: [https://diffusion-face-relighting.github.io](https://diffusion-face-relighting.github.io)
## 1 Introduction
The ability to relight face images under any lighting condition has a wide range of applications, such as in Augmented Reality, where consistent lighting for all individuals in the scene is essential to achieve realism. Another use is in portrait photography, where one may aim to soften cast shadows to create a more pleasing, diffuse appearance. Yet, relighting single-view face images remains unsolved.
Relighting a face image requires modeling the physical interactions between the geometry, material, and lighting, which are not inherently present in a 2D image and difficult to estimate accurately. Earlier work [4, 53, 26, 68, 56] thus often assumes Lambertian surfaces and a simplified lighting model, which struggle to model complex light interactions like global illumination, subsurface scattering, or cast shadows. Using multi-view, multi-illumination data from a light stage or a simulation, [38, 72] proposed relighting pipelines that predict surface normals, albedo, and a set of diffuse and specular maps with neural networks given a target HDR map. Some recent methods aim to specifically model cast shadows by predicting a shadow map with a neural network [23, 35] or rendering a shadow map through physical ray tracing with estimated geometry [22].
These approaches share a common scheme in which they
first intrinsically decompose the face image into its surface normals, albedo, and lighting parameters, then use them along with a shadow or visibility map to render a relit output. However, one major issue of this scheme stems from its over-reliance on the accuracy of the estimated components, which are difficult to estimate correctly in real-world scenarios. For instance, when an input image contains cast shadows that need to be removed, these approaches often leave behind shadow residuals in the predicted albedo map, which in turn produces artifacts in the final output (Figure 3). Estimating the geometry for other areas like hair and ears is also extremely challenging, and they are often omitted from relighting pipelines, resulting in unrealistic final composites (Figure 3, 4).
This paper introduces an alternative approach that does not rely on accurate intrinsic decomposition of the face and can be trained exclusively on 2D images, without any 3D face scan, multi-view images, or lighting ground truth, once given a few off-the-shelf estimators. The general idea of our method is simple: we first encode the input image into a feature vector that disentangles the light information from other information about the input image. Then, we modify the light encoding in the feature vector and decode it. The challenge, however, is how to disentangle the light encoding well enough so that the decoding will only affect the shading without altering the person's shape and identity. Our key idea is to leverage a conditional diffusion implicit model [59] with a novel conditioning technique for this task and learn the complex light interactions implicitly via the generative model trained on a real-world 2D face dataset.
Our method relies on mechanisms recently introduced in Denoising Diffusion Implicit Models (DDIM) [59] and Diffusion Autoencoders (DiffAE) [41]. By exploiting the deterministic reversal process of DDIM proposed by Song et al.[59], DiffAE shows how one can encode an image into a meaningful semantic code and disentangle it from other information, which includes stochastic variations. By modifying the semantic code and decoding it, DiffAE can manipulate semantic attributes in a real image. Relighting can be thought of as a manipulation of the "light" attribute in the input image. But unlike DiffAE, which discovers semantic attributes automatically and encodes them in a _latent_ code, our method requires an explicit and interpretable light encoding that facilitates lighting manipulation by the user.
To solve this problem without access to the lighting ground truth, we use an off-the-shelf estimator, DECA [13], to encode the lighting information as spherical harmonic (SH) coefficients and rely on a conditional DDIM to decode and learn to disentangle the light information in the process. Unlike prior work, our use of SH lighting is not for direct rendering of the output shading, as this would be restricted by the limited capacity of SH lighting to express complex illumination. Rather, it is used to condition a generative process that learns the complex shading prior to reproduce real-world 2D face images. To help preserve the input's identity during relighting, we also condition the DDIM on other attributes, such as the face shape and deep feature embeddings from a face recognition model, ArcFace [8].
Another key component is our novel technique for conditioning the DDIM. Instead of treating the SH lighting as a global, non-spatial condition vector as in DiffAE or other diffusion models, we render a shading reference using the known SH equation and feed it to another network called _Modulator_, which computes layer-wise spatial modulation
Figure 2: **Pipeline overview. We use off-the-shelf estimators to encode the input image into encodings of light, shape, camera, face embedding, shadow scalar, and background image, which are then fed to DDIM via “spatial” and “non-spatial” conditioning techniques. For spatial conditioning, the modified SH, 3D shape, and camera encodings are rendered to a shading reference, which is then concatenated with the background image. This concatenated image is fed into _Modulator_ to produce spatial modulation weights for DDIM’s first half. Non-spatial conditioning feeds a stack of 3D shape, camera, face embedding, and a modified shadow scalar to a set of MLPs for modulating the DDIM with our modified version of adaptive group normalization (AdaGN).**
weights for the DDIM. This conditioning technique helps retain spatial information in the shading reference and provides an easy-to-learn conditioning signal as the pixel intensities in the shading reference correlate more directly with the output RGB pixels.
With our novel framework, the visibility of cast shadows can also be modeled with a simple modification: add one conditioning scalar that indicates the "degree" of cast shadows to the DDIM. At test time, we can strengthen or attenuate cast shadows by modifying this scalar. Since our diffusion-based framework does not directly use this flag or the shape and SH parameters in a physical image formation model, imprecise estimation of these parameters can be tolerated and does not significantly compromise our quality.
Our method produces highly plausible and photorealistic results and can convincingly strengthen or attenuate cast shadows. Moreover, we can reproduce the original facial details with high fidelity, which is difficult for competing methods that predict an albedo map with neural networks. We conduct qualitative and quantitative evaluations and achieve state-of-the-art performance on a standard benchmark, Multi-PIE [17]. To summarize, our contributions are:
* A state-of-the-art face relighting framework based on a conditional DDIM that produces photorealistic shading without requiring accurate intrinsic decomposition or 3D and lighting ground truth.
* A novel conditioning technique that converts a shading reference rendered from the estimated light and shape parameters into layer-wise spatial modulation weights.
## 2 Related work
A common approach to face relighting [4, 65, 26, 53, 68, 56] is to decompose an input image into multiple intrinsic components (e.g., lighting, albedo, surface normals) and recompose the image back with modified light-related components. The decomposition can be done by regularized optimization [4], by fitting a morphable model [5, 68], or by a neural network [53, 26, 65, 35, 69, 38, 56]. Most earlier methods [4, 53, 26, 68, 56] assume Lambertian surfaces, a simplified lighting model, such as second-order spherical harmonics, and a physical image formation model based on these simplified assumptions. Thus, they cannot handle non-diffuse effects, such as specular highlights or cast shadows, which commonly occur in real-world scenarios.
Rather than decomposing an image into physical components, some techniques [77, 61] rely on an encoder-decoder network with a bottleneck layer that holds a latent lighting representation. Zhou et al. [77] force such a latent code to be predictive of the SH lighting and train another regressor that can map the SH lighting of a reference image back to a latent code for relighting. Sun et al. [61] rely on a similar idea but use a low-resolution illumination map, obtained from a light stage, e.g., [70], instead of the SH lighting. In principle, these learning techniques can learn to handle hard shadows and specularities, given sufficient examples. However, in practice, these approaches still struggle to model those effects due to their small light stage data [61] or limited variations in their synthetic dataset [77]. In contrast, our framework can be trained on 2D face images, which are cheaply available and cover far more diverse scenarios.
**Handling non-diffuse components.** Relighting non-diffuse components has long been a challenge. Nestmeyer et al. [35] propose a two-stage framework to predict non-diffuse components as a residual correction of a diffuse rendering from their first stage. Cast shadows are predicted separately as a visibility map, which is multiplied to the output. Wang et al. [69] propose a technique based on intrinsic decomposition that predicts shadow and specular maps by learning from their own large-scale relighting dataset. Pandey et al. [38] introduces a pipeline that predicts a set of specular maps with varying degrees of Phong exponents using estimated surface normals and an input HDR environment map. These maps along with diffuse and albedo maps are used to predict a relit image with a UNet. Yeh et al. [72] uses a pipeline similar to [38] but with synthetic light stage data generated from 3D face scans, and an albedo refinement step to reduce the domain gap between synthetic and real data. Hou et al. (2021) [23] compute a shadow map based on a morphable model fitted to the input and standard ray tracing, then use it to help predict the ratio of pixel luminance changes for relighting. Hou et al. (2022) [22] predict a shadow mask via ray tracing based on their estimated depth map and render a relit image with estimated albedo and shading maps from neural networks.
While these methods [35, 23, 22] produce promising non-diffuse effects, their physical image formation model makes it difficult to operate in the wild when the estimated geometry is inaccurate. As a result, some [23, 22] can only relight the face region but not the ears or hair and still struggle to handle in-the-wild cast shadows (Figure 3). Neural rendering approaches [69, 38] can tolerate some estimation error, but high-frequency details are often lost, even when predicted by a UNet [69] and still require light stage data. The synthetic light stage data of [72] shows great potential but currently relies on 3D face scans to generate, which are difficult to obtain compared to 2D images.
**Style transfer-based methods.** Another class of relighting approaches is based on style transfer. Although some of these methods [28, 31, 54] do not directly solve face relighting, they can be adapted for this task by transferring the lighting and shading styles from one image to another. However, the style representation used in these methods captures broad information beyond the lighting condition and cannot produce accurate relit results. Shu et al. [55] solve relighting using color histogram matching that is redesigned to be spatially varying and dependent on the face geometry, and can be solved as a mass transport problem. However, this technique does not model the self-occlusion required for handling cast shadows and can easily suffer from occlusion by hair or accessories.
**GAN-based methods.** A few techniques use GANs [16] to solve relighting [63, 32]. Tewari et al. [63] rely on StyleRig [64], a technique to enable semantic control of StyleGAN [25] by mapping a set of morphable model parameters along with an initial StyleGAN latent code to a new one representing the target parameters. Specifically, they extend StyleRig, which only works on synthetic images, to real images by optimizing a latent code that reproduces the input image and use StyleRig to manipulate the lighting condition. Similarly, Mallikarjun et al. [32] maps a target illumination and a StyleGAN latent code predicted from pSp network [45] to a new code that represents a relit image. However, these techniques tend to change the identity and facial details of the input person due to the imperfect GAN inversion. New GAN inversion techniques [12, 46] are promising, but no relighting results with these techniques have been demonstrated. Our solution overcomes this issue by leveraging DDIM's near-perfect inversion and produces high-fidelity results that preserve the original details.
## 3 Approach
Given an input face image, we seek to relight this image under a target lighting condition, described by spherical harmonic coefficients and an additional scalar representing the "degree" of visible cast shadows. To explain our method, we first cover relevant background on DDIM [59] and a key finding from DiffAE [41] that shows how a conditional DDIM can perform attribute manipulation on real images by acting as both a decoder and a "stochastic" encoder.
### Background: Conditional DDIM & DiffAE
Our method relies on a conditional Denoising Diffusion Implicit Model (DDIM) [59], which is a variant of diffusion models [58, 19, 60]. (For a full review and notation convention, please refer to [59].) Unlike standard diffusion models, DDIM uses a non-Markovian inference process that relies on the conditional distribution \(q(\mathbf{x}_{t-1}\mid\mathbf{x}_{t},\mathbf{x}_{0})\) that is conditioned on \(\mathbf{x}_{0}\) (the original image) in addition to \(\mathbf{x}_{t}\).
One important implication is that the generative process can be made deterministic, allowing us to deterministically map \(\mathbf{x}_{T}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) to \(\mathbf{x}_{0}\) and vice versa. Here the mapping from \(\mathbf{x}_{0}\) to \(\mathbf{x}_{T}\) can be viewed as the encoding of an input image \(\mathbf{x}_{0}\) to a latent variable \(\mathbf{x}_{T}\).
Diffusion Autoencoders (DiffAE) [41] show that such image encoding yields \(\mathbf{x}_{T}\) that contains little semantic information about the input image \(\mathbf{x}_{0}\) and propose to condition the DDIM also on a learnable latent variable \(\mathbf{z}\) predicted from a separate image encoder. By jointly training the image encoder and the DDIM, the encoded \(\mathbf{z}\) now captures meaningful semantics, while the encoded \(\mathbf{x}_{T}\), inferred by reversing the deterministic generative process of the DDIM, captures the rest of the information not encoded in \(\mathbf{z}\), such as stochastic variations. The resulting latent code (\(\mathbf{z}\), \(\mathbf{x}_{T}\)) can also be decoded back to the input image near-perfectly using the same conditional DDIM. By modifying the semantic latent variable \(\mathbf{z}\) and decoding the new (\(\mathbf{z}^{\prime}\), \(\mathbf{x}_{T}\)), DiffAE can manipulate semantic attributes of a real input image--a capability that inspires our work.
### Method overview
The general idea of our method is to encode the input image into a feature vector that disentangles the light information from other information about the input image. Then, the relit image is produced by modifying the light encoding in the feature vector and decoding the resulting vector with a conditional DDIM (see Figure 2). This process is similar to how DiffAE performs attribute manipulation; however, our task requires a well-disentangled and interpretable light encoding that facilitates lighting manipulation by the user.
To solve this, we use off-the-shelf estimators to encode an input image into light, shape, and camera encodings, as well as a face embedding, a shadow scalar, and a background image (Section 3.3). Then, these encodings are used to condition our DDIM decoder (Section 3.4) with a novel conditioning technique (Section 3.5). For training, we use a standard diffusion objective to reconstruct training images (Section 3.6). To relight, we reverse the generative process of the DDIM conditioned on the input's encodings to obtain \(\mathbf{x}_{T}\), modify the light encoding, and decode \(\mathbf{x}_{T}\) using the modified encodings (Section 3.7).
### Encoding
The goal of this step is to encode the input face image \(I\in\mathbb{R}^{H\times W\times 3}\) into a feature vector:
\[\mathbf{f}=(\mathbf{l},\mathbf{s},\mathbf{cam},\boldsymbol{\xi},c,\mathbf{bg}), \tag{1}\]
where \(\mathbf{l}\in\mathbb{R}^{9\times 3}\) represents \(2^{\text{nd}}\)-order spherical harmonic lighting coefficients, \(\mathbf{s}\in\mathbb{R}^{|\mathbf{s}|}\) represents parameterized face shape, \(\mathbf{cam}\in\mathbb{R}^{1+2}\) represents orthographic camera parameters, \(\boldsymbol{\xi}\in\mathbb{R}^{512}\) is a deep feature embedding based on ArcFace [8], \(c\) is a scalar that indicates the degree of visible cast shadows, and \(\mathbf{bg}\in\mathbb{R}^{H\times W\times 3}\) contains the background pixels with the face, hair, neck masked out. These variables will be inferred using off-the-shelf or pretrained estimators.
**Light, shape, & camera encodings \((\mathbf{l},\mathbf{s},\mathbf{cam})\).** We use an off-the-shelf single-view 3D face reconstruction method, DECA [13]. Given a face image, DECA predicts the 3D face shape, camera pose, albedo map, and spherical harmonic lighting (SH) coefficients.
For our light encoding \(\mathbf{l}\), we directly use the SH coefficients from DECA, consisting of 9 coefficients for each
channel of the RGB. DECA's 3D face shape is parameterized based on FLAME model [27] as blendshapes with three linear bases for identity shape, pose, and expression. Their respective coefficients are denoted by \(\mathbf{\beta}\), \(\mathbf{\theta}\), \(\mathbf{\psi}\). Our face shape encoding \(\mathbf{s}\) is the combined \((\mathbf{\beta},\mathbf{\theta},\mathbf{\psi})\in\mathbb{R}^{|\mathbf{\beta}|+|\mathbf{\theta}|+| \mathbf{\psi}|}\). DECA assumes orthographic projection and models the camera pose with isotropic scaling and 2D translation. We combine the scaling and translation parameters into \(\mathbf{cam}\in\mathbb{R}^{1+2}\). Note that we do not use the predicted albedo map because its estimation by DECA can be unreliable and we found it empirically unnecessary.
**Identity encoding \((\mathbf{\xi})\).** To compute our deep feature embedding that helps preserve the input's identity, we use ArcFace[8], a pre-trained face recognition model based on ResNet [18]. This model has been shown to produce discriminative and identity-preserving feature embeddings.
**Cast shadow encoding \((c)\).** This scalar describes the degree of visible cast shadows, typically caused by a dominant point or directional light source, such as the sun.
We trained a model to estimate \(c\) from a face image ourselves and fixed this pretrained estimator. To do this, we manually labeled around 1,000 face images with binary flags indicating whether cast shadows are visible. Following a technique proposed in DiffAE [41], we first use DiffAE's pretrained encoder to map each face image to a semantically meaningful latent code \(\mathbf{z}\) and train a logistic regression classifier on \(\mathbf{z}\) to predict the flag. \(c\) is then computed as the logit value of the logistic regression. As shown in [41], this technique helps reduce the number of training examples required to achieve good accuracy, but we note that \(c\) can be estimated in other ways, such as with a CNN.
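A minimal sketch of this estimator, assuming the labeled DiffAE codes are available; the variable names and the synthetic placeholder data below are ours, and any off-the-shelf logistic regression (here scikit-learn) would do.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder stand-ins for the ~1,000 manually labeled faces: in practice z_train holds the
# DiffAE semantic codes of those images and y_train the binary "cast shadows visible" labels.
rng = np.random.default_rng(0)
z_train = rng.standard_normal((1000, 512))
y_train = (rng.random(1000) > 0.5).astype(int)

clf = LogisticRegression(max_iter=1000)
clf.fit(z_train, y_train)

def shadow_scalar(z):
    """Cast-shadow encoding c: the logit (signed distance to the decision boundary)."""
    return float(clf.decision_function(z.reshape(1, -1))[0])

c = shadow_scalar(rng.standard_normal(512))
```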
**Background encoding \((\mathbf{bg})\).** To help fix the background during relighting, we condition the DDIM with an image of the input's background. The background region is detected using a face segmentation algorithm [73]. The ears, hair, and neck are not part of the background and can be relit by our algorithm (see Figure 6.)
### DDIM decoder & Modulator network
Our main network is a conditional DDIM that decodes our feature vector (with modified lighting information) to a relit version of the input image. In practice, the feature vector is used to _condition_ the DDIM that maps \(\mathbf{x}_{T}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) to the original input \(\mathbf{x}_{0}\) during training or maps \(\mathbf{x}_{T}=\text{DDIM}^{-1}(\mathbf{x}_{0})\) from reversing the generative process to the relit output during relighting (Section 3.7). This conditioning involves another network called _Modulator_ network, which converts the light, shape, and camera encodings into spatial modulation weights for the DDIM decoder.
The architecture of the DDIM decoder is based on Dhariwal et al. [10], which is a modified UNet built from a stack of residual blocks interleaved with self-attention layers. We provide full details in Appendix C. Our only differences are that 1) the output of each residual block in the first half of the UNet will be modulated by the signal from the Modulator network and 2) we use our own version of adaptive
Figure 3: **Relit images on FFHQ [25].** This dataset contains a variety of face images captured in the wild. Our method produces more realistic relit images than previous methods and can add/remove cast shadows and highlights to match the reference lighting.
group normalization. Our Modulator network has the same architecture as the first half of our DDIM's UNet, but they do not share weights.
### Conditioning DDIM decoder
Conditioning a diffusion model on a condition vector can be done in various ways, such as through adaptive group normalization [71, 41, 10] or attention-based mechanisms [36, 47], among others. In our problem, the lighting information is encoded explicitly as SH coefficients and their interaction with 3D shape, specifically the surface normals, can be precisely modeled with the SH lighting equation. Our idea is to ease the modeling of the known interaction by rendering a shading reference of the target relit face. The primary goal of this reference is to convey the information about the target lighting and shading in a spatially-aligned manner, not the geometry or the exact shading intensities. The following sections detail this "spatial" conditioning technique as well as a standard non-spatial conditioning technique used for other encodings.
**Spatial conditioning.** This technique is used for the light, shape, camera and background encodings (\(\mathbf{l},\mathbf{s},\mathbf{c}\mathbf{am}\), \(\mathbf{bg}\)). Given the face shape \(\mathbf{s}\), we first convert it to a triangle mesh using the three linear bases of the FLAME model [27] and remove the ears, eyeballs, neck, and scalp from the mesh to retain only the face region (see Figure 2). We remove those parts because they are often inaccurate and hard to estimate correctly (e.g., occluded ears behind hair). We assume a constant gray albedo (0.7, 0.7, 0.7) and render this mesh in the camera pose described by \(\mathbf{cam}\) with surface colors computed with \(\mathbf{l}\) using the standard SH lighting equation. The details are in Appendix F, and we discuss this albedo choice and the inherent albedo-light ambiguity in Section 5.
Then, this shading reference \(R\), which shows a shaded face in the shape and pose of the input person under the target lighting, is concatenated with the background image \(\mathbf{bg}\) and fed to the Modulator network. Let us denote the output of each residual block \(i\) in the Modulator network by \(\mathbf{m}_{i}\in\mathbb{R}^{H_{i}\times W_{i}\times D_{i}}\), and the output of the corresponding residual block in the identical DDIM's first half by \(\mathbf{o}_{i}\in\mathbb{R}^{H_{i}\times W_{i}\times D_{i}}\). In the DDIM, we take each residual block's output \(\mathbf{o}_{i}\) and replace it with \(\mathbf{o}^{\prime}_{i}\), which will be used as input to the subsequent layer in the network:
\[\mathbf{o}^{\prime}_{i}=\mathbf{o}_{i}\odot\tanh(\mathbf{m}_{i}), \tag{2}\]
where \(\odot\) is the element-wise multiplication. This conditioning technique allows the shaded image \(R\) and the background to retain their spatial structure and facilitate local conditioning of the generation as they are spatially aligned with the input (e.g., their facial parts and background are in the same positions).
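Equation (2) amounts to a single element-wise gate between spatially aligned feature maps; a hedged PyTorch sketch is given below, where the tensor shapes and names are illustrative rather than taken from the actual implementation.

```python
import torch

def modulate(o_i, m_i):
    """Eq. (2): o'_i = o_i * tanh(m_i), element-wise product of matching feature maps."""
    return o_i * torch.tanh(m_i)

# Toy shapes: batch 2, 64 channels, 32x32 feature maps from matching residual blocks.
o = torch.randn(2, 64, 32, 32)   # output of a residual block in the DDIM's first half
m = torch.randn(2, 64, 32, 32)   # output of the corresponding Modulator residual block
o_prime = modulate(o, m)         # replaces o and is passed to the next DDIM layer
```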
**Non-spatial conditioning.** This technique is used for \((\mathbf{s},\mathbf{c}\mathbf{am},\boldsymbol{\xi},c)\). The direct use of \(\mathbf{s},\mathbf{c}\mathbf{am}\) again in this technique is empirically found to be helpful, in addition to their indirect use through the shading reference. We use a similar conditioning technique as used in [10, 41] based on adaptive group normalization (AdaGN) [71] for these encodings and also for the time embedding in the standard diffusion model training \(\gamma(t)\), where \(\gamma\) is a sinusoidal encoding function [10]. Given an input feature map \(\mathbf{h}_{j}\in\mathbb{R}^{H_{j}\times W_{j}\times D_{j}}\), we compute
\[\mathrm{AdaGN}_{j}(\mathbf{h}_{j},\mathbf{s},\mathbf{c}\mathbf{am},\boldsymbol {\xi},c,t)=\mathbf{k}_{j}(\mathbf{t}_{j}^{s}\mathrm{GN}(\mathbf{h}_{j})+ \mathbf{t}_{j}^{b}), \tag{3}\]
where \(\mathbf{k}_{j}=\text{MLP}_{j}^{3}(\text{Concat}(\mathbf{s},\mathbf{c}\mathbf{am },\boldsymbol{\xi},c))\in\mathbb{R}^{D_{j}}\) is the output of a 3-layer MLP with the SiLU activation [11], and \((\mathbf{t}_{j}^{s},\mathbf{t}_{j}^{b})\in\mathbb{R}^{2\times D_{j}}=\text{ MLP}_{j}^{1}(\gamma(t))\) is the output from a single-layer MLP also with the SiLU activation. \(\mathrm{GN}\) is the standard group normalization. We apply our AdaGN in place of all the AdaGNs in the original architecture of [10], which occur throughout the UNet. (Details in Appendix C.)
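A hedged PyTorch sketch of the modified AdaGN in Equation (3) follows; the hidden widths of the MLPs, the group count, and the exact placement of the SiLU activations are our own guesses, since the paper defers these details to Appendix C.

```python
import torch
import torch.nn as nn

class AdaGN(nn.Module):
    """Sketch of Eq. (3): k_j * (t^s_j * GN(h_j) + t^b_j)."""

    def __init__(self, channels, cond_dim, time_dim, groups=32):
        super().__init__()
        # channels must be divisible by groups for GroupNorm
        self.norm = nn.GroupNorm(groups, channels)
        # 3-layer MLP on the concatenated (s, cam, xi, c) condition -> per-channel gain k_j
        self.cond_mlp = nn.Sequential(
            nn.Linear(cond_dim, channels), nn.SiLU(),
            nn.Linear(channels, channels), nn.SiLU(),
            nn.Linear(channels, channels),
        )
        # single-layer MLP on the sinusoidal time embedding -> scale t^s and bias t^b
        self.time_mlp = nn.Sequential(nn.SiLU(), nn.Linear(time_dim, 2 * channels))

    def forward(self, h, cond, t_emb):
        k = self.cond_mlp(cond)[:, :, None, None]
        ts, tb = self.time_mlp(t_emb).chunk(2, dim=1)
        return k * (ts[:, :, None, None] * self.norm(h) + tb[:, :, None, None])
```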
### Training
We jointly train the DDIM decoder, parameterized as a noise prediction network \(\boldsymbol{\epsilon}_{\theta}\), and the Modulator network \(M_{\phi}(\mathbf{l},\mathbf{s},\mathbf{c}\mathbf{am},\mathbf{bg})\) using standard diffusion training [19, 59, 41]. Here we consider the MLPs in Figure 2 as part of the DDIM. We adopt the simplified, re-weighted version of the variational lower bound with \(\boldsymbol{\epsilon}\) parameterization:
\[L_{\text{simple}}=\mathbb{E}_{t,\mathbf{x}_{0},\boldsymbol{\epsilon}}\| \boldsymbol{\epsilon}_{\theta}(\mathbf{x}_{t},t,M_{\phi},\mathbf{s},\mathbf{c} \mathbf{am},\boldsymbol{\xi},c)-\boldsymbol{\epsilon}\|_{2}^{2},\]
where \(\boldsymbol{\epsilon}_{\theta}\) is trained to predict the added noise \(\boldsymbol{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) in \(\mathbf{x}_{t}=\sqrt{\alpha_{t}}\mathbf{x}_{0}+\sqrt{1-\alpha_{t}}\epsilon\), given a training image \(\mathbf{x}_{0}\). We define \(\alpha_{t}\) as \(\prod_{s=1}^{t}(1-\beta_{s})\), where \(\beta_{t}\) is the noise level at timestep \(t\) in the Gaussian diffusion process \(q(\mathbf{x}_{t}|\mathbf{x}_{t-1})=\mathcal{N}(\sqrt{1-\beta_{t}}\mathbf{x}_{t -1},\beta_{t}\mathbf{I})\). We use a linear noise schedule and a total step \(T=1000\). Note that we do not reverse \(\mathbf{x}_{T}=\text{DDIM}^{-1}(\mathbf{x}_{0})\) during training.
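A minimal sketch of one step of this objective is shown below; here `eps_model` stands for the conditional DDIM (together with the Modulator and MLPs), and the endpoints of the linear \(\beta\) schedule are assumed common defaults rather than values stated in the text.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)               # linear noise schedule (assumed endpoints)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)   # alpha_t = prod_{s<=t} (1 - beta_s)

def training_loss(eps_model, x0, cond):
    """L_simple for one minibatch; eps_model(x_t, t, cond) predicts the added noise."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,), device=x0.device)
    eps = torch.randn_like(x0)
    a_t = alphas_cumprod.to(x0.device)[t].view(b, 1, 1, 1)
    x_t = a_t.sqrt() * x0 + (1.0 - a_t).sqrt() * eps
    return torch.nn.functional.mse_loss(eps_model(x_t, t, cond), eps)
```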
### Relighting
To relight an input image, we first encode the input image into our feature vector \(\mathbf{f}\) (Equation 1), then reverse the deterministic generative process of our DDIM conditioned on \(\mathbf{f}\), starting from the input image \(\mathbf{x}_{0}\) to \(\mathbf{x}_{T=1000}\).
\[\mathbf{x}_{t+1}=\sqrt{\alpha_{t+1}}\,\mathbf{g}_{\theta}(\mathbf{x}_{t},t,\mathbf{f})+\sqrt{1-\alpha_{t+1}}\,\boldsymbol{\epsilon}_{\theta}(\mathbf{x}_{t},t,\mathbf{f}), \tag{4}\]
where \(\mathbf{g}_{\theta}\) represents the predicted \(\mathbf{x}_{0}\), which is reparameterized from \(\boldsymbol{\epsilon}_{\theta}\) and is computed by:
\[\mathbf{g}_{\theta}(\mathbf{x}_{t},t,\mathbf{f})=\frac{1}{\sqrt{\alpha_{t}}} \left(\mathbf{x}_{t}-\sqrt{1-\alpha_{t}}\boldsymbol{\epsilon}_{\theta}( \mathbf{x}_{t},t,\mathbf{f})\right). \tag{5}\]
After obtaining \(\mathbf{x}_{T}\), we modify the SH light encoding \(\mathbf{l}\) and the cast shadow flag \(c\) to the target \(\mathbf{l}^{\prime}\) and \(c^{\prime}\), which can be set manually or inferred from a reference lighting image using DECA and our cast shadow estimator. Then, we decode
the modified \(\mathbf{f}^{\prime}=(\mathbf{l}^{\prime},\mathbf{s},\mathbf{cam},\mathbf{\xi},c^{\prime},\mathbf{bg})\) using the reverse of Equation 4, starting from \(\mathbf{x}_{T}\), to produce the final output.
The reverse process to obtain \(\mathbf{x}_{T}\) is key to reproducing high-frequency details from the input image. As demonstrated in DiffAE [41], DDIM will encode any information not captured in the conditioning feature vector \(\mathbf{f}\) in the noise map \(\mathbf{x}_{T}\). This information includes high-frequency details, such as the hair pattern or skin texture.
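The encode-modify-decode procedure can be summarized by the following sketch of the deterministic DDIM steps in Equations (4)-(5); the choice of timestep subsequence, the `eps_model` signature, and the treatment of the endpoint \(t=0\) are simplifications on our part, not the authors' exact sampler.

```python
import torch

@torch.no_grad()
def ddim_step(x, t, t_next, cond, eps_model, a):
    """One deterministic DDIM step from timestep t to t_next (either direction);
    a is the tensor of cumulative products alpha_t."""
    eps = eps_model(x, t, cond)
    x0_pred = (x - (1.0 - a[t]).sqrt() * eps) / a[t].sqrt()              # Eq. (5)
    return a[t_next].sqrt() * x0_pred + (1.0 - a[t_next]).sqrt() * eps   # Eq. (4)

@torch.no_grad()
def relight(x0, cond, cond_relit, eps_model, a, steps):
    """steps: increasing list of timesteps, e.g. [0, 20, ..., 980]."""
    # Encode: reverse the generative process with the *original* condition.
    x = x0
    for t, t_next in zip(steps[:-1], steps[1:]):
        x = ddim_step(x, t, t_next, cond, eps_model, a)
    # Decode: run the generative process with the *modified* light condition.
    for t, t_next in zip(reversed(steps[1:]), reversed(steps[:-1])):
        x = ddim_step(x, t, t_next, cond_relit, eps_model, a)
    return x
```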
**Improved DDIM sampling with mean-matching.** We observe that when the input image contains a background with extreme intensities (e.g., too dark or too bright), DDIM can produce results with a slight change in the overall brightness. We alleviate this issue by computing the mean pixel difference between each \(\mathbf{x}_{t}\) during DDIM's generative reversal (\(\mathbf{x}_{0}\rightarrow\mathbf{x}_{T}\)) and \(\mathbf{x}_{t}\) from self-decoding of the reversed noise \(\mathbf{x}_{T}\). This sequence of mean differences is then applied to the decoding for relighting (Appendix B).
## 4 Experiments
In this section, we present quantitative and qualitative results of our proposed method. We provide a comparison of our relighting performance (Section 4.1) to the state of the art on Multi-PIE dataset and ablation studies (Section 4.3) on the non-spatial and light conditioning. Implementation and dataset details are in Appendix B.
**Evaluation metrics.** We use DSSIM [35], LPIPS [75], and MSE. DSSIM measures the structure dissimilarity, and LPIPS measures the perceptual quality. All metrics are computed between each relit image and its ground-truth image only on the face region following [22, 23] using the same face parsing algorithm [73].
### Relighting performance
We evaluate our relighting performance on Multi-PIE dataset [17] against recent state-of-the-art methods [23, 22, 35, 38]. Note that Pandey et al. [38] solve a different problem setup (also [69, 72]) and require an HDR environment map as input, which has to be first estimated from a target image, making a comparison with [38] not entirely apples-to-apples. The results of [38] in our experiment were generated by the authors themselves, including the HDR maps. Other test sets and code of [38] were not released.
Our experiment has two setups for the target lighting: **i) the same person.** This setup uses the same test set as [23], which contains 826 testing samples from 329 subjects. **ii) a different person.** This setup contains 200 random triplets of input, target, and ground-truth images, where the target image is of a different person.
The results are shown in Table 1 and Figure 4. For both setups, our method achieves the best performance across all metrics with minimal artifacts and can convincingly relight the neck and ears or remove cast shadows, e.g., from the nose of the lady. We include a comparison with [53, 77, 61] and more qualitative results of [23, 22, 38] in Appendix E.
### Qualitative evaluations
**Relighting on FFHQ dataset.** In Figure 3, we provide a qualitative comparison with two recent SOTAs [23, 22] on subjects with different head poses, genders, races, and accessories. Our approach produces highly realistic results and can synthesize new highlights and eliminate hard shadows,
Figure 4: **Qualitative results on Multi-PIE. (Top) Self target lighting. (Bottom) Target lighting from others.**
Figure 5: **Varying degrees of cast shadow. We show the ability to change the degree of cast shadows by adjusting the scalar \(c\) and decode the modified feature vector.**
while the competing methods often leave behind shadow or shading residuals due to the inaccurate albedo prediction of these in-the-wild images.
**Shadow flag condition.** We show a novel ability to change the strength of cast shadows on FFHQ [25] in Figure 5. We generate these results by varying the cast shadow's logit value (\(c\)). Our method can realistically remove shadows (e.g., those cast by eyeglasses or face geometry) or intensify their effects. Figure 1 demonstrates how the direction and appearance of cast shadows can change according to a new target lighting condition (the chin of the subject in the bottom row).
### Ablation studies
**Light conditioning.** We compare our full pipeline against two alternatives for conditioning the DDIM on the light encoding: a) We do not use our Modulator network and instead feed the reference shading directly to the DDIM by concatenating it with each \(\mathbf{x}_{t}\) in every timestep. b) We do not use the shading reference and instead concatenate the light encoding \(\mathbf{l}\) with \((\mathbf{s},\mathbf{cam},\boldsymbol{\xi},c)\) in the non-spatial conditioning technique.
We report the results in Table 2 and show a qualitative comparison in Appendix E. Using the light encoding as part of a non-spatial vector (b) performs worst among all three, whereas feeding the shading reference directly to the DDIM without our Modulator (a) improves the results but still lags behind our proposed pipeline.
**Non-spatial conditioning.** In this section, we study the benefits of the non-spatial, face-related conditions extracted from ArcFace (\(\boldsymbol{\xi}\)) and DECA (\(\mathbf{s}\), \(\mathbf{cam}\)) by evaluating the relighting performance of: c) Our method with no \(\mathbf{s},\mathbf{cam},\boldsymbol{\xi}\). d) Our method with no \(\mathbf{s}\), \(\mathbf{cam}\). e) Our method with no \(\boldsymbol{\xi}\).
We report the results in Table 2 and a qualitative comparison in Appendix E. Removing all of \(\mathbf{s}\), \(\mathbf{cam},\boldsymbol{\xi}\) performs the worst, whereas removing \(\mathbf{s}\), \(\mathbf{cam}\) but retaining \(\boldsymbol{\xi}\) obtains a better MSE score. On the other hand, our full pipeline outperforms these alternatives on both DSSIM and LPIPS metrics, which better agree with human perception.
## 5 Limitations & Discussion
Our method may not remove shadows cast by external objects and may sometimes remove sunglasses that resemble cast shadows in the eye regions (Figure 7). While our method can produce photorealistic images with plausible cast shadows, there is room for improvement to achieve physically consistent cast shadows in motion (see supplementary videos). Relighting to match a reference lighting image can be inaccurate because we rely on a light estimator, which is susceptible to the ambiguity of whether, e.g., a dark appearance is caused by the skin tone or by dim lighting. Our diffusion-based model requires multiple network passes and is slower than GAN-based methods.
In conclusion, we have presented a diffusion-based face relighting method that eliminates the need for accurate intrinsic decomposition and can be trained on 2D images without any 3D or lighting ground truth. Our key component is a conditional diffusion implicit model and a novel conditioning technique that maps a disentangled light representation to a relit image. This enables our method to achieve new state-of-the-art performance and produce highly photorealistic results for real-world scenarios.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & DSSIM\(\downarrow\) & MSE\(\downarrow\) & LPIPS\(\downarrow\) \\ \hline
**Light conditioning** & & & \\ a) No _Modulator_ & 0.0749 & 0.0081 & 0.0868 \\ b) Used as non-spatial & 0.0885 & 0.0098 & 0.0947 \\
**Ours** & **0.0670** & **0.0077** & **0.0789** \\
**Non-spatial condition vector** & & & \\ c) No \(\mathbf{s}\), \(\mathbf{cam}\), \(\boldsymbol{\xi}\) & 0.0713 & 0.0082 & 0.0909 \\ d) No \(\mathbf{s}\), \(\mathbf{cam}\) & 0.0674 & **0.0063** & 0.0846 \\ e) No \(\boldsymbol{\xi}\) & 0.0686 & 0.0074 & 0.0847 \\
**Ours** & **0.0670** & 0.0077 & **0.0789** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Ablation study on conditioning methods.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & DSSIM\(\downarrow\) & MSE\(\downarrow\) & LPIPS\(\downarrow\) \\ \hline
**i). Same subject as target lighting** & & & \\ Nestmayer et al. [35] & 0.2226 & 0.0588 & 0.3795 \\ Pandey et al. [38] & 0.0875 & 0.0165 & 0.2010 \\ Hou et al. (CVPR’21) [23] & 0.1186 & 0.0303 & 0.2013 \\ Hou et al. (CVPR’22) [22] & 0.0990 & 0.0150 & 0.1622 \\
**Ours** & **0.0711** & **0.0122** & **0.1370** \\ \hline
**ii). Different subject as target lighting** & & & \\ Pandey et al. [38] & 0.1000 & 0.0252 & 0.2053 \\ Hou et al. (CVPR’21) [23] & 0.1056 & 0.0247 & 0.1989 \\ Hou et al. (CVPR’22) [22] & 0.1150 & 0.0238 & 0.2215 \\
**Ours** & **0.0969** & **0.0215** & **0.1669** \\ \hline \hline \end{tabular}
\end{table}
Table 1: State-of-the-art comparison on Multi-PIE.
Figure 6: Without \(\mathbf{bg}\) conditioning, the background and hat of the input person are not well preserved.
Figure 7: **Failure cases. (Left) Shadows cast by external objects are not relit. (Right) The sunglasses, which resemble cast shadows, are mistakenly removed.** |
2305.10215 | Long-term predictions of turbulence by implicit U-Net enhanced Fourier
neural operator | Long-term predictions of nonlinear dynamics of three-dimensional (3D)
turbulence are very challenging for machine learning approaches. In this paper,
we propose an implicit U-Net enhanced Fourier neural operator (IU-FNO) for
stable and efficient predictions on the long-term large-scale dynamics of
turbulence. The IU-FNO model employs implicit recurrent Fourier layers for
deeper network extension and incorporates the U-net network for the accurate
prediction on small-scale flow structures. The model is systematically tested
in large-eddy simulations of three types of 3D turbulence, including forced
homogeneous isotropic turbulence (HIT), temporally evolving turbulent mixing
layer, and decaying homogeneous isotropic turbulence. The numerical simulations
demonstrate that the IU-FNO model is more accurate than other FNO-based models
including vanilla FNO, implicit FNO (IFNO) and U-Net enhanced FNO (U-FNO), and
dynamic Smagorinsky model (DSM) in predicting a variety of statistics including
the velocity spectrum, probability density functions (PDFs) of vorticity and
velocity increments, and instantaneous spatial structures of flow field.
Moreover, IU-FNO improves long-term stable predictions, which has not been
achieved by the previous versions of FNO. Besides, the proposed model is much
faster than traditional LES with DSM model, and can be well generalized to the
situations of higher Taylor-Reynolds numbers and unseen flow regime of decaying
turbulence. | Zhijie Li, Wenhui Peng, Zelong Yuan, Jianchun Wang | 2023-05-17T13:47:08Z | http://arxiv.org/abs/2305.10215v3 | # Long-term predictions of turbulence by implicit U-Net enhanced Fourier neural operator
###### Abstract
Long-term predictions of nonlinear dynamics of three-dimensional (3D) turbulence are very challenging for machine learning approaches. In this paper, we propose an implicit U-Net enhanced Fourier neural operator (IU-FNO) for stable and efficient predictions on the long-term large-scale dynamics of turbulence. The IU-FNO model employs implicit recurrent Fourier layers for deeper network extension and incorporates the U-net network for the accurate prediction on small-scale flow structures. The model is systematically tested in large-eddy simulations of three types of 3D turbulence, including forced homogeneous isotropic turbulence (HIT), temporally evolving turbulent mixing layer, and decaying homogeneous isotropic turbulence. The numerical simulations demonstrate that the IU-FNO model is more accurate than other FNO-based models including vanilla FNO, implicit FNO (IFNO) and U-Net enhanced FNO (U-FNO), and dynamic Smagorinsky model (DSM) in predicting a variety of statistics including the velocity spectrum, probability density functions (PDFs) of vorticity and velocity increments, and instantaneous spatial structures of flow field. Moreover, IU-FNO improves long-term stable predictions, which has not been achieved by the previous versions of FNO. Besides, the proposed model is much faster than traditional LES with DSM model, and can be well generalized to the situations of higher Taylor-Reynolds numbers and unseen flow regime of decaying turbulence.
## I Introduction
Neural networks (NNs) have been widely applied to improve or replace the conventional modeling of turbulent flows in computational fluid dynamics (CFD).[1; 2; 3; 4; 5] Various strategies based on NNs have been developed to enhance Reynolds-averaged Navier-Stokes simulation (RANS) and large-eddy simulation (LES) of turbulence.[6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18] Beck et al. proposed convolutional neural networks (CNNs) and residual neural networks (RNNs) to construct accurate subgrid-scale (SGS) models for LES.[19] Zhou et al. used an artificial neural network to develop a new SGS model for LES of isotropic turbulent flows.[14] Park and Wang also applied NNs to learn closures of SGS stress and thus improve the accuracy of turbulence modeling.[8; 20] Yang et al. introduced several physical insights to improve the extrapolation capabilities of neural networks for LES wall modeling.[21]
Deep neural networks have demonstrated a remarkable performance in approximating highly non-linear functions.[22] Several recent studies have focused on approximating the complete Navier-Stokes equations using deep neural networks.[23; 24; 25; 26; 27] Once trained, "black-box" neural network models can rapidly make inferences on modern computers, and can be much more efficient than traditional CFD methods. Moreover, some researchers have explored incorporating additional physical knowledge into deep learning methods.[28; 29; 30; 31; 32] Raissi et al. introduced a physics-informed neural networks (PINN) to solve general nonlinear partial differential equations.[33] Xu et al. utilized the physics-informed deep learning to address the missing flow dynamics by treating the governing equations as a parameterized constraint.[34] Chen et al. proposed a theory-guided hard constraint projection method to convert governing equations into a form that is easy to handle through discretization and then implements hard constraint optimization through projection in a patch.[35] Wang et al. incorporated physical constraints into the neural network design and developed a turbulent flow network (TF-Net). The TF-Net offers the flexibility of the learned representations and achieves the state-of-the-art prediction accuracy.[29] Jin et al. developed the Navier-Stokes flow nets (NSFnets) by embedding the governing equations, initial conditions, and boundary conditions into the loss function.[36]
While most previous neural network architectures are good at learning mappings between finite-dimensional Euclidean spaces, they are limited in their generalization ability for different parameters, initial conditions or boundary conditions.[37; 38; 39; 40; 33] Recently, Li et al.
proposed a novel Fourier neural operator (FNO) framework capable of efficiently learning the mapping between infinite dimensional spaces from input-output pairs [41]. The FNO model outperforms current state-of-the-art models, including U-Net [42], TF-Net [29], and ResNet [43], in two-dimensional (2D) turbulence prediction. Peng et al. presented an FNO model coupled with the attention that can effectively reconstruct statistical properties and instantaneous flow structures of 2D turbulence at high Reynolds numbers [44]. Wen et al. proposed an U-net enhanced FNO (U-FNO) for solving multiphase flow problems with superior accuracy and efficiency [45]. You et al. developed an implicit Fourier neural operator (IFNO), to model the increment between layers as an integral operator to capture the long-range dependencies in the feature space [46]. The developments and applications of FNO-based models have been increasing [47; 48; 49; 50; 51; 52; 53; 54; 55], however, the majority of the works have been focused on one-dimensional (1D) and two-dimensional (2D) problems. Modeling 3D turbulence using deep neural networks is a greater challenge due to the significant increase in the size and dimension of simulation data compared to 2D problems [56]. Moreover, modeling the non-linear interactions in 3D turbulence demands significant model complexity and a large number of parameters. Training models with such a huge number of parameters can be computationally expensive and requires significant memory usage, which can be very challenging due to hardware limitations.
Recently, Mohan et al. developed two reduced models of 3D homogeneous isotropic turbulence (HIT) and scalar turbulence based on the deep learning methods including convolutional generative adversarial network (C-GAN) and compressed convolutional long-short-term-memory (CC-LSTM) network [57]. Ren et al. proposed a data-driven model for predicting turbulent flame evolution based on machine learning methods with long short-term memory (LSTM) and convolutional neural network-long short-term memory (CNN-LSTM). The CNN-LSTM model has been shown to outperform the LSTM model in terms of overall performance [58]. Nakamura et al. combined a 3D convolutional neural network autoencoder (CNN-AE) and a long short-term memory (LSTM) to predict the 3D channel flow [59]. Lehmann et al. applied the FNO to predict ground motion time series from a 3D geological description [60]. Li et al. utilized FNO for large-eddy simulation (LES) of 3D turbulence [61]. Peng et al. proposed a linear attention coupled Fourier neural operator (LAFNO) for the simulation of 3D isotropic turbulence and free-shear turbulence [62]. In this work, we propose an implicit U-Net enhanced Fourier neural operator (IU-FNO) as a surrogate model for LES
of turbulence, in order to achieve stable, efficient and accurate predictions on the long-term large-scale dynamics of turbulence.
The rest of the paper is organized as follows: Section II describes the governing equations of the large-eddy simulation and three classical subgrid-scale models. Section III introduces three types of previous Fourier neural operator architectures, including vanilla FNO, implicit FNO (IFNO) and U-Net enhanced FNO (U-FNO). In Section IV, we propose a new FNO-based model, namely IU-FNO model. Section V introduces the data generation and training process, and presents the a \(posteriori\) performance of the IU-FNO model in comparison to other FNO-based models and classical dynamic Smagorinsky model for three types of turbulent flows, including the forced homogeneous isotropic turbulence (HIT), free-shear mixing layer turbulence, and decaying homogeneous isotropic turbulence. Discussions and conclusions are finally drawn in Section VI and VII respectively.
## II Governing equations and subgrid scale model
This section provides a brief introduction to the filtered incompressible Navier-Stokes (NS) equations for classical LES models for the unclosed subgrid-scale (SGS) stress.
The governing equations of the three-dimensional incompressible turbulence are given by [63; 64]
\[\frac{\partial u_{i}}{\partial x_{i}}=0, \tag{1}\]
\[\frac{\partial u_{i}}{\partial t}+\frac{\partial\left(u_{i}u_{j}\right)}{ \partial x_{j}}=-\frac{\partial p}{\partial x_{i}}+v\frac{\partial^{2}u_{i}}{ \partial x_{j}\partial x_{j}}+\mathcal{F}_{i}. \tag{2}\]
Here \(u_{i}\) denotes the \(i\)-th component of velocity, \(p\) is the pressure divided by the constant density, \(v\) represents the kinematic viscosity, and \(\mathcal{F}_{i}\) stands for a large-scale forcing to the momentum of the fluid in the \(i\)-th coordinate direction. In this paper, the Einstein summation convention is employed.
The kinetic energy \(E_{k}\) is defined as \(E_{k}=\int_{0}^{\infty}E(k)dk=\frac{1}{2}\left(u^{rms}\right)^{2}\), where \(E(k)\) is the energy spectrum, and \(u^{rms}=\sqrt{\left\langle u_{i}u_{i}\right\rangle}\) is the root mean square (rms) of the velocity, and \(\left\langle\cdot\right\rangle\) denotes a spatial average along the homogeneous direction. In addition, the Kolmogorov length scale \(\eta\), the Taylor length scale \(\lambda\), and the Taylor-scale Reynolds number \(Re_{\lambda}\) are defined, respectively, as [63; 65]
\[\eta=\left(\frac{\nu^{3}}{\varepsilon}\right)^{1/4},\quad\lambda=\sqrt{\frac {5\nu}{\varepsilon}}u^{rms},\quad Re_{\lambda}=\frac{u^{rms}\lambda}{\sqrt{3} \nu}, \tag{3}\]
where \(\varepsilon=2v\left\langle S_{ij}S_{ij}\right\rangle\) denotes the average dissipation rate and \(S_{ij}=\frac{1}{2}\left(\partial u_{i}/\partial x_{j}+\partial u_{j}/\partial x _{i}\right)\) represents the strain rate tensor. Furthermore, the integral length scale \(L_{I}\) and the large-eddy turnover time \(\tau\) are given by [63]
\[L_{I}=\frac{3\pi}{2\left(u^{rms}\right)^{2}}\int_{0}^{\infty}\frac{E(k)}{k}dk, \quad\tau=\frac{L_{I}}{u^{rms}}. \tag{4}\]
A filtering methodology can be implemented to decompose the physical variables of turbulence into distinct large-scale and sub-filter small-scale components. [66; 67] The filtering operation is defined as \(\bar{f}(\mathbf{x})=\int_{\Omega}f(\mathbf{x}-\mathbf{r})G(\mathbf{r},\mathbf{ x};\Delta)d\mathbf{r}\), where \(f\) represents a variable in physical space, and \(\Omega\) is the entire domain. \(G\) and \(\Delta\) are the filter kernel and filter width, respectively. [63; 68] For any variable \(f\) in Fourier space, a filtered variable is given by \(\bar{f}(\mathbf{k})=\hat{G}(\mathbf{k})f(\mathbf{k})\). In the present study, a sharp spectral filter \(\hat{G}(\mathbf{k})=H\left(k_{c}-|\mathbf{k}|\right)\) is utilized in Fourier space for homogeneous isotropic turbulence. [63] Here, the cutoff wavenumber \(k_{c}=\pi/\Delta\), and \(\Delta\) denotes the filter width. The Heaviside step function \(H(x)=1\) if \(x\geq 0\); otherwise \(H(x)=0\). [63; 69]
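As an illustration, a minimal NumPy sketch of the sharp spectral cutoff filter on a periodic \(N^{3}\) field is given below; the grid layout and normalization are assumptions consistent with a \((2\pi)^{3}\) box.

```python
import numpy as np

def sharp_spectral_filter(u, k_c):
    """Apply G(k) = H(k_c - |k|) to a periodic scalar field u of shape (N, N, N)
    defined on a uniform grid over a (2*pi)^3 box."""
    N = u.shape[0]
    k = np.fft.fftfreq(N, d=1.0 / N)                 # integer wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k_mag = np.sqrt(kx**2 + ky**2 + kz**2)
    u_hat = np.fft.fftn(u)
    u_hat[k_mag > k_c] = 0.0                         # truncate modes beyond the cutoff
    return np.real(np.fft.ifftn(u_hat))
```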
The filtered incompressible Navier-Stokes equations can be derived for the resolved fields as follows [63; 68]
\[\frac{\partial\bar{u}_{i}}{\partial x_{i}}=0, \tag{5}\]
\[\frac{\partial\bar{u}_{i}}{\partial t}+\frac{\partial\left(\bar{u}_{i}\bar{u} _{j}\right)}{\partial x_{j}}=-\frac{\partial\bar{p}}{\partial x_{i}}-\frac{ \partial\tau_{ij}}{\partial x_{j}}+v\frac{\partial^{2}\bar{u}_{i}}{\partial x _{j}\partial x_{j}}+\overline{\mathcal{F}}_{i}. \tag{6}\]
Here, \(\tau_{ij}\) is the unclosed sub-grid scale (SGS) stress defined by \(\tau_{ij}=\overline{u_{i}u_{j}}-\bar{u}_{i}\bar{u}_{j}\). In order to solve the LES equations, it is crucial to model the SGS stress as a function of the filtered variables.
Subgrid-scale (SGS) models have been developed for the unclosed terms in the filtered incompressible Navier-Stokes equations. These models aim to accurately capture the nonlinear interactions between the resolved large-scales and unresolved small-scales. [70; 71] Appendix A provides a comprehensive introduction to three classical LES models, including dynamic Smagorinsky model (DSM), velocity gradient model (VGM) and dynamic mixed model (DMM).
## III Related Neural Operator and Modified Methods
Compared with traditional numerical methods and other neural operator methods, FNO shows strong adaptability and generalization in dealing with high-dimensional and large-scale data.[72; 73; 61; 7; 2] The main idea of FNO is to use the Fourier transform to map high-dimensional data into the frequency domain, and to approximate nonlinear operators by learning the relationships between Fourier coefficients through neural networks. FNO can learn an entire family of PDEs.[61] This section mainly introduces the FNO and some typical improved methods based on it, including the U-Net enhanced FNO (U-FNO) and the implicit Fourier neural operator (IFNO).
### The Fourier neural operator
The Fourier neural operator (FNO) aims to map between two infinite-dimensional spaces by training on a finite set of input-output pairs. Denote \(D\subset\mathbb{R}^{d}\) as a bounded, open set and \(\mathcal{A}=\mathcal{A}\left(D;\mathbb{R}^{d_{a}}\right)\) and \(\mathcal{U}=\mathcal{U}\left(D;\mathbb{R}^{d_{u}}\right)\) as separable Banach spaces of functions taking values in \(\mathbb{R}^{d_{a}}\) and \(\mathbb{R}^{d_{u}}\), respectively.[74] The construction of a mapping, parameterized by \(\theta\in\Theta\), allows the Fourier neural operator to learn an approximation of \(\mathcal{A}\rightarrow\mathcal{U}\). The optimal parameters \(\theta^{\dagger}\in\Theta\) are determined through a data-driven empirical approximation.[75] The neural operators employ iterative architectures \(v_{0}\mapsto v_{1}\mapsto\ldots\mapsto v_{T}\), where \(v_{j}\) for \(j=0,1,\ldots,T\) is a sequence of functions each taking values in \(\mathbb{R}^{d_{v}}\).[76] The FNO architecture is shown in Fig. 1, which consists of three main steps.
(1) The input \(a\in\mathcal{A}\) is lifted to a higher dimensional representation \(v_{0}(x)=P(a(x))\) by the local transformation \(P\) which is commonly parameterized by a shallow fully connected neural network.
Figure 1: The Fourier neural operator (FNO) architecture.
(2) The higher dimensional representation \(v_{0}(x)\) is updated iteratively by
\[v_{t+1}(x)=\sigma\left(Wv_{t}(x)+\left(\mathcal{K}(a;\phi)v_{t}\right)(x)\right), \quad\forall x\in D. \tag{7}\]
Where \(\mathcal{K}:\mathcal{A}\times\Theta_{\mathcal{K}}\rightarrow\mathcal{L}\left( \mathcal{U}\left(D;\mathbb{R}^{d_{v}}\right),\mathcal{U}\left(D;\mathbb{R}^{d _{v}}\right)\right)\) maps to bounded linear operators on \(\mathcal{U}\left(D;\mathbb{R}^{d_{v}}\right)\) and is parameterized by \(\phi\in\Theta_{\mathcal{K}}\), \(W:\mathbb{R}^{d_{v}}\rightarrow\mathbb{R}^{d_{v}}\) is a linear transformation, and \(\sigma:\mathbb{R}\rightarrow\mathbb{R}\) is non-linear activation function.
(3) The output \(u\in\mathcal{U}\) is obtained by \(u(x)=Q\left(v_{T}(x)\right)\) where \(Q:\mathbb{R}^{d_{v}}\rightarrow\mathbb{R}^{d_{u}}\) is the projection of \(v_{T}\) and it is parameterized by a fully connected layer.[41]
Denote \(\mathcal{F}\) and \(\mathcal{F}^{-1}\) as Fourier transform and its inverse transform of a function \(f:D\rightarrow\mathbb{R}^{d_{v}}\) respectively. By substituting the kernel integral operator in Eq. 7 with a convolution operator defined in Fourier space, the Fourier integral operator can be rewritten as Eq. 8.
\[\left(\mathcal{K}(\phi)v_{t}\right)(x)=\mathcal{F}^{-1}\left(R_{\phi}\cdot \left(\mathcal{F}v_{t}\right)\right)(x),\quad\forall x\in D. \tag{8}\]
Where \(R_{\phi}\) is the Fourier transform of a periodic function \(\mathcal{K}:\bar{D}\rightarrow\mathbb{R}^{d_{v}\times d_{v}}\) parameterized by \(\phi\in\Theta_{\mathcal{K}}\). The frequency mode \(k\in\mathbb{Z}^{d}\). The finite-dimensional parameterization is obtained by truncating the Fourier series at a maximum number of modes \(k_{\max}=\left|Z_{k_{\max}}\right|=\left|\left\{k\in\mathbb{Z}^{d}:\left|k_{j}\right|\leq k_{\max,j},\ j=1,\ldots,d\right\}\right|\). \(\mathcal{F}\left(v_{t}\right)\in\mathbb{C}^{n\times d_{v}}\) can be obtained by discretizing domain \(D\) with \(n\in\mathbb{N}\) points, where \(v_{t}\in\mathbb{R}^{n\times d_{v}}\).[41] By simply truncating the higher modes, \(\mathcal{F}\left(v_{t}\right)\in\mathbb{C}^{k_{\max}\times d_{v}}\) can be obtained, where \(\mathbb{C}\) is the complex space. \(R_{\phi}\) is parameterized as a complex-valued tensor (\(k_{\max}\times d_{v}\times d_{v}\)) containing a collection of truncated Fourier modes, \(R_{\phi}\in\mathbb{C}^{k_{\max}\times d_{v}\times d_{v}}\). Therefore, Eq. 9 can be derived by multiplying \(R_{\phi}\) and \(\mathcal{F}\left(v_{t}\right)\).
\[\left(R_{\phi}\cdot\left(\mathcal{F}v_{t}\right)\right)_{k,l}=\sum_{j=1}^{d_{v}}R_{\phi\,k,l,j}\left(\mathcal{F}v_{t}\right)_{k,j},\quad k=1,\ldots,k_{\max},\quad l=1,\ldots,d_{v}. \tag{9}\]
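A minimal PyTorch sketch of the 3D Fourier layer of Eqs. 8-9 is shown below; for brevity only one block of low-wavenumber modes is retained, whereas a complete implementation keeps all sign combinations of the truncated modes.

```python
import torch
import torch.nn as nn

class SpectralConv3d(nn.Module):
    """FFT -> learned linear transform R_phi on the lowest `modes` wavenumbers -> inverse FFT."""
    def __init__(self, width, modes):
        super().__init__()
        self.modes = modes
        scale = 1.0 / (width * width)
        self.weights = nn.Parameter(
            scale * torch.randn(width, width, modes, modes, modes, dtype=torch.cfloat))

    def forward(self, v):
        # v: (batch, width, Nx, Ny, Nz)
        v_hat = torch.fft.rfftn(v, dim=(-3, -2, -1))
        out_hat = torch.zeros_like(v_hat)
        m = self.modes
        # (R_phi . F v)_{k,l} = sum_j R_{k,l,j} (F v)_{k,j} on the retained modes (Eq. 9)
        out_hat[:, :, :m, :m, :m] = torch.einsum(
            "bixyz,ioxyz->boxyz", v_hat[:, :, :m, :m, :m], self.weights)
        return torch.fft.irfftn(out_hat, s=v.shape[-3:], dim=(-3, -2, -1))
```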
### U-net enhanced Fourier neural operator
Wen et al.[45] pointed out that FNO models may suffer from lower training accuracy due to the regularization impact of the FNO architecture in the multiphase flow problems.[41] They introduced an improved version of the Fourier neural operator, named U-FNO, which combines the strengths of both FNO-based and CNN-based models. The detailed description of the U-FNO network architecture is given in Appendix B.
We propose a modified U-FNO architecture to better utilize the U-Net for learning small-scale flow structures, as shown in Fig. 20(b). The formulation of iterative network update
is given by
\[v_{t+1}(x):=\sigma\left(Wv_{t}(x)+\mathcal{F}^{-1}\left(R_{\phi}\cdot\left( \mathcal{F}v_{t}\right)\right)(x)+\mathcal{U}^{*}s_{t}(x)\right),\quad\forall x \in D. \tag{10}\]
\[s_{t}(x):=v_{t}(x)-\mathcal{F}^{-1}\left(R_{\phi}\cdot\left(\mathcal{F}v_{t} \right)\right)(x),\quad\forall x\in D. \tag{11}\]
Here, \(s_{t}(x)\in\mathbb{R}^{d_{v}}\) denotes the small-scale flow field which can be obtained by subtracting the large-scale flow field from the original flow field \(v(x)\). Then the U-Net \(\mathcal{U}^{*}\) is used to learn the small-scale flow field. Finally, the full-field information transformed by \(W\) is used to combine with FNO and U-Net, and then is connected with a nonlinear activation function \(\sigma\) to form a new U-FNO network.
Compared with the original U-FNO, our improved U-FNO performs better in 3D turbulence problems. Specifically, the minimum testing losses of the original U-FNO and our modified U-FNO are 0.220 and 0.198, respectively. Therefore, the U-FNO mentioned later in this article refers to the modified U-FNO in Fig. 20(b).
### The implicit Fourier neural operator
It has been demonstrated that with a large enough depth \(L\), the FNO can serve as a universal approximator capable of accurately representing any continuous operator.[77] However, increasing the number of Fourier layers makes the network challenging to train due to the vanishing-gradient problem.[78] To overcome this shortcoming, the idea of employing a shared hidden layer has been suggested.[79; 80; 81] You et al.[46] proposed the implicit Fourier neural operators (IFNOs) and demonstrated the technique of shallow-to-deep training. The detailed description of the IFNO network architecture is given in Appendix B.
It should be noted that the number of parameters of the hidden layer is independent of the number of layers, which distinguishes the IFNO from the FNO. It thus greatly reduces the total number of parameters of the model and the memory usage. Furthermore, this architecture allows for a simple implementation of the shallow-to-deep initialization method.
With increasing depth of the layer (\(1/L=\Delta t\to 0\)), Eq. B1 can be regarded as a discrete version of ordinary differential equations (ODEs).[46] Therefore, the network update can be reinterpreted as a discretization of a differential equation, and the optimal parameters obtained with \(L\) layers can be served as the initial guess for deeper networks. The shallow-to-deep technique involves interpolating optimal parameters at depth \(L\) and scaling them to
maintain the final time of the differential equation [46]. This technique can effectively improve the accuracy of the network and reduce the memory cost.
## IV The Implicit U-Net Enhanced Fourier Neural Operator (IU-FNO)
We introduce an implicit U-Net enhanced Fourier neural operator (IU-FNO) to integrate the advantages of U-FNO and IFNO. The architecture of IU-FNO is shown in Fig. 2. The velocity field from the first several time nodes is utilized as the input to the model, which is then converted into a high-dimensional representation via the lifting layer \(P\). Then the velocity field is iteratively updated through the implicit U-Fourier layers, and finally the output is obtained through the projection of \(Q\), which is the velocity field of the next time-node. The fundamental differences between the IU-FNO and FNO models are their network structures. FNO adopts a multilayer structure, where multiple Fourier layers with independent trainable parameters are connected in series. In contrast, the IU-FNO model utilizes a single Fourier layer with shared parameters and incorporates a U-net network to capture small-scale flow structures.
The formulation of iterative implicit U-Fourier layer update can be derived as
\[v(x,(l+1)\Delta t)=\mathcal{L}^{\text{IUFNO}}[v(x,l\Delta t)]:=v(x,l\Delta t)+ \Delta t\sigma\left(c(x,l\Delta t)\right),\quad\forall x\in D, \tag{12}\]
\[c(x,l\Delta t):=Wv(x,l\Delta t)+\mathcal{F}^{-1}\left(R_{\phi}\cdot\left( \mathcal{F}v(x,l\Delta t)\right)\right)(x)+\mathcal{U}^{*}s(x,l\Delta t), \quad\forall x\in D, \tag{13}\]
\[s(x,l\Delta t):=v(x,l\Delta t)-\mathcal{F}^{-1}\left(R_{\phi}\cdot\left( \mathcal{F}v(x,l\Delta t)\right)\right)(x),\quad\forall x\in D. \tag{14}\]
Here, \(c(x,l\Delta t)\in\mathbb{R}^{d_{v}}\) has the global scale information of the flow field by combining large-scale information learned by FFT and small-scale information \(s(x,l\Delta t)\) learned by the U-Net network \(\mathcal{U}^{*}\). \(s(x,l\Delta t)\in\mathbb{R}^{d_{v}}\) is obtained by subtracting the large-scale information from the complete field information \(v(x,l\Delta t)\), shown in Eq. 14. \(\mathcal{U}^{*}\) is a CNN-based network, which provides a symmetrical structure with both an encoder and a decoder. The encoder is responsible for extracting feature representations from the input data, while the decoder generates the output signals [29; 82]. Furthermore, U-Net incorporates skip connections, enabling direct transmission of feature maps from the encoder to the decoder, thereby preserving the intricate details within the fields. The U-Net architecture has a relatively small number of parameters, such that its combination with FNO has a minimal effect on the overall numbers
of parameters. Additionally, the implicit utilization of a shared hidden layer has significantly reduced the number of network parameters, which can make the network very deep.
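The implicit U-Fourier update of Eqs. 12-14 can be sketched as the loop below, which reuses a single set of layer parameters \(L\) times; the `SpectralConv3d`-like module, the pointwise map `w`, and the small 3D U-Net are assumed to be defined elsewhere, so this is an illustrative sketch rather than the exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IUFourierBlock(nn.Module):
    """One shared implicit U-Fourier layer applied L times (Eqs. 12-14); a minimal sketch.
    `spectral_conv` is a SpectralConv3d-like module, `w` a pointwise linear map
    (e.g. nn.Conv3d(width, width, 1)), and `unet` a small 3D U-Net for the small scales."""
    def __init__(self, spectral_conv, w, unet, L=40):
        super().__init__()
        self.spectral_conv, self.w, self.unet = spectral_conv, w, unet
        self.L = L
        self.dt = 1.0 / L                              # implicit step size, Delta t = 1/L

    def forward(self, v):
        for _ in range(self.L):                        # the same parameters are reused L times
            large = self.spectral_conv(v)              # large scales from truncated Fourier modes
            small = v - large                          # small-scale residual, Eq. (14)
            c = self.w(v) + large + self.unet(small)   # Eq. (13)
            v = v + self.dt * F.gelu(c)                # ODE-like residual update, Eq. (12)
        return v
```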
We compare the numbers of parameters of different FNO-based models as a function of the number of Fourier layers \(L\), with the number of Fourier modes set to 20, as shown in Tab. 1. The numbers of parameters of the FNO and U-FNO models are 331.8 million and 332.0 million, respectively, when the number of Fourier layers is set to four. However, as the number of layers increases, the number of parameters also increases, resulting in huge computational demands that can pose significant challenges for training. By using the implicit method of sharing hidden layers, the number
\begin{table}
\begin{tabular}{c c c c c} Model & \(L=4(T=4)\) & \(L=10\) & \(L=20\) & \(L=40\) \\ \hline FNO & 331.8M & 829.5M & 1659M & 3318M \\ U-FNO & 332.0M & 830.0M & 1660M & 3320M \\ IFNO & 82.97M & 82.97M & 82.97M & 82.97M \\ IU-FNO & 83.02M & 83.02M & 83.02M & 83.02M \\ \end{tabular}
\end{table}
Table 1: Comparison of the numbers of parameters (calculated in Millions) with Fourier layer \(L\) of different FNO-based models.
Figure 2: The architecture of implicit U-Net enhanced Fourier neural operator (IU-FNO).
of network parameters of the IFNO and IU-FNO models can be independent of the number of Fourier layers \(L\). Specifically, the number of model parameters of IU-FNO is almost the same as that of IFNO. Moreover, it shows a significant reduction of approximately 75% in the number of parameters compared with the FNO.
## V Numerical examples
In this section, the flow fields of the filtered direct numerical simulation (fDNS) of three types of turbulent flows are used to evaluate four FNO-based models, by comparing them against traditional LES with the dynamic Smagorinsky model.[83] The instantaneous snapshots of the fDNS data are employed for initializing the LES. The three types of turbulent flows include forced homogeneous isotropic turbulence (HIT), temporally evolving turbulent mixing layer, and decaying HIT.
In a _posteriori_ analysis, we perform the numerical simulations with ten different random initializations for each method in forced HIT, five different initializations in temporally evolving turbulent mixing layer, and five different initializations in decaying HIT respectively. We report the average value of the statistical results of different random initializations in the _posteriori_ analysis.
### Forced homogeneous isotropic turbulence
The direct numerical simulation of forced homogeneous isotropic turbulence is performed with the uniform grid resolutions of \(256^{3}\) in a cubic box of \((2\pi)^{3}\) with periodic boundary conditions.[84; 85] The governing equations are spatially discretized using the pseudo-spectral method and a second-order two-step Adams-Bashforth explicit scheme is utilized for time integration.[86; 87; 88] The aliasing error caused by nonlinear advection terms is eliminated by truncating the high wavenumbers of Fourier modes by the two-thirds rule.[86] The large-scale forcing is applied by fixing the velocity spectrum within the two lowest wavenumber shells in the velocity field to maintain the turbulence in the statistically steady state.[85] The kinematic viscosity is adopted as \(\nu=0.00625\), leading to the Taylor Reynolds number \(Re_{\lambda}\approx 100\). To ensure that the flow has reached a statistically steady state, we save the data after a long period (more than \(10\tau\), here \(\tau=L_{I}/u^{\rm rms}\approx 1.0\) is large-eddy turnover times).
The DNS data is filtered into large-scale flow fields at grid resolutions of \(32^{3}\) by the sharp spectral filter (described in Section II) with cutoff wavenumber \(k_{c}=10\). The time step is set to \(0.001\) and the snapshots of the numerical solution are taken every \(200\) steps as a time node. \(45\) distinct random fields are utilized as initial conditions, with \(600\) time nodes being saved for each group of computations. Therefore, the fDNS data with tensor size of \([45\times 600\times 32\times 32\times 32\times 3]\) can be obtained and serve as a training and testing dataset.[61] Specifically, the dataset we use to train the neural operator model consists of \(45\) groups, each group has \(600\) time nodes, and each time node denotes a filtered velocity field of \(32^{3}\) with three directions.
Denote the \(m\)-th time-node velocity field as \(U_{m}\) and the \(m\)-th evolution increment field as \(\Delta U_{m}=U_{m+1}-U_{m}\), which is the difference of the velocity field between two adjacent time nodes. The IU-FNO model takes the velocity fields of the previous five time nodes \([U_{1},U_{2},U_{3},U_{4},U_{5}]\) as input and produces the difference between the sixth and fifth velocity fields \([\Delta U_{5}=U_{6}-U_{5}]\) as output, as illustrated in Fig. 2.[61] Once the predicted evolution increment \(\Delta U_{5}^{\text{pre}}\) is obtained from the trained model, the predicted sixth velocity field can be calculated by \(U_{6}^{\text{pre}}=U_{5}+\Delta U_{5}^{\text{pre}}\). In the same way, \(U_{7}^{\text{pre}}\) can be predicted by \([U_{2},U_{3},U_{4},U_{5},U_{6}^{\text{pre}}]\) and so on. Therefore, 600 time nodes in each group can generate 595 input-output pairs (\([U_{1},U_{2},U_{3},U_{4},U_{5}\to U_{6}^{\text{pre}}]\)), and 45 groups can produce 26775 samples, of which we use 80% for training and the rest for testing.[61]
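The autoregressive rollout described above can be sketched as follows; the tensor layout of the model input is an assumption about the interface.

```python
import torch

@torch.no_grad()
def rollout(model, frames, num_steps):
    """Autoregressive prediction: the model maps the five most recent velocity fields to the
    increment of the next field, so that U_{m+1} = U_m + Delta U_m.

    frames : list of five tensors [U_1, ..., U_5], each of assumed shape (32, 32, 32, 3)
    """
    history = list(frames)
    predictions = []
    for _ in range(num_steps):
        inp = torch.stack(history[-5:], dim=0).unsqueeze(0)  # (1, 5, 32, 32, 32, 3)
        delta = model(inp).squeeze(0)                        # predicted increment Delta U
        history.append(history[-1] + delta)                  # U_{m+1} = U_m + Delta U_m
        predictions.append(history[-1])
    return predictions
```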
All four data-driven models in this study utilize the same number of the Fourier modes, specifically a value of \(20\), and the initial learning rate is set to \(10^{-3}\).[62] The Adam optimizer is used for optimization.[89] The GELU function is chosen as the activation function.[90] In order to ensure a fair comparison, the hyperparameters including learning rates and the decay rates are tuned for each method to minimize the training and testing loss, which is defined as
\[\textit{Loss}=\frac{\|u^{*}-u\|_{2}}{\|u\|_{2}},\text{ where }\|\mathbf{A}\|_{2}= \frac{1}{n}\sqrt{\sum_{k=1}^{n}|\mathbf{A_{k}}|^{2}}. \tag{15}\]
Here, \(u^{*}\) denotes the prediction of velocity fields and \(u\) is the ground truth.
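A direct PyTorch implementation of the relative error of Eq. 15, averaged over a batch, could look as follows (the batch-first tensor layout is an assumption; the \(1/n\) normalization cancels in the ratio).

```python
import torch

def relative_l2_loss(u_pred, u_true):
    """Relative L2 error of Eq. 15, averaged over the batch dimension."""
    diff = (u_pred - u_true).flatten(start_dim=1)
    ref = u_true.flatten(start_dim=1)
    return (diff.norm(dim=1) / ref.norm(dim=1)).mean()
```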
A comparison of the minimum training and testing loss with Fourier layer \(L\) of different FNO-based models in forced HIT is given in Tab. II. It is shown that incorporating the U-net module to facilitate learning at small-scale information can improve the effectiveness of training and testing. Besides, the training and testing loss of the implicit method using
the shared hidden layer (e.g., IFNO and IU-FNO) is larger than that of FNO and U-FNO at layer \(L=4\). However, as the number of hidden-layer loop iterations \(L\) increases, a significant reduction in the loss value is observed. The smallest testing loss value of \(0.155\) is obtained when \(L\) equals \(40\) for IU-FNO. Therefore, the number of layers \(L\) with the minimum loss in each model is chosen for the _posteriori_ study. For the FNO and U-FNO models, the Fourier layer number \(L\) is set to \(4\), whereas the IFNO and IU-FNO models have \(40\) implicit loop Fourier layers.
To avoid the over-fitting issue of the models, an additional independent ten groups of data from different initial fields are generated and utilized for the _posteriori_ evaluation. In the a _posteriori_ study, fDNS data is utilized as a baseline to evaluate various FNO-based models, including FNO, U-FNO, IFNO, and IU-FNO. The LES with DSM model is performed on the uniform grid with the grid resolution of \(32^{3}\) in a cubic box of \((2\pi)^{3}\) using the same numerical method as DNS. The LES is initialized with the instantaneous velocity field obtained from fDNS. The DMM and VGM models adopt the same initial field and computational approach as the DSM model.
#### iii.1.1 The a posteriori study
The normalized velocity spectra predicted by different FNO-based models and classical LES models at different time instants are shown in Fig. 3. Here, Kolmogorov length scale \(\eta\approx 0.023\) and the dissipation rate \(\varepsilon\approx 0.825\) in the forced HIT are obtained from DNS data. For the traditional LES model DSM, the prediction errors become larger as the wavenumber \(k\) increases. Specifically, the velocity spectrum at wavenumbers \(4\leq k\leq 9\) is significantly
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & \multicolumn{3}{c}{(Training Loss, Testing Loss)} \\ \hline Model & \(L=4(T=4)\) & \(L=10\) & \(L=20\) & \(L=40\) \\ \hline FNO & (0.225, 0.255) & N/A & N/A & N/A \\ U-FNO & (0.174, 0.198) & N/A & N/A & N/A \\ IFNO & (0.244, 0.261) & (0.216, 0.228) & (0.199, 0.214) & (0.185, 0.201) \\ IU-FNO & (0.192, 0.211) & (0.171, 0.190) & (0.140, 0.163) & **(0.143, 0.155)** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of minimum training and testing loss with Fourier layer \(L\) of different FNO-based models in forced homogeneous isotropic turbulence.
lower than fDNS results. In terms of velocity spectrum prediction, the DMM model exhibits significantly higher values than the fDNS results in the wavenumbers \(k\) range of 5 to 8. In contrast, the VGM model predicts much lower results for \(k\geq 4\). Overall, the normalized velocity spectrum predicted by the DSM model is more accurate than the DMM and VGM models. This study focuses on comparing the performance of FNO-based models, and we select the DSM model as the representative SGS model for comparison.
It can be seen from Fig. 3(a) that the normalized velocity spectrum predicted by data-driven models including FNO, U-FNO, and our proposed IU-FNO model are close to that of fDNS at time \(t/\tau\approx 4.0\). Here the large-eddy turnover time \(\tau\) is provided in Eq. (4). However, the normalized velocity spectrum is overestimated by IFNO for the high wavenumbers at time \(t/\tau\approx 4.0\), and the prediction error becomes larger as the time increases. For the FNO
Figure 3: The normalized velocity spectra of LES using different models in the forced HIT at different time instants: (a)\(t/\tau\approx 4.0\); (b) \(t/\tau\approx 6.0\); (c)\(t/\tau\approx 8.0\); (d)\(t/\tau\approx 50.0\). Here, each prediction time instant for FNO-based model is \(0.2\tau\).
and U-FNO models, large prediction errors have been identified at time \(t/\tau\approx 6.0\) and time \(t/\tau\approx 8.0\), respectively. It is worth noting that the prediction results of the FNO, IFNO, and U-FNO models are observed to be divergent with an increase in prediction time, and lose statistical significance. Therefore, we do not present these divergent results at the later time instants. On the contrary, IU-FNO always gives accurate predictions on the velocity spectrum in both short-term and long-term predictions for \(t/\tau\leq 50\). We also observe that IU-FNO is stable for \(t/\tau\geq 100\) (not shown here).
To further examine the IU-FNO model in predicting multi-scale properties of turbulence, we compute the longitudinal structure functions of the filtered velocity, which are defined by [91; 92]
\[\bar{S}_{n}(r)=\left\langle\left|\frac{\delta_{r}\bar{u}}{\bar{u}^{\rm rms}} \right|^{n}\right\rangle, \tag{16}\]
where \(n\) denotes the order of structure function and \(\delta_{r}\bar{u}=[\overline{\bf u}({\bf x}+{\bf r})-\overline{\bf u}({\bf x} )]\cdot\hat{\bf r}\) represents the longitudinal increment of the velocity at the separation \({\bf r}\). Here, \(\hat{\bf r}={\bf r}/|{\bf r}|\) is the unit vector.
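For a periodic field stored on a uniform grid, the longitudinal structure function of Eq. 16 can be estimated with a simple shift along one coordinate direction, as in the NumPy sketch below (the array layout is an assumption).

```python
import numpy as np

def longitudinal_structure_function(u, n, shift, u_rms):
    """S_n(r) of Eq. 16 for a separation of `shift` grid points along x, for a periodic
    velocity field u of shape (Nx, Ny, Nz, 3); only the component parallel to r enters."""
    du = np.roll(u[..., 0], -shift, axis=0) - u[..., 0]   # longitudinal increment
    return np.mean(np.abs(du / u_rms) ** n)
```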
Fig. 4 compares the second-order and fourth-order structure functions of the filtered ve
Figure 4: Second-order and fourth-order structure functions of the LES using different models in the forced HIT at different time instants: (a)Second-order, \(t/\tau\approx 4.0\); (b)Second-order, \(t/\tau\approx 8.0\); (c)Second-order, \(t/\tau\approx 50.0\); (d)Fourth-order, \(t/\tau\approx 4.0\); (e)Fourth-order, \(t/\tau\approx 8.0\); (f)Fourth-order, \(t/\tau\approx 50.0\). Here, each prediction time instant for FNO-based model is \(0.2\tau\).
locity for different models with fDNS data at \(t/\tau\approx 4.0\), \(t/\tau\approx 8.0\), and \(t/\tau\approx 50.0\). It can be seen that the DSM model overestimates the structure functions at a small distances while underestimates them at large distances compared to those of the fDNS data. Moreover, as the time increases, the deviations of the structure functions predicted by the FNO, IFNO, and U-FNO models from the fDNS data become more serious. In contrast, the IU-FNO model can always accurately predict the structure functions at both small and large separations.
Furthermore, we compare PDFs of the normalized velocity increments \(\delta_{r}\bar{u}/\bar{u}^{\rm rms}\) with distance \(r=\Delta\) at different time instants in Fig. 5. It can be seen that the PDFs of the normalized velocity increments predicted by FNO, and U-FNO are in a good agreement with the fDNS data at the beginning, but the predicted PDFs become wider than the fDNS data
Figure 5: The PDFs of the normalized velocity increments \(\delta_{r}\bar{u}/\bar{u}^{\rm rms}\) for LES using different models in the forced HIT at different time instants: (a)\(t/\tau\approx 4.0\); (b)\(t/\tau\approx 6.0\); (c)\(t/\tau\approx 8.0\); (d)\(t/\tau\approx 50.0\). Here, each prediction time instant for FNO-based model is \(0.2\tau\).
as the time increases. The IFNO gives the worst prediction of the PDF. The PDFs predicted by the DSM model are also slightly wider than the fDNS results. The IU-FNO model gives the most accurate prediction of the velocity increments, demonstrating excellent performance for both short-term and long-term predictions.
To further demonstrate the stability of different models, we display the evolution of the root-mean-square (rms) values of velocity and vorticity over time in Fig. 6. Here, we plot the results from the sixth time instant. It can be seen that as time increases, the IFNO model diverges first, followed by the FNO model and then the U-FNO model. Since the predicted results are used as the input for the next prediction, the prediction error continues to accumulate, which is one of the reasons why it is difficult for data-driven models to remain stable in long-term predictions. The traditional DSM model is stable due to its dissipative characteristics.[93; 94] Here, it is demonstrated that the proposed IU-FNO model can effectively and stably reconstruct the long-term large-scale dynamics of the forced homogeneous isotropic turbulence.
The PDFs of the normalized vorticity magnitude at different time instants are shown in Fig. 7. Here, the vorticity is normalized by the rms values of the vorticity calculated by the fDNS data. It is shown that the PDFs predicted by all FNO-based models and DSM model have a reasonable agreement with those of fDNS in the short-term prediction for \(t/\tau\leq 4\). However, as the time increases, the deviation between the PDFs of \(\bar{\omega}/\bar{\omega}_{\rm fDNS}^{\rm rms}\) predicted by FNO and U-FNO models and those of ground truth becomes more obvious. In contrast,
Figure 6: Temporal evolutions of the velocity rms value and vorticity rms value for LES using different models in the forced HIT. Here, each prediction time instant for FNO-based model is \(0.2\tau\).
the IU-FNO performs better than other FNO-based models in both short and long time predictions of vorticity statistics.
Fig. 8 illustrates the contours of the vorticity fields predicted by different models. The instantaneous snapshots are selected on the center of the y-z plane at five different time instants. The DSM model produces factitious small-scale structures of vorticity, which significantly differ from those of the fDNS data. In contrast, vorticity fields given by IU-FNO model are very close to the benchmark fDNS results in the short-term prediction \(t/\tau\approx 1.0\) and \(t/\tau\approx 2.0\). Although the long-term prediction results of IU-FNO are not fully consistent with fDNS, its prediction of large-scale and small-scale structures is qualitatively better than that of the DSM model. Therefore, the IU-FNO model also performs better in long-term prediction of instantaneous vorticity structures, as compared to those of the DSM
model.
We demonstrate isosurfaces of the normalized vorticity \(\bar{\omega}/\bar{\omega}_{\rm fDNS}^{\rm rms}=1.5\) colored by the altitude of z-direction in Fig. 9. The spatial structures predicted by DSM and IU-FNO are compared to those of the fDNS data at time \(t/\tau\approx 2.0\) and \(t/\tau\approx 50.0\). It is revealed that the DSM model shows a limited accuracy in predicting the spatial structure of small-scale vortices. On the contrary, the IU-FNO model can better predict the overall flow structures of the vorticity field.
Figure 8: Evolution of predicted vorticity fields (at the center of y-z plane) as a function of time for forced HIT. Here, each prediction time instant for FNO-based model is \(0.2\tau\).
\begin{table}
\begin{tabular}{c c c c c} Method & Number of parameters(Million) & GPU memory-usage(MB) & GPU-s & CPU-s \\ \hline DSM & N/A & N/A & N/A & 65.31 \\ FNO & 331.8 & 3,204 & 0.058 & 2.953 \\ U-FNO & 332.0 & 3,204 & 0.076 & 3.635 \\ IFNO & 82.97 & 1,284 & 0.577 & 28.43 \\ IU-FNO & 83.02 & 1,284 & 0.783 & 32.86 \\ \end{tabular}
\end{table}
Table 3: Computational efficiency of different approaches on forced HIT.
#### iv.2.2 Computational efficiency
Table 3 compares the computational cost of 10 prediction steps, the number of model parameters, and the GPU memory usage for different FNO-based models on predictions of forced HIT. We carry out the numerical simulations using PyTorch. The neural network models are trained and tested on an Nvidia Tesla V100 GPU, where the CPU type is Intel(R) Xeon(R) Gold 6240 CPU @2.60GHz. The DSM simulations are implemented on a computing cluster, where the type of CPU is Intel Xeon Gold 6148 with 16 cores each @2.40 GHz. Moreover, we conducted supplementary tests to measure the computational time for the FNO-based models using the same CPU as the DSM model. CPU\(\cdot\)s in Table 3 represents the time (in seconds) required by each CPU core. Here, the FNO-based models are implemented on a single CPU core, while the DSM model is performed on a CPU with 16 cores. So we
assume that the CPU\(\cdot\)s of the DSM model is 16 times the actual time it takes. It can be seen that the FNO-based models are significantly more efficient than the DSM model. This is mainly attributed to the fact that the DSM model requires iterative solutions with a very small time step, while the FNO-based model can directly predict the flow state over a large time interval. In comparison to the original FNO, the IU-FNO model requires more computation time due to its deeper network. However, it is still about two times faster than the traditional DSM model. Table. 3 also indicates that the computational efficiency of the FNO-based model can be further improved remarkably by using GPU. Moreover, the parameters and the GPU memory usage of the IU-FNO network model are reduced by about 75% and 40% compared with the original FNO model respectively.
#### iv.2.3 Generalization on higher Taylor Reynolds numbers
Here, we show that the IU-FNO model trained with low Taylor Reynolds number data can be directly used for the prediction of high Taylor Reynolds number cases without the need for additional training or modifications. The large-scale statistical features and flow structures in our simulations are observed to be insensitive to the Taylor Reynolds numbers.[61] We employ five sets of HIT data with different initial fields at a Taylor Reynolds number \(Re_{\lambda}\approx 250\). Owing to the large computational cost of DNS of turbulence at high Taylor Reynolds number, only 90 time nodes are computed for each initial field. Here, each time node for FNO-based model is \(\tau_{\text{h}}/3\), and the large-eddy turnover time is \(\tau_{\text{h}}\approx 0.6\). For consistency, the computing devices for LES of DSM and IU-FNO are the same as those in the case of low Reynolds number. When performing the simulation for these higher Taylor Reynolds number cases on the same single CPU mentioned above, the DSM model required about 1900s to complete the task, while IU-FNO still only costs 32.26s. Thus, IU-FNO is more computationally efficient than DSM model, highlighting its considerable speed advantage. Here, DSM is performed on a higher grid resolution of \(64^{3}\) to capture the small-scale fields at higher Taylor Reynolds number, but the result is still worse than the IU-FNO model.[61]
To assess the stability of the model at the high Reynolds number, we show the temporal evolutions of the rms values of velocity and vorticity in Fig. 10. It is revealed that the rms values of velocity and vorticity predicted by the IU-FNO and DSM models always perform
stable, whereas other models become unstable as time increases. The rms values of velocity predicted by both the IU-FNO and DSM models show a good agreement with the fDNS data.
Furthermore, Fig 11(a) and (b) illustrate the PDFs of the normalized vorticity \(\bar{\omega}/\bar{\omega}_{\text{fDNS}}^{\text{rms}}\). The IU-FNO model demonstrates a higher accuracy in predicting the peak and shape of PDFs than other models. The PDFs of the normalized characteristic strain rate of forced HIT at \(t/\tau\approx 8.3\) and \(t/\tau\approx 30\) are displayed in Fig. 11(c) and (d), respectively. Here, the characteristic strain rate is defined by \(|\bar{S}|=\sqrt{2\bar{S}_{ij}\bar{S}_{ij}}\) and normalized by the rms values of the corresponding fDNS data. It is shown that the IU-FNO model outperforms the DSM model by accurately recovering both the peak value and overall shape of the PDF.
### Temporally evolving turbulent mixing layer
In addition to benchmarking the performance of FNO-based models and classical SGS model on 3D forced HIT, we also evaluate their capabilities on a more complex simulation task: a 3D free-shear turbulent mixing layer. We focus on the comparison between the IU-FNO model and the conventional DSM model. The turbulent mixing layer provides a suitable example for studying the effects of non-uniform turbulent shear and mixing on subgrid-scale (SGS) models.[65]
The free-shear turbulent mixing layer is governed by the same Navier-Stokes equations
Figure 10: Temporal evolutions of the velocity RMS and vorticity RMS for LES using different models in the forced HIT at \(Re_{\lambda}\approx 250\).
(Eqs. 1 and 2) without the forcing term. The mixing layer is numerically simulated in a cuboid domain with lengths \(L_{1}\times L_{2}\times L_{3}=8\pi\times 8\pi\times 4\pi\) using a uniform grid resolution of \(N_{1}\times N_{2}\times N_{3}=256\times 256\times 128\). Here, \(x_{1}\in[-L_{1}/2,L_{1}/2]\), \(x_{2}\in[-L_{2}/2,L_{2}/2]\) and \(x_{3}\in[-L_{3}/2,L_{3}/2]\) denote the streamwise, normal, and spanwise directions, respectively. The initial streamwise velocity is given by [65; 95; 96]
\[u_{1}=\frac{\Delta U}{2}\left[\tanh\left(\frac{x_{2}}{2\delta_{\theta}^{0}} \right)-\tanh\left(\frac{x_{2}+L_{2}/2}{2\delta_{\theta}^{0}}\right)-\tanh \left(\frac{x_{2}-L_{2}/2}{2\delta_{\theta}^{0}}\right)\right]+\lambda_{1}, \tag{17}\]
where, \(-L_{2}/2\leqslant x_{2}\leqslant L_{2}/2\), \(\delta_{\theta}^{0}=0.08\) is the initial momentum thickness and \(\Delta U=U_{2}-U_{1}=2\) is the velocity difference between two equal and opposite free streams across the shear layer. [65; 83] The momentum thickness quantifies the range of turbulence region in
the mixing layer, which is given by [67; 68; 83]
\[\delta_{\theta}=\int_{-L_{2}/4}^{L_{2}/4}\left[\frac{1}{4}-\left(\frac{\langle \bar{u}_{1}\rangle}{\Delta U}\right)^{2}\right]dx_{2}. \tag{18}\]
The initial normal and spanwise velocities are given as \(u_{2}=\lambda_{2},u_{3}=\lambda_{3}\), respectively. Here, \(\lambda_{1},\lambda_{2},\lambda_{3}\sim\mathcal{N}\left(\mu,\sigma^{2}\right)\), i.e., \(\lambda_{1},\lambda_{2},\lambda_{3}\) satisfy the Gaussian random distribution. The expectation of the distribution is \(\mu=0\) and the variance of the distribution is \(\sigma^{2}=0.01\). The Reynolds number based on the momentum thickness \(Re_{\theta}\) is defined as \(Re_{\theta}=\Delta U\delta_{\theta}/v_{\infty}\). Here, the kinematic viscosity of shear layer is set to \(v_{\infty}=5\times 10^{-4}\), so the initial momentum thickness Reynolds number is \(Re_{\theta}^{0}=320\). [83] To mitigate the impact of the top and bottom boundaries on the central mixing layer, two numerical diffusion buffer zones are implemented to the vertical edges of the computational domain. [65; 83; 95] The periodic boundary conditions in all three directions are utilized and the pseudo-spectral method with the two-thirds dealiasing rule is employed for the spatial discretization. An explicit two-step Adam-Bashforth scheme is chosen as the time-advancing scheme.
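For reference, the momentum thickness of Eq. 18 can be evaluated from the mean streamwise velocity profile as in the sketch below; the array layout and the averaging over the homogeneous (streamwise and spanwise) directions are assumptions.

```python
import numpy as np

def momentum_thickness(u1, x2, delta_U=2.0):
    """Momentum thickness (Eq. 18) from the streamwise velocity u1 of shape (N1, N2, N3);
    x2 holds the normal coordinates of the N2 grid planes, assumed to span [-L2/2, L2/2)."""
    mean_u1 = u1.mean(axis=(0, 2))                    # average over homogeneous directions
    integrand = 0.25 - (mean_u1 / delta_U) ** 2
    L2 = x2[-1] - x2[0]
    mask = np.abs(x2) <= L2 / 4.0                     # integrate over the central region
    return np.trapz(integrand[mask], x2[mask])
```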
The DNS data are then explicitly filtered by the commonly-used Gaussian filter, which is defined by [63; 68]
\[G(\mathbf{r};\bar{\Delta})=\left(\frac{6}{\pi\bar{\Delta}^{2}}\right)^{1/2} \exp\left(-\frac{6\mathbf{r}^{2}}{\bar{\Delta}^{2}}\right). \tag{19}\]
Here, the filter scale \(\bar{\Delta}=8h_{\text{DNS}}\) is selected for the free-shear turbulent mixing layer, where \(h_{\text{DNS}}\) is the grid spacing of DNS. The filter-to-grid ratio FGR=\(\bar{\Delta}/h_{\text{LES}}=2\) is utilized, so the corresponding fDNS grid resolution is \(64\times 64\times 32\). [65; 69] The LES with the DSM model is performed on a uniform grid with a resolution of \(64\times 64\times 32\) in a cuboid domain with lengths \(L_{1}\times L_{2}\times L_{3}=8\pi\times 8\pi\times 4\pi\) using the same numerical method as DNS, and is initialized by the fDNS data.
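As an illustration of how such filtered data can be produced, a hedged sketch of applying the Gaussian filter of Eq. (19) in spectral space on a periodic grid is given below; the transfer function \(\exp(-k^{2}\bar{\Delta}^{2}/24)\) is the standard Fourier transform of that kernel, but the routine itself is not the authors' code and its names are placeholders.

```python
import numpy as np

def gaussian_filter_3d(u, box_lengths, delta):
    """Apply the Gaussian filter of Eq. (19) to a periodic 3D field.

    u: DNS field of shape (N1, N2, N3); box_lengths: (L1, L2, L3); delta: filter width.
    """
    shape = u.shape
    k = [2 * np.pi * np.fft.fftfreq(shape[i], d=box_lengths[i] / shape[i]) for i in range(3)]
    kx, ky, kz = np.meshgrid(*k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    u_hat = np.fft.fftn(u) * np.exp(-k2 * delta**2 / 24.0)   # Gaussian transfer function
    return np.real(np.fft.ifftn(u_hat))
```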
We perform numerical simulations for 145 sets of distinct initial fields and save the results for 90 temporal snapshots for each initial field. The time interval between snapshots is \(200dt\), where \(dt=0.002\) is the time step of DNS. Therefore, data of size [\(145\times 90\times 64\times 64\times 32\times 3\)] can be obtained as training and testing sets. Similar to Section V.1, 80% of the data, including 9860 input-output pairs, is used for training, and 20% of the data, including 2465 pairs, is used for testing.
To perform a _posteriori_ analysis, we produce five additional sets of data with different initial fields, each containing ninety time nodes. Here, ninety time nodes are equivalent to nine hundred time units (\(t/\tau_{\theta}=900\)) normalized by \(\tau_{\theta}=\delta_{\theta}^{0}/\Delta U=20dt\). The temporal evolutions of the momentum thickness \(\delta_{\theta}\) for LES using different models are shown in Fig. 12. The DSM model underestimates the momentum thickness at the beginning of the transition region while overestimating it in the linear growth region. In comparison to the baseline fDNS, both the DMM and VGM models exhibit lower predicted values at the beginning, which gradually increase and eventually exceed the actual fDNS results. Since the three classical LES models exhibit similar performance, we select the DSM model for comparison with the FNO-based models. The FNO model shows a good ability to capture the momentum thickness growth rate during the early stages of temporal development. However, its prediction becomes invalid after 500 time units (\(t/\tau_{\theta}\geqslant 500\)). In contrast, the predictions of the IU-FNO model always show a good agreement with the fDNS results in both the transition and linear growth regions.

Figure 12: Temporal evolutions of the momentum thickness \(\delta_{\theta}\) for LES using different models in the free-shear turbulent mixing layer.
Furthermore, the temporal evolutions of the streamwise turbulent kinetic energy \(E_{k1}=\frac{1}{2}\left(\sqrt{\langle u_{1}u_{1}\rangle}\right)^{2}\) and normal turbulent kinetic energy \(E_{k2}=\frac{1}{2}\left(\sqrt{\langle u_{2}u_{2}\rangle}\right)^{2}\) are displayed in Fig. 13. Here, \(\langle\cdot\rangle\) denotes a spatial average over the whole computational domain. The turbulent kinetic energy in various directions increases gradually during the shear layer development in fDNS. Both the streamwise and normal kinetic energy predicted by the DSM model are much larger than those of fDNS. FNO predicts reasonable results during the first 450 time units (\(t/\tau_{\theta}\leq 450\)), after which the results diverge quickly. By contrast, the IU-FNO model can well predict the kinetic energy in both the streamwise and normal directions during the whole development of the shear layer, and is the closest to the fDNS data.
We then compare the normalized velocity spectra of different models at time instants \(t/\tau_{\theta}\approx 500\) and \(t/\tau_{\theta}\approx 900\), as shown in Fig. 14. Here, the Kolmogorov length scale is \(\eta\approx 0.026\) at both \(t/\tau_{\theta}\approx 500\) and \(t/\tau_{\theta}\approx 900\). The dissipation rate is \(\varepsilon\approx 0.0023\) at \(t/\tau_{\theta}\approx 500\) and \(\varepsilon\approx 0.0021\) at \(t/\tau_{\theta}\approx 900\) in the free-shear turbulent mixing layer. It can be seen that the normalized velocity spectrum predicted by the DSM model is overestimated at low wavenumbers and underestimated at high wavenumbers when compared to that of the fDNS. The normalized velocity spectrum predicted by FNO is higher than the benchmark fDNS at high wavenumbers, and the deviation becomes larger as time increases. In comparison, the IU-FNO model can accurately predict an energy spectrum that agrees well with the fDNS data at various time instants.
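For reference, a compact sketch of the shell-summed kinetic energy spectrum used in such comparisons is given below, assuming a periodic cubic box of size \(2\pi\) and integer wavenumber shells; it is an illustration rather than the exact post-processing pipeline used here.

```python
import numpy as np

def energy_spectrum(u, v, w):
    """Shell-summed kinetic energy spectrum E(k) of a velocity field on an N^3 grid."""
    n = u.shape[0]
    uh, vh, wh = (np.fft.fftn(f) / n**3 for f in (u, v, w))
    e3d = 0.5 * (np.abs(uh)**2 + np.abs(vh)**2 + np.abs(wh)**2)   # spectral energy density
    k1d = np.fft.fftfreq(n, d=1.0 / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    kbins = np.arange(1, n // 2 + 1)
    spec = np.array([e3d[(kmag >= kb - 0.5) & (kmag < kb + 0.5)].sum() for kb in kbins])
    return kbins, spec
```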
Figure 13: Temporal evolutions of the streamwise turbulent kinetic energy \(E_{k1}\) and normal turbulent kinetic energy \(E_{k2}\) for LES using different models in the free-shear turbulent mixing layer.
Figure 15 illustrates the PDFs of the velocity increment in the spanwise direction. Here, the spanwise velocity increment is given by \(\delta_{\tau_{3}}\bar{u}=[\overline{\mathbf{u}}(\mathbf{x}+\mathbf{r})-\overline{\mathbf{u}}(\mathbf{x})]\cdot\hat{\mathbf{e}_{3}}\), where \(\hat{\mathbf{e}_{3}}\) denotes the unit vector in the spanwise direction and the velocity increments are normalized by the rms values of velocity \(\bar{u}^{\mathrm{rms}}\). The sharp peak of the PDF is due to the non-turbulent regions where the velocity increment is nearly zero in the spanwise direction.[65] The regions with non-zero velocity increments are predominantly governed by turbulence. It is shown that the IU-FNO model demonstrates better performance compared to both the FNO and DSM models at different time instants.
Figure 14: The normalized velocity spectra for LES using different models in the free-shear turbulent mixing layer at different time instants: (a)\(t/\tau_{\theta}\approx 500\) (b)\(t/\tau_{\theta}\approx 900\).
Figure 15: The PDFs of the spanwise velocity increment for LES using different models in the free-shear turbulent mixing layer at different time instants: (a)\(t/\tau_{\theta}\approx 500\) (b)\(t/\tau_{\theta}\approx 900\).
Finally, we compare the vortex structures predicted by the DSM model and IU-FNO model with fDNS data. The Q-criterion has been widely used for visualizing vortex structures in turbulent flows and is defined by [98; 99; 100]
\[Q=\frac{1}{2}\left(\bar{\Omega}_{ij}\bar{\Omega}_{ij}-\bar{S}_{ij}\bar{S}_{ij} \right), \tag{20}\]
where \(\bar{\Omega}_{ij}=\left(\partial\bar{u}_{i}/\partial x_{j}-\partial\bar{u}_{j}/\partial x_{i}\right)/2\) is the filtered rotation-rate tensor. Fig. 16 displays the instantaneous isosurfaces of \(Q=0.2\) at \(t/\tau_{\theta}\approx 200\) and \(t/\tau_{\theta}\approx 900\) colored by the streamwise velocity. It is observed that the DSM model predicts relatively larger vortex structures compared to the fDNS result. On the contrary, the IU-FNO model demonstrates a closer agreement with fDNS results especially in terms of reconstructing the small vortex structures, highlighting its advantage in improving the accuracy of LES.

Figure 16: The iso-surface of the Q-criterion at \(Q=0.2\) colored by the streamwise velocity at \(t/\tau_{\theta}\approx 200\) and \(t/\tau_{\theta}\approx 900\) in the free-shear turbulent mixing layer.
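A short sketch of how the Q-criterion of Eq. (20) can be evaluated from a filtered velocity field is shown below, using spectral derivatives on a periodic grid; it is an illustration under those assumptions, not the visualization pipeline used for Fig. 16.

```python
import numpy as np

def q_criterion(u, box_lengths):
    """u: filtered velocity of shape (3, N1, N2, N3); box_lengths: (L1, L2, L3)."""
    shape = u.shape[1:]
    k = [2 * np.pi * np.fft.fftfreq(shape[i], d=box_lengths[i] / shape[i]) for i in range(3)]
    kk = np.meshgrid(*k, indexing="ij")
    grad = np.empty((3, 3) + shape)                    # grad[i, j] = d u_i / d x_j
    for i in range(3):
        u_hat = np.fft.fftn(u[i])
        for j in range(3):
            grad[i, j] = np.real(np.fft.ifftn(1j * kk[j] * u_hat))
    S = 0.5 * (grad + grad.transpose(1, 0, 2, 3, 4))   # filtered strain-rate tensor
    W = 0.5 * (grad - grad.transpose(1, 0, 2, 3, 4))   # filtered rotation-rate tensor
    return 0.5 * (np.einsum("ij...,ij...->...", W, W) - np.einsum("ij...,ij...->...", S, S))
```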
### Decaying homogeneous isotropic turbulence
We assess the extrapolation ability of different FNO-based models in LES of decaying homogeneous isotropic turbulence (HIT). The numerical simulation of decaying HIT is conducted in a cubic box of \((2\pi)^{3}\) with periodic boundary conditions, and the numerical method is consistent with that of the forced HIT. The governing equations are spatially discretized using the pseudo-spectral method, incorporating the two-thirds dealiasing rule, at a uniform grid resolution of \(N=256^{3}\). The temporal discretization scheme employs the explicit second-order two-step Adams-Bashforth method. We use the statistically steady flow field of the forced HIT as the initial field for the simulation of decaying turbulence. DNS of decaying turbulence is performed over about six large-eddy turnover times (\(\tau=L_{I}/u^{\rm rms}\)). In order to assess the extrapolation ability of different FNO-based models, only the flow fields at the first two large-eddy turnover times \(t/\tau\leq 2\) are used for training, and the flow fields at \(t/\tau>2\) are in the unseen flow regime where the magnitude of velocity fluctuation is different from the training data.
The kinematic viscosity is set to \(\nu=0.00625\) and the initial Taylor Reynolds number is \(Re_{\lambda}\approx 100\). The sharp spectral filter (mentioned in Section II) with cutoff wavenumber \(k_{c}=10\) is used to filter the DNS data. Here, we compute 595 different sets of initial fields and store a snapshot every \(0.1\tau\). Finally, fDNS data of size \([595\times 20\times 32\times 32\times 32\times 3]\) can be obtained, and 80% of the data is used for training and 20% for testing.
After training, five more groups of data with different initial fields are generated to perform a _posteriori_ analysis. Figure 17 compares the temporal evolutions of the turbulent kinetic energy \(E(t)=\int_{0}^{\infty}E(k)dk=\frac{1}{2}\left(u^{\rm rms}\right)^{2}\) and the resolved dissipation rate \(\bar{\varepsilon}\) of the DSM, FNO, and IU-FNO models with the fDNS data. Here, the dissipation rate is defined by \(\bar{\varepsilon}=2\nu\left<\bar{S}_{ij}\bar{S}_{ij}\right>\). It can be seen that the kinetic energy gradually decays from the initial state over time, and all models can predict the turbulent kinetic energy well over a short period. However, the dissipation rate predicted by IU-FNO is more accurate than those of the DSM and FNO models at \(t/\tau\geqslant 4\).
Further, we evaluate the normalized velocity spectra for different models at two different
time instants \(t/\tau\approx 4.0\) and \(t/\tau\approx 6.0\) in Fig. 18. Here, the Kolmogorov length scale is \(\eta\approx 0.033\) at \(t/\tau\approx 4.0\), and \(\eta\approx 0.041\) at \(t/\tau\approx 6.0\). The dissipation rate is \(\varepsilon\approx 0.214\) at \(t/\tau\approx 4.0\), and \(\varepsilon\approx 0.085\) at \(t/\tau\approx 6.0\) in the decaying HIT. The kinetic energy at all wavenumbers decreases with time. The DSM model overestimates the kinetic energy at low wavenumbers. The FNO model overpredicts the energy spectrum at all wavenumbers. In contrast, the normalized velocity spectrum predicted by the IU-FNO model is in good agreement with the fDNS data.
Figure 17: Temporal evolutions of the turbulent kinetic energy \(E(t)\) and the average dissipation rate \(\bar{\varepsilon}\) for different models in decaying HIT.

Figure 18: The normalized velocity spectra for different models in decaying HIT at \(t/\tau\approx 4.0\) and \(t/\tau\approx 6.0\).

Finally, the PDFs of the normalized vorticity at the dimensionless times \(t/\tau\approx 4.0\) and \(t/\tau\approx 6.0\) are depicted in Fig. 19. The rms values of the vorticity calculated by the fDNS data are used for normalization. Both the DSM model and the FNO model give the wrong prediction of the peak location. On the contrary, the IU-FNO model slightly outperforms these models at both \(t/\tau\approx 4.0\) and \(t/\tau\approx 6.0\), and provides a reasonably accurate prediction for both the locations and peaks of the PDFs of the vorticity.
## VI Discussion and Future Work
Simulations of three-dimensional (3D) nonlinear partial differential equations (PDEs) are of great importance in engineering applications. While data-driven approaches have been widely successful in solving one-dimensional (1D) and two-dimensional (2D) PDEs, works on data-driven fast simulations of 3D PDEs are relatively rare. The need for significant model complexity and a large number of parameters to accurately model the nonlinear interactions in 3D PDEs (including turbulent flows) is a major challenge. In such situations, training and implementing neural networks may not be as efficient as traditional numerical methods.
Recently, the FNO has proven to be a highly effective surrogate model for solving PDEs, indicating its significant potential for addressing 3D nonlinear problems.[41; 47; 101] The utilization of FNO in 3D turbulence has attracted increasing attention. Li et al. utilized FNO for LES of 3D forced HIT and achieved faster and more accurate predictions compared
to the classical LES with the dynamic Smagorinsky model and dynamic mixed model.[61] Peng et al. proposed a linear attention coupled Fourier neural operator (LAFNO) to further improve the model accuracy in simulating 3D forced HIT and free-shear turbulence.[62] However, model errors accumulate over time, posing a challenge to maintaining high accuracy in long-term predictions. In addition, the memory size imposes a limitation on the number of layers in the original form of FNO.
In this work, we investigate the effectiveness of implicit layers that share Fourier layers, which enables the neural network to be expanded to greater depths and thereby enhances its capability to approximate complex functions. Simultaneously, we incorporate a U-Net network to complement small-scale information, which further enhances the stability of the model. The results demonstrate that the proposed IU-FNO model outperforms the original FNO model in terms of accuracy and stability in predicting 3D turbulent flows, including forced homogeneous isotropic turbulence, the free-shear turbulent mixing layer, and decaying homogeneous isotropic turbulence. Moreover, IU-FNO demonstrates long-term stable predictions, which has not been achieved by previous versions of FNO. In comparison with the original FNO, IU-FNO reduces the network parameters by approximately 75%, and the number of parameters is independent of the number of network layers. Meanwhile, the IU-FNO model also demonstrates improved generalizability to higher Reynolds numbers, and can predict the unseen flow regime in decaying turbulence. Therefore, the IU-FNO approach serves as a valuable guide for modeling large-scale dynamics of more complex turbulence. Since we are using a purely data-driven approach without explicitly embedding any physical knowledge, the predicted results might not strictly satisfy the N-S equations. However, the IU-FNO model is capable of approximating the N-S equations from data.
One limitation of the proposed model is that it has only been tested on simple flows, whereas the flows in engineering applications are often much more complex. While the IU-FNO model is effective in predicting flow types with uniform grids and periodic boundary conditions, it requires further improvement to be applicable to non-uniform grids and non-periodic boundary conditions. Another disadvantage of the proposed model is its high dependence on data. As a purely data-driven model, it requires a substantial amount of data for training. Recently, more sophisticated improvements of the FNO framework have been proposed to simulate complex flows, including the adaptive Fourier neural operator (AFNO)[102; 52] and the physics-informed neural operator (PINO).[103; 51] Li et al. introduced geo-FNO,
a method that can handle PDEs on irregular geometries by mapping the input physical domain into a uniform latent space using a deformation function. The FNO model with the FFT is then applied in the latent space to solve the PDEs.[47] The ability to handle arbitrary geometries is essential for solving engineering flows, which often involve complex geometries with irregular boundaries. Most advanced FNO variants have only been tested on 2D problems, whereas most flows in engineering applications are 3D. In future work, the geo-FNO can be extended and integrated with the proposed IU-FNO models for fast simulations of 3D complex turbulence.
## VII Conclusion
In this work, we proposed an implicit U-Net enhanced Fourier neural operator (IU-FNO) model to predict long-term large-scale dynamics of three-dimensional turbulence. The IU-FNO is verified in the large-eddy simulations of three types of 3D turbulence, including forced homogeneous isotropic turbulence, free-shear turbulent mixing layer, and decaying homogeneous isotropic turbulence.
Numerical simulations demonstrate that: 1) The IU-FNO model shows a superior capability to reconstruct a variety of statistics of the velocity and vorticity fields, as well as the instantaneous spatial structures of vorticity, compared to other FNO-based models and the classical DSM model. 2) The IU-FNO model is capable of accurate predictions for the long-time dynamics of 3D turbulence, which cannot be achieved by previous forms of FNO. 3) The IU-FNO model employs implicitly looped Fourier layers to reduce the number of network parameters by approximately 75% compared to the original FNO. 4) The IU-FNO model is much more efficient than traditional LES with the DSM model, shows an enhanced capacity for generalization to high Reynolds numbers, and can make predictions on the unseen flow regime of decaying turbulence. Therefore, the proposed IU-FNO approach has great potential for developing advanced neural network models to solve 3D nonlinear problems in engineering applications.
###### Acknowledgements.
This work was supported by the National Natural Science Foundation of China (NSFC Grant Nos. 91952104, 92052301, 12172161 and 91752201), by the NSFC Basic Science Center Program (grant no. 11988102), by the Shenzhen Science and Technology Program (Grants No.KQTD20180411143441009), by Key Special Project for Introduced Talents Team of Southern Marine Science and Engineering Guangdong Laboratory (Guangzhou) (Grant No. GML2019ZD0103), and by Department of Science and Technology of Guangdong Province (No.2020B1212030001). This work was also supported by Center for Computational Science and Engineering of Southern University of Science and Technology.
## Author Declarations
### Conflict of Interest
The authors have no conflicts to disclose.
## Data Availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
## Appendix A Conventional subgrid-scale model for LES
In this appendix, we mainly introduce three classical LES models, including dynamic Smagorinsky model (DSM), velocity gradient model (VGM) and dynamic mixed model (DMM). One of the most widely used functional models is the Smagorinsky model, given by [93; 104; 105]
\[\tau_{ij}-\frac{\delta_{ij}}{3}\tau_{kk}=-2C_{s}^{2}\Delta^{2}|\bar{S}|\bar{S}_{ij}, \tag{A1}\]
where \(\bar{S}_{ij}=\frac{1}{2}\left(\partial\bar{u}_{i}/\partial x_{j}+\partial\bar {u}_{j}/\partial x_{i}\right)\) represents the strain rate of the filtered velocity, and \(|\bar{S}|=\left(2\bar{S}_{ij}\bar{S}_{ij}\right)^{1/2}\) stands for the characteristic filtered strain rate. \(\delta_{ij}\) is the Kronecker delta operator, and \(\Delta\) denotes the filter width.
The coefficient \(C_{s}^{2}\) can be obtained either through theoretical analysis or empirical calibration.[104] The widely adopted strategy involves implementing the least-squares dynamic methodology by utilizing the Germano identity, resulting in the dynamic Smagorinsky model (DSM) with the coefficient given by[106; 107]
\[C_{s}^{2}=\frac{\left\langle\mathcal{L}_{ij}\mathcal{M}_{ij}\right\rangle}{\left\langle\mathcal{M}_{kl}\mathcal{M}_{kl}\right\rangle}. \tag{A2}\]
Here, the Leonard stress \(\mathcal{L}_{ij}=\widetilde{\bar{u}_{i}\bar{u}_{j}}-\tilde{\bar{u}}_{i}\tilde{\bar{u}}_{j}\), and \(\mathcal{M}_{ij}=\tilde{\alpha}_{ij}-\beta_{ij}\) with \(\alpha_{ij}=2\Delta^{2}|\bar{S}|\bar{S}_{ij}\) and \(\beta_{ij}=2\tilde{\Delta}^{2}|\bar{\tilde{S}}|\bar{\tilde{S}}_{ij}\). Specifically, an overbar denotes the filtering at scale \(\Delta\), and a tilde represents the test filtering operation at the double-filtering scale \(\tilde{\Delta}=2\Delta\).
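A schematic NumPy sketch of this least-squares dynamic procedure is given below; it assumes a user-supplied `test_filter` callable applying the test filter at scale \(2\Delta\) (such as the spectral or Gaussian filters used above), the names are placeholders, and the averaging is taken over the whole domain for simplicity. It is an illustration, not the solver's implementation.

```python
import numpy as np

def dsm_coefficient(u, S, S_mag, delta, test_filter):
    """Least-squares dynamic Smagorinsky coefficient C_s^2.

    u: resolved velocity (3, ...); S: strain rate (3, 3, ...); S_mag: |S| field;
    delta: filter width; test_filter: callable applying the test filter at 2*delta.
    """
    uf = np.array([test_filter(u[i]) for i in range(3)])
    Sf = np.array([[test_filter(S[i, j]) for j in range(3)] for i in range(3)])
    Sf_mag = np.sqrt(2.0 * np.einsum("ij...,ij...->...", Sf, Sf))
    # Leonard stress L_ij and model-term difference M_ij = test_filter(alpha_ij) - beta_ij
    L = np.array([[test_filter(u[i] * u[j]) - uf[i] * uf[j] for j in range(3)] for i in range(3)])
    alpha = 2.0 * delta**2 * S_mag * S
    beta = 2.0 * (2.0 * delta) ** 2 * Sf_mag * Sf
    M = np.array([[test_filter(alpha[i, j]) for j in range(3)] for i in range(3)]) - beta
    # <.> taken as a global average here for simplicity
    return np.mean(np.einsum("ij...,ij...->...", L, M)) / np.mean(np.einsum("ij...,ij...->...", M, M))
```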
A representative structural model is the velocity gradient model (VGM) based on the truncated Taylor series expansions, given by[108]
\[\tau_{ij}=\frac{\bar{\Delta}^{2}}{12}\frac{\partial\bar{u}_{i}}{\partial x_{k}}\frac{\partial\bar{u}_{j}}{\partial x_{k}}. \tag{A3}\]
The dynamic mixed model (DMM) combines the scale-similarity model with the dissipative Smagorinsky term, and is given by[109; 110]
\[\tau_{ij}=C_{1}\bar{\Delta}^{2}|\bar{S}|\bar{S}_{ij}+C_{2}\left(\bar{\bar{u}_{i}\bar{\bar{u}}_{j}}-\bar{\bar{u}}_{i}\bar{\bar{u}}_{j}\right). \tag{A4}\]
Here, an overbar denotes the filtering at scale \(\Delta\), and a tilde represents the test filtering operation at the double-filtering scale \(\tilde{\Delta}=2\Delta\). The spectral filter is employed for the double filtering in HIT, and a Gaussian filter is utilized in the free-shear turbulent mixing layer. Similar to the DSM model, the model coefficients \(C_{1}\) and \(C_{2}\) of the DMM model are dynamically determined using the Germano identity through the least-squares algorithm. \(C_{1}\) and \(C_{2}\) are expressed as[111; 85]
\[C_{1}=\frac{\left\langle N_{ij}^{2}\right\rangle\left\langle L_{ij}M_{ij}\right\rangle-\left\langle M_{ij}N_{ij}\right\rangle\left\langle L_{ij}N_{ij}\right\rangle}{\left\langle N_{ij}^{2}\right\rangle\left\langle M_{ij}^{2}\right\rangle-\left\langle M_{ij}N_{ij}\right\rangle^{2}}, \tag{A5}\]
\[C_{2}=\frac{\left\langle M_{ij}^{2}\right\rangle\left\langle L_{ij}N_{ij}\right\rangle-\left\langle M_{ij}N_{ij}\right\rangle\left\langle L_{ij}M_{ij}\right\rangle}{\left\langle N_{ij}^{2}\right\rangle\left\langle M_{ij}^{2}\right\rangle-\left\langle M_{ij}N_{ij}\right\rangle^{2}}, \tag{A6}\]
where \(M_{ij}=H_{1,ij}-\tilde{h}_{1,ij}\), and \(N_{ij}=H_{2,ij}-\tilde{h}_{2,ij}\). Here, \(h_{1,ij}=-2\bar{\Delta}^{2}|\bar{S}|\bar{S}_{ij}\), \(h_{2,ij}=\widetilde{\hat{u}_{i}\bar{\hat{u}}_{j}}-\tilde{\hat{u}}_{i}\tilde{\hat{u}}_{j}\), \(H_{1,ij}=-2\tilde{\Delta}^{2}|\bar{\tilde{S}}|\bar{\tilde{S}}_{ij}\), and \(H_{2,ij}=\widetilde{\hat{\bar{u}}_{i}\hat{\bar{u}}_{j}}-\hat{\hat{\bar{u}}}_{i}\hat{\bar{\hat{u}}}_{j}\). The hat denotes the filter at scale \(\hat{\Delta}=4\Delta\).
## Appendix B The Details of related Fourier neural operator methods
In this appendix, we introduce the details of related Fourier neural operators, including U-FNO and IFNO.
Fig. 20(a) illustrates the architecture of the U-FNO model. The U-FNO employs an iterative architecture: \(v_{l_{0}}\mapsto v_{l_{1}}\mapsto\ldots\mapsto v_{l_{T}}\mapsto v_{m_{0}}\ldots\mapsto v_{m_{M}}\), where \(v_{l_{j}}\) for \(j=0,1,\ldots,T-1\) and \(v_{m_{k}}\) for \(k=0,1,\ldots,M-1\) are sequences of functions taking values in \(\mathbb{R}^{d_{v}}\). [45] Therefore, in the local projection transformation, the operation \(u(x)=Q\left(v_{m_{M}}(x)\right)\) is performed on \(v_{m_{M}}(x)\). Specifically, \(v_{l_{j}}\) denotes the \(j\)-th Fourier layer, which is the same as in the original FNO architecture. \(v_{m_{M}}\) represents the \(M\)-th U-Fourier layer, which is given as
\[v_{m_{k+1}}(x):=\sigma\left(\mathcal{F}^{-1}\left(R_{\phi}\cdot\left(\mathcal{F}v_{m_{k}}(x)\right)\right)(x)+\left(\mathcal{U}^{*}v_{m_{k}}\right)(x)+W\left(v_{m_{k}}(x)\right)\right),\quad\forall x\in D. \tag{B1}\]
Here, \(\mathcal{F}\), \(\mathcal{F}^{-1}\), \(R_{\phi}\) and \(W\) have the same meaning as defined in Section III.1. \(\mathcal{U}^{*}\) denotes a U-Net CNN operator. The architecture of U-FNO differs from the original Fourier layer in FNO by incorporating a U-Net path into each U-Fourier layer. The purpose of the U-Net is to perform local convolutions that enhance the representation capability of the U-FNO, particularly for small-scale flow structures.
Figure 20: The architectures of U-Net enhanced Fourier neural operators (U-FNO). (a) The architecture of U-FNO proposed by Wen et al. [45] (b) The architecture of the modified U-FNO proposed by us.

The architecture of IFNO is shown in Fig. 21, which can greatly reduce the number of trainable parameters and memory cost, and overcome the vanishing gradient problem of training networks with deep layers. The iterative network update of IFNO is given as [46]
\[v(x,(l+1)\Delta t) =\mathcal{L}^{IFNO}[v(x,l\Delta t)]\] \[:=v(x,l\Delta t)+\Delta t\sigma\left(Wv(x,l\Delta t)+\mathcal{F}^{-1}\left(R_{\phi}\cdot\left(\mathcal{F}v(x,l\Delta t)\right)\right)(x)\right),\quad\forall x\in D. \tag{B2}\]
The IFNO model employs a parameter-sharing strategy and continuously optimizes the network parameters through iterative loops to enhance its accuracy. This approach is effective in improving the performance of the network and enables it to handle complex data and nonlinear problems more efficiently.
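A condensed PyTorch-style sketch of this implicitly iterated, parameter-shared Fourier layer is given below. It is an illustration only: for brevity it keeps a single low-frequency corner of modes and omits the U-Net path of the IU-FNO, and the class and argument names are placeholders rather than the actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImplicitFourierLayer3d(nn.Module):
    def __init__(self, width, modes, n_loops, dt=1.0):
        super().__init__()
        self.modes, self.n_loops, self.dt = modes, n_loops, dt
        scale = 1.0 / (width * width)
        # spectral weights R and pointwise weights W are shared by every loop iteration,
        # so the parameter count does not grow with the effective depth
        self.R = nn.Parameter(scale * torch.rand(width, width, modes, modes, modes, dtype=torch.cfloat))
        self.W = nn.Conv3d(width, width, kernel_size=1)

    def spectral_conv(self, v):
        v_hat = torch.fft.rfftn(v, dim=(-3, -2, -1))
        out = torch.zeros_like(v_hat)
        m = self.modes
        out[..., :m, :m, :m] = torch.einsum("bixyz,ioxyz->boxyz", v_hat[..., :m, :m, :m], self.R)
        return torch.fft.irfftn(out, s=v.shape[-3:], dim=(-3, -2, -1))

    def forward(self, v):
        # implicit update: v <- v + dt * sigma(W v + F^{-1}(R . F v)), repeated n_loops times
        for _ in range(self.n_loops):
            v = v + self.dt * F.gelu(self.W(v) + self.spectral_conv(v))
        return v
```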
|
2303.03681 | Towards practical and massively parallel quantum computing emulation for
quantum chemistry | Quantum computing is moving beyond its early stage and seeking for commercial
applications in chemical and biomedical sciences. In the current noisy
intermediate-scale quantum computing era, quantum resource is too scarce to
support these explorations. Therefore, it is valuable to emulate quantum
computing on classical computers for developing quantum algorithms and
validating quantum hardware. However, existing simulators mostly suffer from
the memory bottleneck so developing the approaches for large-scale quantum
chemistry calculations remains challenging. Here we demonstrate a
high-performance and massively parallel variational quantum eigensolver (VQE)
simulator based on matrix product states, combined with embedding theory for
solving large-scale quantum computing emulation for quantum chemistry on HPC
platforms. We apply this method to study the torsional barrier of ethane and
the quantification of the protein-ligand interactions. Our largest simulation
reaches $1000$ qubits, and a performance of $216.9$ PFLOPS is achieved on a new
Sunway supercomputer, which sets the state-of-the-art for quantum computing
emulation for quantum chemistry | Honghui Shang, Yi Fan, Li Shen, Chu Guo, Jie Liu, Xiaohui Duan, Fang Li, Zhenyu Li | 2023-03-07T06:44:18Z | http://arxiv.org/abs/2303.03681v1 | # Towards practical and massively parallel quantum computing emulation for quantum chemistry
###### Abstract
Quantum computing is moving beyond its early stage and seeking commercial applications in chemical and biomedical sciences. In the current noisy intermediate-scale quantum computing era, quantum resources are too scarce to support these explorations. Therefore, it is valuable to emulate quantum computing on classical computers for developing quantum algorithms and validating quantum hardware. However, existing simulators mostly suffer from the memory bottleneck, so developing approaches for large-scale quantum chemistry calculations remains challenging. Here we demonstrate a high-performance and massively parallel variational quantum eigensolver (VQE) simulator based on matrix product states, combined with embedding theory, for solving large-scale quantum computing emulation for quantum chemistry on HPC platforms. We apply this method to study the torsional barrier of ethane and the quantification of protein-ligand interactions. Our largest simulation reaches \(1000\) qubits, and a performance of \(216.9\) PFLOPS is achieved on a new Sunway supercomputer, which sets the state of the art for quantum computing emulation for quantum chemistry.
Quantum Computational Chemistry, Quantum Computing, Variational Quantum Eigensolver, Matrix Product State, High Performance Computing
## I Introduction
Computation is revolutionizing chemistry and materials science. Computing the electronic structure by approximately solving the Schrödinger equation enables us to explore chemicals and materials at the atomic scale. However, the pursuit of chemical accuracy in numerical simulations of quantum many-body systems is a longstanding problem since the computational complexity grows exponentially with the system size. For example, even with the help of supercomputers, the exact solution of the Schrödinger equation is limited to a complete active space problem of (\(24\) electrons, \(24\) orbitals), which corresponds to a diagonalization problem of size \(7.3\) trillion [1]. Richard Feynman suggested quantum computing as a potential solution for simulating quantum systems, as he remarked 'if you want to make a simulation of nature, you'd better make it quantum mechanical' [2].
Significant advances in quantum computing technologies over the past two decades are turning Feynman's vision into reality. As a milestone, quantum advantage in the random circuit sampling (RCS) problem has been demonstrated on noisy intermediate-scale quantum (NISQ) computers [3, 4, 5]. Toward practical applications, the ground-state energies of diamond have been estimated with the quantum Monte Carlo (QMC) method using 16 qubits and 65 circuit depths, which is the largest quantum chemistry calculation using a quantum computer [6]. However, the quantum resource used in this experiment is far away from that required to realize quantum advantage in quantum chemistry, which is expected to appear at around \(38\) to \(68\) qubits (under the assumption of error-corrected qubits) [7]. Besides, the variational quantum eigensolver (VQE) is an appealing candidate for solving quantum chemistry problems on NISQ devices [8], which has great flexibility in choosing quantum circuit ansatzes and mitigating errors [9]. However, compared to the RCS and QMC experiments, VQE simulations with tens of qubits would be significantly more challenging for quantum hardware in that: 1) the circuit depth scales quickly up to \(10^{3}\) or even more as the number of qubits increases [10]; and 2) the nonlinear optimization with a large number of parameters remarkably increases the computational cost. As such, the largest VQE experiment performed on a quantum computer has only used 12 qubits [11], and current VQE emulations with classical simulators are also mostly limited to relatively small molecules with 10 to 20 qubits, as shown in Table I for typical simulations of chemical and material systems using classical simulators.
To explore practical applications of quantum computing in quantum chemistry, one can resort to the development of quantum technologies, e.g. advanced quantum algorithms in combination with error mitigation techniques or fault-tolerant quantum computers as a long-term target. Another way is the combination of state-of-the-art simulators with high performance computing (HPC), which enables us to emulate large-scale quantum computation of the electronic structure on classical computers. At the current stage, simulators are expected to play a fundamental role in algorithm design and verification. In the RCS experiments, classical simulators are
used for both calibrating the fidelity of individual gate operation and the whole random quantum circuit, and extrapolating the fidelity of simpler quantum circuits to the most difficult ones [3, 4, 5]. In most quantum algorithm designs, simulators are employed as the numerical emulating platform to benchmark new algorithms.
Classical simulators suffer from the notorious exponential wall when many-body systems are simulated exactly. As such, approximation algorithms are often used to realize large-scale emulations of quantum chemistry calculations. For example, the excited states of iridium complexes have been computed with up to \(72\) qubits [12], which is the largest classical emulation of the VQE in terms of the number of qubits up to date. However, to achieve such a large emulation scale, a very shallow quantum circuit ansatz was employed to reduce the computational cost. Additionally, a \(28\)-qubit VQE emulation of the C\({}_{2}\)H\({}_{4}\) molecule has been reported by using point symmetry to significantly reduce the total number of gate operations [13]. A classical emulation of the C\({}_{18}\) molecule (a model system consisting of 144 spin molecular orbitals and 72 electrons) has been reported by combining VQE with the density matrix embedding theory (DMET), where DMET is used to break the molecule into small fragments and the VQE is used as the solver for the electronic structure of each fragment. However, the maximum number of qubits used in the VQE calculations is only 16 [14].
In this work, we demonstrate a high-performance and massively parallel VQE simulator using the matrix product state (MPS) representation of the quantum state. Our simulator maximally utilizes the power of tensor network methods and supercomputers in order to overcome the exponential memory bottleneck and realize the largest classical emulation of quantum computational chemistry. The major computational bottleneck of the MPS-VQE algorithm (see Sec. IV-B for more details) on HPC is the implementation of high-level linear algebra solvers, such as singular value decomposition (SVD) (see Sec. IV-H). Here, we overcome this bottleneck with optimized SVD and tensor operation algorithms. As discussed in Sec. II-C, our one-sided Jacobi SVD is more than \(60\) times faster than the non-optimized version on average for matrix sizes from \(100\) to \(500\). As a result, our largest simulation using the MPS-VQE simulator scales up to \(1000\) qubits for one-shot energy evaluation and to \(92\) qubits for fully converged VQE emulation, with a two-qubit gate count up to \(10^{5}\). In combination with DMET (see Sec. IV-E for more details), our simulator is applied to study practical quantum chemistry systems containing \(103\) atoms and achieves comparable accuracy with state-of-the-art computational methods.
## II Results
### _Optimization Strategies_
Emulating quantum computing on a classical computer is difficult due to the exponential runtime and memory requirements. Such difficulties can be alleviated with tensor network methods and by utilizing many-core and multi-node computers. Heterogeneous many-core systems are efficient for handling runtime issues but have limited total accessible memory space. Meanwhile, the memory of a multi-node computer can be scaled to the petabytes order, but its bandwidth for access from host computers (CPUs) is narrow. To simultaneously accelerate simulations and enlarge the total memory space, the heterogeneous parallelization approach [15] (see Sec. IV-F and Sec. IV-G for more details) can be adopted. Our simulator allocates memory to each computation node and then accelerates simulations by utilizing the full capabilities of the heterogeneous many-core processors.
The new-generation Sunway supercomputer, the successor of the Sunway TaihuLight supercomputer, is used for performance assessment in this work. Similar to the Sunway TaihuLight system, the new Sunway supercomputer adopts a new generation of domestic high-performance heterogeneous many-core processors (SW26010Pro) and interconnection network chips in China. The architecture of the SW26010Pro processor is shown in Fig. 2(a). Each processor contains 6 core-groups (CGs), with 65 cores in each CG, making a total of 390 cores. Each CG contains one management processing element (MPE), one cluster of computing processing elements (CPEs) and one memory controller. Each CPE has a 32 KB L1 instruction cache, and a 256 KB scratch pad memory (SPM, also called the Local Data Memory (LDM)), which serves the same function as the L1 cache. Data transfer between LDM and main memory can be realized by direct memory access (DMA).
The hotspots of our simulator are mainly the tensor contractions and SVD functions. In a tensor contraction, the first step is the index permutation of the tensors, followed by a BLAS (basic linear algebra subprograms) [16] routine that performs matrix-matrix multiplications (ZGEMM) to accomplish the calculation. Here we use the fused permutation and multiplication technique [17]. For the ZGEMM calculation, we perform matrix-matrix multiplications based on several optimization strategies, including a balanced blocking scheme, in which optimized block sizes are chosen for the matrices A and B to balance the computations among the CPEs, and a diagonal broadcasting method, in which the CPEs on the diagonal broadcast their data to the corresponding rows or columns, to realize efficient parallel computing for matrix multiplications, matrix transpose multiplications and conjugate transpose multiplications on the Sunway many-core system. First, we decompose the matrices A and B into smaller blocks to fit the computing size of the kernel. Second, we transmit the blocks of the input matrix from the main memory into the LDM. If the input matrix needs to be permuted, the data to be transposed are loaded into the LDM of each CPE in blocks by DMA_get and shuffled within each CPE's own LDM using the SIMD (Single Instruction Multiple Data) 'vshuff' instruction (the interface for the shuffle between two vectors). A diagonal broadcast optimization method is used to greatly reduce the memory access overhead and ensure the overall performance of matrix multiplication. Third, SIMD is used to implement eight 64-bit double-precision floating-point operations at a time. One SIMD instruction is equivalent to
a small loop, so the number of instructions can be reduced, thereby reducing the bandwidth requirement and the control-related overhead induced by loops, as shown in Fig. 2(b).
For the SVD calculation, there are mainly two classes of algorithms. The first class of SVD algorithms is the QR-based two-phase approach [18], in which the matrix \(A\) is transformed into a bidiagonal matrix using an orthogonal transformation, and then the bidiagonal matrix is diagonalized using the bidiagonal divide-and-conquer method or the QR algorithm. The complete SVD is then determined during the backward transformation. This method is efficient for large matrices while suffering from loss of relative accuracy [19]. The second class of SVD algorithms is the Jacobi-based algorithm, which has recently attracted a lot of attention because it has a higher degree of potential parallelism [20, 21, 22]. There are two varieties of the Jacobi-based algorithm (see Sec. IV-H), the one-sided and two-sided algorithms. The one-sided Jacobi algorithm is computationally more efficient than the two-sided algorithm [23], and suitable for vector pipeline computing. Thus, to achieve efficient parallel SVD computation on Sunway heterogeneous many-core architectures, the best choice is the Hestenes one-sided Jacobi transformation method [24], where all pairs of columns are repeatedly orthogonalized in sweeps using Jacobi rotations [25] until all columns are mutually orthogonal. When convergence is reached, the right singular vectors can be computed by accumulating the rotations, the left singular vectors are the normalized columns of the modified matrix, and the singular values are the norms of those columns. Since each pair of columns can be orthogonalized independently, the method is also easy to parallelize over the CPEs, as shown in Fig. 2(c). It should be noted that another scalable SVD algorithm called cross-product SVD [26] is also widely used in principal component analysis. However, numerical issues may appear since the condition number is squared in the intermediate step to orthogonalize \(A^{T}A\). To simulate quantum systems in which the superposition of states is quite arbitrary, the cross-product SVD may not be as stable as other approaches.
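For concreteness, a serial, real-valued reference version of the Hestenes one-sided Jacobi SVD is sketched below. It illustrates the algorithm only; on the Sunway processor the independent column pairs of each sweep are distributed across the CPEs, and the production code works on complex double-precision matrices.

```python
import numpy as np

def one_sided_jacobi_svd(A, tol=1e-12, max_sweeps=30):
    """Serial reference version: A (m x n, m >= n) -> U, sigma, V^T with A ~ U @ diag(sigma) @ V^T."""
    U = np.array(A, dtype=np.float64, copy=True)
    n = U.shape[1]
    V = np.eye(n)
    for _ in range(max_sweeps):
        converged = True
        for p in range(n - 1):
            for q in range(p + 1, n):      # each (p, q) pair is independent -> parallel over CPEs
                app = U[:, p] @ U[:, p]
                aqq = U[:, q] @ U[:, q]
                apq = U[:, p] @ U[:, q]
                if abs(apq) <= tol * np.sqrt(app * aqq):
                    continue               # columns p and q already orthogonal
                converged = False
                tau = (aqq - app) / (2.0 * apq)
                t = 1.0 if tau == 0.0 else np.sign(tau) / (abs(tau) + np.sqrt(1.0 + tau * tau))
                c = 1.0 / np.sqrt(1.0 + t * t)
                s = c * t
                rot = np.array([[c, s], [-s, c]])
                U[:, [p, q]] = U[:, [p, q]] @ rot   # orthogonalize the column pair
                V[:, [p, q]] = V[:, [p, q]] @ rot   # accumulate right singular vectors
        if converged:
            break
    sigma = np.linalg.norm(U, axis=0)               # singular values are the column norms
    order = np.argsort(sigma)[::-1]
    return U[:, order] / sigma[order], sigma[order], V[:, order].T
```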
### _Validation Results with MPS-VQE simulator (92 qubits)_
As a pilot application, Fig. 3 shows the potential energy curves (PECs) of the hydrogen molecule computed with the MPS-VQE simulator. The unitary coupled cluster with single and double excitations (UCCSD) ansatz that is able to accurately describe this two-electron system is employed for single-point energy calculations. The implementation of the UCCSD ansatz with MPS is described in Method (see Sec. IV-C for more details). The STO-3g, cc-pVDZ, cc-pVTZ and aug-cc-pVTZ basis sets are used to extend these emulations from 4 to 92 qubits. The BOBYQA optimizer is used for the variational optimization, with a convergence threshold set to \(10^{-6}\) for the minimum allowed value of trust region radius. Note that the hydrogen molecule can be simulated without supercomputer resources even in aug-cc-pVTZ basis, since only two electrons are involved. However, this 92-qubit case involves \(1.4\times 10^{5}\) CNOT gates (161 variational parameters), which is the largest quantum circuit simulation up to date in terms of the number of qubits and circuit depth. The simulations are carried out using 512 processes, and the computation times are given in Tab. II. The results from MPS-VQE are in excellent agreement with the full configuration interaction (FCI) results as shown in Table III. For all the four basis sets, chemical accuracy is achieved with a maximum error of 0.82 kcal mol\({}^{-1}\) at R(H-H)=2.4 A for the aug-cc-pVTZ results. We also show results obtained with FCI in the complete basis set (CBS) limit, which can be considered as the exact potential energy curve of the hydrogen molecule. The results of aug-cc-pVTZ shows an average deviation of 1.42 kcal mol\({}^{-1}\) from the complete basis set limit. We can see that using a larger basis set makes the potential energy curve much closer to the exact dissociation limit.
### _Speedup and Scaling with MPS-VQE simulator_
One major bottleneck of the MPS-VQE simulator is the SVD function (technical details shown in Sec.IV-H), which takes around 85% of the CPU time on average. In Fig. 4, we show the performance improvement of the two optimized versions of SVD, including the QR-based method implemented in SW_xmath (QR_SW_xmath) and the optimized one-sided Jacobi in this work (one-sided-Jacobi_SW), compared to the QR-based SVD method running on MPE (QR_MPE), for different matrix sizes. We use the performance of the QR_MPE as the baseline, which we set as 1 in Fig 4(b). We can see that the optimized SVD using the one-sided Jacobi method produces an overall speedup ranging from 1.5x to 62.2x compared to QR_MPE, and achieves a speedup of 2x to 6x compared to QR_SW_xmath version. For the one-sided Jacobi SVD (one-sided-Jacobi_SW), we use the Athread library routines provided by the Sunway architecture for the many-core acceleration, and we use 64 threads for the actual computation. The Jacobi-based method for SVD used in this work has potentially better accuracy than other methods. For example, if the SVD routine in the MPS simulator is replaced with cross-product SVD [26], the energy error with respect to FCI will raise from \(1.1\times 10^{-2}\) kcal mol\({}^{-1}\) to \(1.5\times 10^{-1}\) kcal mol\({}^{-1}\) for the simplest H\({}_{2}\) molecule (cc-PVTZ basis set) even if more than 2.5 times the number of VQE steps are performed.
For the tensor contraction using the optimization method listed in Sec. II-A (SW_zgemm), we can get an overall speedup of around 1.3x to 7.2x compared with the SW_xmath version (a vendor-provided linear algebra library on the Sunway supercomputer), as shown in Fig. 4(a).
Figure 4(c) shows the computational time of the MPS-VQE simulator for implementing the VQE circuits of the hydrogen chain using 512 processes. The maximally allowed bond dimension is set to be \(D=128\), as explained in Section IV-D. The one-shot energy estimation means that only one step of energy evaluation is performed instead of performing optimization of variational parameters until convergence. In the one-shot energy evaluation, the parameters are set as random
numbers in order to keep the bond dimension at the upper limit value (D=128) during the circuit evolution. The number of the electrons/atoms ranges from 12 to 500, and the corresponding number of the qubits ranges from 24 to 1000. The scaling exponents of the computation time (as a function of the total number of atoms \(N\)) for each VQE iteration are fitted by the polynomial scaling formula \(t=cN^{\alpha}\) (\(\alpha\) is the exponent). We find the exponent \(\alpha\approx 1.6\) for all of the VQE circuits. This is because the number of terms in the Hamiltonian approximately scales as \(N^{1.5}\) for the hydrogen chain.
### _Peak performance with DMET-MPS-VQE_
We use the hydrogen chain to assess the scalability and performance of our DMET-MPS-VQE simulator. The wave function ansatz is adaptively built in order to reduce the circuit depth (see Sec. IV-D for more details). The system is divided into fragments with the DMET method. A brief introduction of the DMET method used in this work can be found in Sec. IV-E. We record the computational time with an increasing number of fragments (2048 processes per fragment). The number of floating point operations for tensor contractions is measured by counting all the floating point arithmetic instructions needed for matrix multiplications. For SVD, the number of floating point operations is measured using the profiler LWPF [27] that can monitor the floating-point operation hardware counters in the processor. The quantum circuits containing CNOT gates acting on each pair of neighbouring qubits. This building block serves as the entanglement blocks in the hardware-efficient ansatz [28]. Evolving the circuit requires to perform SVDs for \(N_{q}-3\) matrices of size \(2D\times 2D\) and \(3\times(N_{q}-3)\) matrix-matrix multiplications. The results are shown in Fig. 4(d). We can see that a nearly linear scaling is obtained. Sustained performance of 216.9 PFLOPS is achieved in double precision with 606,208 processes (39,403,520 cores) for the system with 2368 qubit.
### _Implications_
In this section, we discuss applications of our MPS-VQE and DMET-MPS-VQE simulators to study realistic chemical systems. One example is the torsional barrier of ethane, which is one of the most fundamental problems in biomacromolecule configuration analysis. Fig. 5 shows the results obtained by the MPS-VQE simulator for the torsional barrier of the ethane molecule. The bond lengths of C-C and C-H are set to be 1.512 and 1.153 A, respectively. The STO-3g basis set with all 16 orbitals is used (32 qubits). The obtained torsional barrier is 0.29 eV which is higher than the experimental value 0.13 eV. Using the 6-31G(d) basis set will lower the barrier to 0.20 eV even if a small active space of only 6-orbital-6-electron is used. Therefore, It is expected that using a larger basis set could further improve the simulation accuracy.
As an anticipated application, we apply the DMET-MPS-VQE simulator to study the quantification of the protein-ligand interactions, which is a large-scale practical biochemical problem. Compared to the classical calculations, the quantum mechanical calculations can automatically include the effects of polarization, charge transfer, charge penetration, and the coupling of the various terms, thus offering more accurate and detailed information on the nature of the protein-ligand interactions. This is highly important in high-accuracy binding affinity prediction as well as in drug design. The SARS-CoV-2 is the coronavirus behind the COVID-19 pandemic, and its main protease (\(\text{M}^{\text{pro}}\)) is an enzyme that cleaves the viral polyproteins into individual proteins required for viral replication, so it is important to develop drugs targeting at \(\text{M}^{\text{pro}}\) for SARS-CoV-2. In quantum mechanical studies, the protein-ligand binding energy is calculated by \(E_{\text{b}}=E_{\text{complex}}-E_{\text{protein}}-E_{\text{ligand}}\), where \(E_{\text{complex}}\) is the energy of the complex, \(E_{\text{protein}}\) is the energy of the protein and \(E_{\text{ligand}}\) is the energy of unbound ligand. The energy of the complex, protein and the ligand bounded in the complex are calculated using density functional theory with the PBE+MDB functional to account for many-body van der Waals interactions, which is important to obtain accurate potential-energy surfaces [29]. After that, the energy differences between bounded and unbounded geometries of ligands are estimated with DMET-VQE [30]. We use the geometries of the 14 neutral ligands from Ref. [31], and then we optimize the geometries of the ligands at the Hartree-Fock level to account for geometric distortion needed for the ligand to occupy the active site. Similar with Ref. [30], we use STO-3g basis set in the DMET-VQE calculation. We plot the ranking score against the experimental binding free energies in a correlational plot as shown in Fig. 4. The ranking score is defined as the difference between the binding energy and the average value of 14 ligands. Ideally, the simulated ranking score should reproduce the experimental trends. We use the coefficients of determination, denoted as \(\text{R}^{2}\), of the simulated ranking score and the experimentally measured free energy to access the quality of our simulation. It can be seen the correlation between our simulation and the experiment is fairly good, with \(\text{R}^{2}\) of 0.44, which is better than the FEP-based approach (with \(\text{R}^{2}\) of 0.29) [32]. The dipyridamole falls off the correlation line, but the fact that candesartan cizeceil binds best to the protein agrees with experiment. By removing dipyridamole and hydroxychloroquine from the set, we get an \(\text{R}^{2}\) of 0.59. However, we are fully aware of the necessity to consider the basis set, environment and temperature effects, as well as DMET subsystems size when applying the DMET-MPS-VQE to drug design in the following studies. The largest molecule we calculated is Atazanavir which contains 103 atoms and 378 electrons, this is the largest system that has been investigated with simulators to our knowledge.
## III Discussions
As a heuristic quantum algorithm, the accuracy and performance of VQE should be verified in practical applications. The problems that VQE aims to solve, namely finding the ground state of a quantum many-body Hamiltonian, have a computational complexity growing exponentially with the problem size in general. Therefore, small scale simulations for simple molecules using around \(20\) qubits are hard to demonstrate the
power of VQE in practical applications. In this work, the MPS-VQE simulator scales up to \(1000\) qubits for one-shot energy evaluation and to \(92\) qubits for converged VQE emulation; moreover, the DMET-MPS-VQE simulator scales up to \(39\) million cores on the new Sunway supercomputer. The quantification of the protein-ligand interactions for SARS-CoV-2 is studied with the DMET-MPS-VQE as an application in drug discovery. In particular, we obtain decent results using VQE, which are comparable with the experimental observations.
The development of quantum computers requires intertwined contributions from classical supercomputers, which enables us to benefit from the much more mature classical computing. The simulation scale we have reached in this work, in terms of both the number of qubits and the circuit depths, is far beyond the simulations that have been done in the existing literature, and beyond the capability of existing quantum computers. Although we have limited ourselves to the physically motivated UCCSD ansatz, our simulator could also be straightforwardly used with any other circuit ansatz, such as hardware-efficient ones, which are more friendly to current quantum computers. Our simulator would be an excellent benchmark and validation tool for the development of next-generation quantum computers, as well as a flexible platform for quantum researchers to explore industrially relevant applications with tens of qubits.
## IV Methods
### _Unitary coupled cluster_
The electronic Hamiltonian \(\hat{H}\) of a chemical system is written in the second-quantized form as \(\hat{H}=\sum_{p,q}h_{q}^{p}a_{p}^{\dagger}a_{q}+\frac{1}{2}\sum_{p,q,r,s}v_{rs}^{pq}a_{p}^{\dagger}a_{q}^{\dagger}a_{s}a_{r}\), where \(h_{q}^{p}\) and \(v_{rs}^{pq}\) are the one- and two-electron integrals and \(a_{p}^{\dagger}\) (\(a_{p}\)) creates (annihilates) an electron in the \(p\)-th spin orbital. In the unitary coupled cluster ansatz, the wave function is obtained by applying the unitary operator \(e^{\hat{T}-\hat{T}^{\dagger}}\) to the Hartree-Fock reference state \(|\Phi_{\rm HF}\rangle\), where the cluster operator truncated to single and double excitations reads
\[\hat{T}(\vec{\theta})=\sum_{i,a}\theta_{i}^{a}a_{a}^{\dagger}a_{i}+\sum_{i,j,a,b}\theta_{ij}^{ab}a_{a}^{\dagger}a_{b}^{\dagger}a_{j}a_{i}, \tag{1}\]
with \(i,j\) (\(a,b\)) labeling occupied (virtual) spin orbitals and \(\vec{\theta}\) the variational parameters.
then we contract \(C^{i_{n},i_{n+1}}_{\alpha_{n-1},\alpha_{n+1}}\) with the singular matrix formed by the singular values at the \(n-1\)-th bond (denoted as \(\lambda_{\alpha_{n-1}}\)) to get a new two-site tensor as
\[\tilde{C}^{i_{n},i_{n+1}}_{\alpha_{n-1},\alpha_{n+1}}=\lambda_{\alpha_{n-1}}C^{ i_{n},i_{n+1}}_{\alpha_{n-1},\alpha_{n+1}}. \tag{7}\]
We perform singular value decomposition onto the tensor \(\tilde{C}^{i_{n},i_{n+1}}_{\alpha_{n-1},\alpha_{n+1}}\) and get
\[\mathrm{SVD}(\tilde{C}^{i_{n},i_{n+1}}_{\alpha_{n-1},\alpha_{n+1}})=\sum_{ \alpha_{n}}U^{i_{n}}_{\alpha_{n-1},\alpha_{n}}\tilde{\lambda}_{\alpha_{n}}V^{ i_{n+1}}_{\alpha_{n},\alpha_{n+1}}, \tag{8}\]
during which we will also truncate the small singular values below a certain threshold or simply reserve the largest few singular values to control the memory overhead. Finally the new site tensors \(\tilde{B}^{i_{n}}_{\alpha_{n-1},\alpha_{n}}\) and \(\tilde{B}^{i_{n+1}}_{\alpha_{n},\alpha_{n+1}}\) can be obtained as
\[\tilde{B}^{i_{n}}_{\alpha_{n-1},\alpha_{n}}=\sum_{i_{n+1},\alpha_ {n+1}}C^{i_{n},i_{n+1}}_{\alpha_{n-1},\alpha_{n+1}}\left(V^{i_{n+1}}_{\alpha_{ n},\alpha_{n+1}}\right)^{*}; \tag{9}\] \[\tilde{B}^{i_{n+1}}_{\alpha_{n},\alpha_{n+1}}=V^{i_{n+1}}_{\alpha _{n},\alpha_{n+1}}, \tag{10}\]
and the new singular values \(\tilde{\lambda}_{\alpha_{n}}\) is used to replace the old \(\lambda_{\alpha_{n}}\) at the \(n\)-th bond. Since \(\sum_{\alpha_{n}}\tilde{B}^{i_{n}}_{\alpha_{n-1},\alpha_{n}}\tilde{B}^{i_{n+1 }}_{\alpha_{n},\alpha_{n+1}}=C^{i_{n},i_{n+1}}_{\alpha_{n-1},\alpha_{n+1}}\), they indeed represent the correct site tensors after the two-qubit gate operation. \(\tilde{B}^{i_{n+1}}_{\alpha_{n},\alpha_{n+1}}\) is right-canonical by definition of SVD. Moreover, one can verify that \(\tilde{B}^{i_{n}}_{\alpha_{n-1},\alpha_{n}}\) is also right-canonical by substituting Eqs.(7, 8) into Eq.(9):
\[\tilde{B}^{i_{n}}_{\alpha_{n-1},\alpha_{n}}=U^{i_{n}}_{\alpha_{n-1},\alpha_{n }}\tilde{\lambda}_{\alpha_{n}}/\tilde{\lambda}_{\alpha_{n-1}}, \tag{11}\]
The above equation transforms a left-canonical site tensor \(U^{i_{n}}_{\alpha_{n-1},\alpha_{n}}\) into a right-canonical site tensor \(\tilde{B}^{i_{n}}_{\alpha_{n-1},\alpha_{n}}\).
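A bare-bones NumPy sketch of this two-site update (Eqs. (7)-(11)) is given below; the index convention (left bond, physical, right bond), the `max_bond`/`cutoff` arguments, and the function name are illustrative choices rather than the simulator's actual interface.

```python
import numpy as np

def apply_two_qubit_gate(gate, B1, B2, lam_left, max_bond=128, cutoff=1e-12):
    """gate: (2, 2, 2, 2) array G[i_n, i_{n+1}, i'_n, i'_{n+1}];
    B1: (Dl, 2, Dm) and B2: (Dm, 2, Dr) right-canonical site tensors;
    lam_left: singular values on the bond to the left of B1."""
    # contract the gate with the two neighbouring site tensors (the tensor C)
    C = np.einsum("ijkl,akc,clb->aijb", gate, B1, B2)
    # weight by the singular values on the left bond, Eq. (7)
    C_tilde = lam_left[:, None, None, None] * C
    Dl, _, _, Dr = C.shape
    U, S, Vh = np.linalg.svd(C_tilde.reshape(Dl * 2, 2 * Dr), full_matrices=False)
    keep = min(max_bond, int(np.sum(S > cutoff * S[0])))       # truncation after Eq. (8)
    S, Vh = S[:keep], Vh[:keep].reshape(keep, 2, Dr)
    B2_new = Vh                                                # right-canonical by construction, Eq. (10)
    B1_new = np.einsum("aijb,kjb->aik", C, Vh.conj())          # Eq. (9)
    return B1_new, S, B2_new                                   # S replaces the old bond singular values
```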
### _The implementation of UCCSD with matrix product states_
As discussed in Sec. IV-A, the implementation of the UCCSD ansatz in this work includes three steps:
* We perform the Jordan-Wigner transformation of the cluster operator. Here, the Hartree-Fock state is employed as a reference state. The cluster operator is defined as a linear combination of single and double excitations from occupied orbitals to virtual orbitals (see Eq. (1)).
* We perform a Suzuki-Trotter decomposition of the unitary exponential operator into one- and two-qubit gates. Because the excitation operators are not commutative, we use first-order Trotter decomposition to approximate the UCCSD ansatz as products of exponential operators, which can be further decomposed into products of one- and two-qubit gates.
* We apply these quantum gates to a reference wave function. The intermediate wave functions after applying quantum gates to the initial wave function are represented by matrix product states.
Steps 1 and 2 are done using the Q\({}^{2}\)Chemistry package [36]. Step 3 is one of the most important parts of this work. Applying a single-qubit gate to an MPS can be done without approximation by multiplying the gate with a single MPS tensor. To apply a two-qubit gate to qubits n and n + 1, we first contract the corresponding site tensors and then apply the gate to the contracted tensor. To restore the MPS form, the resulting tensor is decomposed with an SVD truncated to keep the largest \(\chi\) singular values, and the matrix of singular values is multiplied into one of the unitary factors \(U\) or \(V\).
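As a small, self-contained illustration of steps 1 and 2 (shown here with the open-source OpenFermion package rather than the Q\({}^{2}\)Chemistry interface actually used in this work), an anti-Hermitian single-excitation generator is mapped by the Jordan-Wigner transformation into Pauli strings, each of which becomes one exponential factor of the Trotterized circuit.

```python
from openfermion import FermionOperator, jordan_wigner

theta = 0.1
# anti-Hermitian single excitation from occupied spin orbital 0 to virtual spin orbital 2
generator = FermionOperator("2^ 0", theta) - FermionOperator("0^ 2", theta)
qubit_generator = jordan_wigner(generator)
for pauli_string, coeff in qubit_generator.terms.items():
    # each term contributes one factor exp(coeff * P) to the first-order Trotterized ansatz,
    # realized with single-qubit basis rotations and a CNOT ladder over the involved qubits
    print(pauli_string, coeff)
```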
With a right-canonical form of MPS, there is a very efficient way to compute the expectation of a single Pauli string. Taking the expectation value of a single-qubit observable \(O_{i_{n},i^{\prime}_{n}}\) as an example, it can be simply computed as
\[\sum_{\alpha_{n-1},\alpha_{n},i_{n},i^{\prime}_{n}}\lambda^{2}_{\alpha_{n-1}}O_ {i_{n}i^{\prime}_{n}}B^{i^{\prime}_{n}}_{\alpha_{n-1},\alpha_{n}}(B^{i_{n}}_{ \alpha_{n-1},\alpha_{n}})^{*}, \tag{12}\]
while a generic two-qubit observable \(O^{i_{m},i_{n}}_{i^{\prime}_{m},i^{\prime}_{n}}\) (assuming \(m<n\)) can be computed as
\[\sum_{\alpha_{n:m-1},i_{n:m},i^{\prime}_{n,m}}\lambda^{2}_{\alpha_{m-1}}\,O^{i_{m}i_{n}}_{i^{\prime}_{m}i^{\prime}_{n}}\,B^{i^{\prime}_{m}}_{\alpha_{m-1},\alpha_{m}}(B^{i_{m}}_{\alpha_{m-1},\alpha_{m}})^{*}\times\cdots\times B^{i^{\prime}_{n}}_{\alpha_{n-1},\alpha_{n}}(B^{i_{n}}_{\alpha_{n-1},\alpha_{n}})^{*}, \tag{13}\]
where we have used \(x_{j:i}=\{x_{i},x_{i+1},\ldots,x_{j}\}\) as an abbreviation for a list of indices. The expectation value of a general \(n\)-qubit Pauli string could be computed similarly.
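A minimal Julia sketch of the single-site expectation value in Eq. (12) is given below; the `B[physical, left bond, right bond]` layout is the same assumption as in the previous sketch.

```julia
using LinearAlgebra

# <O> on site n of a right-canonical MPS, following Eq. (12):
# sum_{a, b, i, i'} lam_left[a]^2 * O[i, i'] * B[i', a, b] * conj(B[i, a, b]).
function expect_one_site(B, lam_left, O)
    d, Dl, Dr = size(B)
    val = zero(ComplexF64)
    for i in 1:d, ip in 1:d, a in 1:Dl, b in 1:Dr
        val += lam_left[a]^2 * O[i, ip] * B[ip, a, b] * conj(B[i, a, b])
    end
    return real(val)            # Hermitian observables give real expectation values
end

# Example: <Z> on a one-site MPS holding the state |0>.
B = reshape(ComplexF64[1.0, 0.0], 2, 1, 1)
Z = ComplexF64[1 0; 0 -1]
println(expect_one_site(B, [1.0], Z))   # prints 1.0
```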
### _The wave function ansatz for hydrogen chain simulations_
When hydrogen chains containing hundreds of atoms are studied, it is impossible to implement a full UCCSD ansatz even with a supercomputer. As such, we construct approximate wave function ansatzes to perform such large-scale simulations using our simulator. The ansatzes are constructed following four steps.
* The generalized single and double (GSD) excitation operators are generated using every 5 consecutive orbitals. For example, if there are 100 Hartree-Fock orbitals obtained from the Hartree-Fock calculation, we first build GSD excitation operators using orbital 1 to 5, and then orbital 2 to 6, etc.
* After the fermionic operator pool has been constructed, the Jordan-Wigner transformation is used to generate an initial operator pool \(\{P\}\) in the form of Pauli strings.
* All the Pauli-Zs are removed from the Pauli strings in order to reduce the quantum circuit depth. Because the Hamiltonian is real, all Pauli strings with an even number of Pauli-Ys are removed from \(\{P\}\).
* The parametric circuit is adaptively constructed as a product of exponentials of Pauli strings \(\prod_{j}\exp(\mathrm{i}\theta_{j}P_{j})\), where \(P_{j}\in\{P\}\) and \(\{\theta\}\) are variational parameters to be optimized. Here, we follow the strategy suggested in the qubit-ADAPT-VQE method [37]. While we did not iteratively build the wave function ansatz until convergence, high accuracy can be achieved if more iterations are performed to improve the wave function ansatz. A minimal sketch of the pool construction and filtering is given after this list.
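The following Julia fragment sketches the orbital-window and Pauli-filtering steps described above. The `Vector{Char}` representation of Pauli strings and the helper names are illustrative assumptions; in the actual workflow the strings are produced by the Jordan-Wigner transformation in OpenFermion/Q\({}^{2}\)Chemistry.

```julia
# Illustrative sketch of the operator-pool construction and Pauli-string filtering.
# Pauli strings are represented as Vector{Char} ('I','X','Y','Z'), one entry per qubit.
orbital_windows(n_orb, width = 5) = [i:i + width - 1 for i in 1:n_orb - width + 1]

function filter_pauli_strings(pool::Vector{Vector{Char}})
    filtered = Vector{Vector{Char}}()
    for p in pool
        q = [c == 'Z' ? 'I' : c for c in p]      # drop all Pauli-Zs (shallower circuits)
        isodd(count(==('Y'), q)) || continue     # real Hamiltonian: drop even-Y strings
        push!(filtered, q)
    end
    return unique!(filtered)                     # Z removal may create duplicates
end

println(orbital_windows(8))                                            # 1:5, 2:6, 3:7, 4:8
println(filter_pauli_strings([['X','Y','Z','I'], ['Y','Y','I','I']]))  # keeps only XYII
```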
The above steps are performed by interfacing our MPS-VQE simulator with the Q\({}^{2}\)Chemistry package [15,36]. In this way, an approximate wave function ansatz that entangles every 5 neighbouring orbitals (10 qubits) is constructed for the hydrogen chain simulations. Another important factor that affects the simulation accuracy is the maximum allowed bond dimension of the MPS simulator. In order to choose a reasonable bond dimension, we performed a benchmark on the converged energy with respect to different bond dimension settings using a smaller molecule (H\({}_{8}\), 16 qubits). The results are given in Figure 6 and the bond dimension is selected such that \(\Delta E=|E_{D_{i}}-E_{D_{i+1}}|<1.0\times 10^{-3}\) Hartree, which is slightly stricter than chemical accuracy (\(1.6\times 10^{-3}\) Hartree).
### _The DMET method_
In DMET, a high-level calculation for each fragment (e.g. VQE) is carried out individually until the self-consistency criterion has been met: the sum of the number of electrons of all of the fragments agrees with the number of electrons for the entire system. The DMET energy for the fragment is calculated using the 1-RDM and 2-RDM, that is,
\[E_{A}=\sum_{p\in A}\Bigg{(}\sum_{q}^{N_{\text{orb}}^{A}+N_{\text{orb}}^{B}}\bigg{(}h_{pq}+\frac{1}{2}\sum_{rs}^{N_{\text{orb}}}\big{[}(pq|rs)-(ps|rq)\big{]}\Gamma_{rs}^{m}\bigg{)}D_{qp}^{A}+\frac{1}{2}\sum_{qrs}^{N_{\text{orb}}^{A}+N_{\text{orb}}^{B}}(pq|rs)P_{pqrs}^{A}\Bigg{)}, \tag{14}\]
where \(h_{pq}\) are the one-electron integrals, \((pq|rs)\) are the two-electron integrals, \(N_{\text{orb}}^{A}\) is the number of orbitals in the fragment, \(N_{\text{orb}}^{B}\) is the number of bath orbitals, \(N_{\text{orb}}\) is the total number of orbitals in the entire molecule, and \(p,q,r,s\) are orbital indices. \(D_{qp}^{A}=\langle\hat{a}_{p}^{\dagger}\hat{a}_{q}\rangle\) is the 1-RDM and \(P_{pqrs}^{A}=\langle\hat{a}_{p}^{\dagger}\hat{a}_{r}^{\dagger}\hat{a}_{s}\hat{a}_{q}\rangle\) is the 2-RDM, both of which are evaluated with the VQE method in this work. The number of electrons in fragment A is calculated as \(N^{A}=\sum_{p\in A}D_{pp}^{A}\), and the DMET total energy is the sum of the fragment energies
\[E^{\text{total}}=\sum_{A}E_{A} \tag{15}\]
The DMET cycle iterates until the number of electrons \(N^{\text{DMET}}=\sum_{A}N^{A}\) converges to the total number of electrons in the molecule (\(N\)).
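As an illustration of Eq. (14), a possible assembly of the fragment energy from dense integral and RDM arrays is sketched below. The index conventions (chemists' \((pq|rs)\) ordering, `D[q,p]` for the 1-RDM, `P[p,q,r,s]` for the 2-RDM, and `Gm` for the \(\Gamma_{rs}^{m}\) matrix) are assumptions made for the sketch and are not taken from the production code.

```julia
# Sketch of the fragment-energy assembly in Eq. (14).
# h[p, q]   : one-electron integrals,   eri[p, q, r, s] : two-electron integrals (pq|rs)
# D[q, p]   : fragment 1-RDM,           P[p, q, r, s]   : fragment 2-RDM
# Gm[r, s]  : density matrix entering the environment term with Gamma^m
# frag      : indices p belonging to fragment A,   emb : fragment + bath orbital indices
function dmet_fragment_energy(h, eri, D, P, Gm, frag, emb)
    n = size(h, 1)                     # total number of orbitals in the molecule
    E = 0.0
    for p in frag, q in emb
        heff = h[p, q]
        for r in 1:n, s in 1:n
            heff += 0.5 * (eri[p, q, r, s] - eri[p, s, r, q]) * Gm[r, s]
        end
        E += heff * D[q, p]
    end
    for p in frag, q in emb, r in emb, s in emb
        E += 0.5 * eri[p, q, r, s] * P[p, q, r, s]
    end
    return E
end
```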
### _Heterogeneous parallelization strategy_
For the DMET-MPS-VQE simulator, three levels of parallelization are adopted: (1) the calculations of different fragments are performed in an embarrassingly parallel manner, where we split the whole CPU pool into different sub-groups and sub-communicators and no communication is needed between different fragment calculations; (2) within each sub-group, the total energy of each fragment is calculated with the MPS-VQE method, using a parallel simulation algorithm based on distributed memory over the circuits that mimics actual quantum computers, so our method can offer a good reference for VQE running on quantum hardware; (3) within the simulation of a single quantum circuit, we use low-level multi-threaded parallelism on the CPEs to further boost the performance of the tensor contractions and singular value decompositions. We refer the reader to Ref. [15] for more details.
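A minimal sketch of level (1), the fragment-level splitting of the CPU pool into MPI.jl sub-communicators, is shown below; the round-robin fragment assignment and the fragment count are illustrative assumptions, not the actual scheduling used in our runs.

```julia
using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

# Level (1): split the global communicator into one sub-communicator per fragment, so that
# different fragment calculations proceed without any inter-group communication.
n_fragments = 4                               # illustrative value
color   = rank % n_fragments                  # fragment handled by this rank
subcomm = MPI.Comm_split(comm, color, rank)
subrank = MPI.Comm_rank(subcomm)

# Level (2): within `subcomm`, the MPS-VQE energy of fragment `color` would be evaluated
# with the quantum circuits distributed over the sub-group (not shown in this sketch).
println("global rank $rank -> fragment $color, local rank $subrank")

MPI.Finalize()
```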
### _Julia programming language_
The Julia language is used as the main programming language in this study. Julia offers the performance of a statically compiled language while providing interactive dynamic behavior and productivity [38]. Code written in Julia is highly extensible thanks to its type system and multiple dispatch mechanism. In addition to its JIT compilation and metaprogramming abilities, its powerful foreign function interface (FFI) makes it easy to use external libraries written in other languages. In this study, the electronic structure libraries PySCF [39] and OpenFermion [40] are linked to Julia through PyCall.jl, and the optimized SVD routines written in C are called using the LLVM.jl package, which provides a high-level wrapper to the LLVM C API.
Our parallel algorithm implemented in Julia is based on the parallel library MPI.jl, a Julia wrapper for the Message Passing Interface (MPI). On the Sunway architecture, the MPI libraries are versatile and highly optimized. MPI.jl calls these MPI libraries through Julia interfaces that are almost identical to the C API and provides similar performance.
### _SVD and one-sided Jacobi_
The singular value decomposition of a matrix \(A_{m\times n}\) can be written as
\[A=U\Sigma V^{T} \tag{16}\]
where the matrix \(A_{m\times n}\) is decomposed into three matrices: \(U_{m\times m}\) and \(V_{n\times n}\) are complex unitary matrices, \(V_{n\times n}^{T}\) is the conjugate transpose of \(V_{n\times n}\), and \(\Sigma_{m\times n}\) is a rectangular diagonal matrix with the singular values of \(A_{m\times n}\) on the diagonal.
There are two classes of Jacobi-based SVD algorithms: one-sided and two-sided. The two-sided Jacobi iteration algorithm transforms a symmetric matrix into a diagonal matrix by a sequence of two-sided Jacobi rotations (\(J\)).
\[J(i,j,\theta)=\begin{bmatrix}1&\cdots&0&\cdots&0&\cdots&0\\ \vdots&\ddots&\vdots&&\vdots&&\vdots\\ 0&\cdots&c&\cdots&-s&\cdots&0\\ \vdots&&\vdots&\ddots&\vdots&&\vdots\\ 0&\cdots&s&\cdots&c&\cdots&0\\ \vdots&&\vdots&&\vdots&\ddots&\vdots\\ 0&\cdots&0&\cdots&0&\cdots&1\end{bmatrix} \tag{17}\]
Based on the two-sided Jacobi algorithm, the one-sided Jacobi SVD calculates the singular value decomposition with one-sided Jacobi rotations that modify columns only. Algorithm 1 describes the one-sided Jacobi method. The parameters \(c\) and \(s\) of the Jacobi rotation matrix can be calculated from \(t\) and \(\tau\).
\[c=\frac{1}{\sqrt{1+t^{2}}} \tag{18}\]
\[s=t\times c \tag{19}\]
\[t=\frac{sign(\tau)}{|\tau|+\sqrt{1+\tau^{2}}} \tag{20}\]
\[\tau=\frac{a_{i}^{T}a_{i}-a_{j}^{T}a_{j}}{2a_{i}^{T}a_{j}} \tag{21}\]
The algorithm converges when all rotations in a sweep are skipped. Since each pair of columns can be orthogonalized independently, the method is also easily parallelized over the CPEs. The simplicity and inherent parallelism of the method make it an attractive first choice for an implementation on the many-core system.
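Since Algorithm 1 is not reproduced here, the following serial Julia reference implementation of the one-sided Jacobi SVD (using Eqs. (18)-(21)) may serve as a sketch; the CPE version additionally rotates disjoint column pairs of a sweep in parallel.

```julia
using LinearAlgebra

# Serial one-sided Jacobi SVD of A (m x n, m >= n): columns are rotated in place until every
# pair is numerically orthogonal; the singular values are then the column norms.
function one_sided_jacobi_svd(A::Matrix{Float64}; tol = 1e-12, max_sweeps = 30)
    U = copy(A)
    n = size(U, 2)
    V = Matrix{Float64}(I, n, n)
    for sweep in 1:max_sweeps
        rotated = false
        for i in 1:n-1, j in i+1:n
            aii = dot(U[:, i], U[:, i])
            ajj = dot(U[:, j], U[:, j])
            aij = dot(U[:, i], U[:, j])
            abs(aij) <= tol * sqrt(aii * ajj) && continue   # already orthogonal: skip
            rotated = true
            τ = (aii - ajj) / (2 * aij)                     # Eq. (21)
            t = sign(τ) / (abs(τ) + sqrt(1 + τ^2))          # Eq. (20)
            c = 1 / sqrt(1 + t^2)                           # Eq. (18)
            s = t * c                                       # Eq. (19)
            for k in 1:size(U, 1)
                ui, uj = U[k, i], U[k, j]
                U[k, i] =  c * ui + s * uj
                U[k, j] = -s * ui + c * uj
            end
            for k in 1:n
                vi, vj = V[k, i], V[k, j]
                V[k, i] =  c * vi + s * vj
                V[k, j] = -s * vi + c * vj
            end
        end
        rotated || break          # converged: every rotation in the sweep was skipped
    end
    σ = [norm(U[:, k]) for k in 1:n]
    for k in 1:n
        σ[k] > 0 && (U[:, k] ./= σ[k])
    end
    return U, σ, V                # A ≈ U * Diagonal(σ) * V'
end
```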
### _The quantum simulation time of hydrogen chain with MPS-VQE_
The quantum simulation time of the hydrogen chain using the MPS-VQE simulator is tested. The number of atoms (\(N_{a}\)), number of qubits (\(N_{q}\)), and the estimated number of circuits (\(N_{c}\)) are listed in Tab. IV. The geometry of the hydrogen chain is set as follows: the H\({}_{2}\) moieties with R(H-H) = 0.741 Å are aligned, and the distance between the closest atoms of different H\({}_{2}\) fragments is 1.322 Å, as shown in Fig. 7. For all the calculations, we use 512 cores (8 nodes \(\times\) 64 cores per node).
## V Data Availability
The data that support the findings of this study are available from the corresponding authors upon reasonable request.
## VI Acknowledgements
H.S. acknowledges support from National Natural Science Foundation of China (Grant No. T2222026, 22003073). L.S. acknowledges support from National Key Research and Development Program of China (Grant No. 2018YFB0204200). C.G. acknowledges support from National Natural Science Foundation of China (Grant No. 11805279). J.L. acknowledges National Natural Science Foundation of China (Grant No. 22073086), Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0303306), the Fundamental Research Funds for the Central Universities (Grant No. WK206000018). Computational resources were provided by the new Sunway supercomputer.
## VII Competing Interests
The authors declare no competing interests.
## VIII Author Contributions
The project was conceived by H.S. The manuscript was written by H.S., J.L. and C.G. The numerical simulations were performed by Y.F., H.S., L.S., F.L., X.D. and Z.L. H.S. and Y.F. contributed equally to this work and are considered co-first authors.
|
2305.07769 | Joint Coding of eMBB and URLLC in Vehicle-to-Everything (V2X)
Communications | A point-to-point communication is considered where a roadside unit (RSU)
wishes to simultaneously send messages of enhanced mobile broadband (eMBB) and
ultra-reliable low-latency communication (URLLC) services to a vehicle. The
eMBB message arrives at the beginning of a block and its transmission lasts
over the entire block. During each eMBB transmission block, random arrivals of
URLLC messages are assumed. To improve the reliability of the URLLC
transmissions, the RSU reinforces their transmissions by mitigating the
interference of eMBB transmission by means of dirty paper coding (DPC). In the
proposed coding scheme, the eMBB messages are decoded based on two approaches:
treating interference as noise, and successive interference cancellation.
Rigorous bounds are derived for the error probabilities of eMBB and URLLC
transmissions achieved by our scheme. Numerical results illustrate that they
are lower than bounds for standard time-sharing. | Homa Nikbakht, Eric Ruzomberka, Michèle Wigger, Shlomo Shamai, H. Vincent Poor | 2023-05-12T21:26:10Z | http://arxiv.org/abs/2305.07769v1 | # Joint Coding of eMBB and URLLC in Vehicle-to-Everything (V2X) Communications
###### Abstract
A point-to-point communication is considered where a roadside unit (RSU) wishes to simultaneously send messages of enhanced mobile broadband (eMBB) and ultra-reliable low-latency communication (URLLC) services to a vehicle. The eMBB message arrives at the beginning of a block and its transmission lasts over the entire block. During each eMBB transmission block, random arrivals of URLLC messages are assumed. To improve the reliability of the URLLC transmissions, the RSU reinforces their transmissions by mitigating the interference of eMBB transmission by means of dirty paper coding (DPC). In the proposed coding scheme, the eMBB messages are decoded based on two approaches: treating interference as noise, and successive interference cancellation. Rigorous bounds are derived for the error probabilities of eMBB and URLLC transmissions achieved by our scheme. Numerical results illustrate that they are lower than bounds for standard time-sharing.
## I Introduction
Enhanced mobile broadband (eMBB) and ultra-reliable low-latency communication (URLLC) services enabled by 5G new radio (NR) are considered as key enablers of the vehicle-to-everything (V2X) technology [1, 2, 3, 4, 5, 6]. Particularly, eMBB services aim to provide high data rate for content delivery and therefore improve the quality of experience (QoE) of in-vehicle entertainment applications. URLLC services, however, are key to guarantee the delivery of critical road safety information and thus enable fully autonomous driving of connected vehicles [7, 8].
Coexistence of eMBB and URLLC services in V2X communications has been studied in the literature [9, 10, 11]. In [9], a novel URLLC and eMBB coexistence mechanism for the cellular V2X framework is proposed where at the beginning of the transmission interval eMBB users are associated with a V2X base station, whereas URLLC users are allowed to puncture the eMBB transmissions upon arrival. The work in [10] formulates an optimization problem for joint scheduling of punctured eMBB and URLLC traffic to maximize the aggregate utility of the eMBB users subject to latency constraints for the URLLC users. Related to this work is [11], where resources are allocated jointly between eMBB and URLLC messages for a one-way highway vehicular network in which a vehicle receives an eMBB message from the nearest roadside unit (RSU) and URLLC messages from the nearest vehicle. During each eMBB transmission interval, random arrivals of URLLC messages are assumed. The eMBB time slot is thus divided into mini-slots and the newly arrived URLLC messages are immediately scheduled in the next mini-slot by puncturing the on-going eMBB transmissions. To guarantee the reliability of the URLLC transmission, guard zones are deployed around the vehicle and the eMBB transmissions are not allowed inside such zones.
In this work, the RSU wishes to transmit both eMBB and URLLC messages to a vehicle. The eMBB message arrives at the beginning of a block and its transmission lasts over the entire block. The eMBB blocklength is again divided into mini-slots and URLLC messages arrive randomly at the beginning of these mini-slots. Specifically, at the beginning of each of these mini-slots a URLLC message arrives with probability \(\rho\in[0,1]\) and the RSU simultaneously sends the eMBB message as well as the newly arrived URLLC message over this mini-slot. With probability \(1-\rho\) no URLLC message arrives at the beginning of the mini-slot and the RSU only sends the eMBB message. In our work, we do not use guard zones, but instead the RSU reinforces transmission of URLLC messages by mitigating the interference of eMBB transmission by means of dirty paper coding [12, 13, 14]. After each mini-slot, the receiving vehicle attempts to decode a URLLC message, and after the entire transmission interval it decodes the eMBB message. Given that the URLLC transmissions interfere with the transmission of eMBB, we employ two different eMBB decoding approaches. The first approach, known as _treating interference as noise (TIN)_, is to treat the URLLC interference as noise. The second approach, known as _successive interference cancellation (SIC)_, is to first subtract the decoded URLLC message and then decode the eMBB message based on the received signal. Rigorous bounds are derived for achievable error probabilities of eMBB (in both approaches) and URLLC transmissions. Numerical results illustrate that our proposed scheme significantly outperforms the standard time-sharing scheme.
## II Problem Setup
Consider a point-to-point setup with one RSU (transmitter) and one vehicle (receiver) communicating over \(n_{\mathsf{e}}\) uses of an AWGN channel. The transmitter (Tx) sends a single, so-called _eMBB_-type message \(M^{(\mathsf{e})}\), over the entire blocklength \(n_{\mathsf{e}}\), where \(M^{(\mathsf{e})}\) is uniformly distributed over a given set \(\mathcal{M}^{(\mathsf{e})}:=\{1,\ldots,L_{\mathsf{e}}\}\). Message \(M^{(\mathsf{e})}\) is thus available at the Tx at time \(t=1\) (and remains until time \(n_{\mathsf{e}}\)). Additionally, prior to each channel use in
\[\mathcal{T}^{(\mathsf{U})}:=\{1,1+n_{\mathsf{U}},1+2n_{\mathsf{U}},\ldots,1+ \left(\eta-1\right)n_{\mathsf{U}}\}, \tag{1}\]
where
\[\eta:=\left\lfloor\frac{n_{\mathsf{e}}}{n_{\mathsf{U}}}\right\rfloor, \tag{2}\]
the Tx generates with probability \(\rho\) an additional, so called, _URLLC_-type message that it wishes to convey to the Rx. With probability \(1-\rho\) no URLLC-type message is generated. For each \(b\in[\eta]\), if a URLLC message is generated at time \(t=(b-1)n_{\mathsf{U}}+1\), then we set \(A_{b}=1\), and otherwise we set \(A_{b}=0\). Denote the time-instances from \((b-1)\cdot n_{\mathsf{U}}+1\) to \(b\cdot n_{\mathsf{U}}\) by block \(b\). If in block \(b\) a message is generated we denote it by \(M_{b}^{(\mathsf{U})}\) and assume that it is uniformly distributed over the set \(\mathcal{M}^{(\mathsf{U})}:=\{1,\ldots,L_{\mathsf{U}}\}\).
During block \(b\), the Tx computes its inputs as:
\[X_{t}=\begin{cases}f_{t}^{(\mathsf{U})}\left(M_{b}^{(\mathsf{U})},M^{( \mathsf{e})}\right),&\text{if }A_{b}=1,\\ f_{t}^{(\mathsf{e})}\big{(}M^{(\mathsf{e})}\big{)},&\text{if }A_{b}=0,\end{cases} \tag{3}\]
for \(t=(b-1)\cdot n_{\mathsf{U}}+1,\ldots,b\cdot n_{\mathsf{U}}\) and some encoding functions \(f_{t}^{(\mathsf{U})}\) and \(f_{t}^{(\mathsf{e})}\) on appropriate domains. After the last URLLC block, i.e. at times \(t=\eta n_{\mathsf{U}}+1,\ldots,n_{\mathsf{e}}\), the Tx produces the inputs
\[X_{t}=f_{t}^{(\mathsf{e})}\big{(}M^{(\mathsf{e})}\big{)},\quad t=\eta n_{ \mathsf{U}}+1,\ldots,n_{\mathsf{e}}. \tag{4}\]
The sequence of channel inputs \(X_{1},\ldots,X_{n_{\mathsf{e}}}\) has to satisfy the average block-power constraint
\[\frac{1}{n_{\mathsf{e}}}\sum_{t=1}^{n_{\mathsf{e}}}X_{t}^{2}\leq\mathsf{P}, \qquad\text{almost surely}. \tag{5}\]
The input-output relation of the network is described as
\[Y_{t}=hX_{t}+Z_{t}, \tag{6}\]
where \(\{Z_{t}\}\) are independent and identically distributed (i.i.d.) standard Gaussian for all \(t\) and independent of all messages; \(h>0\) is the fixed channel coefficient between the Tx and Rx.
After each URLLC block \(b\) the receiver (Rx) decodes the transmitted URLLC message \(M_{b}^{(\mathsf{U})}\) if \(A_{b}=1\). Moreover, at the end of the entire \(n_{\mathsf{e}}\) channel uses it decodes the eMBB message \(M^{(\mathsf{e})}\). Thus, if \(A_{b}=1\) it produces
\[\hat{M}_{b}^{(\mathsf{U})}=g^{(n_{\mathsf{U}})}\big{(}Y_{(b-1)n_{\mathsf{U}} +1},\ldots,Y_{bn_{\mathsf{U}}}\big{)}, \tag{7}\]
for some decoding function \(g^{(n_{\mathsf{U}})}\) on appropriate domains. Otherwise, it sets \(\hat{M}_{b}^{(\mathsf{U})}=0\). We define the average error probability for each message \(M_{b}^{(\mathsf{U})}\) as:
\[\epsilon_{b}^{(\mathsf{U})} :=\rho\mathbb{P}\left[\hat{M}_{b}^{(\mathsf{U})}\neq M_{b}^{( \mathsf{U})}\Big{|}A_{b}=1\right]\] \[\quad+(1-\rho)\mathbb{P}\left[\hat{M}_{b}^{(\mathsf{U})}\neq 0 \Big{|}A_{b}=0\right]. \tag{8}\]
At the end of the \(n_{\mathsf{e}}\) channel uses, the Rx decodes its desired eMBB message as:
\[\hat{M}^{(\mathsf{e})}=\psi^{(n_{\mathsf{e}})}\left(\mathbf{Y}^{n_{\mathsf{e}}} \right), \tag{9}\]
where \(\mathbf{Y}^{n_{\mathsf{e}}}:=(Y_{1},\ldots,Y_{n_{\mathsf{e}}})\) and \(\psi^{(n_{\mathsf{e}})}\) is a decoding function on appropriate domains. We define the average error probability for message \(M^{(\mathsf{e})}\) as
\[\epsilon^{(\mathsf{e})}:=\mathbb{P}\left[\hat{M}^{(\mathsf{e})}\neq M^{( \mathsf{e})}\right]. \tag{10}\]
The goal is to propose a coding scheme that simultaneously has small error probabilities \(\epsilon_{b}^{(\mathsf{U})}\) and \(\epsilon^{(\mathsf{e})}\).
## III Joint Transmission of URLLC and eMBB Messages
### _Construction of Codebooks_
Define
\[\mathcal{B}_{\text{arrival}}:=\{b\in[\eta]:A_{b}=1\}. \tag{11}\]
Choose \(\beta_{\mathsf{U}}\) and \(\beta_{\mathsf{e}}\in[0,1]\) such that:
\[\beta_{\mathsf{U}}+\beta_{\mathsf{e}}=1. \tag{12}\]
Fix a value of \(\alpha\in[0,1]\). For each block \(b\in[\eta]\), for each \(j\in[L_{v}]\) and each realization \(m\in[L_{\mathsf{U}}]\), generate codewords \(\mathbf{V}_{b}(m,j)\) by picking them uniformly over a centered \(n_{\mathsf{U}}\)-dimensional sphere of radius \(\sqrt{n_{\mathsf{U}}\beta_{\mathsf{v}}\mathsf{P}}\), independently of each other and of all other codewords, for
\[\beta_{\mathsf{v}}:=\beta_{\mathsf{U}}+\alpha^{2}\beta_{\mathsf{e}}. \tag{13}\]
For each \(\ell\in[L_{\mathsf{e}}]\) randomly draw a codeword \(\mathbf{X}_{b}^{(\mathsf{e},2)}(\ell)\) uniformly distributed on the centered \(n_{\mathsf{U}}\)-dimensional sphere of radius \(\sqrt{n_{\mathsf{U}}\beta_{\mathsf{e}}\mathsf{P}}\) and a codeword \(\mathbf{X}_{b}^{(\mathsf{e},1)}(\ell)\) uniformly distributed on the centered \(n_{\mathsf{U}}\)-dimensional sphere of radius \(\sqrt{n_{\mathsf{U}}\mathsf{P}}\). All codewords are chosen independently of each other.
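A standard way to realize such codebooks numerically, which we sketch here only for illustration (the analysis does not depend on how the sampling is implemented), is to normalize an i.i.d. Gaussian vector:

```julia
using LinearAlgebra, Random

# Draw a vector uniformly on the centered n-dimensional sphere of radius r: normalise an
# i.i.d. standard Gaussian vector (its direction is uniform by rotational invariance).
sample_on_sphere(rng, n, r) = (g = randn(rng, n); r .* g ./ norm(g))

rng = MersenneTwister(7)
nU, P, βe = 64, 1.0, 0.4
x_e2 = sample_on_sphere(rng, nU, sqrt(nU * βe * P))   # a codeword X_b^(e,2)
println(norm(x_e2)^2 ≈ nU * βe * P)                    # true (up to rounding)
```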
### _Encoding_
#### III-B1 Encoding at Blocks \(b\in\mathcal{B}_{\text{arrival}}\)
In each block \(b\in\mathcal{B}_{\text{arrival}}\), the Tx has both an eMBB and an URLLC message to send. It first picks the codeword \(\mathbf{X}_{b}^{(\mathsf{e},2)}(M^{(\mathsf{e})})\) and then employs DPC to encode \(M_{b}^{(\mathsf{U})}\) while pre-canceling the interference of its own eMBB codeword \(\mathbf{X}_{b}^{(\mathsf{e},2)}(M^{(\mathsf{e})})\). Specifically, it chooses an index \(j\) such that the sequence
\[\mathbf{X}_{b}^{(\mathsf{U})}:=\mathbf{V}_{b}(M_{b}^{(\mathsf{U})},j)-\alpha\mathbf{X}_{b}^{( \mathsf{e},2)} \tag{14}\]
lies in the set
\[\mathcal{D}_{b}:=\left\{\mathbf{x}_{b}^{(\mathsf{U})}:n_{\mathsf{U}}\beta_{\mathsf{U}} \mathsf{P}-\delta_{b}\leq\left\|\mathbf{x}_{b}^{(\mathsf{U})}\right\|^{2}\leq n_{ \mathsf{U}}\beta_{\mathsf{U}}\mathsf{P}\right\} \tag{15}\]
for a given \(\delta_{b}>0\). If multiple such codewords exist, the index \(j^{\star}\) is chosen at random from this set, and the Tx sends:
\[\mathbf{X}_{b}=\mathbf{X}_{b}^{(\mathsf{U})}+\mathbf{X}_{b}^{(\mathsf{e},2)}. \tag{16}\]
We also set \(A_{b,\text{sent}}=1\).
Fig. 1: Example of the coding scheme with \(\eta=4\) and \(\mathcal{B}_{\text{sent}}=\{1,3\}\).
If no appropriate codeword exists, the Tx discards the arrived URLLC message by setting \(A_{b,\text{sent}}=0\) and sends only the eMBB message
\[\mathbf{X}_{b}=\mathbf{X}_{b}^{(\mathsf{e},1)}(M^{(\mathsf{e})}) \tag{17}\]
over this block.
Define
\[\mathcal{B}_{\text{sent}}:=\{b\in\mathcal{B}_{\text{arrival}}:A_{b,\text{sent} }=1\}, \tag{18}\]
where \(\mathcal{B}_{\text{sent}}\subseteq\mathcal{B}_{\text{arrival}}\) and represents the set of blocks in which an URLLC message is sent. See Figure 1.
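A minimal sketch of the encoding rule (14)-(16) is given below; the candidate list `Vcands` would be drawn on the sphere of radius \(\sqrt{n_{\mathsf{U}}\beta_{\mathsf{v}}\mathsf{P}}\) as in the previous sketch, and all function and variable names are illustrative assumptions.

```julia
using LinearAlgebra, Random

# Sketch of the encoding rule (14)-(16): among the candidates V_b(m, j), j = 1, ..., L_v,
# for the arrived URLLC message m, pick one whose DPC difference V - α X^(e,2) lies in the
# shell D_b of (15); if none qualifies, the caller falls back to sending X^(e,1) only.
function dpc_encode(rng, Vcands, x_e2, α, nU, βU, P, δb)
    lo, hi = nU * βU * P - δb, nU * βU * P
    hits = [j for (j, v) in enumerate(Vcands) if lo <= norm(v .- α .* x_e2)^2 <= hi]
    if isempty(hits)
        return nothing, 0                     # A_{b,sent} = 0: URLLC message discarded
    end
    j   = rand(rng, hits)                     # choose uniformly among admissible bins
    x_U = Vcands[j] .- α .* x_e2              # Eq. (14)
    return x_U .+ x_e2, 1                     # Eq. (16), A_{b,sent} = 1
end
```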
#### III-B2 Encoding at Blocks \(b\in[\eta]\backslash\mathcal{B}_{\text{arrival}}\) and in Block \(\eta+1\) when \(n_{\mathsf{e}}>\eta n_{\mathsf{U}}\)
In each Block \(b\in[\eta]\backslash\mathcal{B}_{\text{arrival}}\), the Tx sends only eMBB message \(M^{(\mathsf{e})}\):
\[\mathbf{X}_{b}=\mathbf{X}_{b,1}^{(\mathsf{e})}(M^{(\mathsf{e})}). \tag{19}\]
Over Block \(b\), the Tx thus transmits
\[\mathbf{X}_{b}=\begin{cases}\mathbf{X}_{b}^{(\mathsf{U})}+\mathbf{X}_{b}^{(\mathsf{e},2)}& \text{if }b\in\mathcal{B}_{\text{sent}},\\ \mathbf{X}_{b}^{(\mathsf{e},1)}&\text{o.w.}\end{cases} \tag{20}\]
### _Decoding_
After each block \(b\in[\eta]\), the Rx attempts to decode a URLLC message, and after the entire block of \(n_{\mathsf{e}}\) channel uses it decodes the transmitted eMBB message. Given that the URLLC transmissions interfere with the transmission of eMBB, the Rx envisions two different approaches to decode the eMBB message. The first approach, termed _TIN approach_, is to treat the URLLC interference as noise. The second approach, termed _SIC approach_, is to first subtract the decoded URLLC message and then decode the eMBB message based on the received signal.
#### III-C1 Decoding of URLLC Messages
At the end of each block \(b\in[\eta]\), the Rx observes the following channel outputs \(\mathbf{Y}_{b}:=\{Y_{(b-1)n_{\mathsf{U}}+1},\dots,Y_{bn_{\mathsf{U}}}\}\):
\[\mathbf{Y}_{b}=\begin{cases}h\mathbf{X}_{b}^{(\mathsf{U})}+h\mathbf{X}_{b}^{(\mathsf{e},2 )}+\mathbf{Z}_{b}&\text{if }b\in\mathcal{B}_{\text{sent}}\\ h\mathbf{X}_{b}^{(\mathsf{e},1)}+\mathbf{Z}_{b}&\text{o.w.}\end{cases} \tag{21}\]
with \(\mathbf{Z}_{b}\sim\mathcal{N}(0,I_{n_{\mathsf{U}}})\). Define the information density metric between \(\mathbf{y}_{b}\) and \(\mathbf{v}_{b}\) by:
\[i_{b}^{(\mathsf{U})}(\mathbf{v}_{b};\mathbf{y}_{b}):=\ln\frac{f_{\mathbf{Y}_{b}|\mathbf{V}_{b} }(\mathbf{y}_{b}|\mathbf{v}_{b})}{f_{\mathbf{Y}_{b}}(\mathbf{y}_{b})}. \tag{22}\]
After observing \(\mathbf{Y}_{b}\), the Rx chooses the pair
\[(m^{\prime},j^{\prime})=\text{arg}\max_{m,j}i_{b}^{(\mathsf{U})}(\mathbf{v}_{b}(m, j);\mathbf{Y}_{b}). \tag{23}\]
If for this pair
\[i_{b}^{(\mathsf{U})}(\mathbf{v}_{b}(m^{\prime},j^{\prime});\mathbf{Y}_{b})>\gamma^{( \mathsf{U})} \tag{24}\]
where \(\gamma^{(\mathsf{U})}\) is a threshold over which we optimize, the Rx chooses \((\hat{M}_{b}^{(\mathsf{U})},\hat{j})=(m^{\prime},j^{\prime})\) and sets \(A_{b,\text{detection}}=1\). Otherwise the receiver declares that no URLLC message has been sent and indicates it by setting \(\hat{M}_{b}^{(\mathsf{U})}=0\) and \(A_{b,\text{detection}}=0\).
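For illustration, the detection-and-decoding rule (23)-(24) can be sketched as follows; the metric \(i_{b}^{(\mathsf{U})}\) of (22) is abstracted into a user-supplied function `info_density`, since its exact evaluation involves the output densities induced by the spherical codebook.

```julia
# Sketch of the detection/decoding rule (23)-(24).  `codebook[m][j]` holds the codeword
# V_b(m, j), and `info_density(v, y)` is assumed to evaluate the metric of (22).
function decode_urllc(y, codebook, info_density, γU)
    best_m, best_j, best_val = 0, 0, -Inf
    for (m, bins) in enumerate(codebook), (j, v) in enumerate(bins)
        val = info_density(v, y)
        if val > best_val
            best_m, best_j, best_val = m, j, val
        end
    end
    if best_val > γU                           # threshold test (24)
        return (best_m, best_j), 1             # A_{b,detection} = 1
    else
        return (0, 0), 0                       # declare that no URLLC message was sent
    end
end
```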
Define
\[\mathcal{B}_{\text{detect}}:=\{b\in[\eta]:A_{b,\text{detection}}=1\} \tag{25}\]
that is the set of blocks in which an URLLC message is detected. A detection error happens if \(\mathcal{B}_{\text{detect}}\neq\mathcal{B}_{\text{sent}}\).
In each block \(b\in\mathcal{B}_{\text{detect}}\), set \(A_{b,\text{decode}}=1\) if \((\hat{M}_{b}^{(\mathsf{U})},\hat{j})=(M_{b}^{(\mathsf{U})},j)\), otherwise set \(A_{b,\text{decode}}=0\). Define
\[\mathcal{B}_{\text{decode}}:=\{b\in\mathcal{B}_{\text{detect}}:A_{b,\text{ decode}}=1\} \tag{26}\]
that is the set of blocks in which an URLLC message is decoded correctly.
#### III-C2 Decoding the eMBB Message under the TIN approach
To decode its desired eMBB message under this approach, the Rx treats URLLC transmissions as noise. Therefore, the decoding of the eMBB message depends on the detection of URLLC messages sent over the \(\eta\) blocks.
Let \(B_{\text{dt}}\) be the realization of the set \(\mathcal{B}_{\text{detect}}\) defined in (25). Given \(B_{\text{dt}}\), the Rx decodes its desired eMBB message based on the outputs of the entire \(n_{\mathsf{e}}\) channel uses by looking for an index \(m\) such that its corresponding codewords \(\Big{\{}\{\mathbf{x}_{b}^{(\mathsf{e},1)}(m)\}_{b\notin B_{\text{dt}}},\{\mathbf{x}_{b}^ {(\mathsf{e},2)}(m)\}_{b\in B_{\text{dt}}}\Big{\}}\) maximize
\[i_{\text{TIN}}^{(\mathsf{e})}\left(\{\mathbf{x}_{b}^{(\mathsf{e},1)} \}_{b\notin B_{\text{dt}}},\{\mathbf{x}_{b}^{(\mathsf{e},2)}\}_{b\in B_{\text{dt}}}; \mathbf{y}^{n_{\mathsf{e}}}|\mathcal{B}_{\text{detect}}=B_{\text{dt}}\right)\] \[:=\ln\!\!\prod_{b\notin B_{\text{dt}}}\!\!\frac{f_{\mathbf{Y}_{b}| \mathbf{X}_{b}^{(\mathsf{e},1)}}(\mathbf{y}_{b}|\mathbf{x}_{b}^{(\mathsf{e})})}{f_{\mathbf{Y}_{b }}(\mathbf{y}_{b})}+\ln\!\!\prod_{b\in B_{\text{dt}}}\!\!\frac{f_{\mathbf{Y}_{b}|\mathbf{X }_{b}^{(\mathsf{e},2)}}(\mathbf{y}_{b}|\mathbf{x}_{b,2}^{(\mathsf{e})})}{f_{\mathbf{Y}_{b }}(\mathbf{y}_{b})} \tag{27}\]
among all codewords \(\big{\{}\{\mathbf{x}_{b}^{(\mathsf{e},1)}(m^{\prime})\}_{b\notin B_{\text{dt}}},\{\mathbf{x}_{b}^{(\mathsf{e},2)}(m^{\prime})\}_{b\in B_{\text{dt}}}\big{\}}\).
#### III-C3 Decoding the eMBB Message under the SIC approach
Under this approach, before decoding the desired eMBB message, the Rx mitigates the interference of the correctly decoded URLLC messages from its observed output signal. Therefore, the decoding of the eMBB message depends not only on the detection of the sent URLLC messages but also on the decoding of such messages.
For each Block \(b\in\mathcal{B}_{\text{detect}}\), we define \(A_{b,\text{decode}}=1\) if \((\hat{M}_{b}^{(\mathsf{U})},\hat{j})=(M_{b}^{(\mathsf{U})},j)\), otherwise set \(A_{b,\text{decode}}=0\). Define the set of blocks in which an URLLC message is decoded correctly:
\[\mathcal{B}_{\text{decode}}:=\{b\in\mathcal{B}_{\text{detect}}:A_{b,\text{decode}}=1\}. \tag{28}\]
Let \(B_{\text{dt}}\) be a realization of the set \(\mathcal{B}_{\text{detect}}\) and \(B_{\text{dc}}\) be a realization of the set \(\mathcal{B}_{\text{decode}}\). After observing the channel outputs of the entire \(n_{\mathsf{e}}\) channel uses, the Rx decodes its desired eMBB message by looking for an index \(m\) such that its corresponding codewords \(\Big{\{}\{\mathbf{x}_{b}^{(\mathsf{e},1)}(m)\}_{b\notin B_{\text{dt}}},\{\mathbf{x}_{b}^{(\mathsf{e},2)}(m)\}_{b\in B_{\text{dt}}}\Big{\}}\) maximize
\[i_{\text{SIC}}^{(\mathsf{e})}\Big{(}\{\mathbf{x}_{b}^{(\mathsf{e},1)}\}_{b\notin B_{\text{dt}}},\{\mathbf{x}_{b}^{(\mathsf{e},2)}\}_{b\in B_{\text{dt}}};\mathbf{y}^{n_{\mathsf{e}}}|B_{\text{dt}},B_{\text{dc}},\{\mathbf{v}_{b}\}_{b\in B_{\text{dc}}}\Big{)}\] \[:=\ln\!\!\prod_{b\notin B_{\text{dt}}}\!\!\frac{f_{\mathbf{Y}_{b}|\mathbf{X}_{b}^{(\mathsf{e},1)}}(\mathbf{y}_{b}|\mathbf{x}_{b}^{(\mathsf{e},1)})}{f_{\mathbf{Y}_{b}}(\mathbf{y}_{b})}+\ln\!\!\prod_{b\in B_{\text{dc}}}\!\!\frac{f_{\mathbf{Y}_{b}|\mathbf{X}_{b}^{(\mathsf{e},2)},\mathbf{V}_{b}}(\mathbf{y}_{b}|\mathbf{x}_{b}^{(\mathsf{e},2)},\mathbf{v}_{b})}{f_{\mathbf{Y}_{b}|\mathbf{V}_{b}}(\mathbf{y}_{b}|\mathbf{v}_{b})}+\ln\!\!\prod_{b\in B_{\text{dt}}\setminus B_{\text{dc}}}\!\!\frac{f_{\mathbf{Y}_{b}|\mathbf{X}_{b}^{(\mathsf{e},2)}}(\mathbf{y}_{b}|\mathbf{x}_{b}^{(\mathsf{e},2)})}{f_{\mathbf{Y}_{b}}(\mathbf{y}_{b})} \tag{29}\]
## IV Main Results
Define \(\sigma^{2}:=h^{2}\mathsf{P}+1\), \(\sigma_{2}^{2}:=h^{2}\beta_{\text{v}}\mathsf{P}+1\), \(\sigma_{3}^{2}:=h^{2}(1-\alpha)^{2}\beta_{\text{e}}\mathsf{P}+1\) and
\[\lambda(x) :=\frac{x}{2}+\frac{u^{2}}{4}-\frac{u}{2}\sqrt{x+\frac{u^{2}}{4}}, \tag{31a}\] \[\tilde{\lambda}(x) :=\frac{x}{2}+\frac{u^{2}}{4}+\frac{u}{2}\sqrt{x+\frac{u^{2}}{4}},\] (31b) \[u :=\frac{2\sqrt{n_{\text{v}}\mathsf{P}}\left(\sigma_{3}^{2}(\sqrt{ \beta_{\text{v}}}+\sqrt{\beta_{\text{e}}})+\sigma^{2}\sqrt{\beta_{\text{e}}}(1 -\alpha)\right)}{h(\sigma^{2}-\sigma_{3}^{2})},\] (31c) \[\tau :=\frac{\sqrt{n_{\text{v}}\mathsf{P}}\left(\sqrt{\beta_{\text{v} }}(\sigma^{2}+\sigma_{2}^{2})+(1-\alpha)\sqrt{\beta_{\text{e}}}\sigma_{2}^{2} \right)}{\sigma^{2}\sigma_{2}^{2}}, \tag{31d}\]
and for all integer values \(n=1,2,\ldots\):
\[\kappa_{n}(x):=\frac{x(1-x^{2})^{n}}{2n+1}+\frac{2n}{2n+1}\kappa_{n-1}(x) \tag{31e}\]
where \(\kappa_{0}(x):=x\). By employing the scheme proposed in Section III, we have the following theorem on the upper bounds on the URLLC and eMBB error probabilities \(\epsilon_{b}^{(\mathsf{U})}\), \(\epsilon_{\text{TIN}}^{(\mathsf{e})}\), and \(\epsilon_{\text{SIC}}^{(\mathsf{e})}\).
**Theorem 1**: _For fixed \(\beta_{\mathsf{e}}\), \(\beta_{\mathsf{U}}\in[0,1]\) and message set sizes \(L_{\mathsf{U}}\) and \(L_{\mathsf{e}}\), the average error probabilities \(\epsilon_{b}^{(\mathsf{U})}\), \(\epsilon_{\text{TIN}}^{(\mathsf{e})}\), and \(\epsilon_{\text{SIC}}^{(\mathsf{e})}\) are bounded by_
\[\epsilon_{b}^{(\text{U})} \leq\rho\left((1-\zeta)^{L_{\text{v}}}+q+1-q_{2}\right)+(1-\rho)q_ {1} \tag{32}\] \[\epsilon_{\text{TIN}}^{(\text{e})} \leq\sum_{k=0}^{\eta}\binom{\eta}{k}q_{3}^{k}(1-\rho_{0}q_{2})^{ \eta-k}\left(1-\Delta+T\right) \tag{33}\]
\[\epsilon_{\text{SIC}}^{(\text{e})} \leq\sum_{k=0}^{\eta}\binom{\eta}{k}q_{4}^{k}(1-\rho_{0}q_{2})^{ \eta-k}\] \[\quad\cdot\left(1-\Delta+\sum_{\tilde{k}=0}^{k}\binom{k}{\tilde{ k}}q^{\tilde{k}}(1-q)^{k-\tilde{k}}\left(\frac{\mu T}{\tilde{\mu}}-\nu \right)\right), \tag{34}\]
where \(\gamma^{(\text{U})},\gamma^{(\text{e})},\tilde{\gamma}^{(\text{e})}\) are arbitrary positive parameters, \(G(\cdot,\cdot)\) denotes the regularized gamma function, \(k:=|B_{\text{dl}}|\), \(\tilde{k}=|B_{\text{dc}}|\), \(\rho_{0}:=\rho\left(1-(1-\zeta)^{L_{\text{v}}}\right)\), \(q_{3}:=\rho_{0}q_{4}+(1-\rho_{0})q_{1}\), and
\[q:=\ {}^{L_{\text{v}}L_{\text{U}}}\!\!\sqrt{1-q_{2}}+(L_{\text{v}}L _{\text{U}}-1)e^{-\gamma^{(\text{U})}}, \tag{35a}\] \[q_{1}:=1-\left(1-e^{-\gamma^{(\text{U})}}\right)^{L_{\text{v}}L _{\text{U}}},\] (35b) \[q_{2}:=1-\left(1-G\left(\frac{n_{\text{v}}}{2},\lambda(\mu_{0}) \right)+G\left(\frac{n_{\text{U}}}{2},\tilde{\lambda}(\mu_{0})\right)\right)^{ L_{\text{v}}L_{\text{U}}},\] (35c) \[q_{4}:=1-\left(1-G\left(\frac{n_{\text{v}}}{2},\tilde{\lambda}( \tilde{\mu}_{0})\right)+G\left(\frac{n_{\text{U}}}{2},\lambda(\tilde{\mu}_{0}) \right)\right)^{L_{\text{v}}L_{\text{U}}},\] (35d) \[\Delta:=\frac{\rho_{0}^{k}(1-\rho_{0})^{\eta-k}q_{2}^{k}(1-q_{1}) ^{\eta-k}}{(\rho_{0}\cdot q_{3}+(1-\rho_{0})\cdot q_{1})^{k}(1-\rho_{0}\cdot q _{2})^{\eta-k}}\] (35e) \[J_{0}:=\frac{\pi\sqrt{\beta_{\text{v}}\beta_{\text{v}}}\delta_{ \text{v}}^{2}\frac{u+1}{2}e^{-\frac{h^{2}(1-\alpha)^{2}\beta_{\text{e}}\mathsf{P }_{\text{v}}\mathsf{P}_{\text{v}}}{2}}}{9h^{2}(1-\alpha)(\beta_{\text{v}}+(1- \alpha)^{2}\beta_{\text{e}})},\] (35f) \[\tilde{J}_{0}:=\frac{2\sqrt{\pi}(1+h^{2}(1-\alpha)^{2}\beta_{ \text{e}}\mathsf{P})e^{n_{\text{v}}h^{2}\mathsf{P}(\beta+(1-\alpha)^{2}\beta_{ \text{e}})}}{2(h^{2}(1-\alpha))^{n_{\text{v}}-2}\sqrt{8(1+2h^{2}(1-\alpha)^{2 }\beta_{\text{e}}\mathsf{P}}}. \tag{35g}\]
and \(J_{e},\tilde{J}_{e},\zeta,\mu_{0},\tilde{\mu}_{0},\mu,\tilde{\mu},T\) and \(\nu\) are defined in (30).
See Section VI.
\[J_{\text{e}} :=\left(\frac{\pi 2^{\frac{n_{\text{v}}+1}{2}}e^{-\frac{h^{2}\beta_{ \text{e}}\mathsf{P}_{\text{v}}\mathsf{P}_{\text{v}}}{2}}\sqrt{\beta_{\text{v}} \beta_{\text{e}}}}{9h^{2}(1-\alpha)^{n_{\text{v}}-1}(\beta_{\text{v}}+(1- \alpha)^{2}\beta_{\text{e}})}\right)^{k}\cdot\left(\frac{\sqrt{8(1+2h^{2} \mathsf{P})}}{27\sqrt{\pi}(1+h^{2}\mathsf{P})}\right)^{\eta-k}\cdot\left(\frac{ \sqrt{8(1+2h^{2}(1-\alpha)^{2}\beta_{\text{e}}\mathsf{P})}}{27\sqrt{\pi}(1+h^{2} (1-\alpha)^{2}\beta_{\text{e}}\mathsf{P})}\right)^{\tilde{k}}\] (30b) \[\zeta:=\frac{1}{\sqrt{\pi}}\frac{\Gamma(\frac{\frac{n_{\text{v}} }{2}})}{\Gamma(\frac{n_{\text{v}-1}}{2})}\left(\kappa_{\frac{n_{\text{v}}-3}{2}} \left(\alpha\sqrt{\beta_{\text{e}}/\beta_{\text{v}}}+\delta_{b}/(2\alpha n_{\text{v}} \mathsf{P}\sqrt{\beta_{\text{v}}\beta_{\text{e}}})\right)-\kappa_{\frac{n_{ \text{v}}-3}{2}}\left(\alpha\sqrt{\beta_{\text{e}}/\beta_{\text{v}}}\right)\right)\] (30c) \[\mu_{0}:=\frac{2\sigma^{2}\sigma_{3}^{2}}{h^{2}(\sigma^{2}-\sigma_ {3}^{2})}\left(\frac{n_{\text{U}}}{2}\frac{\sigma^{2}}{\sigma_{3}^{2}}- \gamma^{(\text{U})}+\ln J_{\text{U}}\right)+\frac{\sigma_{3}^{2}}{\sigma^{2}- \sigma_{3}^{2}}\left(n_{\text{v}}\mathsf{P}(\sqrt{\beta_{\text{v}}}-\sqrt{ \beta_{\text{e}}})^{2}-\delta_{b}\right)-\frac{\sigma^{2}n_{\text{U}}\beta_{ \text{e}}\mathsf{P}(1-\alpha)^{2}}{\sigma^{2}-\sigma_{3}^{2}}\] (30d) \[\tilde{\mu}_{0}:=\frac{2\sigma^{2}\sigma_{3}^{2}}{h^{2}(\sigma^{2}- \sigma_{3}^{2})}\left(\frac{n_{\text{U}}}{2}\frac{\sigma^{2}}{\sigma_{3}^{2}}- \gamma^{(\text{U})}+\ln\tilde{J}_{\text{U}}\right)+\frac{\sigma_{3}^{2}}{\sigma^{2}- \sigma_{3}^
## V Numerical Analysis
In Figure 2, we numerically compare the bounds in Theorem 1 with the time-sharing scheme where URLLC transmissions puncture the eMBB transmission upon arrival. In this figure, we set the maximum error probability of URLLC transmission to be equal to \(10^{-5}\). For each value of \(\rho\in\{0.2,0.4,0.6,0.8,1\}\), we then optimize the parameters \(\alpha\), \(\beta_{\mathsf{e}}\) and \(\beta_{\mathsf{U}}\) to minimize the eMBB error probability under both TIN and SIC approaches. As can be seen from this figure, our schemes outperform the time-sharing scheme, especially for large values of \(\rho\), i.e., in regions with dense URLLC arrivals.
In Figure 3, we numerically compare the bounds in Theorem 1 for \(\rho=0.2\) and \(\rho=0.8\). In this plot, \(n_{\mathsf{U}}=20\cdot b\) and \(n_{\mathsf{e}}=3n_{\mathsf{U}}\) and the value of \(b\) varies from \(10\) to \(2\) with step size \(2\).
The values of \(\alpha\), \(\beta_{\mathsf{e}}\) and \(\beta_{\mathsf{U}}\) are optimized to minimize \(\epsilon_{\text{TIN}}^{(\mathsf{e})}\) and \(\epsilon_{\text{SIC}}^{(\mathsf{e})}\) for a given maximum \(\epsilon_{b}^{(\mathsf{U})}\). As can be seen from this figure, when \(\rho\) is high, the TIN scheme outperforms the SIC and the time-sharing schemes. For low values of \(\rho\), however, the SIC scheme outperforms the other two schemes. The reason is that for high values of \(\rho\), a larger fraction of the subtracted URLLC interference is decoded incorrectly, which introduces errors into the eMBB decoding under the SIC scheme.
## VI Proof of Theorem 1
### _Bounding \(\epsilon_{b}^{(\mathsf{U})}\)_
Recall the definition of the sets \(\mathcal{B}_{\text{arrival}}\), \(\mathcal{B}_{\text{sent}}\) and \(\mathcal{B}_{\text{detect}}\) from (11), (18) and (25), respectively. Given that URLLC message \(M_{b}^{(\mathsf{U})}\) arrives at the beginning of Block \(b\), i.e., \(b\in\mathcal{B}_{\text{arrival}}\), we have the following error events:
\[\mathcal{E}_{\mathsf{U},1} :=\{b\notin\mathcal{B}_{\text{sent}}\} \tag{36}\] \[\mathcal{E}_{\mathsf{U},2} :=\{b\notin\mathcal{B}_{\text{detect}}\}\] (37) \[\mathcal{E}_{\mathsf{U},3} :=\left\{\left(\hat{M}_{b}^{(\mathsf{U})},\hat{j}\right)\neq\left( M_{b}^{(\mathsf{U})},j\right)\right\}. \tag{38}\]
Given that no URLLC message is sent over Block \(b\), i.e., \(b\notin\mathcal{B}_{\text{sent}}\), we have the following error event:
\[\mathcal{E}_{\mathsf{U},4}:=\{b\in\mathcal{B}_{\text{detect}}\}. \tag{39}\]
The error probability of decoding URLLC message \(M_{b}^{(\mathsf{U})}\) of Block \(b\) thus is bounded by
\[\epsilon_{b}^{(\mathsf{U})} \leq\mathbb{P}[b\in\mathcal{B}_{\text{arrival}}]\mathbb{P}[ \mathcal{E}_{\mathsf{U},1}|b\in\mathcal{B}_{\text{arrival}}]\] \[+\mathbb{P}[b\in\mathcal{B}_{\text{arrival}}]\mathbb{P}[\mathcal{E }_{\mathsf{U},2}|\mathcal{E}_{\mathsf{U},1}^{c},b\in\mathcal{B}_{\text{arrival}}]\] \[+\mathbb{P}[b\in\mathcal{B}_{\text{arrival}}]\mathbb{P}[\mathcal{E }_{\mathsf{U},3}|\mathcal{E}_{\mathsf{U},2}^{c},\mathcal{E}_{\mathsf{U},1}^{c },b\in\mathcal{B}_{\text{arrival}}]\] \[+\mathbb{P}[b\notin\mathcal{B}_{\text{arrival}}]\mathbb{P}[ \mathcal{E}_{\mathsf{U},4}|b\notin\mathcal{B}_{\text{arrival}}]. \tag{40}\]
#### VI-A1 Analyzing \(\mathbb{P}[\mathcal{E}_{\mathsf{U},1}|b\in\mathcal{B}_{\text{arrival}}]\)
From (15) we notice that \(\left(\boldsymbol{V}_{b}-\alpha\boldsymbol{X}_{b}^{(\mathsf{e},2)}\right)\in \mathcal{D}_{b}\) if and only if
\[n_{\mathsf{U}}\beta_{\mathsf{U}}\mathsf{P}-\delta_{b}\leq||\boldsymbol{V}_{b} -\alpha\boldsymbol{X}_{b}^{(\mathsf{e},2)}||^{2}\leq n_{\mathsf{U}}\beta_{ \mathsf{U}}\mathsf{P}. \tag{41}\]
Recall that \(||\mathbf{V}_{b}||^{2}=n_{\mathsf{U}}\beta_{\mathsf{v}}\mathsf{P}\) almost surely.
**Lemma 1**: _We can prove that_
\[\mathbb{P}[(\boldsymbol{V}_{b}-\alpha\boldsymbol{X}_{b}^{(\mathsf{e},2)})\in \mathcal{D}_{b}]=\zeta \tag{42}\]
_where \(\zeta\) is defined in (30c)._
_Proof:_ see Appendix A.
Since the \(L_{v}\) codewords are generated independently:
\[\mathbb{P}[\mathcal{E}_{\mathsf{U},1}|b\in\mathcal{B}_{\text{arrival}}]=\left( 1-\zeta\right)^{L_{v}}. \tag{43}\]
To analyze the remaining error events, we employ the following lemma.
**Lemma 2**: _For any \(\gamma^{(\mathsf{U})}>0\):_
\[\mathbb{P}[i_{b}^{(\mathsf{U})}(\boldsymbol{V}_{b}(m,j); \boldsymbol{Y}_{b})\leq\gamma^{(\mathsf{U})}]\] \[\leq 1-G\left(\frac{n_{\mathsf{U}}}{2},\lambda(\mu_{\mathsf{U}}) \right)+G\left(\frac{n_{\mathsf{U}}}{2},\tilde{\lambda}(\mu_{\mathsf{U}}) \right), \tag{44}\]
_where \(G(\cdot,\cdot)\) is the regularized gamma function and \(\lambda(\cdot)\) and \(\tilde{\lambda}(\cdot)\) are defined in (31) and \(\mu_{\mathsf{U}}\) is defined in (30)._
_Proof:_ See Appendix B.
#### VI-A2 Analyzing \(\mathbb{P}[\mathcal{E}_{\mathsf{U},2}|\mathcal{E}^{c}_{\mathsf{U},1},b\in\mathcal{B}_{\text{arrival}}]\)
This error event is equivalent to the probability that for all \(j\in[L_{v}]\) and for all \(m\in[L_{\mathsf{U}}]\) there is no codeword \(V_{b}(m,i)\) such that \(i(\boldsymbol{V}_{b}(m,i);\boldsymbol{Y}_{b})>\gamma^{(\mathsf{U})}\). Therefore,
\[\mathbb{P}[\mathcal{E}_{\mathsf{U},2}|\mathcal{E}^{c}_{\mathsf{U },1},b\in\mathcal{B}_{\text{arrival}}]\] \[=\left(\mathbb{P}\left[i(\boldsymbol{V}_{b}(m,j);\boldsymbol{Y} _{b})\leq\gamma^{(\mathsf{U})}\right]\right)^{L_{v}L_{\mathsf{U}}} \tag{45}\] \[\leq\left(1-G\left(\frac{n_{\mathsf{U}}}{2},\lambda(\mu_{ \mathsf{U}})\right)+G\left(\frac{n_{\mathsf{U}}}{2},\tilde{\lambda}(\mu_{ \mathsf{U}})\right)\right)^{L_{v}L_{\mathsf{U}}} \tag{46}\]
where the last inequality holds by Lemma 2.
#### VI-A3 Analyzing \(\mathbb{P}[\mathcal{E}_{\mathsf{U},3}|\mathcal{E}^{c}_{\mathsf{U},2},\mathcal{E}^{c}_{\mathsf{U},1},b\in\mathcal{B}_{\text{arrival}}]\)
To evaluate this probability, we use the threshold bound for maximum-metric decoding. For any given threshold \(\gamma^{(\mathsf{U})}\):
\[\mathbb{P}[\mathcal{E}_{\mathsf{U},3}|\mathcal{E}^{c}_{\mathsf{U },2},\mathcal{E}^{c}_{\mathsf{U},1},b\in\mathcal{B}_{\text{arrival}}] \tag{47}\] \[\leq\mathbb{P}[i(\boldsymbol{V}_{b}(M^{(\mathsf{U})}_{b}); \boldsymbol{Y}_{b})\leq\gamma^{(\mathsf{U})}]\] \[\quad+(L_{v}L_{\mathsf{U}}-1)\mathbb{P}[i(\bar{\boldsymbol{V}}_{ b}(m^{\prime},j^{\prime});\boldsymbol{Y}_{b})>\gamma^{\mathsf{U}}]\]
where \(m^{\prime}\in\{1,\ldots,L_{\mathsf{U}}\}\), \(j^{\prime}\in\{1,\ldots,L_{v}\}\), \((M^{(\mathsf{U})}_{b},j)\neq(m^{\prime},j^{\prime})\), \(\bar{\boldsymbol{V}}_{b}\sim f_{\boldsymbol{V}_{b}}\) and is independent of \((\boldsymbol{V}_{b},\boldsymbol{Y}_{b})\).
**Lemma 3**: _For any \(\gamma^{(\mathsf{U})}>0\):_
\[\mathbb{P}[i(\bar{\boldsymbol{V}}_{b};\boldsymbol{Y}_{b})>\gamma^{(\mathsf{ U})}]\leq e^{-\gamma^{(\mathsf{U})}}. \tag{48}\]
Proof:: See Appendix C.
By Lemmas 2 and 3, we have
\[\mathbb{P}[\mathcal{E}_{\mathsf{U},3}|\mathcal{E}^{c}_{\mathsf{U },2},\mathcal{E}^{c}_{\mathsf{U},1},b\in\mathcal{B}_{\text{arrival}}] \tag{49}\] \[\leq 1-G\left(\frac{n_{\mathsf{U}}}{2},\lambda(\mu_{\mathsf{U}}) \right)+G\left(\frac{n_{\mathsf{U}}}{2},\tilde{\lambda}(\mu_{\mathsf{U}}) \right)+(L_{v}L_{\mathsf{U}}-1)e^{-\gamma^{(\mathsf{U})}}.\]
#### VI-A4 Analyzing \(\mathbb{P}[\mathcal{E}_{\mathsf{U},4}|b\notin\mathcal{B}_{\text{arrival}}]\)
This error event is equivalent to the probability that, given that no URLLC message has arrived, there exists at least one codeword \(V_{b}(m,j)\) with \(m\in[L_{\mathsf{U}}]\) and \(j\in[L_{v}]\) such that \(i(\mathbf{V}_{b}(m,j);\mathbf{Y}_{b})>\gamma^{(\mathsf{U})}\). Therefore,
\[\mathbb{P}[\mathcal{E}_{\mathsf{U},4}|b\notin\mathcal{B}_{\text {arrival}}]\] \[=1-\left(\mathbb{P}\left[i(\boldsymbol{V}_{b}(m,j);\boldsymbol{Y }_{b})\leq\gamma^{(\mathsf{U})}\right]\right)^{L_{v}L_{\mathsf{U}}} \tag{50}\] \[\leq 1-\left(1-e^{-\gamma^{(\mathsf{U})}}\right)^{L_{v}L_{ \mathsf{U}}}. \tag{51}\]
where the last inequality follows by Lemma 3.
By combining (43), (49), (46) and (51) we prove bound (32).
### _Bounding \(\epsilon^{(\mathsf{e})}_{\text{TIN}}\)_
Define
\[\rho_{\mathsf{U}} :=\mathbb{P}[b\in\mathcal{B}_{\text{sent}}], \tag{52a}\] \[\rho_{\text{det},0} :=\mathbb{P}[b\in\mathcal{B}_{\text{detect}}|b\in\mathcal{B}_{ \text{sent}}],\] (52b) \[\rho_{\text{det},1} :=\mathbb{P}[b\in\mathcal{B}_{\text{detect}}|b\notin\mathcal{B}_{ \text{sent}}]. \tag{52c}\]
**Lemma 4**: _We prove that_
\[\rho_{\mathsf{U}}=\rho\left(1-(1-\zeta)^{L_{v}}\right),\quad\rho_{\text{det},1 }\leq q_{1},\quad q_{2}\leq\rho_{\text{det},0}\leq q_{3}, \tag{53}\]
_where \(q_{1}\), \(q_{2}\) and \(q_{3}\) are defined in (35) and \(\zeta\) in (30c)._
Proof:: See Appendix D.
Given \(\mathcal{B}_{\text{detect}}=B_{\text{dt}}\), we have the following two error events:
\[\mathcal{E}_{\text{TN},1} =\{\mathcal{B}_{\text{detect}}\neq\mathcal{B}_{\text{sent}}\} \tag{54}\] \[\mathcal{E}_{\text{TN},2} =\{\hat{M}^{(\mathsf{e})}\neq M^{(\mathsf{e})}\}. \tag{55}\]
The eMBB decoding error probability under the TIN approach thus is bounded by
\[\epsilon^{\text{TN}}_{\mathsf{e}}\leq\sum_{B_{\text{dt}}}\mathbb{P }[\mathcal{B}_{\text{detect}}=B_{\text{dt}}] \tag{56}\] \[\cdot\left(\mathbb{P}[\mathcal{E}_{\text{TN},1}|\mathcal{B}_{ \text{detect}}=B_{\text{dt}}]+\mathbb{P}[\mathcal{E}_{\text{TN},2}|\mathcal{B}_ {\text{detect}}=B_{\text{dt}},\mathcal{E}^{c}_{\text{TN},1}]\right).\]
#### VI-B1 Analyzing \(\mathbb{P}[\mathcal{B}_{\text{detect}}=B_{\text{dt}}]\)
Define
\[\rho_{\text{det}} :=\mathbb{P}[b\in\mathcal{B}_{\text{detect}},b\in\mathcal{B}_{ \text{sent}}]+\mathbb{P}[b\in\mathcal{B}_{\text{detect}},b\notin\mathcal{B}_{ \text{sent}}] \tag{57}\] \[=\rho_{\mathsf{U}}\rho_{\text{det},0}+(1-\rho_{\mathsf{U}})\rho_{ \text{det},1}, \tag{58}\]
where \(\rho_{\mathsf{U}}\), \(\rho_{\text{det},0}\) and \(\rho_{\text{det},1}\) are defined in (52). By Lemma 4:
\[\rho_{\mathsf{U}}\cdot q_{2}\leq\rho_{\text{det}}\leq\rho_{\mathsf{U}}\cdot q_{3}+(1 -\rho_{\mathsf{U}})\cdot q_{1}, \tag{59}\]
and thus by the independence of the blocks:
\[\mathbb{P}[\mathcal{B}_{\text{detect}}=B_{\text{dt}}] \tag{60}\] \[=\rho_{\text{det}}^{|B_{\text{dt}}|}(1-\rho_{\text{det}})^{\eta-|B _{\text{dt}}|}\] (61) \[\leq(\rho_{\mathsf{U}}\cdot q_{3}+(1-\rho_{\mathsf{U}})\cdot q_{1})^ {|B_{\text{dt}}|}(1-\rho_{\mathsf{U}}\cdot q_{2})^{\eta-|B_{\text{dt}}|} \tag{62}\]
#### VI-B2 Analyzing \(\mathbb{P}[\mathcal{E}_{\text{TN},1}|\mathcal{B}_{\text{detect}}=B_{\text{dt}}]\)
Notice that the values of \(\rho_{\mathsf{U}},\rho_{\text{det},0}\) and \(\rho_{\text{det},1}\) stay the same for all blocks in \([\eta]\). Thus
\[\mathbb{P}[\mathcal{B}_{\text{detect}}\neq\mathcal{B}_{\text{sent}}|\mathcal{B}_{\text{detect}}=B_{\text{dt}}]\] (63) \[=1-\mathbb{P}[\mathcal{B}_{\text{sent}}=B_{\text{dt}}|\mathcal{B}_{\text{detect}}=B_{\text{dt}}]\] (64) \[=1-\frac{\mathbb{P}[\mathcal{B}_{\text{sent}}=B_{\text{dt}},\mathcal{B}_{\text{detect}}=B_{\text{dt}}]}{\mathbb{P}[\mathcal{B}_{\text{detect}}=B_{\text{dt}}]}\] (65) \[=1-\frac{\mathbb{P}[\mathcal{B}_{\text{sent}}=B_{\text{dt}}]\,\mathbb{P}[\mathcal{B}_{\text{detect}}=B_{\text{dt}}|\mathcal{B}_{\text{sent}}=B_{\text{dt}}]}{\mathbb{P}[\mathcal{B}_{\text{detect}}=B_{\text{dt}}]}\] (66) \[\leq 1-\Delta,\] (67)

where the last inequality follows from Lemma 4 and the definition of \(\Delta\) in (35e).
where for each \(b\), \(\bar{\mathbf{X}}_{b}^{(\mathbf{e},1)}\sim f_{\mathbf{X}_{b}^{(\mathbf{e},1)}}\) and \(\bar{\mathbf{X}}_{b}^{(\mathbf{e},2)}\sim f_{\mathbf{X}_{b}^{(\mathbf{e},2)}}\) and are independent of \((\mathbf{X}_{b}^{(\mathbf{e},1)},\mathbf{X}_{b}^{(\mathbf{e},2)},\mathbf{Y}_{b})\). We use the following two lemmas to bound the above two probability terms.
**Lemma 5**: _For any \(\gamma^{(\mathbf{e})}>0\):_
\[\mathbb{P}\left[i_{\text{TN}}^{(\mathbf{e})}\left(\{\mathbf{X}_{b}^{(\bm {e},1)}\}_{b\notin B_{\text{d}}},\{\mathbf{X}_{b}^{(\mathbf{e},2)}\}_{b\in B_{\text{d}} };Y^{n_{\mathbf{e}}}|B_{\text{d}}\right)<\gamma^{(\mathbf{e})}\right]\] \[\leq T-(L_{v}-1)e^{-\gamma^{(\mathbf{e})}} \tag{70}\]
_where \(T\) is defined in (30h)._
See Appendix E.
**Lemma 6**: _For any \(\gamma^{(\mathbf{e})}>0\):_
\[\mathbb{P}\left[i_{\text{TN}}^{(\mathbf{e})}\left(\{\bar{\mathbf{X}}_{b}^ {(\mathbf{e},1)}\}_{b\notin B_{\text{d}}},\{\bar{\mathbf{X}}_{b}^{(\mathbf{e},2)}\}_{b\in B _{\text{d}}};\{\mathbf{Y}_{b}\}_{b=1}^{\eta+1}|B_{\text{d}}\right)\geq\gamma^{(\bm {e})}\right]\] \[\leq e^{-\gamma^{(\mathbf{e})}}. \tag{71}\]
The proof is similar to the proof of Lemma 3 and omitted.
Combining Lemmas 5 and 6 with (69) and defining \(k:=|B_{\text{d}}|\) proves the bound in (33).
### _Bounding \(\epsilon_{\text{SIC}}^{(\mathsf{e})}\)_
Recall the definition of the sets \(\mathcal{B}_{\text{arrival}}\), \(\mathcal{B}_{\text{sent}}\), \(\mathcal{B}_{\text{detect}}\) and \(\mathcal{B}_{\text{decode}}\) from (11), (18), (25), and (28), respectively. Let \(B_{\text{dt}}\) be a realization of the set \(\mathcal{B}_{\text{detect}}\), and \(B_{\text{dc}}\) be a realization of the set \(\mathcal{B}_{\text{decode}}\). We have the following two error events:
\[\mathcal{E}_{\text{SIC},1} =\{\mathcal{B}_{\text{detect}}\neq\mathcal{B}_{\text{sent}}\} \tag{72}\] \[\mathcal{E}_{\text{SIC},2} =\{\hat{M}^{(\mathbf{e})}\neq M^{(\mathbf{e})}\} \tag{73}\]
The eMBB decoding error probability under the SIC approach thus is given by
\[\epsilon_{\mathbf{e}}^{\text{SIC}} \leq\sum_{B_{\text{d}}}\mathbb{P}[\mathcal{B}_{\text{detect}}=B_{ \text{dt}}]\] \[\Big{(}\mathbb{P}[\mathcal{E}_{\text{SIC},1}|\mathcal{B}_{\text{ detect}}=B_{\text{dt}}]\] \[+\sum_{B_{\text{d}}}\mathbb{P}[\mathcal{B}_{\text{decode}}=B_{ \text{dc}}|\mathcal{E}_{\text{SIC},1}^{c},\mathcal{B}_{\text{detect}}=B_{ \text{dt}}]\] \[\quad\quad-\mathbb{P}[\mathcal{E}_{\text{SIC},2}|\mathcal{B}_{ \text{detect}}=B_{\text{dt}},\mathcal{B}_{\text{decode}}=B_{\text{dc}}, \mathcal{E}_{\text{SIC},1}^{c}]\Big{)}. \tag{74}\]
#### VI-C1 Analyzing \(\mathbb{P}[\mathcal{B}_{\text{decode}}=B_{\text{dc}}|\mathcal{E}_{\text{SIC},1}^{c},\mathcal{B}_{\text{detect}}=B_{\text{dt}}]\)
For any subset \(B_{\text{dc}}\subseteq B_{\text{dt}}\) we have:
\[\mathbb{P}[\mathcal{B}_{\text{decode}}=B_{\text{dc}}|\mathcal{B}_{\text{detect}}=\mathcal{B}_{\text{sent}}=B_{\text{dt}}] \tag{75}\] \[=\prod_{b\in B_{\text{dc}}}\mathbb{P}[\hat{M}_{b}^{(\mathsf{U})}=M_{b}^{(\mathsf{U})}|\mathcal{B}_{\text{detect}}=\mathcal{B}_{\text{sent}}=B_{\text{dt}}]\] \[\quad\cdot\prod_{b\in B_{\text{dt}}\setminus B_{\text{dc}}}\Big{(}1-\mathbb{P}[\hat{M}_{b}^{(\mathsf{U})}=M_{b}^{(\mathsf{U})}|\mathcal{B}_{\text{detect}}=\mathcal{B}_{\text{sent}}=B_{\text{dt}}]\Big{)}\] (76) \[\leq q^{|B_{\text{dc}}|}(1-q)^{|B_{\text{dt}}|-|B_{\text{dc}}|} \tag{77}\]
where \(q\) is defined in (35). Inequality (77) holds by (49) and by the independence of the blocks.
#### VI-C2 Analyzing \(\mathbb{P}[\mathcal{E}_{\text{SIC},2}|\mathcal{B}_{\text{detect}}=B_{\text{dt}},\mathcal{B}_{\text{decode}}=B_{\text{dc}},\mathcal{E}_{\text{SIC},1}^{c}]\)
To bound this probability, we use the threshold bound for maximum-metric decoding. For any given threshold \(\tilde{\gamma}^{(\mathbf{\mathrm{e}})}\):
\[\mathbb{P}[\hat{M}^{(\mathbf{\mathrm{e}})}\neq M^{(\mathbf{\mathrm{e}})}| \mathcal{B}_{\text{detect}}=B_{\text{dt}},\mathcal{B}_{\text{decode}}=B_{ \text{dc}},\mathcal{E}_{\text{SIC},1}^{c}] \tag{78}\] \[\leq\mathbb{P}\Big{[}i_{\text{SIC}}^{(\mathbf{\mathrm{e}})}\{\{\mathbf{X} _{b}^{(\mathbf{\mathrm{e}},1)}\}_{b\notin B_{\text{d}}},\{\mathbf{X}_{b}^{(\mathbf{\mathrm{e} },2)}\}_{b\in B_{\text{d}}};\] \[\mathbf{Y}^{n_{\mathbf{e}}}|B_{\text{dt}},B_{\text{dc}},\{\mathbf{V}_{b}\}_{b \in B_{\text{dt}}})<\tilde{\gamma}^{(\mathbf{\mathrm{e}})}\Big{]}\] \[+(L_{\mathbf{e}}-1)\mathbb{P}\Big{[}i_{\text{SIC}}^{(\mathbf{\mathrm{e}} )}\{\{\mathbf{X}_{b}^{(\mathbf{\mathrm{e}})}\}_{b\notin B_{\text{d}}},\{\bar{\mathbf{X}}_{b }^{(\mathbf{\mathrm{e}})}\}_{b\in B_{\text{dt}}};\] \[\mathbf{Y}_{b}^{n_{\mathbf{e}}}|B_{\text{dt}},B_{\text{dc}},\{\mathbf{V}_{b} \}_{b\in B_{\text{dt}}})\geq\tilde{\gamma}^{(\mathbf{\mathrm{e}})}\Big{]} \tag{79}\]
where for each \(b\), \(\bar{\mathbf{X}}_{b}^{(\mathbf{\mathrm{e}},1)}\sim f_{\mathbf{X}_{b}^{(\mathbf{\mathrm{e}},1)}}\) and \(\bar{\mathbf{X}}_{b}^{(\mathbf{\mathrm{e}},2)}\sim f_{\mathbf{X}_{b}^{(\mathbf{\mathrm{e}},2)}}\) and are independent of \((\mathbf{X}_{b}^{(\mathbf{\mathrm{e}},1)},\mathbf{X}_{b}^{(\mathbf{\mathrm{e}},2)},\mathbf{Y}^{n_{ \mathbf{e}}})\). We use the following two lemmas to bound the above two probability terms.
**Lemma 7**: _Given \(\tilde{\gamma}^{(\mathbf{\mathrm{e}})}\), we prove that_
\[\mathbb{P}\Big{[}i_{\text{SIC}}^{(\mathbf{\mathrm{e}})}\{\{\mathbf{X}_{b }^{(\mathbf{\mathrm{e}},1)}\}_{b\notin B_{\text{d}}},\{\mathbf{X}_{b}^{(\mathbf{\mathrm{e}},2)}\}_{b\in B_{\text{d}}};\{\mathbf{Y}_{b}\}_{b=1}^{\eta+1}\] \[|B_{\text{dt}},\{\mathbf{V}_{b}\}_{b\in B_{\text{d}}})<\tilde{\gamma}^{( \mathbf{\mathrm{e}})}\Big{]}\] \[\leq\frac{\mu T}{\tilde{\mu}}-\nu \tag{80}\]
_where \(T\), \(\nu,\mu\) and \(\tilde{\mu}\) are defined in (30)._
See Appendix F.
**Lemma 8**: _We can prove that_
\[\mathbb{P}\Big{[}i_{\text{SIC}}^{(\mathsf{e})}\big{(}\{\mathbf{X}_{b}^{(\mathsf{e})}\}_{b\notin B_{\text{dt}}},\{\bar{\mathbf{X}}_{b,2}^{(\mathsf{e})}\}_{b\in B_{\text{dt}}};\{\mathbf{Y}_{b}\}_{b=1}^{\eta+1}|B_{\text{dt}},B_{\text{dc}},\{\mathbf{V}_{b}\}_{b\in B_{\text{dc}}}\big{)}\geq\tilde{\gamma}^{(\mathsf{e})}\Big{]}\leq e^{-\tilde{\gamma}^{(\mathsf{e})}}. \tag{81}\]
## Appendix A Proof of Lemma 1
By (41) and since \(\mathbf{X}_{b}^{(\mathsf{e},2)}\) and \(\mathbf{V}_{b}\) are drawn uniformly on the \(n_{\mathsf{U}}\)-dimensional spheres of radii \(\sqrt{n_{\mathsf{U}}\beta_{\mathsf{e}}\mathsf{P}}\) and \(\sqrt{n_{\mathsf{U}}(\beta_{\mathsf{U}}+\alpha^{2}\beta_{\mathsf{e}})\mathsf{P}}\), the error event \(\mathcal{E}_{b,v}\) holds whenever the following condition is violated:
\[\alpha\beta_{\mathsf{e}}n_{\mathsf{U}}\mathsf{P}\leq\langle\mathbf{V}_{b},\mathbf{X}_{ b}^{(\mathsf{e},2)}\rangle\leq\alpha\beta_{\mathsf{e}}n_{\mathsf{U}}\mathsf{P}+ \frac{\delta_{b}}{2\alpha}. \tag{82}\]
The distribution of \(\langle\mathbf{V}_{b},\mathbf{X}_{b}^{(\mathsf{e},2)}\rangle\) depends on \(\mathbf{V}_{b}\) only through its magnitude, because \(\mathbf{X}_{b}^{(\mathsf{e},2)}\) is uniform over a sphere and applying an orthogonal transformation to \(\mathbf{V}_{b}\) and \(\mathbf{X}_{b}^{(\mathsf{e},2)}\) does neither change the inner product of the two vectors nor the distribution of \(\mathbf{X}_{b}^{(\mathsf{e},2)}\). In the following we therefore assume that \(\mathbf{V}_{b}=(||\mathbf{V}_{b}||,0,\ldots,0)\), in which case (82) is equivalent to:
\[\frac{\alpha\beta_{\mathsf{e}}n_{\mathsf{U}}\mathsf{P}}{\sqrt{\beta_{\mathsf{ v}}n_{\mathsf{U}}\mathsf{P}}}\leq X_{b,2,1}^{(\mathsf{e})}\leq\frac{\alpha\beta_{ \mathsf{e}}n_{\mathsf{U}}\mathsf{P}}{\sqrt{\beta_{\mathsf{v}}n_{\mathsf{U}} \mathsf{P}}}+\frac{\delta_{b}}{2\alpha\sqrt{\beta_{\mathsf{v}}n_{\mathsf{U}} \mathsf{P}}} \tag{83}\]
where \(X_{b,2,1}^{(\mathsf{e})}\) is the first entry of the vector \(\mathbf{X}_{b}^{(\mathsf{e},2)}\).
The distribution of a given symbol in a length-\(n_{\mathsf{U}}\) random sequence distributed uniformly on the sphere is [15]
\[f_{X_{b,2,1}^{(\mathsf{e})}}\left(x_{b,2,1}^{(\mathsf{e})}\right) = \frac{1}{\sqrt{\pi n_{\mathsf{U}}\beta_{\mathsf{e}}\mathsf{P}}}\frac{\Gamma(\frac{n_{\mathsf{U}}}{2})}{\Gamma(\frac{n_{\mathsf{U}}-1}{2})}\left(1-\frac{(x_{b,2,1}^{(\mathsf{e})})^{2}}{n_{\mathsf{U}}\beta_{\mathsf{e}}\mathsf{P}}\right)^{\frac{n_{\mathsf{U}}-3}{2}} \tag{84}\] \[\times 1\{(x_{b,2,1}^{(\mathsf{e})})^{2}\leq n_{\mathsf{U}}\beta_{\mathsf{e}}\mathsf{P}\}.\]
Thus,
\[\mathbb{P}\left[\mathbf{V}_{b}-\alpha\mathbf{X}_{b}^{(\mathsf{e},2)}\in\mathcal{D}_{b}\right] =\int_{\frac{\alpha\beta_{\mathsf{e}}n_{\mathsf{U}}\mathsf{P}}{\sqrt{\beta_{\mathsf{v}}n_{\mathsf{U}}\mathsf{P}}}}^{\frac{\alpha\beta_{\mathsf{e}}n_{\mathsf{U}}\mathsf{P}}{\sqrt{\beta_{\mathsf{v}}n_{\mathsf{U}}\mathsf{P}}}+\frac{\delta_{b}}{2\alpha\sqrt{\beta_{\mathsf{v}}n_{\mathsf{U}}\mathsf{P}}}}f_{X_{b,2,1}^{(\mathsf{e})}}\left(x\right)\mathrm{d}x \tag{85}\] \[=\frac{1}{\sqrt{\pi}}\frac{\Gamma(\frac{n_{\mathsf{U}}}{2})}{\Gamma(\frac{n_{\mathsf{U}}-1}{2})}\kappa_{\frac{n_{\mathsf{U}}-3}{2}}\left(\frac{2\alpha^{2}n_{\mathsf{U}}\mathsf{P}\beta_{\mathsf{e}}+\delta_{b}}{2\alpha n_{\mathsf{U}}\mathsf{P}\sqrt{\beta_{\mathsf{v}}\beta_{\mathsf{e}}}}\right)-\frac{1}{\sqrt{\pi}}\frac{\Gamma(\frac{n_{\mathsf{U}}}{2})}{\Gamma(\frac{n_{\mathsf{U}}-1}{2})}\kappa_{\frac{n_{\mathsf{U}}-3}{2}}\left(\alpha\sqrt{\frac{\beta_{\mathsf{e}}}{\beta_{\mathsf{v}}}}\right), \tag{86}\]
where
\[\kappa_{n}(x)=\frac{x(1-x^{2})^{n}}{2n+1}+\frac{2n}{2n+1}\kappa_{n-1}(x) \tag{87}\]
with \(\kappa_{0}(x)=x\). This concludes the proof.
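For numerical evaluation of \(\zeta\) in (30c), the recursion (87) can be implemented directly. The following Julia sketch assumes an integer recursion order (i.e., odd \(n_{\mathsf{U}}\)); it is illustrative only.

```julia
# kappa_n(x) from the recursion (87), with kappa_0(x) = x.
function kappa(n::Integer, x::Real)
    k = x
    for m in 1:n
        k = x * (1 - x^2)^m / (2m + 1) + 2m / (2m + 1) * k
    end
    return k
end

# Example: kappa_1(0.3) = 0.3 * (1 - 0.09) / 3 + (2 / 3) * 0.3 = 0.291
println(kappa(1, 0.3))
```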
## Appendix B Proof of Lemma 2
Note that \(\mathbf{Y}_{b}\) and \(\mathbf{Y}_{b}|\mathbf{V}_{b}\) do not follow a Gaussian distribution. Define
\[Q_{\mathbf{Y}_{b}}(\mathbf{y}_{b}) = \mathcal{N}(\mathbf{y}_{b};\mathbf{0},I_{n_{\mathsf{U}}}\sigma^{2})\] (88) \[Q_{\mathbf{Y}_{b}|\mathbf{V}_{b}}(\mathbf{y}_{b}|\mathbf{v}_{b}) = \mathcal{N}(\mathbf{y}_{b};h\mathbf{v}_{b},I_{n_{\mathsf{U}}}\sigma_{3}^{2})\] (89)
with \(\sigma^{2}=h^{2}\mathsf{P}+1\) and \(\sigma_{3}^{2}=h^{2}(1-\alpha)^{2}\beta_{\mathsf{e}}\mathsf{P}+1\).
Introduce
\[\tilde{i}_{b}^{(\mathsf{U})}(\mathbf{v}_{b};\mathbf{y}_{b}):=\ln\frac{Q_{\mathbf{Y}_{b}|\mathbf{ V}_{b}}(\mathbf{y}_{b}|\mathbf{v}_{b})}{Q_{\mathbf{Y}_{b}}(\mathbf{y}_{b})}. \tag{90}\]
**Lemma 9**: _We can prove that_
\[i_{b}^{(\mathsf{U})}(\mathbf{v}_{b};\mathbf{y}_{b})\geq\tilde{i}_{b}^{(\mathsf{U})}( \mathbf{v}_{b};\mathbf{y}_{b})+\ln J_{\mathsf{U}}, \tag{91}\]
_where_
\[J_{\mathsf{U}}:=\frac{\pi\sqrt{\beta_{\mathsf{U}}\beta_{\mathsf{e}}}2^{\frac{n_{ \mathsf{U}}+1}{2}}}{9h^{2}(1-\alpha)(\beta_{\mathsf{v}}+(1-\alpha)^{2}\beta_{ \mathsf{e}})} \tag{92}\]
_Proof:_ By [16, Proposition 2]:
\[\frac{f_{\mathbf{Y}_{b}}(\mathbf{y}_{b})}{Q_{\mathbf{Y}_{b}}(\mathbf{y}_{b})}\leq\frac{9((1- \alpha)h^{n_{\mathsf{U}}}}{2\pi\sqrt{2}}\frac{\beta_{\mathsf{v}}\mathsf{P}+(1- \alpha)^{2}\beta_{\mathsf{e}}\mathsf{P}}{(1-\alpha)\mathsf{P}\sqrt{\beta_{ \mathsf{v}}\beta_{\mathsf{e}}}}. \tag{93}\]
By [17, Lemma 5]:
\[\frac{f_{\mathbf{Y}_{b}|\mathbf{V}_{b}}(\mathbf{y}_{b}|\mathbf{v}_{b})}{Q_{\mathbf{Y}_{b}|\mathbf{V}_{b}}( \mathbf{y}_{b}|\mathbf{v}_{b})}\geq 2^{\frac{n_{\mathsf{U}}-2}{2}}\left(h(1-\alpha)\right)^{n_{ \mathsf{U}}-2}e^{-\frac{h^{2}(1-\alpha)^{2}\beta_{\mathsf{e}}\rho_{\mathsf{U}}}{2}} \tag{94}\]
Combining the two bounds concludes the proof.
As a result, we have
\[\mathbb{P}[i_{b}^{(\mathsf{U})}(\mathbf{V}_{b};\mathbf{Y}_{b})\leq\gamma^{(\mathsf{U})}]\] (95) \[\leq \mathbb{P}[\tilde{i}_{b}^{(\mathsf{U})}(\mathbf{V}_{b};\mathbf{Y}_{b})\leq\gamma^{(\mathsf{U})}-\ln J_{\mathsf{U}}]\] (96) \[= \mathbb{P}\left[\ln\frac{Q_{\mathbf{Y}_{b}|\mathbf{V}_{b}}(\mathbf{Y}_{b}|\mathbf{V}_{b})}{Q_{\mathbf{Y}_{b}}(\mathbf{Y}_{b})}\leq\gamma^{(\mathsf{U})}-\ln J_{\mathsf{U}}\right]\] (97) \[= \mathbb{P}\left[\ln\frac{\frac{1}{\left(\sqrt{2\pi\sigma_{3}^{2}}\right)^{n_{\mathsf{U}}}}\exp\left(-\frac{||\mathbf{Y}_{b}-h\mathbf{V}_{b}||^{2}}{2\sigma_{3}^{2}}\right)}{\frac{1}{\left(\sqrt{2\pi\sigma^{2}}\right)^{n_{\mathsf{U}}}}\exp\left(-\frac{||\mathbf{Y}_{b}||^{2}}{2\sigma^{2}}\right)}\leq\gamma^{(\mathsf{U})}-\ln J_{\mathsf{U}}\right]\] (98) \[= \mathbb{P}\left[\frac{n_{\mathsf{U}}}{2}\ln\frac{\sigma^{2}}{\sigma_{3}^{2}}+\frac{||\mathbf{Y}_{b}||^{2}}{2\sigma^{2}}-\frac{||\mathbf{Y}_{b}-h\mathbf{V}_{b}||^{2}}{2\sigma_{3}^{2}}\leq\gamma^{(\mathsf{U})}-\ln J_{\mathsf{U}}\right]\]
where
\[\mu_{\mathsf{U}} :=\frac{2\sigma^{2}\sigma_{3}^{2}}{h^{2}(\sigma^{2}-\sigma_{3}^{2})}\left(\frac{n_{\mathsf{U}}}{2}\ln\frac{\sigma^{2}}{\sigma_{3}^{2}}-\gamma^{(\mathsf{U})}+\ln J_{\mathsf{U}}\right)\] \[\quad\quad+\frac{\sigma_{3}^{2}}{\sigma^{2}-\sigma_{3}^{2}}\left(n_{\mathsf{U}}\mathsf{P}(\sqrt{\beta_{\mathsf{v}}}-\sqrt{\beta_{\mathsf{e}}})^{2}-\delta_{b}\right)\] \[\quad\quad-\frac{\sigma^{2}n_{\mathsf{U}}\beta_{\mathsf{e}}\mathsf{P}(1-\alpha)^{2}}{\sigma^{2}-\sigma_{3}^{2}}\] \[u :=\frac{2\sqrt{n_{\mathsf{U}}\mathsf{P}}\left(\sigma_{3}^{2}(\sqrt{\beta_{\mathsf{v}}}+\sqrt{\beta_{\mathsf{e}}})+\sigma^{2}\sqrt{\beta_{\mathsf{e}}}(1-\alpha)\right)}{h(\sigma^{2}-\sigma_{3}^{2})}\]
Notice that in (104) we use the fact that \(||\mathbf{Z}_{b}||\) follows a chi-distribution with \(n_{\mathsf{U}}\) degrees of freedom and \(F(\cdot)\) denotes its CDF.
## Appendix C Proof of Lemma 3
By Bayes' rule we have
\[f_{\mathbf{V}_{b}}(\bar{\mathbf{v}}_{b}) =\frac{f_{\mathbf{Y}_{b}}(\mathbf{y}_{b})f_{\mathbf{V}_{b}|\mathbf{Y}_{b}}(\bar{ \mathbf{v}}_{b}|\mathbf{y}_{b})}{f_{\mathbf{Y}_{b}|\mathbf{V}_{b}}(\mathbf{y}_{b}|\bar{\mathbf{v}}_{b})} \tag{106}\] \[=f_{\mathbf{V}_{b}|\mathbf{Y}_{b}}(\bar{\mathbf{v}}_{b}|\mathbf{y}_{b})\exp\left( -i(\bar{\mathbf{v}}_{b},\mathbf{y}_{b})\right). \tag{107}\]
By multiplying both sides of the above equation by \(\mathbbm{1}\{i(\bar{\mathbf{v}}_{b},\mathbf{y}_{b})>\gamma\}\) and integrating over all \(\bar{\mathbf{v}}_{b}\), we have
\[\int_{\bar{\mathbf{v}}_{b}}\mathbbm{1}\{i(\bar{\mathbf{v}}_{b},\mathbf{y}_{b}) >\gamma\}f_{\mathbf{V}_{b}}(\bar{\mathbf{v}}_{b})d\bar{\mathbf{v}}_{b}=\] \[\int_{\bar{\mathbf{v}}_{b}}\mathbbm{1}\{i(\bar{\mathbf{v}}_{b},\mathbf{y}_{b} )>\gamma\}e^{-i(\bar{\mathbf{v}}_{b},\mathbf{y}_{b})}f_{\mathbf{V}_{b}|\mathbf{Y}_{b}}(\bar{ \mathbf{v}}_{b}|\mathbf{y}_{b})d\bar{\mathbf{v}}_{b}. \tag{108}\]
Note that the left-hand side of (108) is equivalent to \(\mathbb{P}[i(\bar{\mathbf{v}}_{b},\mathbf{y}_{b})>\gamma|\mathbf{Y}_{b}=\mathbf{y}_{b}]\). Thus
\[\mathbb{P}[i(\bar{\mathbf{v}}_{b},\mathbf{y}_{b})>\gamma|\mathbf{Y}_{b}=\mathbf{y}_{b}] \tag{109}\] \[=\int_{\bar{\mathbf{v}}_{b}}\mathbbm{1}\left\{i(\bar{\mathbf{v}}_{b},\mathbf{y}_{b})>\gamma\right\}\] \[\quad\quad\times\exp\left(-i(\bar{\mathbf{v}}_{b},\mathbf{y}_{b})\right)f_{\mathbf{V}_{b}|\mathbf{Y}_{b}}(\bar{\mathbf{v}}_{b}|\mathbf{y}_{b})d\bar{\mathbf{v}}_{b}\] (110) \[=\int_{\bar{\mathbf{v}}_{b}}\mathbbm{1}\left\{\frac{f_{\mathbf{Y}_{b}|\mathbf{V}_{b}}(\mathbf{y}_{b}|\bar{\mathbf{v}}_{b})}{f_{\mathbf{Y}_{b}}(\mathbf{y}_{b})}e^{-\gamma}>1\right\}\] \[\quad\quad\times\exp\left(-i(\bar{\mathbf{v}}_{b},\mathbf{y}_{b})\right)f_{\mathbf{V}_{b}|\mathbf{Y}_{b}}(\bar{\mathbf{v}}_{b}|\mathbf{y}_{b})d\bar{\mathbf{v}}_{b}\] (111) \[\leq\int_{\bar{\mathbf{v}}_{b}}\frac{f_{\mathbf{Y}_{b}|\mathbf{V}_{b}}(\mathbf{y}_{b}|\bar{\mathbf{v}}_{b})}{f_{\mathbf{Y}_{b}}(\mathbf{y}_{b})}e^{-\gamma}\] \[\quad\quad\quad\times\exp\left(-i(\bar{\mathbf{v}}_{b},\mathbf{y}_{b})\right)f_{\mathbf{V}_{b}|\mathbf{Y}_{b}}(\bar{\mathbf{v}}_{b}|\mathbf{y}_{b})d\bar{\mathbf{v}}_{b}\] (112) \[=\int_{\bar{\mathbf{v}}_{b}}e^{-\gamma}f_{\mathbf{V}_{b}|\mathbf{Y}_{b}}(\bar{\mathbf{v}}_{b}|\mathbf{y}_{b})d\bar{\mathbf{v}}_{b}\] (113) \[=e^{-\gamma}. \tag{114}\]
## Appendix D Proof of Lemma 4
We start by analyzing the quantities in \(\rho_{\mathsf{U}}\), \(\rho_{\mathsf{det},0}\) and \(\rho_{\mathsf{det},1}\) defined in (52a), (52b) and (52c).
#### c.0.1 Analyzing \(\rho_{\mathsf{U}}\)
\[\rho_{\mathsf{U}} =\rho\cdot\mathbb{P}[\exists\;j\in[L_{v}]\;\text{s.t.}\;\mathbf{X}_ {b}^{(\mathsf{U})}(\mathbf{V}_{b}(M_{b}^{(\mathsf{U})},j))\in\mathcal{D}_{b}] \tag{115}\] \[=\rho(1-(1-\zeta)^{L_{v}}) \tag{116}\]
where the last equality is by (43).
#### c.0.2 Bounding \(\rho_{\mathsf{det},0}\)
\[\rho_{\mathsf{det},0}\] \[=\mathbb{P}[b\in\mathcal{B}_{\mathsf{det}}\,|\,b\in\mathcal{B}_{\mathsf{sen}}] \tag{117}\] \[=1-\mathbb{P}[\forall m,\forall j:i_{b}^{(\mathsf{U})}(\mathbf{V}_{b}(m,j);\mathbf{Y}_{b})\leq\gamma^{(\mathsf{U})}|b\in\mathcal{B}_{\mathsf{sen}}]\] (118) \[\geq 1-\left(1-G\left(\frac{n_{\mathsf{U}}}{2},\lambda(\mu_{\mathsf{U}})\right)+G\left(\frac{n_{\mathsf{U}}}{2},\tilde{\lambda}(\mu_{\mathsf{U}})\right)\right)^{L_{v}L_{\mathsf{U}}} \tag{119}\]
where (119) is by (46).
**Lemma 10**: _For any \(\gamma^{(\mathsf{U})}>0\):_
\[\mathbb{P}[i_{b}^{(\mathsf{U})}(\mathbf{V}_{b}(m,j);\mathbf{Y}_{b})\leq \gamma^{(\mathsf{U})}]\] \[\geq 1-G\left(\frac{n_{\mathsf{U}}}{2},\tilde{\lambda}(\tilde{\mu} _{\mathsf{U}})\right)+G\left(\frac{n_{\mathsf{U}}}{2},\lambda(\tilde{\mu}_{ \mathsf{U}})\right) \tag{120}\]
_where \(G(\cdot,\cdot)\) is the regularized gamma function, \(\lambda(\cdot)\) and \(\tilde{\lambda}(\cdot)\) are defined in (31) and \(\tilde{\mu}_{\mathsf{U}}\) is defined in (30)._
The proof is similar to the proof of Lemma 2. We present a sketch of the proof.
We start by upper bounding
\[i_{b}^{(\mathsf{U})}(\mathbf{v}_{b};\mathbf{y}_{b})\leq\tilde{i}_{b}^{(\mathsf{U})}(\mathbf{v}_ {b};\mathbf{y}_{b})+\ln\tilde{J}_{\mathsf{U}}, \tag{121}\]
where by [16, Proposition 2] and [17, Lemma 6] we can prove that
\[\tilde{J}_{\mathsf{U}}:=\frac{27\sqrt{\pi}(1+h^{2}(1-\alpha)^{2}\beta_{\mathsf{e}}\mathsf{P})e^{n_{\mathsf{U}}h^{2}\mathsf{P}(\beta_{\mathsf{e}}+(1-\alpha)^{2}\beta_{\mathsf{e}})}}{2(h^{2}(1-\alpha))^{n_{\mathsf{U}}-2}\sqrt{8(1+2h^{2}(1-\alpha)^{2}\beta_{\mathsf{e}}\mathsf{P})}}. \tag{122}\]
Thus
\[\mathbb{P}[i_{b}^{(\mathsf{U})}(\mathbf{V}_{b};\mathbf{Y}_{b})\leq\gamma^{(\mathsf{U})}]\] (123) \[\geq\mathbb{P}[\tilde{i}_{b}^{(\mathsf{U})}(\mathbf{V}_{b};\mathbf{Y}_{b})\leq\gamma^{(\mathsf{U})}-\ln\tilde{J}_{\mathsf{U}}]\] (124) \[=\mathbb{P}\left[||\mathbf{Z}_{b}||^{2}-u||\mathbf{Z}_{b}||\geq\tilde{\mu}_{\mathsf{U}}\right] \tag{125}\]
## Appendix E Proof of Lemma 5
Notice that for each \(b\in[1:\eta+1]\), \(\mathbf{Y}_{b}\) does not follow a Gaussian distribution, and for \(b\in B_{\text{d}}\), neither does \(\mathbf{Y}_{b}|\mathbf{X}_{b}^{(\text{e},2)}\). Define \(Q_{\mathbf{Y}_{b}}(\mathbf{y}_{b})\) as in (88) and
\[Q_{\mathbf{Y}_{b}|\mathbf{X}_{b}^{(\text{e},2)}}(\mathbf{y}_{b}|\mathbf{x}_{b}^{( \text{e},2)})=\mathcal{N}(\mathbf{y}_{b};h(1-\alpha)\mathbf{X}_{b}^{(\text{e},2)},I_{n _{0}}\sigma_{2}^{2}) \tag{133}\]
with \(\sigma_{2}^{2}=h^{2}\beta\mathsf{P}+1\).
Introduce
\[\tilde{\imath}_{\text{TN}}^{(\text{e})}\left(\{\mathbf{x}_{b}^{(\text{e},1)}\}_{b\notin B_{\text{d}}},\{\mathbf{x}_{b}^{(\text{e},2)}\}_{b\in B_{\text{d}}};\{\mathbf{y}_{b}\}_{b=1}^{\eta+1}|B_{\text{d}}\right)\] \[=\ln\prod_{b\notin B_{\text{d}}}\frac{f_{\mathbf{Y}_{b}|\mathbf{X}_{b}^{(\text{e},1)}}(\mathbf{y}_{b}|\mathbf{x}_{b}^{(\text{e},1)})}{Q_{\mathbf{Y}_{b}}(\mathbf{y}_{b})}+\ln\prod_{b\in B_{\text{d}}}\frac{Q_{\mathbf{Y}_{b}|\mathbf{X}_{b}^{(\text{e},2)}}(\mathbf{y}_{b}|\mathbf{x}_{b}^{(\text{e},2)})}{Q_{\mathbf{Y}_{b}}(\mathbf{y}_{b})} \tag{134}\]
**Lemma 11**: _We can prove that_
\[i_{\text{TN}}^{(\text{e})}\left(\{\mathbf{x}_{b}^{(\text{e},1)}\}_{b\notin B_{\text{d}}},\{\mathbf{x}_{b}^{(\text{e},2)}\}_{b\in B_{\text{d}}};\{\mathbf{y}_{b}\}_{b=1}^{\eta+1}|B_{\text{d}}\right)\] \[\geq\tilde{\imath}_{\text{TN}}^{(\text{e})}\left(\{\mathbf{x}_{b}^{(\text{e},1)}\}_{b\notin B_{\text{d}}},\{\mathbf{x}_{b}^{(\text{e},2)}\}_{b\in B_{\text{d}}};\{\mathbf{y}_{b}\}_{b=1}^{\eta+1}|B_{\text{d}}\right)+\ln J_{\text{e}}, \tag{135}\]
_where_
\[J_{\text{e}}:=\left(\frac{\pi 2^{\frac{n_{0}+1}{2}}e^{\frac{-n_{0}\mu_{0}}{2}}\sqrt{\beta_{\text{v}}\beta_{\text{e}}}}{9h^{2}(1-\alpha)^{n_{0}-1}(\beta_{\text{v}}+(1-\alpha)^{2}\beta_{\text{e}})}\right)^{k}\] \[\quad\quad\cdot\left(\frac{\sqrt{8(1+2h^{2}\mathsf{P})}}{27\sqrt{\pi}(1+h^{2}\mathsf{P})}\right)^{\eta-k} \tag{136}\]
_Proof:_ similar to the proof of Lemma 9 and by [16, Proposition 2], for \(b\notin B_{\text{d}}\):
\[\frac{f_{\mathbf{Y}_{b}}(\mathbf{y}_{b})}{Q_{\mathbf{Y}_{b}}(\mathbf{y}_{b})}\leq \frac{27\sqrt{\pi}(1+h^{2}\mathsf{P})}{\sqrt{8(1+2h^{2}\mathsf{P})}}. \tag{137}\]
As a result, we have
\[\mathbb{P}\left[i_{\text{TN}}^{(\text{e})}\left(\{\mathbf{X}_{b}^{(\text{e},1)}\}_{b\notin B_{\text{d}}},\{\mathbf{X}_{b}^{(\text{e},2)}\}_{b\in B_{\text{d}}};\mathbf{Y}^{n_{\text{e}}}|B_{\text{d}}\right)<\gamma^{(\text{e})}\right]\] \[\leq\mathbb{P}\left[\tilde{\imath}_{\text{TN}}^{(\text{e})}\left(\{\mathbf{X}_{b}^{(\text{e},1)}\}_{b\notin B_{\text{d}}},\{\mathbf{X}_{b}^{(\text{e},2)}\}_{b\in B_{\text{d}}};\mathbf{Y}^{n_{\text{e}}}|B_{\text{d}}\right)<\gamma^{(\text{e})}-\ln J_{\text{e}}\right]\]
\[\mathbb{E}[\tilde{Z}_{2}] = kn_{\mathsf{U}}, \tag{147}\] \[\mathbb{E}[||\mathbf{Z}_{b}||] = \frac{\sqrt{2}\Gamma\left(\frac{n_{\mathsf{U}}+1}{2}\right)}{\Gamma\left(\frac{n_{\mathsf{U}}}{2}\right)}. \tag{148}\]
## Appendix F Proof of Lemma 7
Define \(Q_{\mathbf{Y}_{b}}(\mathbf{y}_{b})\) as in (88), \(Q_{\mathbf{Y}_{b}|\mathbf{V}_{b}}(\mathbf{y}_{b}|\mathbf{v}_{b})\) as in (89) and \(Q_{\mathbf{Y}_{b}|\mathbf{X}_{b}^{(\mathbf{x},2)}}(\mathbf{y}_{b}|\mathbf{x}_{b}^{(\mathbf{e},2)})\) as in (133).
Introduce
\[+\frac{\mathbb{E}\left[\frac{(1-\alpha)\sqrt{n_{\mathsf{b}}\beta \mathsf{P}}}{\sigma_{3}^{2}}\sum_{b\in B_{\mathsf{k}}}||\mathbf{Z}_{b}||\right]}{ \widetilde{\mu}}\] \[+\frac{\mathbb{E}\left[\frac{\sigma_{3}^{2}-1}{2\sigma_{3}^{2}} \sum_{b\in B_{\mathsf{k}}}||\mathbf{Z}_{b}||^{2}\right]}{\widetilde{\mu}}\] \[= \frac{(n_{\mathsf{e}}-kn\upsilon)(\sigma^{2}-1)}{2\sigma^{2} \widetilde{\mu}}+\frac{(\eta+1-k)\sqrt{n_{\mathsf{b}}\mathsf{P}}}{\sigma^{2} \widetilde{\mu}}\frac{\sqrt{2}\Gamma\left(\frac{n_{\mathsf{b}}+1}{2}\right)}{ \Gamma\left(\frac{n_{\mathsf{b}}}{2}\right)}\] \[+\frac{k\tau}{\widetilde{\mu}}\frac{\sqrt{2}\Gamma\left(\frac{n \upsilon+1}{2}\right)}{\Gamma\left(\frac{n_{\mathsf{b}}}{2}\right)}+\frac{kn _{\mathsf{U}}(\sigma^{2}-\sigma_{2}^{2})}{2\sigma^{2}\widetilde{\mu}}\] \[-\frac{\tilde{k}}{\widetilde{\mu}}\frac{\sqrt{2}\Gamma\left( \frac{n\upsilon+1}{2}\right)}{\Gamma\left(\frac{n_{\mathsf{b}}}{2}\right)} \left(\tau-\frac{(1-\alpha)\sqrt{n_{\mathsf{b}}\beta_{\mathsf{e}}\mathsf{P}}}{ \sigma_{3}^{2}}\right)\] \[-\frac{n_{\mathsf{b}}\tilde{k}}{\widetilde{\mu}}\left(\frac{ \sigma^{2}-\sigma_{2}^{2}}{2\sigma^{2}\sigma_{2}^{2}}-\frac{\sigma_{3}^{2}-1}{ 2\sigma_{3}^{2}}\right) \tag{158}\]
where
\[\tilde{\mu}:=\frac{n_{\mathsf{e}}-kn_{\mathsf{U}}}{2}\ln\sigma^{ 2}+\frac{(k-\tilde{k})n_{\mathsf{U}}}{2}\ln\frac{\sigma^{2}}{\sigma_{2}^{2}}+ \frac{\tilde{k}n_{\mathsf{U}}}{2}\ln\sigma_{3}^{2}\] \[-\frac{\eta-k}{2\sigma^{2}}n_{\mathsf{U}}\mathsf{P}+\frac{k- \tilde{k}}{2\sigma_{2}^{2}}\beta_{\mathsf{V}}n_{\mathsf{U}}\mathsf{P}-\frac{ \tilde{k}(1-\alpha)^{2}n_{\mathsf{U}}\mathsf{P}\beta_{\mathsf{e}}}{2\sigma_{3 }^{2}}\] \[-\frac{k-\tilde{k}}{2\sigma^{2}}\left(\sqrt{\beta_{\mathsf{V}}}+( 1-\alpha)\sqrt{\beta_{\mathsf{e}}}\right)^{2}n_{\mathsf{U}}\mathsf{P}-\tilde{ \gamma}^{(\mathsf{e})}+\ln\tilde{J}_{\mathsf{e}}.\]
This concludes the proof.
|
2306.06127 | A framework of windowed octonion linear canonical transform | The uncertainty principle is a fundamental principle in theoretical physics,
such as quantum mechanics and classical mechanics. It plays a prime role in
signal processing, including optics, where a signal is to be analyzed
simultaneously in both domains; for instance, in harmonic analysis, both time
and frequency domains, and in quantum mechanics, both time and momentum. On the
other hand, many mathematicians, physicists, and other related domain
researchers have paid more attention to the octonion-related integral
transforms in recent years. In this paper, we define important properties of
the windowed octonion linear canonical transform (WOCLCT), such as inversion,
linearity, parity, shifting, and the relationship between OCLCT and WOCLCT.
Further, we derived sharp Pitt's and sharp Young-Hausdorff inequalities for 3D
WOCLCT. We obtain the logarithmic uncertainty principle for the 3D WOCLCT.
Furthermore, Heisenberg's and Donoho-Stark's uncertainty principles are derived
for WOCLCT, and the potential applications of WOCLCT are also discussed. | Manish Kumar, Bhawna | 2023-06-08T07:49:08Z | http://arxiv.org/abs/2306.06127v1 | ###### Abstract
The uncertainty principle is a fundamental principle in theoretical physics, such as quantum mechanics and classical mechanics. It plays a prime role in signal processing, including optics, where a signal is to be analyzed simultaneously in both domains; for instance, in harmonic analysis, both time and frequency domains, and in quantum mechanics, both time and momentum. On the other hand, many mathematicians, physicists, and other related domain researchers have paid more attention to the octonion-related integral transforms in recent years. In this paper, we define important properties of the windowed octonion linear canonical transform (WOCLCT), such as inversion, linearity, parity, shifting, and the relationship between OCLCT and WOCLCT. Further, we derived sharp Pitt's and sharp Young-Hausdorff inequalities for 3D WOCLCT. We obtain the logarithmic uncertainty principle for the 3D WOCLCT. Furthermore, Heisenberg's and Donoho-Stark's uncertainty principles are derived for WOCLCT, and the potential applications of WOCLCT are also discussed.
**A framework of windowed octonion linear canonical transform**
**Manish Kumar* and Bhawna**
Department of Mathematics, Birla Institute of Technology and Science-Pilani, Hyderabad Campus, Hyderabad-500078, Telangana, India
*Corresponding author: [email protected]
**MSC**: 46F12, 53D22.
**Keywords**: Octonion linear canonical transform, Sharp Pitt's inequality, Logarithmic uncertainty principle, Sharp Young-Hausdorff inequality, Heisenberg's uncertainty principle, Donoho-Stark's uncertainty principle.
## 1 Introduction
Many interesting physical and engineering systems are characterized by a wide range of multi-channel signals (for instance, seismic signals have four channels, a color image has three channels, etc.). Sometimes these multi-channel signals with several components must be controlled simultaneously (for instance, image encryption see [1, 2, 3, 4, 6]). However, implementation becomes challenging, especially for problems dealing with multi-channel signals. Taking each channel at a time and considering its integral transform does not yield a desirable outcome. Applied mathematicians and engineers encounter this problem in several applications of practical interest, such as structural design, predicting earthquakes using seismic signals, computer graphics, aerospace engineering, quantum mechanics, time-frequency analysis, optics, signal processing, image processing and enhancement, pattern recognition, artificial intelligence, etc. On the other hand, one can see many real-life applications based on hyper-complex algebra-based transforms where multichannel components need to be processed simultaneously (see, for more details [1, 2, 3, 4, 5, 6] and references therein). Motivated by lack of
methods for processing multi-channel signals simultaneously, the windowed octonion linear canonical transform (WOCLCT) appears to be a promising tool.
The WOCLCT is a family of integral transforms that generalizes many existing transforms, including the quaternion windowed linear canonical transform (QWLCT) [7], the quaternion linear canonical transform (QLCT) [8], the quaternion Fourier transform (QFT) [9, 10, 11], the quaternion fractional Fourier transform (QFRFT) [12], the octonion Fourier transform (OFT) [13], the windowed linear canonical transform (WLCT) [14], and many more. In particular, the WOCLCT extends the WLCT to octonions. The octonions form a non-ordered, non-commutative, non-associative, alternative algebra of dimension eight with no non-trivial zero divisors, which generalizes the real numbers, the complex numbers, and the quaternions. These properties make the WOCLCT useful for analyzing non-commutative systems, such as those found in quantum mechanics and relativistic physics. Uncertainty principles based on the Fourier transform can be found in [15, 16]. Many developments of such integral transforms have been reported in the literature; see [7, 13, 17, 18] for more detail. Applications, examples, and uncertainty principles (Donoho-Stark's, Pitt's, Heisenberg's, Lieb's, and local uncertainty principles, together with a reproducing kernel and a characterization of the range) for the quaternion windowed linear canonical transform (QWLCT) are studied in [7]. In [13], the authors established relations between the QFT and the OFT and explored the Mustard convolution for the OFT, together with several uncertainty principles. In [17], properties and uncertainty principles are derived using logarithmic estimates obtained from a sharp form of Pitt's inequality, and further results on the Hardy-Littlewood-Sobolev inequality and on entropy for the Fourier transform are obtained. In [18], the authors explored important applications and properties of the OCLCT, such as isometry, shifting, inversion, and a Riemann-Lebesgue lemma, and derived uncertainty principles (Heisenberg's and Donoho-Stark's inequalities).
To the best of our knowledge, the theory of the WOCLCT has not yet appeared in the literature and is still an open area for researchers. Motivated by this gap, we first establish important properties of the WOCLCT, such as inversion, linearity, parity, and shifting. We then derive the main inequalities, namely sharp Pitt's and sharp Young-Hausdorff inequalities, together with uncertainty principles (the logarithmic, Heisenberg's, and Donoho-Stark's uncertainty principles) for the 3D WOCLCT, and we also discuss potential applications of the WOCLCT.
## Organization of the work
In section 2, we recall some basic properties and definitions of the 3D OFT and its inverse; we also recall the definition of the 3D OCLCT and its inverse. In section 3, we define the 3D WOCLCT and its inverse, including important properties of the WOCLCT, such as inversion, linearity, parity, and shifting. In section 4, the work's main contributions, namely inequalities and uncertainty principles for the 3D WOCLCT (sharp Pitt's inequality, sharp Young-Hausdorff inequality, the logarithmic uncertainty principle, and Heisenberg's and Donoho-Stark's uncertainty principles), are derived. Moreover, the potential applications of the 3D WOCLCT are discussed in section 5. Finally, we conclude the work in section 6.
## 2 Preliminaries
In this section, we provide basic information on octonion algebra, some basic definitions of the OCLCT, and an important lemma used throughout the work. Historically [19], John T. Graves, a correspondent of Hamilton, discovered the octonions and called them octaves, but did not publish the work until 1845. Arthur Cayley independently published his discovery of the octonions, which came to be called Cayley numbers. Hamilton acknowledged that Cayley published first, although the octonions had been invented earlier by Graves; hence, both were credited with independently discovering the octonions. The octonions are constructed through the Cayley-Dickson process as \(\mathbb{O}=\mathbb{H}+\mathbb{H}e_{4}\).
### Algebra on octonions
Let us consider standard natural basis set \(\{e_{k};\ k=0,1,\ldots,7\}\) to represent octonion numbers. For simplicity, we assume \(e_{0}=1\) and renaming seven independent basis elements are imaginary units, and we could write for every \(z\in\mathbb{O}\) as follows:
\[z=z_{0}+z_{1}e_{1}+z_{2}e_{2}+z_{3}e_{3}+z_{4}e_{4}+z_{5}e_{5}+z_ {6}e_{6}+z_{7}e_{7},\]
Further, the octonion conjugate is defined by:
\[\bar{z}=z_{0}-z_{1}e_{1}-z_{2}e_{2}-z_{3}e_{3}-z_{4}e_{4}-z_{5}e_{ 5}-z_{6}e_{6}-z_{7}e_{7},\]
and satisfies
\[\overline{z_{1}z_{2}}=\bar{z_{2}}\bar{z_{1}}.\]
The norm of an octonion \(|z|\) is defined by:
\[|z|^{2}=z\bar{z}=z_{0}^{2}+z_{1}^{2}+z_{2}^{2}+z_{3}^{2}+z_{4}^{2}+z_{5}^{2}+z_ {6}^{2}+z_{7}^{2},\]
for each \(k=0,1,\ldots,7\), the components \(z_{k}\in\mathbb{R}\), so that \(z\) can be thought of as a point or vector in \(\mathbb{R}^{8}\). The real part of \(z\) is just \(z_{0}\); the imaginary part of \(z\) is everything else. This is similar to a complex number, except that the imaginary part has seven degrees of freedom and can be viewed as a vector in \(\mathbb{R}^{7}\). As we know, \(e_{1},e_{2},\ldots,e_{7}\) are the seven imaginary standard units of the octonion algebra, which is non-associative and non-commutative over the field of real numbers. The multiplication table is provided in figure 1 using a 7-point projective plane: each point corresponds to an imaginary unit, and each line corresponds to a quaternionic triple, with the arrow giving the orientation. As with the other division algebras, the norm satisfies the identity \(|z_{1}z_{2}|=|z_{1}||z_{2}|\). The octonion \(z\in\mathbb{O}\) can also be represented as
\[z=\gamma+\delta e_{4},\]
where \(\gamma=z_{0}+z_{1}e_{1}+z_{2}e_{2}+z_{3}e_{3}\) and \(\delta=z_{4}+z_{5}e_{1}+z_{6}e_{2}+z_{7}e_{3}\) are quaternions, which belongs to \(\mathbb{H}\). Now, we borrow the following Lemma provided in [13].
**Lemma 2.1**.: _Let \(\gamma,\delta\in\mathbb{H}\), then_
\[(i)\ \ e_{4}\gamma=\bar{\gamma}e_{4};\ \ \ \ \ (ii)\ \ e_{4}( \gamma e_{4})=-\bar{\gamma};\ \ \ \ (iii)\ \ (\gamma e_{4})e_{4}=-\gamma;\] \[(iv)\ \ \gamma(\delta e_{4})=(\delta\gamma)e_{4};\ \ \ \ (v)\ \gamma e_{4}( \delta)=(\gamma\bar{\delta})e_{4};\ \ \ \ (vi)\ \ (\gamma e_{4})(\delta e_{4})=-\bar{\delta}\gamma.\]
From this Lemma, one can conclude that
\[\overline{\gamma+\delta e_{4}}=\bar{\gamma}-\delta e_{4};\ \ \ \ \ |\gamma+\delta e_{4}|^{2}=|\gamma|^{2}+|\delta|^{2}.\]
An octonion-valued signal (or function) \(f(t_{1},t_{2},t_{3})\) is a map from \(\mathbb{R}^{3}\) to \(\mathbb{O}\) which takes the following explicit form as follows:
\[f(t_{1},t_{2},t_{3}) =f_{0}(t_{1},t_{2},t_{3})+f_{1}(t_{1},t_{2},t_{3})e_{1}+f_{2}(t_ {1},t_{2},t_{3})e_{2}+f_{3}(t_{1},t_{2},t_{3})e_{3}\] \[+f_{4}(t_{1},t_{2},t_{3})e_{4}+f_{5}(t_{1},t_{2},t_{3})e_{5}+f_{6 }(t_{1},t_{2},t_{3})e_{6}+f_{7}(t_{1},t_{2},t_{3})e_{7},\]
Figure 1: Octonion multiplication table.
where each \(f_{k}(t_{1},t_{2},t_{3})\) is a real-valued signal (or function) for \(k=0,1,\ldots,7\). The \(L^{p}\) norm \(1\leq p<\infty\), for each octonion-valued signal (or function) \(f(t_{1},t_{2},t_{3})\) over \(\mathbb{R}^{3}\) is defined by:
\[||f(t_{1},t_{2},t_{3})||_{p}:=\left(\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}\int_{-\infty}^{\infty}\Big{|}f(t_{1},t_{2},t_{3})\Big{|}^{p}\mathrm{d} t_{1}\mathrm{d}t_{2}\mathrm{d}t_{3}\right)^{\frac{1}{p}}\ \ \ <\infty.\]
The \(L^{\infty}\) norm is given by
\[||f(t_{1},t_{2},t_{3})||_{\infty}=ess\sup_{x\in\mathbb{R}^{3}}\Big{|}f(t_{1},t _{2},t_{3})\Big{|},\ \text{for}\ p=\infty.\]
**Definition 2.1** (3D OFT).: The 3D OFT of an octonion-valued signal (or function) \(f(t_{1},t_{2},t_{3})\) is a map from \(\mathbb{R}^{3}\) to \(\mathbb{O}\) defined by:
\[\hat{f}_{3}(\omega_{1},\omega_{2},\omega_{3})=(\mathcal{F}_{3}^{\mathbb{O}}f)(\omega_{1},\omega_{2},\omega_{3})\] \[=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f(t_{1},t_{2},t_{3})e^{-e_{1}2\pi t_{1}\omega_{1}}e^{-e_{2}2\pi t_{2}\omega_{2}}e^{-e_{4}2\pi t_{3}\omega_{3}}\mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}t_{3}. \tag{1}\]
**Definition 2.2** (Inversion formula for 3D OFT).: The inverse 3D OFT of the spectrum \(\hat{f}_{3}=(\mathcal{F}_{3}^{\mathbb{O}}f)\) is given by
\[f(t_{1},t_{2},t_{3})=\left((\mathcal{F}_{3}^{\mathbb{O}})^{-1}\hat{f}_{3}\right)(t_{1},t_{2},t_{3})\] \[=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\hat{f}_{3}(\omega_{1},\omega_{2},\omega_{3})e^{e_{1}2\pi t_{1}\omega_{1}}e^{e_{2}2\pi t_{2}\omega_{2}}e^{e_{4}2\pi t_{3}\omega_{3}}\mathrm{d}\omega_{1}\mathrm{d}\omega_{2}\mathrm{d}\omega_{3}.\]
**Definition 2.3** (3D OCLCT).: The OCLCT of an octonion-valued signal (or function) \(f(t_{1},t_{2},t_{3})\) is a map from \(\mathbb{R}^{3}\) to \(\mathbb{O}\) defined by:
\[(\mathcal{L}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{ \mathbb{O}}f)(\omega_{1},\omega_{2},\omega_{3})\] \[=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}f(t_{1},t_{2},t_{3})\kappa_{\mathcal{N}_{1}}^{e_{1}}(t_{1},\omega_{1}) \kappa_{\mathcal{N}_{2}}^{e_{2}}(t_{2},\omega_{2})\kappa_{\mathcal{N}_{3}}^{e_ {4}}(t_{3},\omega_{3})\mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}t_{3}, \tag{2}\]
where \(\mathcal{N}_{j}=\begin{pmatrix}a_{j}&b_{j}\\ c_{j}&d_{j}\end{pmatrix}\in\mathbb{R}^{2\times 2}\) be a matrix parameter satisfying \(det(\mathcal{N}_{j})=1\), for \(j=1,2,3\) and the kernel
\[\kappa_{\mathcal{N}_{1}}^{e_{1}}(t_{1},\omega_{1})=\left\{\begin{array}{ll}\frac{1}{\sqrt{2\pi|b_{1}|}}\mathrm{e}^{\mathrm{e}_{1}\left(\frac{a_{1}t_{1}^{2}}{2b_{1}}-\frac{t_{1}\omega_{1}}{b_{1}}+\frac{d_{1}\omega_{1}^{2}}{2b_{1}}-\frac{\pi}{2}\right)}&b_{1}\neq 0\\ \sqrt{d_{1}}\mathrm{e}^{e_{1}\frac{c_{1}d_{1}}{2}\omega_{1}^{2}}\delta(t_{1}-d_{1}\omega_{1})&b_{1}=0,\end{array}\right. \tag{3}\]
\[\kappa_{\mathcal{N}_{2}}^{e_{2}}(t_{2},\omega_{2})=\left\{\begin{array}{ll}\frac{1}{\sqrt{2\pi|b_{2}|}}\;\mathrm{e}^{\mathrm{e}_{2}\left(\frac{a_{2}t_{2}^{2}}{2b_{2}}-\frac{t_{2}\omega_{2}}{b_{2}}+\frac{d_{2}\omega_{2}^{2}}{2b_{2}}-\frac{\pi}{2}\right)}&b_{2}\neq 0\\ \sqrt{d_{2}}\mathrm{e}^{e_{2}\frac{c_{2}d_{2}}{2}\omega_{2}^{2}}\delta(t_{2}-d_{2}\omega_{2})&b_{2}=0,\end{array}\right. \tag{4}\]
\[\kappa_{\mathcal{N}_{3}}^{e_{4}}(t_{3},\omega_{3})=\left\{\begin{array}{ll}\frac{1}{\sqrt{2\pi|b_{3}|}}\;\mathrm{e}^{\mathrm{e}_{4}\left(\frac{a_{3}t_{3}^{2}}{2b_{3}}-\frac{t_{3}\omega_{3}}{b_{3}}+\frac{d_{3}\omega_{3}^{2}}{2b_{3}}-\frac{\pi}{2}\right)}&b_{3}\neq 0\\ \sqrt{d_{3}}\mathrm{e}^{e_{4}\frac{c_{3}d_{3}}{2}\omega_{3}^{2}}\delta(t_{3}-d_{3}\omega_{3})&b_{3}=0,\end{array}\right. \tag{5}\]
where \(\delta(\cdot)\) denotes the Dirac delta function.
**Definition 2.4** (Inversion formula for 3D OCLCT).: The inverse of OCLCT having an octonion-valued signal (or function) \(f(t_{1},t_{2},t_{3})\) is a map from \(\mathbb{R}^{3}\) to \(\mathbb{O}\) defined by:
\[f(t_{1},t_{2},t_{3})\] \[=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}(\mathcal{L}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{\mathbb{ O}}f)(\omega_{1},\omega_{2},\omega_{3})\kappa_{\mathcal{N}_{1}}^{-e_{1}}(t_{1}, \omega_{1})\kappa_{\mathcal{N}_{2}}^{-e_{2}}(t_{2},\omega_{2})\kappa_{\mathcal{ N}_{3}}^{-e_{4}}(t_{3},\omega_{3})\mathrm{d}\omega_{1}\mathrm{d}\omega_{2} \mathrm{d}\omega_{3}, \tag{6}\]
where
\[\kappa_{\mathcal{N}_{1}}^{-e_{1}}(t_{1},\omega_{1}) =\kappa_{\mathcal{N}_{1}^{-1}}^{e_{1}}(\omega_{1},t_{1})=\frac{1 }{\sqrt{2\pi|b_{1}|}}e^{-e_{1}(\frac{a_{1}t_{1}^{2}}{2b_{1}}-\frac{t_{1}\omega _{1}}{b_{1}}+\frac{d_{1}\omega_{1}^{2}}{2b_{1}}-\frac{\pi}{2})},\] \[\kappa_{\mathcal{N}_{2}}^{-e_{2}}(t_{2},\omega_{2}) =\kappa_{\mathcal{N}_{2}^{-1}}^{e_{2}}(\omega_{2},t_{2})=\frac{1 }{\sqrt{2\pi|b_{2}|}}e^{-e_{2}(\frac{a_{2}t_{2}^{2}}{2b_{2}}-\frac{t_{2}\omega _{2}}{b_{2}}+\frac{d_{2}\omega_{2}^{2}}{2b_{2}}-\frac{\pi}{2})},\] \[\kappa_{\mathcal{N}_{3}}^{-e_{4}}(t_{3},\omega_{3}) =\kappa_{\mathcal{N}_{3}^{-1}}^{e_{4}}(\omega_{3},t_{3})=\frac{1 }{\sqrt{2\pi|b_{3}|}}e^{-e_{4}(\frac{a_{3}t_{3}^{2}}{2b_{3}}-\frac{t_{3}\omega _{3}}{b_{3}}+\frac{d_{3}\omega_{3}^{2}}{2b_{3}}-\frac{\pi}{2})},\] \[\mathcal{N}_{j}=\begin{pmatrix}a_{j}&b_{j}\\ c_{j}&d_{j}\end{pmatrix}\in\mathbb{R}^{2\times 2},\mathcal{N}_{j}^{-1}=\begin{pmatrix}d_{j}&-b_{j} \\ -c_{j}&a_{j}\end{pmatrix}\in\mathbb{R}^{2\times 2}\;\text{ and }\;b\neq 0.\]
_Remark 2.1_ (Relationship between OCLCT and OFT).: From [18], one can establish a relationship between OCLCT and OFT with certain parameters of the matrix \(\mathcal{N}_{j}\), for \(j=1,2,3\) as follows:
\[(\mathcal{L}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{ \mathbb{O}}f)(\omega_{1},\omega_{2},\omega_{3})\] \[=\frac{1}{(2\pi)^{\frac{3}{2}}}\int_{-\infty}^{\infty}\int_{- \infty}^{\infty}\int_{-\infty}^{\infty}f(t_{1},t_{2},t_{3})e^{e_{1}\left(-t_{1} \omega_{1}-\frac{\pi}{2}\right)}e^{e_{2}\left(-t_{2}\omega_{2}-\frac{\pi}{2} \right)}e^{e_{4}\left(-t_{3}\omega_{3}-\frac{\pi}{2}\right)}\mathrm{d}t_{1} \mathrm{d}t_{2}\mathrm{d}t_{3}\] \[=\frac{1}{(2\pi)^{\frac{3}{2}}}\int_{-\infty}^{\infty}\int_{- \infty}^{\infty}\int_{-\infty}^{\infty}f(t_{1},t_{2},t_{3})e^{e_{1}\left(-t_{1} \omega_{1}\right)}(-e_{1})e^{e_{2}\left(-t_{2}\omega_{2}\right)}\] \[\times(-e_{2})e^{e_{4}\left(-t_{3}\omega_{3}\right)}(-e_{4}) \mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}t_{3}\] \[=\frac{1}{(2\pi)^{\frac{3}{2}}}(\mathcal{F}_{3}^{\mathbb{O}}f) \left(\frac{\omega_{1}}{2\pi},-\frac{\omega_{2}}{2\pi},-\frac{\omega_{3}}{2 \pi}\right)e_{7},\]
where \(a_{j}=d_{j}=0,b_{j}=1,c_{j}=-1\).
## 3 Definition and properties of the 3D WOCLCT
Before defining the WOCLCT, we first define the octonion window signal (or function):
**Definition 3.1**.: An octonion window (OW) of an octonion-valued signal (or function) \(\Psi(t_{1},t_{2},t_{3})\in L^{2}(\mathbb{R}^{3},\mathbb{O})\setminus\{0\}\) defined by:
\[\Psi^{(\omega_{1},\omega_{2},\omega_{3})}_{(\mu_{1},\mu_{2},\mu_{3})}(t_{1},t_{ 2},t_{3})=\kappa^{-e_{1}}_{\mathcal{N}_{1}}(t_{1},\omega_{1})\kappa^{-e_{2}}_{ \mathcal{N}_{2}}(t_{2},\omega_{2})\kappa^{-e_{4}}_{\mathcal{N}_{3}}(t_{3}, \omega_{3})\Psi(t_{1}-\mu_{1},t_{2}-\mu_{2},t_{3}-\mu_{3}) \tag{7}\]
for each \((t_{1},t_{2},t_{3}),(\omega_{1},\omega_{2},\omega_{3})\) and \((\mu_{1},\mu_{2},\mu_{3})\in\mathbb{R}^{3}\).
**Definition 3.2** (3D Woclcct).: The WOCLCT of an octonion-valued signal (or function) \(f\in L^{2}(\mathbb{R}^{3},\mathbb{O})\) with respect to OW signal (or function) \(\Psi\in L^{2}(\mathbb{R}^{3},\mathbb{O})\setminus\{0\}\) defined by:
\[\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2}, \mathcal{N}_{3}}(f,\Psi)\}((\omega_{1},\omega_{2},\omega_{3}),(\mu_{1},\mu_{2},\mu_{3}))\] \[=\Big{\langle}f,\Psi^{(\omega_{1},\omega_{2},\omega_{3})}_{(\mu_{ 1},\mu_{2},\mu_{3})}(t_{1},t_{2},t_{3})\Big{\rangle}_{L^{2}(\mathbb{R}^{3}, \mathbb{O})}\] \[=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}f(t_{1},t_{2},t_{3})\overline{\Psi^{(\omega_{1},\omega_{2},\omega_{3})} _{(\mu_{1},\mu_{2},\mu_{3})}(t_{1},t_{2},t_{3})}\mathrm{d}t_{1}\mathrm{d}t_{2} \mathrm{d}t_{3}\] \[=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}f(t_{1},t_{2},t_{3})\overline{\Psi(t_{1}-\mu_{1},t_{2}-\mu_{2},t_{3}- \mu_{3})}\kappa^{e_{1}}_{\mathcal{N}_{1}}(t_{1},\omega_{1})\] \[\times\kappa^{e_{2}}_{\mathcal{N}_{2}}(t_{2},\omega_{2})\kappa^{ e_{4}}_{\mathcal{N}_{3}}(t_{3},\omega_{3})\mathrm{d}t_{1}\mathrm{d}t_{2} \mathrm{d}t_{3}. \tag{8}\]
_Remark 3.1_ (Relationship between WOCLCT and OCLCT).: Let \(\Psi\) be an octonion-valued window signal (or function). For every \(f\in L^{2}(\mathbb{R}^{3},\mathbb{O})\), then we have
\[\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2}, \mathcal{N}_{3}}(f,\Psi)\}((\omega_{1},\omega_{2},\omega_{3}),(\mu_{1},\mu_{2},\mu_{3}))=(\mathcal{L}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2}, \mathcal{N}_{3}}(f\mathcal{T}_{(\mu_{1},\mu_{2},\mu_{3})}\Psi))(\omega_{1}, \omega_{2},\omega_{3}),\]
where \(\mathcal{T}_{(\mu_{1},\mu_{2},\mu_{3})}\Psi(t_{1},t_{2},t_{3})=\Psi(t_{1}-\mu_ {1},t_{2}-\mu_{2},t_{3}-\mu_{3})\).
**Definition 3.3** (Inversion formula for 3D WOCLCT).: The inverse transform of WOCLCT of an octonion-valued signal (or function) \(f\in L^{2}(\mathbb{R}^{3},\mathbb{O})\) with respect to OW signal (or function) \(\Psi\in L^{2}(\mathbb{R}^{3},\mathbb{O})\setminus\{0\}\) defined by:
\[f(t_{1},t_{2},t_{3})\] \[=\frac{1}{||\Psi||^{2}}\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \int_{-\infty}^{\infty}\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N }_{2},\mathcal{N}_{3}}(f,\Psi)\}((\omega_{1},\omega_{2},\omega_{3}),(\mu_{1}, \mu_{2},\mu_{3}))\] \[\times\Psi^{(\omega_{1},\omega_{2},\omega_{3})}_{(\mu_{1},\mu_{2 },\mu_{3})}(t_{1},t_{2},t_{3})\mathrm{d}\omega_{1}\mathrm{d}\omega_{2}\mathrm{d }\omega_{3}\mathrm{d}\mu_{1}\mathrm{d}\mu_{2}\mathrm{d}\mu_{3}. \tag{9}\]
Proof.: To establish the inversion formula for the 3D WOCLCT, we use the definition of the 3D WOCLCT provided in equation (8) as follows:
\[\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2}, \mathcal{N}_{3}}(f,\Psi)\}((\omega_{1},\omega_{2},\omega_{3}),(\mu_{1},\mu_{2},\mu_{3}))\] \[=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}f(t_{1},t_{2},t_{3})\overline{\Psi(t_{1}-\mu_{1},t_{2}-\mu_{2},t_{3}- \mu_{3})}\kappa^{e_{1}}_{\mathcal{N}_{1}}(t_{1},\omega_{1})\] \[\times\kappa^{e_{2}}_{\mathcal{N}_{2}}(t_{2},\omega_{2})\kappa^{ e_{4}}_{\mathcal{N}_{3}}(t_{3},\omega_{3})\mathrm{d}t_{1}\mathrm{d}t_{2} \mathrm{d}t_{3} \tag{10}\]
Now, applying the inversion formula for the 3D OCLCT provided in equation (6) on equation (10), we have
\[f(t_{1},t_{2},t_{3})\overline{\Psi(t_{1}-\mu_{1},t_{2}-\mu_{2},t_{3} -\mu_{3})}\] \[=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}\{\mathcal{G}^{\mathcal{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{ N}_{3}}(f,\Psi)\}((\omega_{1},\omega_{2},\omega_{3}),(\mu_{1},\mu_{2},\mu_{3})) \kappa_{\mathcal{N}_{1}}^{-e_{1}}(t_{1},\omega_{1})\] \[\times\kappa_{\mathcal{N}_{2}}^{-e_{2}}(t_{2},\omega_{2})\kappa_{ \mathcal{N}_{3}}^{-e_{4}}(t_{3},\omega_{3})\mathrm{d}\omega_{1}\mathrm{d} \omega_{2}\mathrm{d}\omega_{3} \tag{11}\]
Post-multiplying both sides of equation (11) by \(\Psi(t_{1}-\mu_{1},t_{2}-\mu_{2},t_{3}-\mu_{3})\) and integrating with respect to \(\mu_{1},\mu_{2}\), and \(\mu_{3}\), we get
\[\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}f(t_{1},t_{2},t_{3})\overline{\Psi(t_{1}-\mu_{1},t_{2}-\mu_{2},t_{3}- \mu_{3})}\Psi(t_{1}-\mu_{1},t_{2}-\mu_{2},t_{3}-\mu_{3})\mathrm{d}\mu_{1} \mathrm{d}\mu_{2}\mathrm{d}\mu_{3}\] \[=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \{\mathcal{G}^{\mathcal{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}( f,\Psi)\}((\omega_{1},\omega_{2},\omega_{3}),(\mu_{1},\mu_{2},\mu_{3})) \kappa_{\mathcal{N}_{1}}^{-e_{1}}(t_{1},\omega_{1})\] \[\times\kappa_{\mathcal{N}_{2}}^{-e_{2}}(t_{2},\omega_{2})\kappa_{ \mathcal{N}_{3}}^{-e_{4}}(t_{3},\omega_{3})\mathrm{d}\omega_{1}\mathrm{d} \omega_{2}\mathrm{d}\omega_{3}\mathrm{d}\mu_{1}\mathrm{d}\mu_{2}\mathrm{d}\mu _{3}.\]
By using the alternativity property of octonion algebra, we have
\[f(t_{1},t_{2},t_{3})\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}\int_{-\infty}^{\infty}\left|\Psi(t_{1}-\mu_{1},t_{2}-\mu_{2},t_{3}- \mu_{3})\right|^{2}\mathrm{d}\mu_{1}\mathrm{d}\mu_{2}\mathrm{d}\mu_{3}\] \[=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \{\mathcal{G}^{\mathcal{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}} (f,\Psi)\}((\omega_{1},\omega_{2},\omega_{3}),(\mu_{1},\mu_{2},\mu_{3})) \kappa_{\mathcal{N}_{1}}^{-e_{1}}(t_{1},\omega_{1})\] \[\times\kappa_{\mathcal{N}_{2}}^{-e_{2}}(t_{2},\omega_{2})\kappa_ {\mathcal{N}_{3}}^{-e_{4}}(t_{3},\omega_{3})\mathrm{d}\omega_{1}\mathrm{d} \omega_{2}\mathrm{d}\omega_{3}\mathrm{d}\mu_{1}\mathrm{d}\mu_{2}\mathrm{d}\mu _{3}.\]
Using equation (7) on the right-hand side of the above equation, we get
\[f(t_{1},t_{2},t_{3})\] \[=\frac{1}{||\Psi||^{2}}\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\{\mathcal{G}^{\mathcal{O}}_{ \mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}((\omega_{1},\omega_ {2},\omega_{3}),(\mu_{1},\mu_{2},\mu_{3}))\] \[\times\Psi_{(\mu_{1},\mu_{2},\mu_{3})}^{(\omega_{1},\omega_{2}, \omega_{3})}(t_{1},t_{2},t_{3})\mathrm{d}\omega_{1}\mathrm{d}\omega_{2} \mathrm{d}\omega_{3}\mathrm{d}\mu_{1}\mathrm{d}\mu_{2}\mathrm{d}\mu_{3}. \tag{12}\]
Thus, the desired result is obtained.
Now, the goal is to establish various important properties of the 3D WOCLCT. To achieve this goal, we first expand the kernel of the 3D WOCLCT appearing in equation (8) in full octonion form as follows:
\[\left(\kappa_{\mathcal{N}_{1}}^{e_{1}}(t_{1},\omega_{1})\kappa_{\mathcal{N}_{2}}^{e_{2}}(t_{2},\omega_{2})\right)\kappa_{\mathcal{N}_{3}}^{e_{4}}(t_{3},\omega_{3})\] \[=\frac{1}{(2\pi)^{\frac{3}{2}}\sqrt{|b_{1}b_{2}b_{3}|}}\,e^{e_{1}\xi_{1}}e^{e_{2}\xi_{2}}e^{e_{4}\xi_{3}}\] \[=\frac{1}{(2\pi)^{\frac{3}{2}}\sqrt{|b_{1}b_{2}b_{3}|}}\Big{[}\cos\xi_{1}\cos\xi_{2}\cos\xi_{3}+\sin\xi_{1}\cos\xi_{2}\cos\xi_{3}\,e_{1}+\cos\xi_{1}\sin\xi_{2}\cos\xi_{3}\,e_{2}\] \[\quad+\sin\xi_{1}\sin\xi_{2}\cos\xi_{3}\,e_{3}+\cos\xi_{1}\cos\xi_{2}\sin\xi_{3}\,e_{4}+\sin\xi_{1}\cos\xi_{2}\sin\xi_{3}\,e_{5}\] \[\quad+\cos\xi_{1}\sin\xi_{2}\sin\xi_{3}\,e_{6}+\sin\xi_{1}\sin\xi_{2}\sin\xi_{3}\,e_{7}\Big{]}, \tag{13}\]
where, for \(j=1,2,3\), we abbreviate \(\xi_{j}:=\frac{a_{j}t_{j}^{2}}{2b_{j}}-\frac{t_{j}\omega_{j}}{b_{j}}+\frac{d_{j}\omega_{j}^{2}}{2b_{j}}-\frac{\pi}{2}\).
On the basis of the full octonion form (13) of the kernel, the 3D WOCLCT of an octonion-valued signal (or function) \(f(t_{1},t_{2},t_{3})\) can be decomposed into eight real-valued components as follows:
\[\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\} =\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{eee}+\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{oee}e_{1}+\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{eoe}e_{2}\] \[+\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{ooe}e_{3}+\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{eeo}e_{4}+\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{oeo}e_{5}\] \[+\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{eoo}e_{6}+\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{ooo}e_{7}, \tag{14}\]
where
\[\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N }_{3}}(f,\Psi)\}_{eee}(\omega_{1},\omega_{2},\omega_{3})\] \[=\frac{1}{(2\pi)^{\frac{3}{2}}\sqrt{|b_{1}b_{2}b_{3}|}}\int_{- \infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f_{eee}(t_{1},t_ {2},t_{3})\overline{\Psi_{eee}(t_{1}-\mu_{1},t_{2}-\mu_{2},t_{3}-\mu_{3})}\] \[\times\cos\left(\frac{a_{1}t_{1}^{2}}{2b_{1}}-\frac{t_{1}\omega_{ 1}}{b_{1}}+\frac{d_{1}\omega_{1}^{2}}{2b_{1}}-\frac{\pi}{2}\right)\cos\left( \frac{a_{2}t_{2}^{2}}{2b_{2}}-\frac{t_{2}\omega_{2}}{b_{2}}+\frac{d_{2}\omega _{2}^{2}}{2b_{2}}-\frac{\pi}{2}\right)\] \[\times\cos\left(\frac{a_{3}t_{3}^{2}}{2b_{3}}-\frac{t_{3}\omega_ {3}}{b_{3}}+\frac{d_{3}\omega_{3}^{2}}{2b_{3}}-\frac{\pi}{2}\right)\!\mathrm{ d}t_{1}\mathrm{d}t_{2}\mathrm{d}t_{3},\] \[\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2}, \mathcal{N}_{3}}(f,\Psi)\}_{eee}(\omega_{1},\omega_{2},\omega_{3})\] \[=\frac{1}{(2\pi)^{\frac{3}{2}}\sqrt{|b_{1}b_{2}b_{3}|}}\int_{- \infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f_{oee}(t_{1},t _{2},t_{3})\overline{\Psi_{oee}(t_{1}-\mu_{1},t_{2}-\mu_{2},t_{3}-\mu_{3})}\] \[\times\sin\left(\frac{a_{1}t_{1}^{2}}{2b_{1}}-\frac{t_{1}\omega_ {1}}{b_{1}}+\frac{d_{1}\omega_{1}^{2}}{2b_{1}}-\frac{\pi}{2}\right)\cos\left( \frac{a_{2}t_{2}^{2}}{2b_{2}}-\frac{t_{2}\omega_{2}}{b_{2}}+\frac{d_{2} \omega_{2}^{2}}{2b_{2}}-\frac{\pi}{2}\right)\] \[\times\cos\left(\frac{a_{3}t_{3}^{2}}{2b_{3}}-\frac{t_{3}\omega_ {3}}{b_{3}}+\frac{d_{3}\omega_{3}^{2}}{2b_{3}}-\frac{\pi}{2}\right)\!\mathrm{ d}t_{1}\mathrm{d}t_{2}\mathrm{d}t_{3},\] \[\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2}, \mathcal{N}_{3}}(f,\Psi)\}_{eoe}(\omega_{1},\omega_{2},\omega_{3})\] \[=\frac{1}{(2\pi)^{\frac{3}{2}}\sqrt{|b_{1}b_{2}b_{3}|}}\int_{- \infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f_{eoe}(t_{1},t _{2},t_{3})\overline{\Psi_{eoe}(t_{1}-\mu_{1},t_{2}-\mu_{2},t_{3}-\mu_{3})}\] \[\times\cos\left(\frac{a_{1}t_{1}^{2}}{2b_{1}}-\frac{t_{1}\omega_ {1}}{b_{1}}+\frac{d_{1}\omega_{1}^{2}}{2b_{1}}-\frac{\pi}{2}\right)\sin\left( \frac{a_{2}t_{2}^{2}}{2b_{2}}-\frac{t_{2}\omega_{2}}{b_{2}}+\frac{d_{2}\omega _{2}^{2}}{2b_{2}}-\frac{\pi}{2}\right)\] \[\times\cos\left(\frac{a_{3}t_{3}^{2}}{2b_{3}}-\frac{t_{3}\omega_ {3}}{b_{3}}+\frac{d_{3}\omega_{3}^{2}}{2b_{3}}-\frac{\pi}{2}\right)\!\mathrm{ d}t_{1}\mathrm{d}t_{2}\mathrm{d}t_{3},\]
\[\{ \mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2}, \mathcal{N}_{3}}(f,\Psi)\}_{eoo}(\omega_{1},\omega_{2},\omega_{3})\] \[=\frac{1}{(2\pi)^{\frac{3}{2}}\sqrt{|b_{1}b_{2}b_{3}|}}\int_{- \infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f_{eoo}(t_{1},t_ {2},t_{3})\overline{\Psi}_{eoo}(t_{1}-\mu_{1},t_{2}-\mu_{2},t_{3}-\mu_{3})\] \[\times\cos\bigg{(}\frac{a_{1}t_{1}^{2}}{2b_{1}}-\frac{t_{1}\omega _{1}}{b_{1}}+\frac{d_{1}\omega_{1}^{2}}{2b_{1}}-\frac{\pi}{2}\bigg{)}\cos \bigg{(}\frac{a_{2}t_{2}^{2}}{2b_{2}}-\frac{t_{2}\omega_{2}}{b_{2}}+\frac{d_{2 }\omega_{2}^{2}}{2b_{2}}-\frac{\pi}{2}\bigg{)}\] \[\times\sin\bigg{(}\frac{a_{3}t_{3}^{2}}{2b_{3}}-\frac{t_{3} \omega_{3}}{b_{3}}+\frac{d_{3}\omega_{3}^{2}}{2b_{3}}-\frac{\pi}{2}\bigg{)} \mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}t_{3},\] \[\{ \mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2}, \mathcal{N}_{3}}(f,\Psi)\}_{oeo}(\omega_{1},\omega_{2},\omega_{3})\] \[=\frac{1}{(2\pi)^{\frac{3}{2}}\sqrt{|b_{1}b_{2}b_{3}|}}\int_{- \infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f_{oeo}(t_{1},t _{2},t_{3})\overline{\Psi}_{oeo}(t_{1}-\mu_{1},t_{2}-\mu_{2},t_{3}-\mu_{3})\] \[\times\sin\bigg{(}\frac{a_{1}t_{1}^{2}}{2b_{1}}-\frac{t_{1}\omega _{1}}{b_{1}}+\frac{d_{1}\omega_{1}^{2}}{2b_{1}}-\frac{\pi}{2}\bigg{)}\cos \bigg{(}\frac{a_{2}t_{2}^{2}}{2b_{2}}-\frac{t_{2}\omega_{2}}{b_{2}}+\frac{d_{ 2}\omega_{2}^{2}}{2b_{2}}-\frac{\pi}{2}\bigg{)}\] \[\times\sin\bigg{(}\frac{a_{3}t_{3}^{2}}{2b_{3}}-\frac{t_{3} \omega_{3}}{b_{3}}+\frac{d_{3}\omega_{3}^{2}}{2b_{3}}-\frac{\pi}{2}\bigg{)} \mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}t_{3},\]
\[\{ \mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2}, \mathcal{N}_{3}}(f,\Psi)\}_{eoo}(\omega_{1},\omega_{2},\omega_{3})\] \[=\frac{1}{(2\pi)^{\frac{3}{2}}\sqrt{|b_{1}b_{2}b_{3}|}}\int_{- \infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f_{eoo}(t_{1},t _{2},t_{3})\overline{\Psi}_{eoo}(t_{1}-\mu_{1},t_{2}-\mu_{2},t_{3}-\mu_{3})\] \[\times\cos\bigg{(}\frac{a_{1}t_{1}^{2}}{2b_{1}}-\frac{t_{1}\omega _{1}}{b_{1}}+\frac{d_{1}\omega_{1}^{2}}{2b_{1}}-\frac{\pi}{2}\bigg{)}\sin \bigg{(}\frac{a_{2}t_{2}^{2}}{2b_{2}}-\frac{t_{2}\omega_{2}}{b_{2}}+\frac{d_{ 2}\omega_{2}^{2}}{2b_{2}}-\frac{\pi}{2}\bigg{)}\] \[\times\sin\bigg{(}\frac{a_{3}t_{3}^{2}}{2b_{3}}-\frac{t_{3}\omega _{3}}{b_{3}}+\frac{d_{3}\omega_{3}^{2}}{2b_{3}}-\frac{\pi}{2}\bigg{)}\mathrm{ d}t_{1}\mathrm{d}t_{2}\mathrm{d}t_{3},\]
and
\[\{ \mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2}, \mathcal{N}_{3}}(f,\Psi)\}_{ooo}(\omega_{1},\omega_{2},\omega_{3})\] \[=\frac{1}{(2\pi)^{\frac{3}{2}}\sqrt{|b_{1}b_{2}b_{3}|}}\int_{- \infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f_{ooo}(t_{1},t _{2},t_{3})\overline{\Psi}_{ooo}(t_{1}-\mu_{1},t_{2}-\mu_{2},t_{3}-\mu_{3})\] \[\times\sin\bigg{(}\frac{a_{1}t_{1}^{2}}{2b_{1}}-\frac{t_{1}\omega _{1}}{b_{1}}+\frac{d_{1}\omega_{1}^{2}}{2b_{1}}-\frac{\pi}{2}\bigg{)}\sin \bigg{(}\frac{a_{2}t_{2}^{2}}{2b_{2}}-\frac{t_{2}\omega_{2}}{b_{2}}+\frac{d_{ 2}\omega_{2}^{2}}{2b_{2}}-\frac{\pi}{2}\bigg{)}\] \[\times\sin\bigg{(}\frac{a_{3}t_{3}^{2}}{2b_{3}}-\frac{t_{3}\omega _{3}}{b_{3}}+\frac{d_{3}\omega_{3}^{2}}{2b_{3}}-\frac{\pi}{2}\bigg{)}\mathrm{ d}t_{1}\mathrm{d}t_{2}\mathrm{d}t_{3}.\]
**Proposition 3.1** (Linearity property for 3D WOCLCT).: _The WOCLCT of an octonion-valued signal (or function) \(f,g\in L^{2}(\mathbb{R}^{3},\mathbb{O})\) with respect to OW signal (or function) \(\Psi\in L^{2}(\mathbb{R}^{3},\mathbb{O})\setminus\{0\}\), then_
\[\{ \mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(\eta f +\lambda g,\Psi)\}((\omega_{1},\omega_{2},\omega_{3}),(\mu_{1},\mu_{2},\mu_{3}))\] \[=\eta\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2}, \mathcal{N}_{3}}(f,\Psi)\}((\omega_{1},\omega_{2},\omega_{3}),(\mu_{1},\mu_{2}, \mu_{3}))+\lambda\{\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(g, \Psi)\}((\omega_{1},\omega_{2},\omega_{3}),(\mu_{1},\mu_{2},\mu_{3})),\]
_where \(\eta\) and \(\lambda\) are any arbitrary octonion constants._
**Proposition 3.2** (Parity for 3D WOCLCT).: _The WOCLCT of an octonion-valued signal (or function) \(f\in L^{2}(\mathbb{R}^{3},\mathbb{O})\) with respect to OW signal (or function) \(\Psi\in L^{2}(\mathbb{R}^{3},\mathbb{O})\setminus\{0\}\), then_
\[\{\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(Pf,P\Psi)\}(( \omega_{1},\omega_{2},\omega_{3}),(\mu_{1},\mu_{2},\mu_{3}))=-\{\mathcal{G}_{ \mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{\mathbb{O}}(f,\Psi)\}(- \omega,-\mu),\]
_where \(Pf(t_{1},t_{2},t_{3})=f(-t_{1},-t_{2},-t_{3})\)._
Proof.: For every \(f\in L^{2}(\mathbb{R}^{3},\mathbb{O})\), by direct calculation, we get
\[\{\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}( Pf,P\Psi)\}((\omega_{1},\omega_{2},\omega_{3}),(\mu_{1},\mu_{2},\mu_{3}))\] \[=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}f(-t_{1},-t_{2},-t_{3})\overline{\Psi(-(t_{1}-\mu_{1},t_{2}-\mu_{2},t_ {3}-\mu_{3}))}\kappa_{\mathcal{N}_{1}}^{e_{1}}(t_{1},\omega_{1})\] \[\times\kappa_{\mathcal{N}_{2}}^{e_{2}}(t_{2},\omega_{2})\kappa_ {\mathcal{N}_{3}}^{e_{4}}(t_{3},\omega_{3})\mathrm{d}t_{1}\mathrm{d}t_{2} \mathrm{d}t_{3}. \tag{15}\]
On setting \(-(t_{1},t_{2},t_{3})=(x_{1},x_{2},x_{3})\), then we can write the right-hand side of equation (15) as follows:
\[=-\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f(x_{1},x_{2},x_{3})\overline{\Psi(x_{1}+\mu_{1},x_{2}+\mu_{2},x_{3}+\mu_{3})}\kappa_{\mathcal{N}_{1}}^{e_{1}}(x_{1},-\omega_{1})\] \[\times\kappa_{\mathcal{N}_{2}}^{e_{2}}(x_{2},-\omega_{2})\kappa_{\mathcal{N}_{3}}^{e_{4}}(x_{3},-\omega_{3})\mathrm{d}x_{1}\mathrm{d}x_{2}\mathrm{d}x_{3}\] \[=-\{\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{\mathbb{O}}(f,\Psi)\}(-\omega,-\mu).\]
Hence, we get the desired result.
**Proposition 3.3** (Shifting property for 3D WOCLCT).: _Let \(\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{\mathbb{O}}(f,\Psi)\) be the WOCLCT of the 3D octonion-valued signal (or function) \(f\). Suppose \(\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{\mathbb{O},s_{1} }(f,\Psi)\),\(\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{\mathbb{O},s_{2} }(f,\Psi)\), and \(\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{\mathbb{O},s_{3} }(f,\Psi)\) denote the WOCLCT of \(f(t_{1}-s_{1},t_{2},t_{3}),f(t_{1},t_{2}-s_{2},t_{3})\), and\(f(t_{1},t_{2},t_{3}-s_{3})\) respectively, then_
\[\{\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{ \mathbb{O},s_{1}}(f,\Psi)\}(\omega_{1},\omega_{2},\omega_{3}) =\cos(s_{1}\omega_{1}c_{1}-\frac{a_{1}c_{1}s_{1}^{2}}{2})\{ \mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{\mathbb{O}}(f, \Psi)\}(\rho_{1},\omega_{2},\omega_{3})\] \[-\sin(s_{1}\omega_{1}c_{1}-\frac{a_{1}c_{1}s_{1}^{2}}{2})\Delta_{ 1}f(\rho_{1},\omega_{2},\omega_{3}),\]
\[\{\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{ \mathbb{O},s_{2}}(f,\Psi)\}(\omega_{1},\omega_{2},\omega_{3}) =\cos(s_{2}\omega_{2}c_{2}-\frac{a_{2}c_{2}s_{2}^{2}}{2})\{ \mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{\mathbb{O}}(f, \Psi)\}(\omega_{1},\rho_{2},\omega_{3})\] \[-\sin(s_{2}\omega_{2}c_{2}-\frac{a_{2}c_{2}s_{2}^{2}}{2})\Delta_{ 2}f(\omega_{1},\rho_{2},\omega_{3}),\]
_and_
\[\{\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{ \mathbb{O},s_{3}}(f,\Psi)\}(\omega_{1},\omega_{2},\omega_{3}) =\cos(s_{3}\omega_{3}c_{3}-\frac{a_{3}c_{3}s_{3}^{2}}{2})\{ \mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{\mathbb{O}}(f, \Psi)\}(\omega_{1},\omega_{2},\rho_{3})\] \[-\sin(s_{3}\omega_{3}c_{3}-\frac{a_{3}c_{3}s_{3}^{2}}{2})\Delta_{ 3}f(\omega_{1},\omega_{2},\rho_{3}),\]
_where \(\rho_{i}=\mu_{i}-s_{i}\) for \(i=1,2\), and \(3\), we have_
\[\Delta_{1}f =\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2}, \mathcal{N}_{3}}(f,\Psi)\}_{see}-\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1}, \mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{cee}e_{1}+\{\mathcal{G}^{\mathbb{O }}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{soe}e_{2}\] \[-\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2}, \mathcal{N}_{3}}(f,\Psi)\}_{coe}e_{3}+\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1 },\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{sco}e_{4}-\{\mathcal{G}^{ \mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{ceo}e _{5}\] \[+\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2}, \mathcal{N}_{3}}(f,\Psi)\}_{soo}e_{6}-\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1 },\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{coo}e_{7},\]
\[\Delta_{2}f =\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2}, \mathcal{N}_{3}}(f,\Psi)\}_{ese}+\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1}, \mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{ose}e_{1}-\{\mathcal{G}^{\mathbb{O }}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{cee}e_{2}\] \[-\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2}, \mathcal{N}_{3}}(f,\Psi)\}_{cee}e_{3}+\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1 },\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{eso}e_{4}+\{\mathcal{G}^{ \mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{oso}e _{5}\] \[-\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2}, \mathcal{N}_{3}}(f,\Psi)\}_{eco}e_{6}-\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1 },\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{oco}e_{7},\]
_and_
\[\Delta_{3}f =\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{ees}+\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{oes}e_{1}+\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{eos}e_{2}\] \[+\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{oos}e_{3}-\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{eec}e_{4}-\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{oec}e_{5}\] \[-\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{eoc}e_{6}-\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{ooc}e_{7}.\]
Proof.: We can rewrite the equation (14) for the function \(f^{s_{1}}\) as follows:
\[\{\mathcal{G}^{\mathbb{O},s_{1}}_{\mathcal{N}_{1},\mathcal{N}_{2}, \mathcal{N}_{3}}(f,\Psi)\}_{eee}(\omega_{1},\omega_{2},\omega_{3})\] \[=\frac{1}{(2\pi)^{\frac{3}{2}}\sqrt{|b_{1}b_{2}b_{3}|}}\int_{- \infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f^{s_{1}}_{eee}( t_{1},t_{2},t_{3})\overline{\Psi_{eee}(t_{1}-\mu_{1},t_{2}-\mu_{2},t_{3}-\mu_{3})}\] \[\times\cos\left(\frac{a_{1}t_{1}^{2}}{2b_{1}}-\frac{t_{1}\omega_{ 1}}{b_{1}}+\frac{d_{1}\omega_{1}^{2}}{2b_{1}}-\frac{\pi}{2}\right)\cos\left( \frac{a_{2}t_{2}^{2}}{2b_{2}}-\frac{t_{2}\omega_{2}}{b_{2}}+\frac{d_{2}\omega_{ 2}^{2}}{2b_{2}}-\frac{\pi}{2}\right)\] \[\times\cos\left(\frac{a_{3}t_{3}^{2}}{2b_{3}}-\frac{t_{3}\omega_{ 3}}{b_{3}}+\frac{d_{3}\omega_{3}^{2}}{2b_{3}}-\frac{\pi}{2}\right)\!\mathrm{d}t _{1}\mathrm{d}t_{2}\mathrm{d}t_{3}\] \[=\frac{1}{(2\pi)^{\frac{3}{2}}\sqrt{|b_{1}b_{2}b_{3}|}}\int_{- \infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f_{eee}(t_{1}-s_{1 },t_{2},t_{3})\overline{\Psi_{eee}(t_{1}-\mu_{1},t_{2}-\mu_{2},t_{3}-\mu_{3})}\] \[\times\cos\left(\frac{a_{1}t_{1}^{2}}{2b_{1}}-\frac{t_{1}\omega_{ 1}}{b_{1}}+\frac{d_{1}\omega_{1}^{2}}{2b_{1}}-\frac{\pi}{2}\right)\cos\left( \frac{a_{2}t_{2}^{2}}{2b_{2}}-\frac{t_{2}\omega_{2}}{b_{2}}+\frac{d_{2}\omega_{ 2}^{2}}{2b_{2}}-\frac{\pi}{2}\right)\] \[\times\cos\left(\frac{a_{3}t_{3}^{2}}{2b_{3}}-\frac{t_{3}\omega_{ 3}}{b_{3}}+\frac{d_{3}\omega_{3}^{2}}{2b_{3}}-\frac{\pi}{2}\right)\!\mathrm{d}t _{1}\mathrm{d}t_{2}\mathrm{d}t_{3}.\]
Now, using the change of variables \(x_{1}=t_{1}-s_{1},x_{2}=t_{2},x_{3}=t_{3}\), where \((x_{1},x_{2},x_{3})\in\mathbb{R}^{3}\), we obtain
\[\{\mathcal{G}^{\mathbb{O},s_{1}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{eee}(\omega_{1},\omega_{2},\omega_{3})\] \[=\frac{1}{(2\pi)^{\frac{3}{2}}\sqrt{|b_{1}b_{2}b_{3}|}}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f_{eee}(x_{1},x_{2},x_{3})\overline{\Psi_{eee}((x_{1}+s_{1})-\mu_{1},x_{2}-\mu_{2},x_{3}-\mu_{3})}\] \[\times\cos\left(\left(s_{1}\omega_{1}c_{1}-\frac{a_{1}c_{1}s_{1}^{2}}{2}\right)+\left(\frac{a_{1}x_{1}^{2}}{2b_{1}}-\frac{x_{1}(\omega_{1}-s_{1}a_{1})}{b_{1}}+\frac{d_{1}(\omega_{1}-s_{1}a_{1})^{2}}{2b_{1}}-\frac{\pi}{2}\right)\right)\] \[\times\cos\left(\frac{a_{2}x_{2}^{2}}{2b_{2}}-\frac{x_{2}\omega_{2}}{b_{2}}+\frac{d_{2}\omega_{2}^{2}}{2b_{2}}-\frac{\pi}{2}\right)\cos\left(\frac{a_{3}x_{3}^{2}}{2b_{3}}-\frac{x_{3}\omega_{3}}{b_{3}}+\frac{d_{3}\omega_{3}^{2}}{2b_{3}}-\frac{\pi}{2}\right)\mathrm{d}x_{1}\mathrm{d}x_{2}\mathrm{d}x_{3}.\]
Setting \(\rho_{1}=\mu_{1}-s_{1},\rho_{2}=\mu_{2}\), and \(\rho_{3}=\mu_{3}\), and expanding the first cosine with the identity \(\cos(A+B)=\cos A\cos B-\sin A\sin B\), we have
\[=\frac{1}{(2\pi)^{\frac{3}{2}}\sqrt{|b_{1}b_{2}b_{3}|}}\Bigg{[}\cos \left(s_{1}\omega_{1}c_{1}-\frac{a_{1}c_{1}s_{1}^{2}}{2}\right)\int_{-\infty}^{ \infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f_{eee}(x_{1},x_{2},x_{3})\] \[\times\overline{\Psi_{eee}(x_{1}-\rho_{1},x_{2}-\rho_{2},x_{3}- \rho_{3})}\cos\left(\frac{a_{1}x_{1}^{2}}{2b_{1}}-\frac{x_{1}(\omega_{1}-s_{1} a_{1})}{b_{1}}+\frac{d_{1}(\omega_{1}-s_{1}a_{1})^{2}}{2b_{1}}-\frac{\pi}{2}\right)\] \[\times\cos\left(\frac{a_{2}x_{2}^{2}}{2b_{2}}-\frac{x_{2}\omega_{ 2}}{b_{2}}+\frac{d_{2}\omega_{2}^{2}}{2b_{2}}-\frac{\pi}{2}\right)\cos\left( \frac{a_{3}x_{3}^{2}}{2b_{3}}-\frac{x_{3}\omega_{3}}{b_{3}}+\frac{d_{3}\omega _{3}^{2}}{2b_{3}}-\frac{\pi}{2}\right)\!\mathrm{d}x_{1}\mathrm{d}x_{2}\mathrm{ d}x_{3}\] \[-\sin\left(s_{1}\omega_{1}c_{1}-\frac{a_{1}c_{1}s_{1}^{2}}{2} \right)\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f_{eee}(x_{1},x_{2},x_{3})\] \[\times\overline{\Psi_{eee}(x_{1}-\rho_{1},x_{2}-\rho_{2},x_{3}- \rho_{3})}\sin\left(\frac{a_{1}x_{1}^{2}}{2b_{1}}-\frac{x_{1}(\omega_{1}-s_{1 }a_{1})}{b_{1}}+\frac{d_{1}(\omega_{1}-s_{1}a_{1})^{2}}{2b_{1}}-\frac{\pi}{2}\right)\] \[\times\cos\left(\frac{a_{2}x_{2}^{2}}{2b_{2}}-\frac{x_{2}\omega_{ 2}}{b_{2}}+\frac{d_{2}\omega_{2}^{2}}{2b_{2}}-\frac{\pi}{2}\right)\cos\left( \frac{a_{3}x_{3}^{2}}{2b_{3}}-\frac{x_{3}\omega_{3}}{b_{3}}+\frac{d_{3}\omega _{3}^{2}}{2b_{3}}-\frac{\pi}{2}\right)\!\mathrm{d}x_{1}\mathrm{d}x_{2}\mathrm{ d}x_{3},\]
Now, we define
\[\{\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{ \mathbb{O}}(f,\Psi)\}_{see}(\rho_{1},\omega_{2},\omega_{3})=\frac{1}{(2\pi)^{ \frac{3}{2}}\sqrt{|b_{1}b_{2}b_{3}|}}\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}f_{eee}(x_{1},x_{2},x_{3})\] \[\times\overline{\Psi_{eee}(x_{1}-\rho_{1},x_{2}-\rho_{2},x_{3}- \rho_{3})}\sin\left(\frac{a_{1}x_{1}^{2}}{2b_{1}}-\frac{x_{1}(\omega_{1}-s_{1 }a_{1})}{b_{1}}+\frac{d_{1}(\omega_{1}-s_{1}a_{1})^{2}}{2b_{1}}-\frac{\pi}{2}\right)\] \[\times\cos\left(\frac{a_{2}x_{2}^{2}}{2b_{2}}-\frac{x_{2}\omega_{ 2}}{b_{2}}+\frac{d_{2}\omega_{2}^{2}}{2b_{2}}-\frac{\pi}{2}\right)\cos\left( \frac{a_{3}x_{3}^{2}}{2b_{3}}-\frac{x_{3}\omega_{3}}{b_{3}}+\frac{d_{3}\omega _{3}^{2}}{2b_{3}}-\frac{\pi}{2}\right)\!\mathrm{d}x_{1}\mathrm{d}x_{2}\mathrm{ d}x_{3},\]
\[\{\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{ \mathbb{O}}(f,\Psi)\}_{cee}(\rho_{1},\omega_{2},\omega_{3})=\frac{1}{(2\pi)^{ \frac{3}{2}}\sqrt{|b_{1}b_{2}b_{3}|}}\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}f_{eee}(x_{1},x_{2},x_{3})\] \[\times\overline{\Psi_{eee}(x_{1}-\rho_{1},x_{2}-\rho_{2},x_{3}- \rho_{3})}\cos\left(\frac{a_{1}x_{1}^{2}}{2b_{1}}-\frac{x_{1}(\omega_{1}-s_{1 }a_{1})}{b_{1}}+\frac{d_{1}(\omega_{1}-s_{1}a_{1})^{2}}{2b_{1}}-\frac{\pi}{2}\right)\] \[\times\cos\left(\frac{a_{2}x_{2}^{2}}{2b_{2}}-\frac{x_{2}\omega_{ 2}}{b_{2}}+\frac{d_{2}\omega_{2}^{2}}{2b_{2}}-\frac{\pi}{2}\right)\cos\left( \frac{a_{3}x_{3}^{2}}{2b_{3}}-\frac{x_{3}\omega_{3}}{b_{3}}+\frac{d_{3}\omega _{3}^{2}}{2b_{3}}-\frac{\pi}{2}\right)\!\mathrm{d}x_{1}\mathrm{d}x_{2}\mathrm{ d}x_{3},\]
\[\{\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{ \mathbb{O}}(f,\Psi)\}_{see}(\rho_{1},\omega_{2},\omega_{3})=\frac{1}{(2\pi)^{ \frac{3}{2}}\sqrt{|b_{1}b_{2}b_{3}|}}\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}f_{eee}(x_{1},x_{2},x_{3})\] \[\times\overline{\Psi_{eee}(x_{1}-\rho_{1},x_{2}-\rho_{2},x_{3}- \rho_{3})}\sin\left(\frac{a_{1}x_{1}^{2}}{2b_{1}}-\frac{x_{1}(\omega_{1}-s_{1 }a_{1})}{b_{1}}+\frac{d_{1}(\omega_{1}-s_{1}a_{1})^{2}}{2b_{1}}-\frac{\pi}{2}\right)\] \[\times\sin\left(\frac{a_{2}x_{2}^{2}}{2b_{2}}-\frac{x_{2}\omega_{ 2}}{b_{2}}+\frac{d_{2}\omega_{2}^{2}}{2b_{2}}-\frac{\pi}{2}\right)\cos\left( \frac{a_{3}x_{3}^{2}}{2b_{3}}-\frac{x_{3}\omega_{3}}{b_{3}}+\frac{d_{3}\omega _{3}^{2}}{2b_{3}}-\frac{\pi}{2}\right)\!\mathrm{d}x_{1}\mathrm{d}x_{2}\mathrm{ d}x_{3},\]
\[\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2}, \mathcal{N}_{3}}(f,\Psi)\}_{coo}(\rho_{1},\omega_{2},\omega_{3})=\frac{1}{(2\pi)^ {\frac{3}{2}}\sqrt{|b_{1}b_{2}b_{3}|}}\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}\int_{-\infty}^{\infty}f_{ooe}(x_{1},x_{2},x_{3})\] \[\times\overline{\Psi_{ooe}(x_{1}-\rho_{1},x_{2}-\rho_{2},x_{3}- \rho_{3})}\cos\left(\frac{a_{1}x_{1}^{2}}{2b_{1}}-\frac{x_{1}(\omega_{1}-s_{1} a_{1})}{b_{1}}+\frac{d_{1}(\omega_{1}-s_{1}a_{1})^{2}}{2b_{1}}-\frac{\pi}{2}\right)\] \[\times\sin\left(\frac{a_{2}x_{2}^{2}}{2b_{2}}-\frac{x_{2}\omega_{ 2}}{b_{2}}+\frac{d_{2}\omega_{2}^{2}}{2b_{2}}-\frac{\pi}{2}\right)\cos\left( \frac{a_{3}x_{3}^{2}}{2b_{3}}-\frac{x_{3}\omega_{3}}{b_{3}}+\frac{d_{3}\omega _{3}^{2}}{2b_{3}}-\frac{\pi}{2}\right)\!\mathrm{d}x_{1}\mathrm{d}x_{2}\mathrm{ d}x_{3},\]
\[\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{seo}(\rho_{1},\omega_{2},\omega_{3})=\frac{1}{(2\pi)^{\frac{3}{2}}\sqrt{|b_{1}b_{2}b_{3}|}}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f_{oeo}(x_{1},x_{2},x_{3})\] \[\times\overline{\Psi_{oeo}(x_{1}-\rho_{1},x_{2}-\rho_{2},x_{3}-\rho_{3})}\sin\left(\frac{a_{1}x_{1}^{2}}{2b_{1}}-\frac{x_{1}(\omega_{1}-s_{1}a_{1})}{b_{1}}+\frac{d_{1}(\omega_{1}-s_{1}a_{1})^{2}}{2b_{1}}-\frac{\pi}{2}\right)\] \[\times\cos\left(\frac{a_{2}x_{2}^{2}}{2b_{2}}-\frac{x_{2}\omega_{2}}{b_{2}}+\frac{d_{2}\omega_{2}^{2}}{2b_{2}}-\frac{\pi}{2}\right)\sin\left(\frac{a_{3}x_{3}^{2}}{2b_{3}}-\frac{x_{3}\omega_{3}}{b_{3}}+\frac{d_{3}\omega_{3}^{2}}{2b_{3}}-\frac{\pi}{2}\right)\!\mathrm{d}x_{1}\mathrm{d}x_{2}\mathrm{d}x_{3},\]
\[\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2}, \mathcal{N}_{3}}(f,\Psi)\}_{ceo}(\rho_{1},\omega_{2},\omega_{3})=\frac{1}{(2 \pi)^{\frac{3}{2}}\sqrt{|b_{1}b_{2}b_{3}|}}\int_{-\infty}^{\infty}\int_{- \infty}^{\infty}\int_{-\infty}^{\infty}f_{eeo}(x_{1},x_{2},x_{3})\] \[\times\overline{\Psi_{eeo}(x_{1}-\rho_{1},x_{2}-\rho_{2},x_{3}- \rho_{3})}\cos\left(\frac{a_{1}x_{1}^{2}}{2b_{1}}-\frac{x_{1}(\omega_{1}-s_{1 }a_{1})}{b_{1}}+\frac{d_{1}(\omega_{1}-s_{1}a_{1})^{2}}{2b_{1}}-\frac{\pi}{2}\right)\] \[\times\cos\left(\frac{a_{2}x_{2}^{2}}{2b_{2}}-\frac{x_{2}\omega_{ 2}}{b_{2}}+\frac{d_{2}\omega_{2}^{2}}{2b_{2}}-\frac{\pi}{2}\right)\sin\left( \frac{a_{3}x_{3}^{2}}{2b_{3}}-\frac{x_{3}\omega_{3}}{b_{3}}+\frac{d_{3}\omega _{3}^{2}}{2b_{3}}-\frac{\pi}{2}\right)\!\mathrm{d}x_{1}\mathrm{d}x_{2}\mathrm{ d}x_{3},\]
\[\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2}, \mathcal{N}_{3}}(f,\Psi)\}_{soo}(\rho_{1},\omega_{2},\omega_{3})=\frac{1}{(2\pi)^{ \frac{3}{2}}\sqrt{|b_{1}b_{2}b_{3}|}}\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}\int_{-\infty}^{\infty}f_{eoo}(x_{1},x_{2},x_{3})\] \[\times\overline{\Psi_{eoo}(x_{1}-\rho_{1},x_{2}-\rho_{2},x_{3}- \rho_{3})}\sin\left(\frac{a_{1}x_{1}^{2}}{2b_{1}}-\frac{x_{1}(\omega_{1}-s_{1 }a_{1})}{b_{1}}+\frac{d_{1}(\omega_{1}-s_{1}a_{1})^{2}}{2b_{1}}-\frac{\pi}{2}\right)\] \[\times\sin\left(\frac{a_{2}x_{2}^{2}}{2b_{2}}-\frac{x_{2}\omega_{ 2}}{b_{2}}+\frac{d_{2}\omega_{2}^{2}}{2b_{2}}-\frac{\pi}{2}\right)\sin\left( \frac{a_{3}x_{3}^{2}}{2b_{3}}-\frac{x_{3}\omega_{3}}{b_{3}}+\frac{d_{3} \omega_{3}^{2}}{2b_{3}}-\frac{\pi}{2}\right)\!\mathrm{d}x_{1}\mathrm{d}x_{2} \mathrm{d}x_{3},\]
and
\[\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2}, \mathcal{N}_{3}}(f,\Psi)\}_{coo}(\rho_{1},\omega_{2},\omega_{3})=\frac{1}{(2 \pi)^{\frac{3}{2}}\sqrt{|b_{1}b_{2}b_{3}|}}\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}\int_{-\infty}^{\infty}f_{ooo}(x_{1},x_{2},x_{3})\] \[\times\overline{\Psi_{ooo}(x_{1}-\rho_{1},x_{2}-\rho_{2},x_{3}- \rho_{3})}\cos\left(\frac{a_{1}x_{1}^{2}}{2b_{1}}-\frac{x_{1}(\omega_{1}-s_{1}a_{1 })}{b_{1}}+\frac{d_{1}(\omega_{1}-s_{1}a_{1})^{2}}{2b_{1}}-\frac{\pi}{2}\right)\] \[\times\sin\left(\frac{a_{2}x_{2}^{2}}{2b_{2}}-\frac{x_{2}\omega_{ 2}}{b_{2}}+\frac{d_{2}\omega_{2}^{2}}{2b_{2}}-\frac{\pi}{2}\right)\sin\left( \frac{a_{3}x_{3}^{2}}{2b_{3}}-\frac{x_{3}\omega_{3}}{b_{3}}+\frac{d_{3}\omega_{3} ^{2}}{2b_{3}}-\frac{\pi}{2}\right)\!\mathrm{d}x_{1}\mathrm{d}x_{2}\mathrm{d}x_{3},\]
then, we have
\[\{\mathcal{G}^{\mathbb{O},s_{1}}_{\mathcal{N}_{1},\mathcal{N}_{2}, \mathcal{N}_{3}}(f,\Psi)\}_{eee}(\omega_{1},\omega_{2},\omega_{3})=\cos\left(s_{1} \omega_{1}c_{1}-\frac{a_{1}c_{1}s_{1}^{2}}{2}\right)\{\mathcal{G}^{\mathbb{O}}_{ \mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{eee}\] \[\times(\rho_{1},\omega_{2},\omega_{3})-\sin\left(s_{1}\omega_{1}c_{1 }-\frac{a_{1}c_{1}s_{1}^{2}}{2}\right)\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1}, \mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{see}(\rho_{1},\omega_{2},\omega_{3}),\]
where \(\rho_{1}=\mu_{1}-s_{1}\). Similarly, we can prove that
\[\{\mathcal{G}^{\mathbb{O},s_{1}}_{\mathcal{N}_{1},\mathcal{N}_{2}, \mathcal{N}_{3}}(f,\Psi)\}_{oee}(\omega_{1},\omega_{2},\omega_{3})\] \[=\frac{1}{(2\pi)^{\frac{3}{2}}\sqrt{|b_{1}b_{2}b_{3}|}}\int_{- \infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f_{oee}(x_{1},x_ {2},x_{3})\overline{\Psi_{oee}(x_{1}-\rho_{1},x_{2}-\rho_{2},x_{3}-\rho_{3})}\] \[\times\sin\left(\bigg{(}s_{1}\omega_{1}c_{1}-\frac{a_{1}c_{1}s_{1 }^{2}}{2}\bigg{)}+\bigg{(}\frac{a_{1}x_{1}^{2}}{2b_{1}}-\frac{x_{1}(\omega_{1} -s_{1}a_{1})}{b_{1}}+\frac{d_{1}(\omega_{1}-s_{1}a_{1})^{2}}{2b_{1}}-\frac{ \pi}{2}\bigg{)}\right)\] \[\times\cos\bigg{(}\frac{a_{2}x_{2}^{2}}{2b_{2}}-\frac{x_{2} \omega_{2}}{b_{2}}+\frac{d_{2}\omega_{2}^{2}}{2b_{2}}-\frac{\pi}{2}\bigg{)} \cos\bigg{(}\frac{a_{3}x_{3}^{2}}{2b_{3}}-\frac{x_{3}\omega_{3}}{b_{3}}+\frac{ d_{3}\omega_{3}^{2}}{2b_{3}}-\frac{\pi}{2}\bigg{)}\mathrm{d}x_{1}\mathrm{d}x_{2} \mathrm{d}x_{3}\] \[=\frac{1}{(2\pi)^{\frac{3}{2}}\sqrt{|b_{1}b_{2}b_{3}|}}\Bigg{[} \sin\bigg{(}s_{1}\omega_{1}c_{1}-\frac{a_{1}c_{1}s_{1}^{2}}{2}\bigg{)}\int_{- \infty}^{\infty}\int_{-\infty}^{\infty}f_{oee}(x_{1},x_{2},x_{3})\overline{ \Psi_{oee}(x_{1}}\] \[\times\overline{-\rho_{1},x_{2}-\rho_{2},x_{3}-\rho_{3})}\cos \bigg{(}\frac{a_{1}x_{1}^{2}}{2b_{1}}-\frac{x_{1}(\omega_{1}-s_{1}a_{1})}{b_{ 1}}+\frac{d_{1}(\omega_{1}-s_{1}a_{1})^{2}}{2b_{1}}-\frac{\pi}{2}\bigg{)}\] \[\times\cos\bigg{(}\frac{a_{2}x_{2}^{2}}{2b_{2}}-\frac{x_{2} \omega_{2}}{b_{2}}+\frac{d_{2}\omega_{2}^{2}}{2b_{2}}-\frac{\pi}{2}\bigg{)} \cos\bigg{(}\frac{a_{3}x_{3}^{2}}{2b_{3}}-\frac{x_{3}\omega_{3}}{b_{3}}+\frac{ d_{3}\omega_{3}^{2}}{2b_{3}}-\frac{\pi}{2}\bigg{)}\mathrm{d}x_{1}\mathrm{d}x_{2} \mathrm{d}x_{3}\] \[+\cos\bigg{(}s_{1}\omega_{1}c_{1}-\frac{a_{1}c_{1}s_{1}^{2}}{2} \bigg{)}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f_{oee}(x_{1},x_{2},x_{3 })\overline{\Psi_{oee}(x_{1}}\] \[\times\overline{-\rho_{1},x_{2}-\rho_{2},x_{3}-\rho_{3})}\sin \bigg{(}\frac{a_{1}x_{1}^{2}}{2b_{1}}-\frac{x_{1}(\omega_{1}-s_{1}a_{1})}{b_{ 1}}+\frac{d_{1}(\omega_{1}-s_{1}a_{1})^{2}}{2b_{1}}-\frac{\pi}{2}\bigg{)}\] \[\times\cos\bigg{(}\frac{a_{2}x_{2}^{2}}{2b_{2}}-\frac{x_{2} \omega_{2}}{b_{2}}+\frac{d_{2}\omega_{2}^{2}}{2b_{2}}-\frac{\pi}{2}\bigg{)} \cos\bigg{(}\frac{a_{3}x_{3}^{2}}{2b_{3}}-\frac{x_{3}\omega_{3}}{b_{3}}+\frac {d_{3}\omega_{3}^{2}}{2b_{3}}-\frac{\pi}{2}\bigg{)}\mathrm{d}x_{1}\mathrm{d}x_{ 2}\mathrm{d}x_{3}\Bigg{]}.\]
\[= \cos\bigg{(}s_{1}\omega_{1}c_{1}-\frac{a_{1}c_{1}s_{1}^{2}}{2} \bigg{)}\left\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2}, \mathcal{N}_{3}}(f,\Psi)\right\}_{oee}(\rho_{1},\omega_{2},\omega_{3})+\sin \bigg{(}s_{1}\omega_{1}c_{1}-\frac{a_{1}c_{1}s_{1}^{2}}{2}\bigg{)}\] \[\times\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2}, \mathcal{N}_{3}}(f,\Psi)\}_{cee}(\rho_{1},\omega_{2},\omega_{3}).\]
Summarizing, we have
\[\{\mathcal{G}^{\mathbb{O},s_{1}}_{\mathcal{N}_{1},\mathcal{N}_{2}, \mathcal{N}_{3}}(f,\Psi)\}_{eee}(\omega_{1},\omega_{2},\omega_{3}) =\cos\bigg{(}s_{1}\omega_{1}c_{1}-\frac{a_{1}c_{1}s_{1}^{2}}{2} \bigg{)}\left\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2}, \mathcal{N}_{3}}(f,\Psi)\right\}_{eee}\] \[\times(\rho_{1},\omega_{2},\omega_{3})-\sin\bigg{(}s_{1}\omega_{1}c _{1}-\frac{a_{1}c_{1}s_{1}^{2}}{2}\bigg{)}\] \[\times\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2}, \mathcal{N}_{3}}(f,\Psi)\}_{see}(\rho_{1},\omega_{2},\omega_{3}),\]
\[\{\mathcal{G}^{\mathbb{O},s_{1}}_{\mathcal{N}_{1},\mathcal{N}_{2}, \mathcal{N}_{3}}(f,\Psi)\}_{oee}(\omega_{1},\omega_{2},\omega_{3}) =\cos\bigg{(}s_{1}\omega_{1}c_{1}-\frac{a_{1}c_{1}s_{1}^{2}}{2} \bigg{)}\left\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2}, \mathcal{N}_{3}}(f,\Psi)\right\}_{oee}\] \[\times(\rho_{1},\omega_{2},\omega_{3})+\sin\bigg{(}s_{1}\omega_{1}c _{1}-\frac{a_{1}c_{1}s_{1}^{2}}{2}\bigg{)}\] \[\times\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2}, \mathcal{N}_{3}}(f,\Psi)\}_{cee}(\rho_{1},\omega_{2},\omega_{3}),\]
\[\{\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{\mathbb{O},s_{1}}(f,\Psi)\}_{eoe}(\omega_{1},\omega_{2},\omega_{3}) =\cos\left(s_{1}\omega_{1}c_{1}-\frac{a_{1}c_{1}s_{1}^{2}}{2}\right)\{\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{\mathbb{O}}(f,\Psi)\}_{eoe}\] \[\times(\rho_{1},\omega_{2},\omega_{3})-\sin\left(s_{1}\omega_{1}c_{1}-\frac{a_{1}c_{1}s_{1}^{2}}{2}\right)\] \[\times\{\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{\mathbb{O}}(f,\Psi)\}_{soe}(\rho_{1},\omega_{2},\omega_{3}),\]

\[\{\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{\mathbb{O},s_{1}}(f,\Psi)\}_{ooe}(\omega_{1},\omega_{2},\omega_{3}) =\cos\left(s_{1}\omega_{1}c_{1}-\frac{a_{1}c_{1}s_{1}^{2}}{2}\right)\{\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{\mathbb{O}}(f,\Psi)\}_{ooe}\] \[\times(\rho_{1},\omega_{2},\omega_{3})+\sin\left(s_{1}\omega_{1}c_{1}-\frac{a_{1}c_{1}s_{1}^{2}}{2}\right)\] \[\times\{\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{\mathbb{O}}(f,\Psi)\}_{coe}(\rho_{1},\omega_{2},\omega_{3}),\]

\[\{\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{\mathbb{O},s_{1}}(f,\Psi)\}_{eeo}(\omega_{1},\omega_{2},\omega_{3}) =\cos\left(s_{1}\omega_{1}c_{1}-\frac{a_{1}c_{1}s_{1}^{2}}{2}\right)\{\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{\mathbb{O}}(f,\Psi)\}_{eeo}\] \[\times(\rho_{1},\omega_{2},\omega_{3})-\sin\left(s_{1}\omega_{1}c_{1}-\frac{a_{1}c_{1}s_{1}^{2}}{2}\right)\] \[\times\{\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{\mathbb{O}}(f,\Psi)\}_{seo}(\rho_{1},\omega_{2},\omega_{3}),\]

\[\{\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{\mathbb{O},s_{1}}(f,\Psi)\}_{oeo}(\omega_{1},\omega_{2},\omega_{3}) =\cos\left(s_{1}\omega_{1}c_{1}-\frac{a_{1}c_{1}s_{1}^{2}}{2}\right)\{\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{\mathbb{O}}(f,\Psi)\}_{oeo}\] \[\times(\rho_{1},\omega_{2},\omega_{3})+\sin\left(s_{1}\omega_{1}c_{1}-\frac{a_{1}c_{1}s_{1}^{2}}{2}\right)\] \[\times\{\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{\mathbb{O}}(f,\Psi)\}_{ceo}(\rho_{1},\omega_{2},\omega_{3}),\]

\[\{\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{\mathbb{O},s_{1}}(f,\Psi)\}_{eoo}(\omega_{1},\omega_{2},\omega_{3}) =\cos\left(s_{1}\omega_{1}c_{1}-\frac{a_{1}c_{1}s_{1}^{2}}{2}\right)\{\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{\mathbb{O}}(f,\Psi)\}_{eoo}\] \[\times(\rho_{1},\omega_{2},\omega_{3})-\sin\left(s_{1}\omega_{1}c_{1}-\frac{a_{1}c_{1}s_{1}^{2}}{2}\right)\] \[\times\{\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{\mathbb{O}}(f,\Psi)\}_{soo}(\rho_{1},\omega_{2},\omega_{3}),\]
and
\[\{\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{\mathbb{O},s_{1}}(f,\Psi)\}_{ooo}(\omega_{1},\omega_{2},\omega_{3}) =\cos\left(s_{1}\omega_{1}c_{1}-\frac{a_{1}c_{1}s_{1}^{2}}{2}\right)\{\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{\mathbb{O}}(f,\Psi)\}_{ooo}\] \[\times(\rho_{1},\omega_{2},\omega_{3})+\sin\left(s_{1}\omega_{1}c_{1}-\frac{a_{1}c_{1}s_{1}^{2}}{2}\right)\] \[\times\{\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{\mathbb{O}}(f,\Psi)\}_{coo}(\rho_{1},\omega_{2},\omega_{3}).\]
Combining the above component relations with (14), we obtain
\[\mathcal{G}^{\mathbb{O},s_{1}}_{\mathcal{N}_{1},\mathcal{N}_{2}, \mathcal{N}_{3}}(f,\Psi)(\omega_{1},\omega_{2},\omega_{3}) =\cos\left(s_{1}\omega_{1}c_{1}-\frac{a_{1}c_{1}s_{1}^{2}}{2} \right)\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_ {3}}(f,\Psi)\}(\rho_{1},\omega_{2},\omega_{3})\] \[-\sin\left(s_{1}\omega_{1}c_{1}-\frac{a_{1}c_{1}s_{1}^{2}}{2} \right)\Delta_{1}f(\rho_{1},\omega_{2},\omega_{3}),\]
where
\[\Delta_{1}f =\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{see}-\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{cee}e_{1}+\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{soe}e_{2}\] \[-\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{coe}e_{3}+\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{seo}e_{4}-\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{ceo}e_{5}\] \[+\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{soo}e_{6}-\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}_{coo}e_{7}.\]
Similarly, one can prove the shifting property with respect to the variables \(t_{2}\) and \(t_{3}\). Hence, we get the desired result.
## 4 Sharp inequalities and the associated Uncertainty principles for the 3D WOLCT
In this section, we focus on proving the main results, namely the sharp Pitt's inequality, the sharp Young-Hausdorff inequality, the logarithmic uncertainty principle, Heisenberg's uncertainty principle, and Donoho-Stark's uncertainty principle.
**Proposition 4.1**.: _[Sharp Pitt's inequality for the 3D OCLCT] For \(f(t_{1},t_{2},t_{3})\in\mathbb{S}(\mathbb{R}^{3},\mathbb{O}),\,0\leq\beta<3,\)_
\[\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}\left|(\omega_{1},\omega_{2},\omega_{3})\right|^{-\beta}\left|( \mathcal{L}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}f) (\omega_{1},\omega_{2},\omega_{3})\right|^{2}\mathrm{d}\omega_{1}\mathrm{d} \omega_{2}\mathrm{d}\omega_{3}\] \[\leq\frac{M_{\beta}}{2\pi|b_{3}||b_{1}b_{2}|^{\beta}}\int_{- \infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\left|(t_{1},t _{2},t_{3})\right|^{\beta}\left|f(t_{1},t_{2},t_{3})\right|^{2}\mathrm{d}t_{1 }\mathrm{d}t_{2}\mathrm{d}t_{3}, \tag{16}\]
_where \(M_{\beta}=\left(\frac{\Gamma(\frac{3-\beta}{4})}{\Gamma(\frac{3+\beta}{4})} \right)^{2}\)._
**Theorem 4.1** (Sharp Pitt's inequality for the 3D WOLCT).: _For \(f(t_{1},t_{2},t_{3})\in\mathbb{S}(\mathbb{R}^{3},\mathbb{O})\) with respect to OW signal (or function) \(\Psi\in L^{2}(\mathbb{R}^{3},\mathbb{O})\backslash\{0\}\), \(0\leq\beta<3,\) then we have_
\[\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \left|(\omega_{1},\omega_{2},\omega_{3})\right|^{-\beta}\left|\{\mathcal{G}^{ \mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}( \omega_{1},\omega_{2},\omega_{3})\right|^{2}\mathrm{d}\omega_{1}\mathrm{d} \omega_{2}\] \[\times\mathrm{d}\omega_{3}\mathrm{d}\mu_{1}\mathrm{d}\mu_{2} \mathrm{d}\mu_{3}\leq\frac{M_{\beta}||\Psi||_{2}^{2}}{2\pi|b_{3}||b_{1}b_{2}|^{ \beta}}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \left|(t_{1},t_{2},t_{3})\right|^{\beta}\left|f(t_{1},t_{2},t_{3})\right|^{2} \mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}t_{3}. \tag{17}\]
Proof.: Replacing \(f(t_{1},t_{2},t_{3})\) by \(f(t_{1},t_{2},t_{3})\overline{\Psi}(t_{1}-\mu_{1},t_{2}-\mu_{2},t_{3}-\mu_{3})\) in Proposition 4.1 and integrating with respect to \(\mu_{1}\), \(\mu_{2}\), and \(\mu_{3}\), we get
\[\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \Big{|}(\omega_{1},\omega_{2},\omega_{3})\Big{|}^{-\beta}\,\Big{|}\{\mathcal{G }^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}( \omega_{1},\omega_{2},\omega_{3})\Big{|}^{2}\,\mathrm{d}\omega_{1}\mathrm{d} \omega_{2}\] \[\times\mathrm{d}\omega_{3}\mathrm{d}\mu_{1}\mathrm{d}\mu_{2} \mathrm{d}\mu_{3}\leq\frac{M_{\beta}}{2\pi|b_{3}||b_{1}b_{2}|^{\beta}}\int_{- \infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^ {\infty}\Big{|}(t_{1},t_{2},t_{3})\Big{|}^{\beta}\Big{|}f(t_{1},t_{2},t_{3}) \] \[\times\overline{\Psi}(t_{1}-\mu_{1},t_{2}-\mu_{2},t_{3}-\mu_{3}) \Big{|}^{2}\mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}t_{3}\mathrm{d}\mu_{1} \mathrm{d}\mu_{2}\mathrm{d}\mu_{3}\] \[\leq\frac{M_{\beta}}{2\pi|b_{3}||b_{1}b_{2}|^{\beta}}\int_{- \infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^ {\infty}\int_{-\infty}^{\infty}\Big{|}(t_{1},t_{2},t_{3})\Big{|}^{\beta}\] \[\times\Big{|}f(t_{1},t_{2},t_{3})\Big{|}^{2}|\Psi(t_{1}-\mu_{1},t _{2}-\mu_{2},t_{3}-\mu_{3})|^{2}\mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}t_{3} \mathrm{d}\mu_{1}\mathrm{d}\mu_{2}\mathrm{d}\mu_{3}\] \[\leq\frac{M_{\beta}||\Psi||_{2}^{2}}{2\pi|b_{3}||b_{1}b_{2}|^{ \beta}}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \Big{|}(t_{1},t_{2},t_{3})\Big{|}^{\beta}\Big{|}f(t_{1},t_{2},t_{3})\Big{|}^{ 2}\mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}t_{3}.\]
Hence, we obtain the desired result.
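As a quick sanity check (our remark, not part of the original statement), setting \(\beta=0\) in (17) and using \(M_{0}=1\) recovers a Plancherel-type bound for the windowed transform:

\[\int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty}\Big{|}\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}(\omega_{1},\omega_{2},\omega_{3})\Big{|}^{2}\mathrm{d}\omega_{1}\mathrm{d}\omega_{2}\mathrm{d}\omega_{3}\mathrm{d}\mu_{1}\mathrm{d}\mu_{2}\mathrm{d}\mu_{3}\leq\frac{||\Psi||_{2}^{2}}{2\pi|b_{3}|}\,||f||_{2}^{2}.\]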
**Theorem 4.2**.: _(Logarithmic uncertainty principle for 3D WOCLCT) For \(f(t_{1},t_{2},t_{3})\in\mathbb{S}(\mathbb{R}^{3},\mathbb{O})\) with respect to OW signal (or function) \(\Psi\in L^{2}(\mathbb{R}^{3},\mathbb{O})\setminus\{0\}\), then we have_
\[2\pi|b_{3}|\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{- \infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}\ln\Big{|}(\omega_{1},\omega_{2},\omega_{3})\Big{|}\Big{|}\{\mathcal{G }^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}( \omega_{1},\omega_{2},\omega_{3})\Big{|}^{2}\] \[\times\mathrm{d}\omega_{1}\mathrm{d}\omega_{2}\mathrm{d}\omega_{3 }\mathrm{d}\mu_{1}\mathrm{d}\mu_{2}\mathrm{d}\mu_{3}+||\Psi||_{2}^{2}\int_{- \infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\ln\Big{|}(t_{1},t_{2},t_{3})\Big{|}\Big{|}f(t_{1},t_{2},t_{3})\Big{|}^{2}\mathrm{d}t_{1} \mathrm{d}t_{2}\mathrm{d}t_{3}\] \[\geq K_{0}^{{}^{\prime}}||\Psi||_{2}^{2}\int_{-\infty}^{\infty} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\Big{|}f(t_{1},t_{2},t_{3}) \Big{|}^{2}\mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}t_{3},\]
_where \(K_{0}^{{}^{\prime}}=\frac{\mathrm{d}}{\mathrm{d}\beta}\left(\frac{-M_{\beta}} {|b_{1}b_{2}|^{\beta}}\right)\) at \(\beta=0\)._
Proof.: We can rewrite the equation (17) in the following form
\[2\pi|b_{3}|\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{- \infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}\Big{|}(\omega_{1},\omega_{2},\omega_{3})\Big{|}^{-\beta}\,\Big{|}\{ \mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f, \Psi)\}(\omega_{1},\omega_{2},\omega_{3})\Big{|}^{2}\] \[\times\mathrm{d}\omega_{1}\mathrm{d}\omega_{2}\mathrm{d}\omega_ {3}\mathrm{d}\mu_{1}\mathrm{d}\mu_{2}\mathrm{d}\mu_{3}\leq\frac{M_{\beta}|| \Psi||_{2}^{2}}{|b_{1}b_{2}|^{\beta}}\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}\int_{-\infty}^{\infty}\Big{|}(t_{1},t_{2},t_{3})\Big{|}^{\beta}\Big{|} f(t_{1},t_{2},t_{3})\Big{|}^{2}\mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}t_{3}.\]
Now, for every \(0\leq\beta<3\), define
\[C(\beta)=\] \[2\pi|b_{3}|\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{- \infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}\Big{|}(\omega_{1},\omega_{2},\omega_{3})\Big{|}^{-\beta}\,\Big{|}\{ \mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f, \Psi)\}(\omega_{1},\omega_{2},\omega_{3})\Big{|}^{2}\] \[\times\mathrm{d}\omega_{1}\mathrm{d}\omega_{2}\mathrm{d}\omega_{3} \mathrm{d}\mu_{1}\mathrm{d}\mu_{2}\mathrm{d}\mu_{3}-\frac{M_{\beta}||\Psi||_{2}^{2 }}{|b_{1}b_{2}|^{\beta}}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{- \infty}^{\infty}\Big{|}(t_{1},t_{2},t_{3})\Big{|}^{\beta}\Big{|}f(t_{1},t_{2},t_ {3})\Big{|}^{2}\mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}t_{3}\leq 0. \tag{18}\]
Differentiating equation (18) with respect to \(\beta\), we have
\[C^{{}^{\prime}}(\beta)=-2\pi|b_{3}|\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\Big{|}(\omega_{1},\omega_{2},\omega_{3})\Big{|}^{-\beta}\ln\Big{|}(\omega_{1},\omega_{2},\omega_{3})\Big{|}\] \[\times\Big{|}\{\mathcal{G}^{\mathbb{O}}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}(f,\Psi)\}(\omega_{1},\omega_{2},\omega_{3})\Big{|}^{2}\mathrm{d}\omega_{1}\mathrm{d}\omega_{2}\mathrm{d}\omega_{3}\mathrm{d}\mu_{1}\mathrm{d}\mu_{2}\mathrm{d}\mu_{3}-E_{\beta}||\Psi||_{2}^{2}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\] \[\times\Big{|}(t_{1},t_{2},t_{3})\Big{|}^{\beta}\ln\Big{|}(t_{1},t_{2},t_{3})\Big{|}\Big{|}f(t_{1},t_{2},t_{3})\Big{|}^{2}\mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}t_{3}-E_{\beta}^{{}^{\prime}}||\Psi||_{2}^{2}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\] \[\times\Big{|}(t_{1},t_{2},t_{3})\Big{|}^{\beta}\Big{|}f(t_{1},t_{2},t_{3})\Big{|}^{2}\mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}t_{3}\leq 0, \tag{19}\]
where
\[E_{\beta}=\frac{M_{\beta}}{|b_{1}b_{2}|^{\beta}}\ \ \text{and}\ \ E_{\beta}^{{}^{\prime}}=\frac{M_{\beta}^{{}^{\prime}}-\ln|b_{1}b_{2}|}{|b_ {1}b_{2}|^{\beta}}\] \[M_{\beta}^{{}^{\prime}}=\frac{\Gamma(\frac{3+\beta}{4})\Gamma( \frac{3-\beta}{4})\Gamma^{{}^{\prime}}(\frac{3-\beta}{4})+\Big{(}\Gamma(\frac {3-\beta}{4})\Big{)}^{2}\,\Gamma^{{}^{\prime}}(\frac{3+\beta}{4})}{\Big{(} \Gamma(\frac{3+\beta}{4})\Big{)}^{3}}. \tag{20}\]
Setting \(\beta=0\) in equation (19) and equation (20), we get
\[2\pi|b_{3}|\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{- \infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^ {\infty}\ln\Big{|}(\omega_{1},\omega_{2},\omega_{3})\Big{|}\Big{|}\{\mathcal{G }_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{\mathcal{O}}(f,\Psi)\}( \omega_{1},\omega_{2},\omega_{3})\Big{|}^{2}\] \[\times\mathrm{d}\omega_{1}\mathrm{d}\omega_{2}\mathrm{d}\omega_{3 }\mathrm{d}\mu_{1}\mathrm{d}\mu_{2}\mathrm{d}\mu_{3}+||\Psi||_{2}^{2}\int_{- \infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\ln\Big{|}(t_{1 },t_{2},t_{3})\Big{|}\Big{|}f(t_{1},t_{2},t_{3})\Big{|}^{2}\mathrm{d}t_{1} \mathrm{d}t_{2}\mathrm{d}t_{3}\] \[\geq K_{0}^{{}^{\prime}}||\Psi||_{2}^{2}\int_{-\infty}^{\infty} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\Big{|}f(t_{1},t_{2},t_{3})\Big{|} ^{2}\mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}t_{3}.\]
Hence, the desired result has been obtained.
**Theorem 4.3** (Sharp Young-Hausdorff inequality for the 3D WOCLCT).: _Let \(1\leq p<2\) and let \(q\) be such that \(\frac{1}{p}+\frac{1}{q}=1\). Then we have_
\[||\{\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{\mathcal{O}} (f,\Psi)\}(\omega_{1},\omega_{2},\omega_{3})||_{L^{q}(\mathbb{R}^{3},\mathcal{O })}\leq A||f(t_{1},t_{2},t_{3})||_{L^{1}(\mathbb{R}^{3},\mathcal{O})}||\Psi|| _{L^{p}(\mathbb{R}^{3},\mathcal{O})},\]
_where \(A=(2\pi)^{\frac{1}{q}-\frac{1}{p}-\frac{1}{2}}|b_{3}|^{-\frac{1}{2}}|b_{1}b_{2} |^{\frac{1}{q}-\frac{1}{2}}\left(\frac{p^{\frac{1}{p}}}{q^{\frac{1}{q}}}\right)\)._
Proof.: The sharp Young-Hausdorff inequality for the 3D OCLCT, written in our notation, is given by
\[\left(\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{- \infty}^{\infty}\Big{|}f(t_{1},t_{2},t_{3})\kappa_{\mathcal{N}_{1}}^{e_{1}}(t_ {1},\omega_{1})\kappa_{\mathcal{N}_{2}}^{e_{2}}(t_{2},\omega_{2})\kappa_{ \mathcal{N}_{3}}^{e_{4}}(t_{3},\omega_{3})\mathrm{d}t_{1}\mathrm{d}t_{2} \mathrm{d}t_{3}\Big{|}^{q}\right)^{\frac{1}{q}}\] \[\leq A\left(\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{- \infty}^{\infty}\Big{|}f(t_{1},t_{2},t_{3})\mathrm{d}t_{1}\mathrm{d}t_{2} \mathrm{d}t_{3}\Big{|}^{p}\right)^{\frac{1}{p}}. \tag{21}\]
Now, replacing \(f(t_{1},t_{2},t_{3})\) by \(f(t_{1},t_{2},t_{3})\overline{\Psi(t_{1}-\mu_{1},t_{2}-\mu_{2},t_{3}-\mu_{3})}\) in inequality (21), we get
\[\left(\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty} ^{\infty}\Big{|}f(t_{1},t_{2},t_{3})\overline{\Psi(t_{1}-\mu_{1},t_{2}-\mu_{2}, t_{3}-\mu_{3})}\kappa_{\mathcal{N}_{1}}^{e_{1}}(t_{1},\omega_{1})\kappa_{ \mathcal{N}_{2}}^{e_{2}}(t_{2},\omega_{2})\kappa_{\mathcal{N}_{3}}^{e_{4}}(t_{ 3},\omega_{3})\right.\] \[\left.\times\mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}t_{3}\Big{|} ^{q}\right)^{\frac{1}{q}}\leq A\left(\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}\int_{-\infty}^{\infty}\Big{|}f(t_{1},t_{2},t_{3})\overline{\Psi(t_{1} -\mu_{1},t_{2}-\mu_{2},t_{3}-\mu_{3})}\mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d }t_{3}\Big{|}^{p}\right)^{\frac{1}{p}}. \tag{22}\]
Now, using [20, Theorem 1.3, pp.3] on the right-hand side of the inequality (22), we get
\[||\{\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{\mathcal{O} }(f,\Psi)\}(\omega_{1},\omega_{2},\omega_{3})||_{L^{q}(\mathbb{R}^{3}, \mathcal{O})}\leq A||f(t_{1},t_{2},t_{3})||_{L^{1}(\mathbb{R}^{3},\mathcal{O}) }||\Psi||_{L^{p}(\mathbb{R}^{3},\mathcal{O})}.\]
**Theorem 4.4** (Heisenberg's uncertainty principle for 3D WOCLCT).: _Suppose \(f\in L^{1}(\mathbb{R}^{3},\mathbb{O})\cap L^{2}(\mathbb{R}^{3},\mathbb{O})\), then the following inequality is satisfied:_
\[\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}\Big{|}(t_{1},t_{2},t_{3})\Big{|}^{2}\Big{|}f(t_{1},t_{2},t_{3})\Big{|} ^{2}\mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}t_{3}\int_{-\infty}^{\infty}\int _{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty }^{\infty}\Big{|}(\omega_{1},\omega_{2},\omega_{3})\Big{|}^{2}\] \[\times\Big{|}\{\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2}, \mathcal{N}_{3}}^{\mathcal{O}}(f,\Psi)\}(\omega_{1},\omega_{2},\omega_{3}) \Big{|}^{2}\mathrm{d}\omega_{1}\mathrm{d}\omega_{2}\mathrm{d}\omega_{3} \mathrm{d}\mu_{1}\mathrm{d}\mu_{2}\mathrm{d}\mu_{3}\geq\frac{2}{\pi|b_{3}|}b_ {1}^{2}b_{2}^{2}||f(t_{1},t_{2},t_{3})||_{2}^{2}.\]
Proof.: From [18], Heisenberg's uncertainty principle for 3D OCLCT can be represented with our notations as follows:
\[\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}\Big{|}(t_{1},t_{2},t_{3})\Big{|}^{2}\Big{|}f(t_{1},t_{2},t_{3})\Big{|} ^{2}\mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}t_{3}\int_{-\infty}^{\infty}\int _{-\infty}^{\infty}\int_{-\infty}^{\infty}\Big{|}(\omega_{1},\omega_{2}, \omega_{3})\Big{|}^{2}\] \[\times\Big{|}(\mathcal{L}_{\mathcal{N}_{1},\mathcal{N}_{2}, \mathcal{N}_{3}}^{\mathcal{O}}f)(\omega_{1},\omega_{2},\omega_{3})\Big{|}^{2} \mathrm{d}\omega_{1}\mathrm{d}\omega_{2}\mathrm{d}\omega_{3}\] \[\geq\frac{2}{\pi|b_{3}|}b_{1}^{2}b_{2}^{2}\int_{-\infty}^{\infty} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\Big{|}f(t_{1},t_{2},t_{3}) \Big{|}^{2}\mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}t_{3}. \tag{23}\]
Now, replacing \(f(t_{1},t_{2},t_{3})\) by \(f(t_{1},t_{2},t_{3})\overline{\Psi(t_{1}-\mu_{1},t_{2}-\mu_{2},t_{3}-\mu_{3})}\) and integrating with respect to \(\mu_{1}\), \(\mu_{2}\), and \(\mu_{3}\) in equation (23), we obtain
\[\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\Big{|}(t_{1},t_{2},t_{3})\Big{|}^{2}\Big{|}f(t_{1},t_{2},t_{3})\overline{\Psi(t_{1}-\mu_{1},t_{2}-\mu_{2},t_{3}-\mu_{3})}\Big{|}^{2}\] \[\times\mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}t_{3}\mathrm{d}\mu_{1}\mathrm{d}\mu_{2}\mathrm{d}\mu_{3}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\Big{|}(\omega_{1},\omega_{2},\omega_{3})\Big{|}^{2}\Big{|}\{\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{\mathbb{O}}(f,\Psi)\}(\omega_{1},\omega_{2},\omega_{3})\Big{|}^{2}\] \[\times\mathrm{d}\omega_{1}\mathrm{d}\omega_{2}\mathrm{d}\omega_{3}\mathrm{d}\mu_{1}\mathrm{d}\mu_{2}\mathrm{d}\mu_{3}\geq\frac{2}{\pi|b_{3}|}b_{1}^{2}b_{2}^{2}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\] \[\times\Big{|}f(t_{1},t_{2},t_{3})\overline{\Psi(t_{1}-\mu_{1},t_{2}-\mu_{2},t_{3}-\mu_{3})}\Big{|}^{2}\mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}t_{3}\mathrm{d}\mu_{1}\mathrm{d}\mu_{2}\mathrm{d}\mu_{3}.\]
\[||\Psi||_{2}^{2}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{- \infty}^{\infty}\Big{|}(t_{1},t_{2},t_{3})\Big{|}^{2}\Big{|}f(t_{1},t_{2},t_{3}) \Big{|}^{2}\mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}t_{3}\int_{-\infty}^{\infty }\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{- \infty}^{\infty}\int_{-\infty}^{\infty}\] \[\Big{|}(\omega_{1},\omega_{2},\omega_{3})\Big{|}^{2}\Big{|}\{ \mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{0}(f,\Psi)\}( \omega_{1},\omega_{2},\omega_{3})\Big{|}^{2}\mathrm{d}\omega_{1}\mathrm{d} \omega_{2}\mathrm{d}\omega_{3}\mathrm{d}\mu_{1}\mathrm{d}\mu_{2}\mathrm{d} \mu_{3}\] \[\geq\frac{2}{\pi|b_{3}|}b_{1}^{2}b_{2}^{2}||\Psi||_{2}^{2}||f(t_{ 1},t_{2},t_{3})||_{2}^{2}\]
\[\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}\Big{|}(t_{1},t_{2},t_{3})\Big{|}^{2}\Big{|}f(t_{1},t_{2},t_{3})\Big{|} ^{2}\mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}t_{3}\int_{-\infty}^{\infty}\int _{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{- \infty}^{\infty}\] \[\Big{|}(\omega_{1},\omega_{2},\omega_{3})\Big{|}^{2}\Big{|}\{ \mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{0}(f,\Psi)\}( \omega_{1},\omega_{2},\omega_{3})\Big{|}^{2}\mathrm{d}\omega_{1}\mathrm{d} \omega_{2}\mathrm{d}\omega_{3}\mathrm{d}\mu_{1}\mathrm{d}\mu_{2}\mathrm{d} \mu_{3}\] \[\geq\frac{2}{\pi|b_{3}|}b_{1}^{2}b_{2}^{2}||f(t_{1},t_{2},t_{3})|| _{2}^{2}.\]
Hence, we get the desired result.
**Theorem 4.5** (Donoho-Stark's uncertainty principle for the 3D WOCLCT).: _Let \(\sigma\) and \(\tau\) be two measurable subsets of \(\mathbb{R}^{3}\) and let \(f\in L^{1}(\mathbb{R}^{3},\mathbb{O})\cap L^{2}(\mathbb{R}^{3},\mathbb{O})\). If \(f(t_{1},t_{2},t_{3})\) is \(\varepsilon_{\sigma}\)-concentrated on \(\sigma\) in the \(L^{1}(\mathbb{R}^{3},\mathbb{O})\)-norm and \(\{\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{\mathbb{O}}(f,\Psi)\}(\omega_{1},\omega_{2},\omega_{3})\) is \(\varepsilon_{\tau}\)-concentrated on \(\tau\) in the \(L^{2}(\mathbb{R}^{3},\mathbb{O})\)-norm, then_
\[\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \int_{-\infty}^{\infty}\Big{|}\{\mathcal{G}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{0}(f,\Psi)\}(\omega_{1},\omega_{2},\omega_{3})\Big{|}^{2} \mathrm{d}\omega_{1}\mathrm{d}\omega_{2}\mathrm{d}\omega_{3}\mathrm{d}\mu_{1} \mathrm{d}\mu_{2}\mathrm{d}\mu_{3}\] \[=\frac{|\sigma||\tau|}{8\pi^{3}|b_{1}b_{2}b_{3}|[(1-\varepsilon_{ \sigma})(1-\varepsilon_{\tau})]^{2}}||\Psi||_{2}^{2}||f(t_{1},t_{2},t_{3})||_{2 }^{2}.\]
Proof.: From [18], the Donoho-Stark's uncertainty principle for 3D OCLCT can be represented with our notations as follows:
\[\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}\Big{|}(\mathcal{L}_{\mathcal{N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{ 0}f)(\omega_{1},\omega_{2},\omega_{3})\Big{|}^{2}\mathrm{d}\omega_{1}\mathrm{d }\omega_{2}\mathrm{d}\omega_{3}\] \[=\frac{|\sigma||\tau|}{8\pi^{3}|b_{1}b_{2}b_{3}|[(1-\varepsilon_{ \sigma})(1-\varepsilon_{\tau})]^{2}}\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}\int_{-\infty}^{\infty}\Big{|}f(t_{1},t_{2},t_{3})\Big{|}^{2}\mathrm{d }t_{1}\mathrm{d}t_{2}\mathrm{d}t_{3}. \tag{24}\]
Now, replacing \(f(t_{1},t_{2},t_{3})\) by \(f(t_{1},t_{2},t_{3})\overline{\Psi(t_{1}-\mu_{1},t_{2}-\mu_{2},t_{3}-\mu_{3})}\) and integrating with respect to \(\mu_{1}\), \(\mu_{2}\), and \(\mu_{3}\) in equation (24), we obtain
\[\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\Big{|}\{\mathcal{G}_{\mathcal{ N}_{1},\mathcal{N}_{2},\mathcal{N}_{3}}^{0}(f,\Psi)\}(\omega_{1},\omega_{2},\omega_{3}) \Big{|}^{2}\mathrm{d}\omega_{1}\mathrm{d}\omega_{2}\mathrm{d}\omega_{3}\mathrm{d} \mu_{1}\mathrm{d}\mu_{2}\mathrm{d}\mu_{3}\] \[=\frac{|\sigma||\tau|}{8\pi^{3}|b_{1}b_{2}b_{3}|[(1-\varepsilon_{ \sigma})(1-\varepsilon_{\tau})]^{2}}\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{- \infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\Big{|}f(t_{1},t_ {2},t_{3})\] \[\times\overline{\Psi(t_{1}-\mu_{1},t_{2}-\mu_{2},t_{3}-\mu_{3})} \Big{|}^{2}\mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}t_{3}\mathrm{d}\mu_{1} \mathrm{d}\mu_{2}\mathrm{d}\mu_{3}=\frac{|\sigma||\tau|}{8\pi^{3}|b_{1}b_{2}b_{ 3}|[(1-\varepsilon_{\sigma})(1-\varepsilon_{\tau})]^{2}}\] \[\times\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{- \infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}\Big{|}f(t_{1},t_{2},t_{3})\Big{|}^{2}|\Psi(t_{1}-\mu_{1},t_{2}-\mu_{2},t _{3}-\mu_{3})|^{2}\] \[\times\mathrm{d}t_{1}\mathrm{d}t_{2}\mathrm{d}t_{3}\mathrm{d}\mu_ {1}\mathrm{d}\mu_{2}\mathrm{d}\mu_{3}=\frac{|\sigma||\tau|}{8\pi^{3}|b_{1}b_{2}b_ {3}|[(1-\varepsilon_{\sigma})(1-\varepsilon_{\tau})]^{2}}||\Psi||_{2}^{2}||f(t_{1},t _{2},t_{3})||_{2}^{2}.\]
## 5 Potential applications of the 3D WOCLCT
As discussed in the introduction, hyper-complex algebra-based transforms are important tools in modern science and engineering. Octonions have attracted considerable attention in applications such as structural design, earthquake prediction from seismic signals, computer graphics, aerospace engineering, quantum mechanics, time-frequency analysis, optics, signal processing, image processing and enhancement, pattern recognition, and artificial intelligence. Many applications in the literature use the OCLCT, which handles only multi-channel stationary signals and is not suited to multi-channel non-stationary signals. In contrast, the 3D WOCLCT handles both multi-channel stationary and non-stationary signals. In practical applications, its advantage is that the window function simultaneously localizes the hyper-complex signal in the time and frequency domains.
## 6 Conclusion
In this work, we combined octonion algebra with the windowed linear canonical transform (WLCT) to define the 3D WOCLCT. We provided a new definition of the 3D WOCLCT and constructed its inversion formula. Following the present technique, we derived several properties (linearity, parity, and shifting) of the 3D WOCLCT, including a relationship between the 3D WOCLCT and the OCLCT. Further, the main contribution of this work is the derivation of sharp inequalities and uncertainty principles for the 3D WOCLCT, namely the sharp Pitt's inequality, the sharp Young-Hausdorff inequality, the logarithmic uncertainty principle, and Heisenberg's and Donoho-Stark's uncertainty principles. Potential applications of the 3D WOCLCT are also discussed. The results obtained in this paper are presumably new and useful in theoretical and mathematical physics, as well as in signal processing and optics.
## Acknowledgments
The second-named author is grateful to BITS-Pilani, Hyderabad Campus, for providing research funds (ID No. 2022PHXP0424H).
## Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported
in this paper.
## Data availability
No data was used for the research described in the article.
# Federated Multi-Sequence Stochastic Approximation with Local Hypergradient Estimation

Davoud Ataee Tarzanagh, Mingchen Li, Pranay Sharma, Samet Oymak

2023-06-02 | [arXiv:2306.01648v1](http://arxiv.org/abs/2306.01648v1)
###### Abstract
Stochastic approximation with multiple coupled sequences (MSA) has found broad applications in machine learning as it encompasses a rich class of problems including bilevel optimization (BLO), multi-level compositional optimization (MCO), and reinforcement learning (specifically, actor-critic methods). However, designing provably-efficient federated algorithms for MSA has been an elusive question even for the special case of double sequence approximation (DSA). Towards this goal, we develop FedMSA which is the first federated algorithm for MSA, and establish its near-optimal communication complexity. As core novelties, (i) FedMSA enables the provable estimation of hypergradients in BLO and MCO via local client updates, which has been a notable bottleneck in prior theory, and (ii) our convergence guarantees are sensitive to the heterogeneity-level of the problem. We also incorporate momentum and variance reduction techniques to achieve further acceleration leading to near-optimal rates. Finally, we provide experiments that support our theory and demonstrate the empirical benefits of FedMSA. As an example, FedMSA enables order-of-magnitude savings in communication rounds compared to prior federated BLO schemes. Code is available at [https://github.com/ucr-optml/FedMSA](https://github.com/ucr-optml/FedMSA).
## 1 Introduction
Stochastic approximation (SA) methods [85] are iterative techniques widely used in machine learning (ML) to estimate zeros of functions when only noisy function value estimates are available. Initially, SA focused on asymptotic convergence for simple problems, such as finding solutions to \(\mathbf{g}(\mathbf{x})=0\) or minimizing \(f(\mathbf{x})\). However, recent years have seen increased interest in more complex applications, including bilevel and multi-level stochastic optimization problems, leading to the development of double-sequence [9] and multi-sequence SA [90] techniques to address these challenges. For example, the bilevel problem (BLO) can be effectively tackled using double-sequence stochastic approximation (DSA). By imposing appropriate smoothness conditions, such as the strong convexity of \(g\) and the differentiability of \(f\) and \(g\), we are able to derive the first-order optimality conditions: if \((\mathbf{x},\mathbf{w})\) is a local minimum of (BLO), there exists a unique \(\mathbf{v}\) such that
\[\left.\begin{aligned} \nabla_{\mathbf{x}}f(\mathbf{x},\mathbf{w})+\nabla_{ \mathbf{x},\mathbf{w}}^{2}g(\mathbf{x},\mathbf{w})\mathbf{v}=\mathbf{0},\\ \nabla_{\mathbf{w}}^{2}g(\mathbf{x},\mathbf{w})\mathbf{v}+\nabla_{\mathbf{w}}f(\mathbf{x},\mathbf{w})=\mathbf{0},\\ \nabla_{\mathbf{w}}g(\mathbf{x},\mathbf{w})=\mathbf{0}.\end{aligned}\right\} \tag{1}\]
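For intuition, and as a standard observation rather than a quote from the text: eliminating \(\mathbf{v}\) from the middle equation of (1) recovers the usual implicit (hyper)gradient condition of the bilevel problem, which is what the multi-sequence formulation tracks,

\[\mathbf{v}=-\left[\nabla_{\mathbf{w}}^{2}g(\mathbf{x},\mathbf{w})\right]^{-1}\nabla_{\mathbf{w}}f(\mathbf{x},\mathbf{w}),\qquad\nabla_{\mathbf{x}}f(\mathbf{x},\mathbf{w})-\nabla_{\mathbf{x},\mathbf{w}}^{2}g(\mathbf{x},\mathbf{w})\left[\nabla_{\mathbf{w}}^{2}g(\mathbf{x},\mathbf{w})\right]^{-1}\nabla_{\mathbf{w}}f(\mathbf{x},\mathbf{w})=\mathbf{0},\]

with \(\nabla_{\mathbf{w}}g(\mathbf{x},\mathbf{w})=\mathbf{0}\) pinning \(\mathbf{w}\) to the inner solution.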
We consider a network comprising \(M\) clients, each possessing their own local mappings \(\mathbb{P}^{m}\) and \(\{\mathbb{S}^{m,n}\}_{n}\), where \(m\in[M]\). The objective is to find the optimal values of \(\mathbf{x}\), \(\mathbf{z}^{1}\), \(\ldots\), \(\mathbf{z}^{N}\), such that
We consider a network comprising \(M\) clients, each possessing their own local mappings \(\mathbb{P}^{m}\) and \(\{\mathbb{S}^{m,n}\}_{n}\), where \(m\in[M]\). The objective is to find the optimal values of \(\mathbf{x}\), \(\mathbf{z}^{1,}\), \(\ldots\), \(\mathbf{z}^{N}\), such that
\[\sum_{m=1}^{M}\mathbb{P}^{m}\left(\mathbf{x},\mathbf{z}^{1},\ldots,\mathbf{z}^{N}\right)= \mathbf{0},\] (Fed-MSA) \[\sum_{m=1}^{M}\mathbb{S}^{m,n}\left(\mathbf{z}^{n-1},\mathbf{z}^{n}\right)= \mathbf{0},\forall\ n\in[N].\]
## 1 Introduction
The problem of computing the optimal value of \(\mathbf{x}\) is to find the optimal values of \(\mathbf{x}\), \(\mathbf{z}^{1,}\), \(\ldots\), \(\mathbf{z}^{N}\), such that
\[\sum_{m=1}^{M}\mathbb{P}^{m}\left(\mathbf{x},\mathbf{z}^{1},\ldots,\mathbf{z}^{N}\right)= \mathbf{0}, \tag{1}\]
where \(\mathbb{P}\) is the set of all \(n\in[N]\). The objective is to find the optimal values of \(\mathbf{x}\), \(\mathbf{z}^{1,}\), \(\ldots\), \(\mathbf{z}^{N}\), such that
\[\sum_{m=1}^{M}\mathbb{P}^{m}\left(\mathbf{x},\mathbf{z}^{1},\ldots,\mathbf{z}^{N}\right)= \mathbf{0}, \tag{2}\]
where \(\mathbb{P}\) is the set of all \(n\in[N]\). The objective is to find the optimal values of \(\mathbf{x}\), \(\mathbf{z}^{1,}\), \(\ldots\), \(\mathbf{z}^{N}\), such that
\[\sum_{m=1}^{M}\mathbb{P}^{m}\left(\mathbf{x},\mathbf{z}^{1},\ldots,\mathbf{z}^{N}\right)= \mathbf{0}, \tag{3}\]
where \(\mathbb{P}\) is the set of all \(n\in[N]\). The objective is to find the optimal values of \(\mathbf{x}\), \(\mathbf{z}^{1,}\), \(\ldots\), \(\mathbf{z}^{N}\), such that
\[\sum_{m=1}^{M}\mathbb{P}^{m}\left(\mathbf{x},\mathbf{z}^{1},\ldots,\mathbf{z}^{N}\right)= \mathbf{0}, \tag{4}\]
where \(\mathbb{P}\) is the set of all \(n\in[N]\). The objective is to find the optimal values of \(\mathbf{x}\), \(\mathbf{z}^{1,}\), \(\ldots\), \(\mathbf{z}^{N}\), such that
\[\sum_{m=1}^{M}\mathbb{P}^{m}\left(\mathbf{x},\mathbf{z}^{1},\ldots,\mathbf{z}^{N}\right)= \mathbf{0}, \tag{5}\]
where \(\mathbb{P}\) is the set of all \(n\in[N]\). The objective is to find the optimal values of \(\mathbf{x}\), \(\mathbf{z}^{1,}\), \(\ldots\), \(\mathbf{z}^{N}\), such that
\[\sum_{m=1}^{M}\mathbb{P}^{m}\left(\mathbf{x},\mathbf{z}^{1},\ldots,\mathbf{z}^{N}\right)= \mathbf{0}, \tag{6}\]
where \(\mathbb{P}\) is the set of all \(n\in[N]\). The objective is to find the optimal values of \(\mathbf{x}\), \(\mathbf{z}^{1,}\), \(\ldots\), \(\mathbf{z}^{N}\), such that
\[\sum_{m=1}^{M}\mathbb{P}^{m}\left(\mathbf{x},\mathbf{z}^{1},\ldots,\mathbf{z}^{N}\right)= \mathbf{0}, \tag{7}\]
where \(\mathbb{P}\) is the set of all \(n\in[N]\). The objective is to find the optimal values of \(\mathbf{x}\), \(\mathbf{z}^{1,}\), \(\ldots\), \(\mathbf{z}^{N}\), such that
\[\sum_{m=1}^{M}\mathbb{P}^{m}\left(\mathbf{x},\mathbf{z}^{1},\ldots,\mathbf{z}^{N}\right)= \mathbf{0}, \tag{8}\]
where \(\mathbb{P}\) is the set of all \(n\in[N]\). The objective is to find the optimal values of \(\mathbf{x}\), \(\mathbf{z}^{1,}\), \(\ldots\), \(\mathbf{z}^{N}\), such that
\[\sum_{m=1}^{M}\mathbb{P}^{m}\left(\mathbf{x},\mathbf{z}^{1},\ldots,\mathbf{z}^{N}\right)= \mathbf{0}, \tag{9}\]
where \(\mathbb{P}\) is the set of all \(n\in[N]\). The objective is to find the optimal values of \(\mathbf{x}\), \(\mathbf{z}^{1,}\), \(\ldots\), \(\mathbf{z}^{N}\), such that
\[\sum_{m=1}^{M}\mathbb{P}^{m}\left(\mathbf{x},\mathbf{z}^{1},\ldots,\mathbf{z}^{N}\right)= \mathbf{0}, \tag{10}\]
where \(\mathbb{P}\) is the set of all \(n\in[N]\). The objective is to find the optimal values of \(\mathbf{x}\), \(\mathbf{z}^{1,}\), \(\ldots\), \(\mathbf{z}^{N}\), such that
\[\sum_{m=1}^{M}\mathbb{P}^{m}\left(\mathbf{x},\mathbf{z}^{1},\ldots,\mathbf{z}^{N}\right)= \mathbf{0}, \tag{11}\]
where \(\mathbb{P}\) is the set of all \(n\in[N]\). The objective is to find the optimal values of \(\mathbf{x}\), \(\mathbf{z}^{1,}\), \(\ldots\), \(\mathbf{z}^{N}\), such that
\[\sum_{m=1}^{M}\mathbb{P}^{m}\left(\mathbf{x},\mathbf{z}^{1},\ldots,\mathbf{z}^{N}\right)= \mathbf{0}, \tag{12}\]
where \(\mathbb{P}\) is the set of all \(n\in[N]\). The objective is to find the optimal values of \(\mathbf{x}\), \(\mathbf{z}^{1,}\), \(\ldots\), \(\mathbf{z}^{N}\), such that
\[\sum_{m=1}^{M}\mathbb{P}^{m}\left(\mathbf{x},\mathbf{z}^{1},\ldots,\mathbf{z}^{N}\right)= \mathbf{0}, \tag{13}\]
where \(\mathbb{P}\) is the set of all \(n\in[N]\). The objective is to find the optimal values of \(\mathbf{x}\), \(\mathbf{z}^{1,}\), \(\ldots\), \(\mathbf{z}^{N}\), such that
\[\sum_{m=1}^{M}\mathbb{P}^{m}\left(\mathbf{x},\mathbf{z}^{1},\ldots,\mathbf{z}^{N}\right)= \mathbf{0}, \tag{14}\]
where \(\mathbb{P}\) is the set of all \(n\in[N]\). The objective is to find the optimal values of \(\mathbf{x}\), \(\mathbf{z}^{1,}\), \(\ldots\), \(\mathbf{z}^{N}\), such that
\[\sum_{m=1}^{M}\mathbb{P}^{m}\left(\mathbf{x},\mathbf{z}^{1},\ldots,\mathbf{z}^{N}\right)= \mathbf{0}, \tag{15}\]
where \(\mathbb{P}\) is the set of all \(n\in[N]\). The objective is to find the optimal values of \(\mathbf{x}\), \(\mathbf{z}^{1,}\), \(\ldots\), \(\mathbf{z}^{N}\), such that
\[\sum_{m=1}^{M}\mathbb{P}^{m}\left(\mathbf{x},\mathbf{z}^{1},\ldots,\mathbf{z}^{N}\right)= \mathbf{0}, \tag{16}\]
where \(\mathbb{P}\) is the set of all \(n\in[N]\). The objective is to find the optimal values of \(\mathbf{x}\), \(\mathbf{z}^{1,}\), \(\ldots\), \(\mathbf{z}^{N}\), such that
\[\sum_{m=1}^{M}\mathbb{P}^{m}\left(\mathbf{x},\mathbf{z}^{1},\ldots,\mathbf{z}^{N}\right)= \mathbf{0}, \tag{17}\]
where \(\mathbb{P}\) is the set of all \(n\in[N]\). The objective is to find the optimal values of \(\mathbf{x}\), \(\mathbf{z}^{1,}\), \(\ldots\), \(\mathbf{z}^{N}\), such that
\[\sum_{m=1}^{M}\mathbb{P}^{m}\left(\mathbf{x},\mathbf{z}^{1},\ldots,\mathbf{z}^{N}\right)= \mathbf{0}, \tag{18}\]
where \(\mathbb{P}\) is the set of all \(n\in[N]\). The objective is to find the optimal values of \(\mathbf{x}\), \(\mathbf{z}^{1,}\), \(\ldots\), \(\mathbf{z}^{N}\), such that
\[\sum_{m=1}^{M}\mathbb{P}^{m}\left(\mathbf{x},\mathbf{z}^{1},\ldots,\mathbf{z}^{N}\right)= \mathbf{0}, \tag{19}\]
where \(\mathbb{P}\) is the set of all \(n\in[N]\). The objective is to find the optimal values of \(\mathbf{x}\), \(\mathbf{z}^{1,}\), \(\ldots\), \(\mathbf{z}^{N}\), such that
\[\sum_{m=1}^{M}\mathbb{P}^{m}\left(\mathbf{x},\mathbf{z}^{1},\ldots,\mathbf{z}^{N}\right)= \mathbf{0}, \tag{20}\]
and its optimality conditions (1). Computing the local hypergradient (i.e., mapping \(\mathbb{P}^{m}\)) requires calculating the global Hessian, while each client \(m\) only has access to their local Hessian. Existing approaches address this challenge by maintaining a fixed global Hessian during local iterations, resulting in an inexact local hypergradient [96, 106, 39, 43, 105]; please refer to Sec. 2.2 for further discussions.
**Contributions:** In this work, we address these fundamental challenges surrounding federated MSA through the following innovations.
* **Federated Local Mapping and Hypergradient Estimation:** Our novel strategy enables the federated estimation of local mappings through local iterations. In contrast to previous approaches on bilevel [96, 106, 39, 43, 105] and compositional [96, 40] optimization, our method successfully updates the indirect component of the hypergradient within the local iterations of FL, resulting in significant benefits as the number of local steps \(K\) increases; see Figure 1(b).
* **A New Algorithm: FedMSA.** We introduce FedMSA, a novel federated stochastic approximation algorithm with near-optimal communication complexity. The convergence guarantees of our algorithm depend on the level of problem heterogeneity and offer significant speedup in convergence. By integrating momentum and client-drift/variance reduction techniques, FedMSA achieves state-of-the-art convergence rates, even for standard non-federated problems; see Theorem 3.1.
* **Bilevel Optimization (Sec 2.2).** In addition to achieving faster convergence rates, our approach addresses limitations observed in previous works on BLO during the update of the inner and outer variables (\(\mathbf{w}\) and \(\mathbf{x}\)) [96, 106, 39, 43, 105]. Specifically, FedMSA performs simultaneous updates of the inner problem solution, the linear system, and the outer variable, resulting in improved communication efficiency; see Figure 1(a).
* **Compositional Optimization (Sec 2.3).** Existing federated methods for MCO focus solely on the double sequence case (\(N=1\)) [42, 96, 40]. In contrast, our approach extends these results to the multi-level scenario, offering enhanced communication complexity analysis for arbitrary values of \(N\); see Table 1.
Figure 1: Loss function tuning on an imbalanced dataset [57] (details in Sec. 4). Our findings are as follows: **(a)** FedMSA achieves a significant reduction of 10x in communication rounds compared to FedNest [96]. **(b)** Besides enabling much faster convergence, FedMSA also enjoys higher eventual accuracy: When the number of local updates is large, FedMSA successfully updates the indirect component of hypergradient in local iterations, whereas FedNest fails to update the indirect hypergradient, resulting in inferior performance.
## 2 Our Setting and Proposed Algorithm
In this section, we first introduce some notation and definitions that will be used in our analysis. \(\mathbb{N}\) and \(\mathbb{R}\) denote the sets of natural and real numbers, respectively. We consider distributed optimization over \(M\) clients and denote \([M]:=\{1,\ldots,M\}\). For a vector \(\mathbf{v}\in\mathbb{R}^{d}\) and a matrix \(\mathbf{M}\in\mathbb{R}^{d\times d}\), we denote by \(\|\mathbf{v}\|\) and \(\|\mathbf{M}\|\) the Euclidean and spectral norms, respectively. Following the literature on single-level stochastic [25, 55, 114] and federated [72, 50, 78] gradient-based methods for finding a stationary point of an optimization problem, we consider stochastic optimization problems that access \((\mathbb{P}^{m},\{\mathbb{S}^{m,n}\}_{n})\) via
\[\begin{split}\mathbb{P}^{m}\left(\mathbf{x},\mathbf{Z}\right)& :=\mathbb{E}_{\xi\sim\mathcal{A}^{m}}\left[\mathbf{p}^{m}\left(\mathbf{x},\mathbf{Z};\xi\right)\right],\\ \mathbb{S}^{m,n}(\mathbf{z}^{n-1},\mathbf{z}^{n})&:= \mathbb{E}_{\zeta^{n}\sim\mathcal{B}^{m,n}}\left[\mathbf{s}^{m,n}(\mathbf{z}^{n-1},\bm {z}^{n};\zeta^{n})\right],\forall n\in[N].\end{split} \tag{4}\]
Here, \(\mathbf{Z}=[\mathbf{z}^{1},\ldots,\mathbf{z}^{N}]\), \((\xi,\{\zeta^{n}\}_{n})\sim(\mathcal{A}^{m},\{\mathcal{B}^{m,n}\}_{n})\) denote the stochastic samples for the \(m^{\text{th}}\) client.
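To make the stochastic oracle structure in (4) concrete, the following is a minimal sketch (our own illustration, not part of the paper). The callables `p_fn` and `s_fn`, the use of random integers as sample handles, and all shapes are assumptions chosen only for exposition.

```python
import numpy as np
from dataclasses import dataclass, field


@dataclass
class ClientOracle:
    """Stochastic oracle of one client m for the mappings in (4) (illustrative sketch).

    p_fn(x, Z, xi) is assumed to return an unbiased sample of P^m(x, z^1, ..., z^N),
    and s_fn(n, z_prev, z_n, zeta) an unbiased sample of S^{m,n}(z^{n-1}, z^n).
    The sample spaces A^m and B^{m,n} are abstracted by draw_xi / draw_zeta.
    """
    p_fn: callable
    s_fn: callable
    rng: np.random.Generator = field(default_factory=np.random.default_rng)

    def draw_xi(self):
        # A stochastic sample xi ~ A^m; here simply a random index/seed placeholder.
        return int(self.rng.integers(10**9))

    def draw_zeta(self, n):
        # A stochastic sample zeta^n ~ B^{m,n}; placeholder as above.
        return int(self.rng.integers(10**9))

    def p(self, x, Z, xi):
        return self.p_fn(x, Z, xi)

    def s(self, n, z_prev, z_n, zeta):
        return self.s_fn(n, z_prev, z_n, zeta)
```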
**Definition 2.1**.: _The mappings \(\mathbf{p}(\mathbf{w})\) and \(\mathbf{p}(\mathbf{w};\xi)\) are called \(L\)-Lipschitz and \(\bar{L}\)-mean Lipschitz if for any \(\mathbf{w},\bar{\mathbf{w}}\in\mathbb{R}^{d}\), \(\|\mathbf{p}(\mathbf{w})-\mathbf{p}(\bar{\mathbf{w}})\|\leq L\|\mathbf{w}-\bar{\mathbf{w}}\|\) and \(\mathbb{E}_{\xi\sim\mathcal{D}}\|\mathbf{p}(\mathbf{w};\xi)-\mathbf{p}(\bar{\mathbf{w}};\xi)\| \leq\bar{L}\|\mathbf{w}-\bar{\mathbf{w}}\|\), respectively._
**Definition 2.2**.: \(\{\mathbf{p}^{m}(\mathbf{w})\}_{m}\) _are called \(\upsilon\)-Heterogeneous if \(\sup_{m\in[M],\mathbf{w}}\|\nabla\mathbf{p}^{m}(\mathbf{w})-\nabla\mathbf{p}(\mathbf{w})\|\leq\upsilon\) for some \(\upsilon\). Here, \(\mathbf{p}=\sum_{m=1}^{M}\mathbf{p}^{m}\)._
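As a practical illustration (not from the paper), the heterogeneity level of Definition 2.2 can be estimated empirically by taking a maximum over a finite set of probe points, assuming the per-client Jacobians \(\nabla\mathbf{p}^{m}\) are available as callables; the helper name and interface below are hypothetical.

```python
import numpy as np


def estimate_heterogeneity(client_jacobians, probe_points):
    """Empirical estimate of the upsilon in Definition 2.2 over a finite probe set.

    client_jacobians[m](w) is assumed to return the Jacobian of p^m at w as a
    d x d array; p = sum_m p^m, so its Jacobian is the sum of the client Jacobians.
    """
    upsilon = 0.0
    for w in probe_points:
        jacs = [jac(w) for jac in client_jacobians]
        jac_p = np.sum(jacs, axis=0)  # Jacobian of p = sum_m p^m at w
        gap = max(np.linalg.norm(j - jac_p, 2) for j in jacs)  # spectral norm
        upsilon = max(upsilon, gap)
    return upsilon
```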
### The Proposed Framework: FedMSA
Our proposed approach to federated multi-sequence approximation (abbreviated as FedMSA) is described in Algorithm 1. In Line 6, given global update directions from the previous round
| | **Algorithm** | **Sample Complexity** | **Communication Complexity** |
|---|---|---|---|
| **MSA** | STSA [90] | \(\epsilon^{-2}\) | N.A. |
| | FedMSA | \(\epsilon^{-1.5}\) | \(\tau\epsilon^{-1}\) |
| **BLO** | FedNest [96] | \(\epsilon^{-2}\) | \(\epsilon^{-2}\) |
| | FedMBO [43] | \(\epsilon^{-2}\) | \(\epsilon^{-2}\) |
| | AdaFBiO [39] | \(\epsilon^{-1.5}\) | \(\epsilon^{-1.5}\) |
| | FedMSA | \(\epsilon^{-1.5}\) | \(\tau\epsilon^{-1}\) |
| **MCO** | ComFedL (\(N=1\)) [42] | \(\epsilon^{-2}\) | \(\epsilon^{-2}\) |
| | FedNest (\(N=1\)) [96] | \(\epsilon^{-2}\) | \(\epsilon^{-2}\) |
| | AdaMFCGD (\(N=1\)) [40] | \(\epsilon^{-1.5}\) | \(\epsilon^{-1.5}\) |
| | FedMSA (\(N\geq 1\)) | \(\epsilon^{-1.5}\) | \(\tau\epsilon^{-1}\) |

Table 1: Comparison of sample complexity and communication complexity among various algorithms to achieve an \(\epsilon\)-stationary point of MSA, BLO and MCO problems. Here, the \(\tau\) factor controls the benefit we can obtain from the small heterogeneity of the mappings. Discussions are provided below Theorem 3.1.
\((\mathbf{h}_{r-1},\{\mathbf{q}_{r-1}^{n}\}_{n})\), each client \(m\in[M]\) computes the local update directions \((\mathbf{h}_{r}^{m},\{\mathbf{q}_{r}^{m,n}\}_{n})\) via the following momentum rule:
\[\mathbf{h}_{r}^{m} =\mathbf{p}^{m}\left(\mathbf{x}_{r},\mathbf{Z}_{r};\xi_{r}^{m}\right)+(1-\rho)\left(\mathbf{h}_{r-1}-\mathbf{p}^{m}\left(\mathbf{x}_{r-1},\mathbf{Z}_{r-1};\xi_{r}^{m}\right)\right), \tag{5a}\] \[\mathbf{q}_{r}^{m,n} =\mathbf{s}^{m,n}(\mathbf{z}_{r}^{n-1},\mathbf{z}_{r}^{n};\zeta_{r}^{m,n})+(1-\rho)\left(\mathbf{q}_{r-1}^{n}-\mathbf{s}^{m,n}(\mathbf{z}_{r-1}^{n-1},\mathbf{z}_{r-1}^{n};\zeta_{r}^{m,n})\right),\ \forall\ n\in[N]. \tag{5b}\]
Next, all clients communicate \((\mathbf{h}_{r}^{m},\{\mathbf{q}_{r}^{m,n}\}_{n})\) to the server, which averages them to obtain the new global mappings \((\mathbf{h}_{r},\{\mathbf{q}_{r}^{n}\}_{n})\) and broadcasts them to a client \(\tilde{m}\) drawn uniformly at random from \([M]\) (Line 9).
At client \(\tilde{m}\sim\text{Unif}\ [M]\), we initialize \((\mathbf{h}_{r,0}^{\tilde{m}},\mathbf{q}_{r,0}^{\tilde{m},1},\ldots,\mathbf{q}_{r,0}^{ \tilde{m},N}):=(\mathbf{h}_{r},\mathbf{q}_{r}^{1},\ldots,\mathbf{q}_{r}^{N})\), \(\mathbf{x}_{r+1,0}^{\tilde{m}}:=\mathbf{x}_{r}\), and \(\mathbf{Z}_{r+1,1}^{\tilde{m}}:=\mathbf{Z}_{r+1,0}^{\tilde{m}}:=\mathbf{Z}_{r}\). We then compute the local mapping estimators by recalling (4), for all \(k\in[K]\) as follows
\[\mathbf{h}_{r,k}^{\tilde{m}} =\mathbf{p}^{\tilde{m}}\left(\mathbf{x}_{r+1,k}^{\tilde{m}},\mathbf{Z}_{r+1,k}^{\tilde{m}};\xi_{r,k}^{\tilde{m}}\right)+\mathbf{h}_{r,k-1}^{\tilde{m}}-\mathbf{p}^{\tilde{m}}\left(\mathbf{x}_{r+1,k-1}^{\tilde{m}},\mathbf{Z}_{r+1,k-1}^{\tilde{m}};\xi_{r,k}^{\tilde{m}}\right), \tag{6a}\] \[\mathbf{q}_{r,k}^{\tilde{m},n} =\mathbf{s}^{\tilde{m},n}\left(\mathbf{z}_{r+1,k}^{\tilde{m},n-1},\mathbf{z}_{r+1,k}^{\tilde{m},n};\zeta_{r,k}^{\tilde{m},n}\right)+\mathbf{q}_{r,k-1}^{\tilde{m},n}-\mathbf{s}^{\tilde{m},n}\left(\mathbf{z}_{r+1,k-1}^{\tilde{m},n-1},\mathbf{z}_{r+1,k-1}^{\tilde{m},n};\zeta_{r,k}^{\tilde{m},n}\right). \tag{6b}\]
These local estimators are then used to update the local variables \(\mathbf{x}_{r+1,k}^{\tilde{m}}\) and \(\mathbf{Z}_{r+1,k}^{\tilde{m}}\) (Lines 14-15). At the end of \(K\) local steps, the client \(\tilde{m}\) transmits its updated local models to the server, and the server aggregates them to form the global models \((\mathbf{x}_{r+1},\mathbf{Z}_{r+1})\).
**Variance Reduction, Momentum, and Client Selection.** The momentum-based estimators in (5) are inspired by gradient estimators from [14] and [78] for stochastic single-level non-FL and
single-level FL problems, respectively. Additionally, the local update directions in (6) employ SARAH or SPIDER-like estimators [77, 25], originally proposed for stochastic single-level problems. The client selection process, where \(\tilde{m}\) is chosen uniformly at random from \([M]\), combined with variance reduction techniques, plays a crucial role in our analysis and hypergradient estimation. This combination draws inspiration from federated single-level variance reduction methods [72, 50, 68, 78]. It is important to note that while variance reduction and momentum have been studied for bilevel and compositional problems [96, 39, 56, 29], our proposed approaches distinguish themselves through the introduction of provable local hypergradient estimation, multi-sequence approximation (\(N\geq 1\)), and heterogeneity-aware theoretical analysis.
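To make the update rules concrete, the following is a minimal NumPy sketch of the two client-side estimators in (5) and (6); the function names and the flat-vector interface are our own illustrative choices and are not part of Algorithm 1.

```python
import numpy as np

def momentum_direction(p_new, p_old, h_prev, rho):
    """Server-round momentum estimator, cf. (5a)/(5b):
    h_r = p(x_r; xi) + (1 - rho) * (h_{r-1} - p(x_{r-1}; xi)),
    where both stochastic evaluations share the same sample xi."""
    return p_new + (1.0 - rho) * (h_prev - p_old)

def sarah_direction(p_new, p_old, h_prev_local):
    """Local SARAH/SPIDER-style estimator, cf. (6a)/(6b):
    h_{r,k} = p(x_{r,k}; xi) + h_{r,k-1} - p(x_{r,k-1}; xi)."""
    return p_new + h_prev_local - p_old

# Toy usage with random vectors standing in for stochastic mapping evaluations.
rng = np.random.default_rng(0)
d, rho = 5, 0.1
h_prev = rng.normal(size=d)
h_r = momentum_direction(rng.normal(size=d), rng.normal(size=d), h_prev, rho)
h_rk = sarah_direction(rng.normal(size=d), rng.normal(size=d), h_r)
print(h_r.shape, h_rk.shape)
```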
**Communication**. Algorithm 1 requires two rounds of communication between the server and all clients for each iteration, i.e., there are two back-and-forth communications involved in each iteration. Our method uses the extra round of communication, i.e., Line 9, to update the variance/client drift reduced map \((\mathbf{h}^{m},\{\mathbf{q}^{m,n}\}_{n})\) using the current and previous server models \((\mathbf{x}_{r},\mathbf{Z}_{r})\) and \((\mathbf{x}_{r-1},\mathbf{Z}_{r-1})\), respectively.
### Federated Bilevel Optimization
In this section we apply our generic FedMSA to bilevel optimization. In federated bilevel learning, we consider the following optimization problem
\[\begin{array}{ll}\min_{\mathbf{x}\in\mathbb{R}^{d_{1}}}&f(\mathbf{x}):=\frac{1}{M} \sum_{m=1}^{M}f^{m}\left(\mathbf{x},\mathbf{w}^{\star}(\mathbf{x})\right)\\ \text{s.t.}&\mathbf{w}^{\star}(\mathbf{x})\in\operatorname*{argmin}_{\mathbf{w}\in\mathbb{ R}^{d_{2}}}\;\;g\left(\mathbf{x},\mathbf{w}\right):=\frac{1}{M}\sum_{m=1}^{M}g^{m} \left(\mathbf{x},\mathbf{w}\right).\end{array}\] (Fed-BLO)
Each client in our model (\(M\) total clients) can have its own individual outer and inner functions \((f^{m},g^{m})\) to capture objective heterogeneity. We use a stochastic oracle model, where access to the local functions \((f^{m},g^{m})\) is obtained through stochastic sampling:
\[f^{m}(\mathbf{x},\mathbf{w}):=\mathbb{E}_{\xi\sim\mathcal{A}^{m}}\left[f^{m}(\mathbf{x}, \mathbf{w};\xi)\right],\;\;g^{m}(\mathbf{x},\mathbf{w}):=\mathbb{E}_{\zeta\sim\mathcal{B}^ {m}}\left[g^{m}(\mathbf{x},\mathbf{w};\zeta)\right],\]
where \((\xi,\zeta)\sim(\mathcal{A}^{m},\mathcal{B}^{m})\) are stochastic samples at the \(m^{\text{th}}\) client.
Under suitable assumptions, \(\mathbf{w}^{\star}(\mathbf{x})\) is differentiable. By applying the chain rule and the implicit function theorem, we obtain the following _local_ gradient for any \(\mathbf{x}\in\mathbb{R}^{d_{1}}\) [96, Lemma 2.1]:
\[\nabla f^{m}(\mathbf{x})=\nabla_{\mathbf{x}}f^{m}\left(\mathbf{x},\mathbf{w}^{\star}(\mathbf{x})\right)-\nabla_{\mathbf{x}\mathbf{w}}^{2}g(\mathbf{x},\mathbf{w}^{\star}(\mathbf{x}))\mathbf{v}^{m,\star}(\mathbf{x}), \tag{8}\]
where \(\mathbf{v}^{m,\star}(\mathbf{x})\in\mathbb{R}^{d_{2}}\) is the solution to the following linear system of equations:
\[\nabla_{\mathbf{w}}^{2}g(\mathbf{x},\mathbf{w}^{\star}(\mathbf{x}))\mathbf{v}=\nabla_{\mathbf{w}}f^{m}(\mathbf{x},\mathbf{w}^{\star}(\mathbf{x})). \tag{9}\]
In light of (8) and (9), it becomes apparent that the computation of the local gradient of \(f\) at each iteration involves two subproblems: 1) approximate solution of the inner problem \(\mathbf{w}^{\star}(\mathbf{x})\in\operatorname*{argmin}_{\mathbf{w}\in\mathbb{R}^{d_{2}}} \;g\left(\mathbf{x},\mathbf{w}\right)\); and 2) approximate solution of the linear system in (9). This poses challenges in practical implementation of federated methods, such as gradient descent, for solving (9). Specifically, note that the stochastic approximation of \(\mathbf{v}^{m,\star}(\mathbf{x})\) in (9) involves the _global_ Hessian \(\nabla_{\mathbf{w}}^{2}g(\mathbf{x},\mathbf{w}^{\star}(\mathbf{x}))\) in a nonlinear manner, which is not available at any single client. Existing federated bilevel algorithms [96, 106, 39, 43, 105] share the limitation of using an inexact hypergradient approximation for federated optimization problems. This means that the indirect gradient component \(-\nabla_{\mathbf{x}\mathbf{w}}^{2}g(\mathbf{x},\mathbf{w}^{\star}(\mathbf{x}))\mathbf{v}^{m,\star}(\mathbf{ x})\) in equation (8) remains fixed during local training.
As a result, these methods are unable to update the local indirect gradient. We introduce our federated framework in which the solution of the inner problem \(\mathbf{w}^{\star}(\mathbf{x})\), the solutions \(\{\mathbf{v}^{m,\star}(\mathbf{x})\}_{m}\) of the linear systems in (9), and the outer variable \(\mathbf{x}\) all evolve at the same time. This is influenced by the non-FL bilevel optimization framework introduced in [15]. To do so, we define \(\mathbf{z}:=[\mathbf{w}\ \ \mathbf{v}]\) and consider (Fed-MSA) with the mappings (10):
\[\mathbb{S}^{m}(\mathbf{x},\mathbf{z}) =\begin{bmatrix}\nabla_{\mathbf{w}}g^{m}(\mathbf{x},\mathbf{w})\\ \nabla_{\mathbf{w}}^{2}g^{m}(\mathbf{x},\mathbf{w})\mathbf{v}-\nabla_{\mathbf{w}}f^{m}(\mathbf{x},\mathbf{w})\end{bmatrix}, \tag{10a}\] \[\mathbb{P}^{m}(\mathbf{x},\mathbf{z}) =\nabla_{\mathbf{x}}f^{m}(\mathbf{x},\mathbf{w})-\nabla_{\mathbf{x}\mathbf{w}}^{2}g^{m}(\mathbf{x},\mathbf{w})\mathbf{v}. \tag{10b}\]
Note that comparing (Fed-MSA) and (10), since \(N=1\), we omit the index \(n\) to simplify notations. These maps are motivated by the fact that we have \(\nabla f^{m}(\mathbf{x})=\mathbb{P}^{m}(\mathbf{x},\mathbf{w}^{\star}(\mathbf{x}),\mathbf{v}^{m, \star}(\mathbf{x}))\). This provides us with a federated BLO where we plug Equation (10) in Algorithm 1.
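As an illustration of how the mappings in (10) can be evaluated without forming Hessians, the sketch below uses PyTorch Hessian-vector products for a single client; the helper name `blo_maps` and the toy quadratic objectives are hypothetical and only indicate one possible implementation.

```python
import torch

def blo_maps(f_m, g_m, x, w, v):
    """Evaluate S^m(x, z) and P^m(x, z) from (10) with z = [w, v],
    using Hessian-vector products instead of explicit Hessians."""
    gw = torch.autograd.grad(g_m(x, w), w, create_graph=True)[0]   # grad_w g^m
    s = (gw * v).sum()
    hvp_ww = torch.autograd.grad(s, w, retain_graph=True)[0]       # Hess_ww g^m @ v
    hvp_xw = torch.autograd.grad(s, x)[0]                          # Hess_xw g^m @ v
    fx, fw = torch.autograd.grad(f_m(x, w), (x, w))                # grad_x f^m, grad_w f^m
    S = (gw, hvp_ww - fw)   # the two blocks of S^m in (10a)
    P = fx - hvp_xw         # P^m(x, z) in (10b)
    return S, P

# Toy quadratic example (hypothetical); g^m couples x and w so the cross Hessian is nonzero.
x = torch.randn(3, requires_grad=True)
w = torch.randn(3, requires_grad=True)
v = torch.randn(3)
f_m = lambda x, w: ((w - x) ** 2).sum()
g_m = lambda x, w: (w ** 2).sum() + (x * w).sum()
S, P = blo_maps(f_m, g_m, x, w, v)
print(S[0].shape, S[1].shape, P.shape)
```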
### Federated Multi-Level Compositional Optimization
In this section, we consider the federated multi-level compositional optimization problem
\[\min_{\mathbf{x}\in\mathbb{R}^{d_{0}}}f(\mathbf{x}):=\frac{1}{M}\sum_{m=1}^{M}f^{m,N}\Big(\frac{1}{M}\sum_{m=1}^{M}f^{m,N-1}\big(\dots\frac{1}{M}\sum_{m=1}^{M}f^{m,0}(\mathbf{x})\dots\big)\Big).\] (Fed-MCO)
where \(f^{m,n}:\mathbb{R}^{d_{n}}\mapsto\mathbb{R}^{d_{n+1}}\) for \(m\in[M]\), \(n=0,1,\dots,N\) with \(d_{N+1}=1\). Only stochastic evaluations of each layer function are accessible, i.e.,
\[f^{m,n}(\mathbf{x}):=\mathbb{E}_{\zeta^{m,n}}[f^{m,n}(\mathbf{x};\zeta^{m,n})],\ m\in[M],n=0,1, \dots,N.\]
where \(\{\zeta^{m,n}\}_{m,n}\) are random variables. Here, we slightly overload the notation and use \(f^{m,n}(\mathbf{x};\zeta^{m,n})\) to represent the stochastic version of the mapping.
To solve (Fed-MCO), a natural scheme is to use SGD with the gradient given by
\[\nabla f(\mathbf{x})=\nabla f^{0}(\mathbf{x})\nabla f^{1}(f^{0}(\mathbf{x}))\dots\nabla f^ {N}(f^{N-1}(\dots f^{0}(\mathbf{x})\dots)),\]
where we use
\[\nabla f^{n}(f^{n-1}(\dots f^{0}(\mathbf{x})\dots))=\nabla f^{n}(\mathbf{x})|_{\mathbf{x} =f^{n-1}(\dots f^{0}(\mathbf{x})\dots)}.\]
To obtain a stochastic estimator of \(\nabla f(\mathbf{x})\), we need stochastic estimators of \(\nabla f^{n}(f^{n-1}(...f^{0}(\mathbf{x})...))\) for each \(n\). For example, when \(n=1\), one needs an estimator of \(\nabla f^{1}(\mathbb{E}_{\zeta^{0}}[f^{0}(\mathbf{x};\zeta^{0})])\). However, due to the possible non-linearity of \(\nabla f^{1}(\cdot)\), the natural candidate \(\nabla f^{1}(f^{0}(\mathbf{x};\zeta^{0}))\) is not an unbiased estimator of \(\nabla f^{1}(\mathbb{E}_{\zeta^{0}}[f^{0}(\mathbf{x};\zeta^{0})])\). To tackle this issue, a popular method is to directly track \(\mathbb{E}_{\zeta^{n-1}}[f^{n-1}(\cdot;\zeta^{n-1})]\) by a variable \(\mathbf{z}^{n}\), \(n=1,\dots,N\). The mappings take the following form:
\[\mathbb{S}^{m,n}(\mathbf{z}^{n-1},\mathbf{z}^{n})=\mathbf{z}^{n}-f^{m,n-1}(\mathbf{z}^{n-1})\ \ \text{for all}\ \ n=1,\dots,N. \tag{11a}\] \[\mathbb{P}^{m}(\mathbf{x},\mathbf{Z})=\nabla f^{m,0}(\mathbf{x})\nabla f^{m,1}(\mathbf{z}^{1})\dots\nabla f^{m,N}(\mathbf{z}^{N}). \tag{11b}\]
This provides us with a second algorithm, FedMCO, where we plug Equation (11) into Algorithm 1. Existing federated methods for multi-level compositional optimization (MCO) primarily focus on the \(N=1\) case [42, 96]. In contrast, our approach extends these methods to the multi-level federated setting, offering improved communication complexity for any \(N\geq 1\).
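To illustrate (11), the following minimal sketch evaluates the tracking map \(\mathbb{S}^{m,n}\) and the chained-Jacobian map \(\mathbb{P}^{m}\) for one client; `f_prev` and `layer_jacs` are hypothetical callables returning a layer function and the per-layer Jacobians, respectively.

```python
import numpy as np

def mco_S(z_prev, z_n, f_prev):
    """S^{m,n}(z^{n-1}, z^n) = z^n - f^{m,n-1}(z^{n-1}), cf. (11a)."""
    return z_n - f_prev(z_prev)

def mco_P(x, Z, layer_jacs):
    """P^m(x, Z) = Jac f^{m,0}(x) Jac f^{m,1}(z^1) ... Jac f^{m,N}(z^N), cf. (11b).
    layer_jacs[n](point) returns the (d_n x d_{n+1}) Jacobian of f^{m,n}."""
    grad = layer_jacs[0](x)               # d_0 x d_1
    for n, z in enumerate(Z, start=1):    # Z = [z^1, ..., z^N]
        grad = grad @ layer_jacs[n](z)    # accumulate the chain rule
    return grad                           # d_0 x d_{N+1} (a vector when d_{N+1} = 1)
```

The tracking variables \(\mathbf{z}^{n}\) replace the biased plug-in estimates, so each Jacobian in the chain is evaluated at a slowly updated estimate of the corresponding inner expectation.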
## 3 Convergence Analysis
In this section, we provide the convergence guarantees of Algorithm 1. Throughout, we set \(\mathbb{P}(\mathbf{x}):=\mathbb{P}(\mathbf{x},\mathbf{z}^{1,*}(\mathbf{x}),\ldots,\mathbf{z}^{N,*}( \ldots\mathbf{z}^{2,*}(\mathbf{z}^{1,*}(\mathbf{x}))\ldots))\). We make the following assumption on the fixed points and mappings.
**Assumption A**.: _For any \(m\in[M]\), \(n\in\{0\}\cup[N]\) and \(\mathbf{z}^{n-1}\in\mathbb{R}^{d_{n-1}}\):_
**A1**.: _There exists a unique \(\mathbf{z}^{n,*}(\mathbf{z}^{n-1})\in\mathbb{R}^{d_{n}}\) such that \(\mathbb{S}^{m,n}(\mathbf{z}^{n-1},\mathbf{z}^{n,*}(\mathbf{z}^{n-1}))=0\)._
**A2**.: \(\mathbf{z}^{n,*}(\mathbf{z}^{n-1})\) _and \(\nabla\mathbf{z}^{n,*}(\mathbf{z}^{n-1})\) are \(L_{\mathbf{z},n}\) and \(L_{\mathbf{z},n}^{{}^{\prime}}\)-Lipschitz continuous, respectively._
**A3**.: \(\mathbb{P}^{m}(\mathbf{x})\) _, \(\mathbb{P}^{m}(\cdot,\mathbf{Z})\), and \(\mathbb{S}^{m,n}(\cdot,\mathbf{z}^{n})\), are \(L_{p}\), \(L_{z}\), and \(L_{s,n}\) Lipschitz continuous._
**A4**.: \(\mathbf{p}^{m}(\mathbf{x};\xi)\)_, \(\mathbf{p}^{m}(\cdot,\mathbf{Z};\xi)\) and \(\mathbf{s}^{m,n}(\cdot,\mathbf{z}^{n};\xi)\) are \(\bar{L}_{p}\), \(\bar{L}_{z}\) and \(\bar{L}_{s,n}\)-mean Lipschitz continuous._
**A5**.: \(\mathbb{S}^{m,n}(\mathbf{z}^{n-1},\mathbf{z}^{n})\) _is one-point strongly monotone on_ \(\mathbf{z}^{n,*}(\mathbf{z}^{n-1})\) _given any_ \(\mathbf{z}^{n-1}\)_; that is_
\[\left\langle\mathbf{z}^{n}\!-\!\mathbf{z}^{n,*}(\mathbf{z}^{n-1}),\mathbb{S}^{m,n}(\mathbf{z}^{n-1},\mathbf{z}^{n})\right\rangle\geq\lambda_{n}\left\|\mathbf{z}^{n}\!-\!\mathbf{z}^{n,*}(\mathbf{z}^{n-1})\right\|^{2},\ \ \text{for some }\lambda_{n}>0.\]
**Assumption B** (Bias and variance).: _For all \((n,m)\in[N]\times[M]\)_
\[\mathbb{E}_{\xi\sim\mathcal{A}^{m}}\left\|\mathbf{p}^{m}\left(\mathbf{x},\mathbf{Z};\xi\right)-\mathbb{P}^{m}\left(\mathbf{x},\mathbf{Z}\right)\right\|^{2} \leq\sigma^{2},\] \[\mathbb{E}_{\zeta\sim\mathcal{B}^{m,n}}\left\|\mathbf{s}^{m,n}(\mathbf{z}^{n-1},\mathbf{z}^{n};\zeta)-\mathbb{S}^{m,n}(\mathbf{z}^{n-1},\mathbf{z}^{n})\right\|^{2} \leq\sigma_{n}^{2},\ \text{for }n\in[N]\]
**Assumption C** (Heterogeneity).: _For all \((n,m)\in[N]\times[M]\), the set of mappings \(\{\mathbb{P}^{m}\}\) and \(\{\mathbb{S}^{m,n}\}\) are \(\tau_{0}\) and \(\tau_{n}\)-Heterogeneous, respectively._
Note that Assumptions A-C are widely used in the analysis of non-federated MSA [90] and federated single-level SA [50]. Assumption C relates the mappings of different clients to one another and is used in the analysis of single-sequence SA [50, 72]. Assumption **A5** has also been used in previous analyses of linear [49] and nonlinear SA [22].
We now present the convergence guarantee of Algorithm 1.
**Theorem 3.1**.: _Suppose Assumptions A-C hold. Further, assume \(\rho=\Theta(\frac{1}{R})\), \(\alpha=\mathcal{O}(\frac{1}{\tau K})\), and \(\beta_{n}=\mathcal{O}(\frac{1}{\tau K})\) for \(n\in[N]\), then_
\[\mathbb{E}\left\|\mathbb{P}(\tilde{\mathbf{x}})\right\|^{2}+\sum_{n=1}^{N} \mathbb{E}\left\|\tilde{\mathbf{z}}^{n}-\mathbf{z}^{n,*}(\tilde{\mathbf{z}}^{n-1})\right\|^ {2}\leq\mathcal{O}\left(\frac{\tau}{R}+\frac{1}{\sqrt{KR}}+\frac{\sigma^{2}}{MKR }+\left(\frac{\sigma}{MKR}\right)^{2/3}\right).\]
_Here, \(\sigma:=\max(\sigma_{1},\cdots,\sigma_{N})\), \(\tau:=\max(\tau_{0},\cdots,\tau_{N})\), and \(\mathcal{O}\) hides problem-dependent constants that are polynomial in \(N\)._
**Comparison with previous MSA, BLO, and MCO results**: The convergence rate of FedMSA is guaranteed to be \(\frac{\tau}{R}+\frac{1}{\sqrt{KR}}+\frac{\sigma^{2}}{MKR}+\left(\frac{\sigma}{MKR}\right)^{2/3}\), outperforming the previous non-federated MSA algorithm [90], which attains a rate of \(\frac{1}{\sqrt{R}}\). It also achieves a notable improvement in the communication complexity of BLO and MCO, with an upper bound of \(\mathcal{O}(\tau\epsilon^{-1})\), compared to existing results in bilevel [96, 106, 39, 43, 105] and compositional [42, 96, 40] optimization. These advantages are particularly significant when \(\tau\) is small, as shown in Table 1.
## 4 Numerical Experiments
We conduct experiments on the Fed-BLO and Fed-MCO problems.
We first apply Algorithm 1 to solve Fed-MCO problem. Our example is specifically chosen from the field of risk-averse stochastic optimization, which involves multilevel stochastic composite optimization problems. It can be formulated as follows:
\[\min_{\mathbf{x}}\mathbb{E}[U(\mathbf{x},\xi)]+\lambda\sqrt{\mathbb{E}[\max(0,U(\mathbf{x}, \xi)-\mathbb{E}[U(\mathbf{x},\xi)])^{2}]}.\]
This problem is a stochastic three-level (\(N=2\)) composition optimization problem [5] with
\[f_{0}(\mathbf{x}) =(\mathbf{x},\mathbb{E}[U(\mathbf{x},\xi)]),\] \[f_{1}(\mathbf{x},\mathbf{y}) =(\mathbf{y},\mathbb{E}[\max(0,U(\mathbf{x},\xi)-\mathbf{y})^{2}]),\] \[f_{2}(\mathbf{x},\mathbf{y}) =\mathbf{x}+\lambda\sqrt{\mathbf{y}+\delta}.\]
The loss function can be written in the compositional form \(f_{2}(f_{1}(f_{0}(\mathbf{x})))\). To be consistent with [5], in the experiment we define \(U(\mathbf{x},\xi)=(b-g(\mathbf{a}^{\top}\mathbf{x}))^{2}\) with \(g(\mathbf{x})=\mathbf{x}^{2}\). We assume \(\mathbf{a}\in\mathbb{R}^{d}\) is a zero-mean Gaussian random vector with a random covariance matrix \(\Sigma_{i,j}\sim\mathcal{N}(0,1)\), and \(\zeta\sim\mathcal{N}(0,0.001\times\mathbf{I}_{d})\) is the observation noise. The true parameter \(\mathbf{x}^{*}\in\mathbb{R}^{d}\) is drawn from a standard Gaussian distribution and fixed. We set \(d=10\), and the total number of examples in the experiment is \(1,000\).
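For concreteness, the sketch below evaluates the three layer functions with Monte Carlo estimates of the expectations; the data generation for \(b\), the noise scale, and \(\delta=10^{-6}\) are our own illustrative assumptions rather than the exact experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d, lam, delta = 10, 1.0, 1e-6          # delta is an illustrative smoothing constant

def U(x, A, b):
    """Per-sample loss U(x, xi) = (b - g(a^T x))^2 with g(t) = t^2.
    A has one sample vector a per row, b the corresponding targets."""
    return (b - (A @ x) ** 2) ** 2

def f0(x, A, b):                        # f_0(x) = (x, E[U(x, xi)])
    return x, U(x, A, b).mean()

def f1(x, y, A, b):                     # f_1(x, y) = (y, E[max(0, U(x, xi) - y)^2])
    return y, (np.maximum(0.0, U(x, A, b) - y) ** 2).mean()

def f2(u, v):                           # f_2(u, v) = u + lam * sqrt(v + delta)
    return u + lam * np.sqrt(v + delta)

# Toy data (hypothetical generation: b = g(a^T x*) + noise).
A = rng.normal(size=(1000, d))
x_star = rng.normal(size=d)
b = (A @ x_star) ** 2 + rng.normal(scale=0.03, size=1000)
x = rng.normal(size=d)
value = f2(*f1(*f0(x, A, b), A, b))     # the full composition f_2(f_1(f_0(x)))
print(float(value))
```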
In Figure 2, we present the results of our experiment where we conduct 1,000 iterations and compare the performance of our proposed method, FedMSA, with centralized training. In order to ensure a fair comparison in terms of the number of updates, we set the hyperparameters of FedMSA to \(R=200\) and \(K=5\), resulting in a total of 1,000 updates. Our analysis reveals that FedMSA exhibits a significantly faster convergence rate compared to the centralized method, demonstrating improvements in both communication efficiency and the number of updates required to reach convergence.
We now focus on Fed-BLO. Our experiment follows the setup of loss function tuning on an imbalanced dataset, as described in [96]. The objective is to maximize the class-balanced validation accuracy while training on the imbalanced dataset. We adopt the same network architecture, long-tail MNIST dataset, and train-validation strategy as [96]. However, unlike their approach of partitioning the dataset into 100 clients using FedAvg [66] with either i.i.d. or non-i.i.d. distribution, we introduce fine-grained control over partition heterogeneity inspired by [72].
Figure 3: At the end of training, with the same number of server/global epochs, FedMSA benefits from local hypergradient estimation, which updates the indirect component of the hypergradient during local iterations.
Figure 2: On Fed-MCO problem, FedMSA achieves faster convergence compared to its centralized counterpart.
In a dataset with \(C\) imbalanced classes, each containing \(n_{i}\) samples, we aim to achieve a specified heterogeneity level \(q\in[0,1]\). The dataset is divided into \(C\) clients, each designed to have an equal number of samples, \(C^{-1}\sum_{i=1}^{C}n_{i}\). For each client \(i\), we include \(q\times 100\%\) of the data from class \(i\), or all \(n_{i}\) examples if the class size is insufficient. If a client has fewer samples than \(C^{-1}\sum_{i=1}^{C}n_{i}\), we fill the gap by uniformly sampling from the remaining samples across all classes. This ensures an equal sample count across clients and a specified level of class heterogeneity, even within the context of an imbalanced dataset. In our experiments, we first split the imbalanced MNIST dataset into \(C=10\) clients according to \(q\), and then split each client into \(10\) smaller clients to match the \(100\) clients used in [96, 66]. Throughout the experiment section, we choose \(10\) clients randomly from \(100\) clients for local updates to coincide with the literature.
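The partitioning procedure described above can be sketched as follows; the helper name and the interpretation of the per-class share \(q\) as a fraction of each client's quota are our own illustrative assumptions.

```python
import numpy as np

def partition_by_heterogeneity(labels, q, rng):
    """Split indices of C imbalanced classes into C equally sized clients.
    Client i first receives a q-fraction of its quota from class i (or all of
    class i if that class is too small), then is filled up by uniform sampling
    from the remaining pool so that every client has the same sample count."""
    classes = np.unique(labels)
    quota = len(labels) // len(classes)
    pool = set(range(len(labels)))
    clients = []
    for c in classes:
        idx = np.flatnonzero(labels == c)
        take = rng.permutation(idx)[: min(int(q * quota), len(idx))]
        take = [j for j in take if j in pool]
        pool.difference_update(take)
        clients.append(list(take))
    for client in clients:                           # top up to equal size
        fill = rng.permutation(list(pool))[: quota - len(client)]
        pool.difference_update(fill)
        client.extend(fill)
    return clients

# Example: 10 imbalanced classes with hypothetical sizes, heterogeneity q = 0.5.
rng = np.random.default_rng(0)
labels = np.concatenate([np.full(n, k) for k, n in enumerate(rng.integers(20, 200, size=10))])
parts = partition_by_heterogeneity(labels, q=0.5, rng=rng)
print([len(p) for p in parts])
```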
Figure 3 demonstrates FedMSA's ability to update the hypergradient locally. Since the outer objective \(f\) depends solely on the optimal model parameters \(\mathbf{w}^{*}(\mathbf{x})\) for this problem, the direct hypergradient \(\nabla_{\mathbf{x}}f^{m}\left(\mathbf{x},\mathbf{w}^{*}(\mathbf{x})\right)\) remains zero. Without updating the global Hessian-gradient product, FedNest sees no change from its local updates. FedMSA, however, benefits from local hypergradient estimation and updates the indirect hypergradient in local iterations, resulting in improved performance with a larger number of local updates \(K\). In Figure 4, the test performance is shown for different heterogeneity levels and communication rounds. FedMSA stops at \(1,000\) rounds as further training does not decrease the loss. FedNest requires over \(2,000\) rounds to reach the \(90\%\) test accuracy that FedMSA achieves in approximately \(250\) rounds. The efficiency of Algorithm 1 is highlighted by this significant reduction in communication rounds.
## 5 Related Work
We gather the related work under two topics: multi-sequence stochastic approximation and federated nested optimization. A more in-depth discussion is provided in the Appendix.
**Multi-Sequence Stochastic Approximation**. DSA and MSA have found widespread applications in various domains, including stochastic control [23], bilevel/multi-level optimization [86, 110, 5, 95], minimax optimization [89, 88, 17], and actor-critic reinforcement learning [104, 4]. Recent literature has proposed and analyzed several DSA and MSA methods to address these problems [23, 36, 110, 86, 90]. While many analyses of DSA focus on the linear case with linear mappings \(v(\mathbf{x},\mathbf{y})\) and \(h(\mathbf{x},\mathbf{y})\), notable results for Two-time-scale (TTS) linear SA have been achieved [54, 16, 49], proving an iteration complexity of \(\mathcal{O}(\epsilon^{-1})\) for \(\epsilon\)-accuracy. TTS nonlinear SA has also been analyzed, with
Figure 4: The test accuracy during training, with fixed local updates (\(K=12\)) and different heterogeneity levels \(q\), demonstrates the early stopping of FedMSA at \(1000\) communication rounds compared to the extended training of FedNest.
[70] establishing a finite-time convergence rate under asymptotic convergence of the two sequences, and [23] relaxing this assumption and showing an iteration complexity of \(\mathcal{O}(\epsilon^{-1.5})\), which is larger than the \(\mathcal{O}(\epsilon^{-1})\) complexity of TTS linear SA. In a federated setting, these problems have not been explored due to fundamental challenges, such as accounting for problem heterogeneity and developing hypergradient estimators. Our work is closely related to [91], which assumes that all but the main sequence have strongly monotone increments and provides an iteration complexity of \(\mathcal{O}(\epsilon^{-2})\).
**Federated Nested Optimization.** Classical federated learning algorithms, such as FedAvg [67], encounter convergence issues due to _client drift_[51, 38], where local client models deviate from globally optimal models due to _objective heterogeneity_ across clients. To mitigate this drift, variance reduction methods have been proposed to maintain client estimates of the true gradient [72, 50, 68, 78]. These methods draw inspiration from variance-reduced gradient estimators, such as [18, 48, 77, 14]. Recent works, including [96, 106, 39, 43, 105], have developed bilevel optimization approaches for _homogeneous_ and general _heterogeneous_ federated settings. Additionally, [47] investigated asynchronous distributed bilevel optimization, while [111, 13, 62, 30, 97, 73] developed bilevel programming over decentralized networks. Our FedMSA not only provides faster rates but also addresses some shortcomings of prior works on bilevel optimization when approximating local hypergradients.
## 6 Conclusions and Discussion
In this work, we have developed FedMSA, a novel federated algorithm for stochastic approximation with multiple coupled sequences (MSA) in the context of bilevel optimization and multi-level compositional optimization. FedMSA improves upon prior theory by enabling the estimation of local mappings and hypergradients through local client updates. It achieves near-optimal communication complexity and incorporates momentum and variance reduction techniques for accelerated convergence rates. Experimental results demonstrate the empirical benefits of FedMSA, including significant reductions in communication rounds compared to previous federated BLO schemes. However, one limitation of our work is its dependence on low heterogeneity levels for fast rates and communication complexity. Furthermore, we believe that a more general result regarding local hypergradient estimation with more than one client selection can be proven, as indicated by the empirical success of our generalized FedMSA algorithm. Nevertheless, even in its current form, FedMSA with local hypergradient estimation and near-optimal communication complexity represents a state-of-the-art algorithm and paves the way for efficient decentralized algorithms for nested problems.
|
2301.05489 | A Residual Diffusion Model for High Perceptual Quality Codec
Augmentation | Diffusion probabilistic models have recently achieved remarkable success in
generating high quality image and video data. In this work, we build on this
class of generative models and introduce a method for lossy compression of high
resolution images. The resulting codec, which we call Diffusion-based Residual
Augmentation Codec (DIRAC), is the first neural codec to allow smooth traversal
of the rate-distortion-perception tradeoff at test time, while obtaining
competitive performance with GAN-based methods in perceptual quality.
Furthermore, while sampling from diffusion probabilistic models is notoriously
expensive, we show that in the compression setting the number of steps can be
drastically reduced. | Noor Fathima Ghouse, Jens Petersen, Auke Wiggers, Tianlin Xu, Guillaume Sautière | 2023-01-13T11:27:26Z | http://arxiv.org/abs/2301.05489v3 | # A Residual Diffusion Model for High Perceptual Quality Codec Augmentation
###### Abstract
Diffusion probabilistic models have recently achieved remarkable success in generating high quality image and video data. In this work, we build on this class of generative models and introduce a method for lossy compression of high resolution images. The resulting codec, which we call Diffusion-based Residual Augmentation Codec (DIRAC), is the first neural codec to allow smooth traversal of the rate-distortion-perception tradeoff at test time, while obtaining competitive performance with GAN-based methods in perceptual quality. Furthermore, while sampling from diffusion probabilistic models is notoriously expensive, we show that in the compression setting the number of steps can be drastically reduced.
+
Footnote †: Preprint: Qualcomm AI Research is an initiative of Qualcomm Technologies, Inc. \(\dagger\): Work completed during an internship at Qualcomm AI Research. \(\ast\): Equal contribution.
## 1 Introduction
Denoising diffusion probabilistic models (DDPMs) [57] have recently shown incredible performance in the generation of high-resolution images with high perceptual quality. For example, they have powered large text-to-image models such as DALL-E 2 [49] and Imagen [55], which are capable of producing realistic high-resolution images based on arbitrary text prompts. Likewise, diffusion models have demonstrated impressive results on image-to-image tasks such as super-resolution [56, 20], deblurring [68] or inpainting [54], in many cases outperforming generative adversarial networks (GANs) [13]. Our goal in this work is to leverage these capabilities in the context of learned compression.
Neural codecs, which learn to compress from example data, are typically trained to minimize distortion between an input and a reconstruction, as well as the bitrate used to transmit the data [63]. However, optimizing for rate-distortion may result in blurry reconstructions. A recent class of generative models focuses instead on improving perceptual quality of reconstructions, either with end-to-end trained neural codecs [4, 39]--we refer to such techniques as _generative compression_--or by using a receiver-side _perceptual enhancement_ model [27] to augment the output of standard codecs. Either approach will usually come at a cost in fidelity, as there is a fundamental tradeoff between fidelity and perceptual quality [8]. Finding a good operating point for this tradeoff is not trivial and likely application-dependent. Ideally, one would like to be able to select this operating point at test time. But while adaptive rate control is commonly used, few neural image codecs allow trading off distortion and perceptual quality dynamically [25, 3].
In this work, we present a method that allows users to navigate the full rate-distortion-perception tradeoff at test time with a single model. Our approach, called _Diffusion-based Residual Augmentation Codec_ (_DIRAC_), uses a base codec to produce an initial reconstruction with minimal distortion, and then improves its perceptual quality using a
Figure 1: Base reconstruction (left) and the DIRAC-enhanced version (right). Our model combines a base codec with a receiver-side enhancement model, and can smoothly interpolate between near-state-of-the-art fidelity (PSNR, higher better) and near-state-of-the-art perceptual quality (FID, lower better). For JPEG (\(QF=5\)) specifically, we achieve a drastic improvement in perceptual quality without loss in PSNR. Best viewed digitally, PSNR measured on the shown example, FID/256 measured on the full CLIC 2020 test dataset.
denoising diffusion probabilistic model that predicts residuals; see Fig. 1 for an example. The intermediate samples correspond to a smooth traversal between high fidelity and high perceptual quality, so that sampling can be stopped when a desired tradeoff is reached. Recent work [72, 46] already demonstrates that a diffusion-based image codec is feasible in practice, but we show that different design choices allow us to outperform their models by a large margin, and enable faster sampling as well as selection of the distortion-perception operating point. Our contributions are:
* We demonstrate a practical and flexible diffusion-based model that can be combined with any image codec to achieve high perceptual quality compression. Paired with a neural base codec, it can interpolate between high fidelity and high perceptual quality while being competitive with the state of the art in both.
* Our model can be used as a drop-in enhancement model for traditional codecs, where we achieve strong perceptual quality improvements. For JPEG specifically, we improve FID/256 by up to \(78\%\) without loss in PSNR.
* We present techniques that make the diffusion sampling procedure more efficient: we show that in our setting we need no more than 20 sampling steps, and we introduce _rate-dependent thresholding_, which improves performance for multi-rate base codecs.
## 2 Related work
**Neural data compression.** Neural network-based codecs are systems that learn to compress data from examples. These codecs have seen major advances in both the image [41, 51, 5, 40] and video domain [69, 36, 52, 17, 2, 24, 23]. Most modern neural codecs are variations of _compressive autoencoders_ [63], which transmit data \(\mathbf{x}\) using an autoencoder-like architecture, resulting in a reconstruction \(\mathbf{\tilde{x}}\). These systems are typically optimized using a rate-distortion loss, i.e. a combination of distortion and a rate loss that are balanced with a rate parameter \(\lambda_{\text{rate}}\).
Recent work identifies the importance of a third objective: _perceptual quality_, ideally meaning that reconstructions look like real data according to human observers. Blau and Michaeli [8] formalize perceptual quality as a distance between the image distribution \(p(\mathbf{x})\) and the distribution of reconstructions \(p(\mathbf{\hat{x}})\), and show that rate, distortion and perception are in a triple tradeoff. To optimize for perceptual quality, a common choice is to train the decoder as a (conditional) generative adversarial network by adding a GAN loss term to the rate-distortion loss and training a discriminator [4, 39, 75, 3, 43].
It is impractical to train and deploy one codec per rate-distortion-perception operating point. A common choice is therefore to condition the network on the bitrate trade-off parameter \(\lambda_{\text{rate}}\), and vary this parameter during training [70, 60, 50]. Recent GAN-based works use similar techniques to trade off fidelity and perceptual quality, either using control parameters, or by masking the transmitted latent [70, 25, 3]. In particular, the work by Agustsson _et al_. [3] uses receiver side conditioning to trade-off distortion and realism at a particular bitrate. However, a codec that can navigate all axes of the rate-distortion-perception trade-off simultaneously using simple control parameters does not exist yet.
**Diffusion probabilistic models.** Denoising diffusion probabilistic models (DDPMs) [57, 19] are latent variable models in which the latents \(\mathbf{x_{1}},...,\mathbf{x_{T}}\) are defined as a \(T\)-step Markov chain with Gaussian transitions. Through this Markov chain, the _forward process_ gradually corrupts the original data \(\mathbf{x_{0}}\). The key idea of DDPMs is that, if the forward process permits efficient sampling of any variable in the chain, we can construct a generative model by learning to reverse the forward process. To generate a sample, the reverse model is applied iteratively. For a thorough description of DDPMs, we refer the reader to the appendix or Sohl-Dickstein _et al_. [57].
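As a brief illustration of the forward process, the closed-form corruption \(q(\mathbf{x_{t}}|\mathbf{x_{0}})\) can be sampled as below; the linear beta schedule is the common default and is assumed here only for illustration.

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)           # common linear schedule (assumed)
alphas_bar = np.cumprod(1.0 - betas)

def q_sample(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) x_0, (1 - abar_t) I)."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps
    return xt, eps                           # (noisy input, noise target)
```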
DDPMs have seen success in various application areas, including image and video generation [20, 21, 73, 49, 55], representation learning [48, 47], and image-to-image tasks such as super-resolution [56] and deblurring [68]. Recent work applies this model class in the context of data compression as well. Hoogeboom _et al_. [22] show that DDPMs can be used to perform lossless compression. In the lossy compression setting, Ho _et al_. [19] show that if continuous latents could be transmitted, a DDPM enables progressive coding. Theis _et al_. [62] make this approach feasible by using reverse channel coding, yet it remains impractical for high resolution images due to its high computational cost.
More practical diffusion-based approaches for lossy image compression exist, too. Yang and Mandt [72] propose a codec where a conditional DDPM takes the role of the decoder, directly producing a reconstruction from a latent variable. Pan _et al_. [46] similarly use a pretrained text-conditioned diffusion model as decoder, and let the encoder extract a text embedding. However, both approaches still require a large number of sampling steps, or encoder-side optimization of the compressed latent.
**Standard codec restoration.** A common approach is to take a standard codec such as JPEG, and enhance or _restore_ its reconstructions. Until recently, most restoration works focused on improving distortion metrics such as PSNR or SSIM [14, 32, 34, 15, 71, 74]. However, distortion-optimized
restoration typically leads to blurry images, as blur gets rid of compression artifacts such as blocking and ringing. Consequently, a recent category of work on _perceptual enhancement_ [27, 28, 54, 67, 59] focuses mainly on the realism of the enhanced image. This is usually measured by perceptual distortion metrics such as LPIPS [76] or distribution-based metrics like FID [18]. Although enhanced images are less faithful to the original than the non-enhanced version, they may be rated as more realistic by human observers. In this setting, DDPMs have mostly been applied to JPEG restoration [28, 54, 59]. However, these methods have not shown test-time control of the perception-distortion tradeoff, and were only tested for a single standard codec (JPEG) on low-resolution data.
## 3 Method
In this work, we introduce DIRAC, a diffusion-based image compression approach for high-resolution images. It combines a (potentially learned) base codec with a residual diffusion model that performs iterative enhancement. This setup is shown in Fig. 2. By design, we obtain both a high fidelity initial reconstruction, and a high perceptual quality enhanced reconstruction. The enhancement is performed on the receiver side and can be stopped at any time, enabling test-time control over the distortion-perception tradeoff.
### Residual diffusion models
Diffusion-based enhancement is typically achieved by conditioning the reverse model on the image-to-enhance \(\mathbf{\tilde{x}}\), effectively modeling the conditional distribution \(p(\mathbf{x_{0}}|\mathbf{\tilde{x}})\)[20, 54, 59]. Following Whang _et al_. [68], we instead opt to model the distribution of residuals \(p(\mathbf{r_{0}}|\mathbf{\tilde{x}})\), where \(\mathbf{r_{0}}=\mathbf{x}-\mathbf{\tilde{x}}\) and the index is for conceptual diffusion time. From an information theory perspective modeling residuals is equivalent to modeling images, but residuals follow an approximately Gaussian distribution, which we believe can be easier to model. More details on this choice are given in the appendix.
For training, we use the common loss parametrization where the model learns to predict the initial sample instead of the noise that was added to it. Yang and Mandt [72] note that optimizing the perceptual distortion metric LPIPS [76] contributes to perceptual performance, and we adopt a similar practice here by adding a loss term, so that our final loss becomes:
\[\mathcal{L}(\mathbf{x},\mathbf{\tilde{x}})=\mathop{\mathbb{E}}_{t,\mathbf{r_{t}}}\left[w_{t}||\mathbf{r_{0}}-\mathbf{r_{0}}^{\prime}||^{2}+\lambda_{\text{LPIPS}}\,d_{\text{LPIPS}}(\mathbf{x},\mathbf{\tilde{x}}+\mathbf{r_{0}}^{\prime})\right], \tag{1}\]
where \(\mathbf{r_{0}}^{\prime}=g_{\theta}(\mathbf{r_{t}},t)\) is the prediction from our model. \(w_{t}\) is a weighting factor for which the theoretically derived terms become very large for small \(t\) (see appendix for the derivation), so we choose to set \(w_{t}=1\) to balance all loss terms evenly, similar to how Ho _et al_. [19] use a weighted variational objective in practice.
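A minimal PyTorch-style sketch of one evaluation of the loss in (1) is given below; `model`, `lpips_fn`, and `alphas_bar` are placeholders for the denoising network, an LPIPS-like perceptual distance, and the forward-process schedule, and we set \(w_{t}=1\) as in the text.

```python
import torch

def residual_diffusion_loss(model, lpips_fn, x, x_tilde, alphas_bar, lam_lpips=1.0):
    """One stochastic evaluation of Eq. (1): MSE on the predicted residual
    plus an LPIPS term on the enhanced reconstruction x_tilde + r0_hat."""
    r0 = x - x_tilde                                        # clean residual
    t = torch.randint(0, alphas_bar.shape[0], (x.shape[0],), device=x.device)
    abar = alphas_bar[t].view(-1, 1, 1, 1)
    eps = torch.randn_like(r0)
    r_t = abar.sqrt() * r0 + (1.0 - abar).sqrt() * eps      # forward-noised residual
    r0_hat = model(r_t, x_tilde, t)                         # predicts r0, conditioned on x_tilde
    mse = ((r0 - r0_hat) ** 2).mean()
    perc = lpips_fn(x, (x_tilde + r0_hat).clamp(-1, 1)).mean()
    return mse + lam_lpips * perc
```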
### Distortion-perception traversal
The choice to enhance a base codec reconstruction with a generative model has a compelling advantage over approaches that learn to trade off rate, distortion and perception in an end-to-end manner: in theory, it gives us access to an initial reconstruction \(\mathbf{\tilde{x}}\) with maximum fidelity, and an enhanced reconstruction \(\mathbf{\hat{x}}\) with maximum perceptual quality. First, for a perfect encoder and decoder, \(\mathbf{\tilde{x}}\) has the lowest distortion in expectation. Second, if the encoder and decoder are deterministic, and the enhancement model learns \(p(\mathbf{x}|\mathbf{\tilde{x}})\) exactly, then we have \(p(\mathbf{\hat{x}})=p(\mathbf{x})\). This means perfect quality under the definition of Blau and Michaeli [8].
Figure 2: Overview of our architecture. Given an input image \(\mathbf{x}\) and target rate factor \(\lambda_{rate}\), we obtain a base codec reconstruction \(\mathbf{\tilde{x}}\). Our DDPM is conditioned on \(\mathbf{\tilde{x}}\) and learns to model a reverse diffusion process that generates residuals \(r_{0}\) from sampled gaussian noise latents \(r_{T}\). The enhanced reconstruction \(\mathbf{\hat{x}}\) is then obtained by adding the predicted residual to \(\mathbf{\tilde{x}}\)
One can also view the decoder and enhancement model as one joint stochastic decoder, which is required for perfect quality at any bitrate [65]. In this picture, the diffusion steps will then gradually move the prediction from the mean of the learned distribution--which would be 0 for the residuals of an optimal base model--to a sample, corresponding to a transition from high fidelity to high perceptual quality.
### Sampling improvements
In this work we make use of the noise schedule and sampling procedure introduced by Denoising Diffusion Implicit Models (DDIM) [58]. While we use \(T=1000\) diffusion steps during training, the number of sampling steps can be reduced to 100 at test time at negligible cost to performance, by redistributing the timesteps based on the scheme described by Nichol and Dhariwal [44]. As explained in the previous section, the sampling procedure can be stopped at any point, e.g. when the desired perceptual quality is achieved or when a compute budget is reached. To indicate how many sampling steps are performed, we refer to our model as DIRAC-n, going from DIRAC-1 to DIRAC-100.
We further improve sampling efficiency and performance through two contributions: late-start sampling and _rate-dependent thresholding_.
First, we demonstrate in Section 5.3 that we can skip \(80\%\) of the 100 sampling steps, instead starting sampling at DIRAC-80 with noise as model input which is scaled according to the diffusion model's noise schedule. We are not the first to introduce late-start sampling [38], but we can do it while sampling directly from a scaled standard Gaussian as opposed to a more complex distribution, simplifying the approach. We further explain the effectiveness of this approach by showing that the sampling trajectory has very small curvature in the early steps.
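A sketch of this late start is given below: instead of iterating from \(t=T\), the chain is initialized at a late step with Gaussian noise scaled by the schedule; `ddim_step` is a placeholder for one reverse update of the trained model, and the descending ordering of `timesteps` is an assumption of this sketch.

```python
import torch

def late_start_sample(ddim_step, x_tilde, alphas_bar, timesteps, t_start=20):
    """Start reverse sampling at a late diffusion step instead of t = T.
    Assuming the clean residual is close to zero on average, the latent at the
    starting step is approximated by its scaled Gaussian noise component only.
    `timesteps` is the reverse (descending) DDIM schedule; `alphas_bar` a tensor."""
    steps = timesteps[-t_start:]                  # only the last t_start reverse steps
    r = (1.0 - alphas_bar[steps[0]]).sqrt() * torch.randn_like(x_tilde)
    for t in steps:
        r = ddim_step(r, x_tilde, t)              # placeholder reverse update
    return x_tilde + r                            # enhanced reconstruction
```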
Second, we make use of a method we dub _rate-dependent thresholding_. Like most diffusion works, we scale our data (which here are residuals) to the range \([-1;1]\) and clip all intermediate predictions \(\mathbf{r_{0}}^{\prime}\) to this range. However, the distribution of residuals strongly depends on the bit rate of the base codec, with high rate resulting in small residuals between original and reconstruction, and vice versa (see appendix for an analysis of residual distributions). Inspired by Saharia _et al_. [55], who introduce _dynamic thresholding_, we analyze the training data distribution and define a value range for each rate (more precisely, for each \(\lambda_{rate}\) we evaluate). Empirically we found that choosing a range such that \(95\%\) of values fall within it works best. During sampling, intermediate predictions are then clipped to the range for the given rate parameter instead of \([-1;1]\). We hypothesize that this reduces outlier values that disproportionately affect PSNR. In Section 5.3 we show that it does indeed improve PSNR, without affecting perceptual quality. Contrary to Saharia _et al_. we only perform clipping, but not rescaling of intermediate residuals.
We only apply rate-dependent thresholding in the generative compression setting, where we have access to \(\lambda_{rate}\) on the receiver-side, but not for the enhancement of traditional codecs as access to the quality factor is not guaranteed.
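The thresholding itself amounts to clamping each intermediate prediction to a per-rate value range estimated offline from training residuals, as in the sketch below; the 95% coverage follows the text, while the helper names are our own.

```python
import torch

def residual_range(train_residuals, coverage=0.95):
    """Per-rate clipping range covering `coverage` of training residual values."""
    lo = torch.quantile(train_residuals, (1 - coverage) / 2)
    hi = torch.quantile(train_residuals, 1 - (1 - coverage) / 2)
    return lo.item(), hi.item()

def rate_dependent_threshold(r0_hat, ranges, lambda_rate):
    """Clip an intermediate prediction r0_hat to the range stored for this rate."""
    lo, hi = ranges[lambda_rate]
    return r0_hat.clamp(lo, hi)
```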
## 4 Experiments
We evaluate DIRAC both as a generative compression model by enhancing a strong neural base codec, and in the perceptual enhancement setting by using traditional codecs as base codec. In the generative compression setting, we evaluate both rate-distortion and rate-perception performance in comparison to prior work in neural compression, and demonstrate that DIRAC can smoothly traverse the entire rate-distortion-perception tradeoff. In the enhancement setting, we demonstrate DIRAC's flexibility by comparing it with task-specific methods from the literature, focusing on both distortion and perceptual quality. Finally, we present experiments that elucidate why our proposed sampling improvements--extremely late sampling start and rate-dependent thresholding--can be successful in the residual enhancement setting.
BaselinesFor the generative compression setting, we focus on strong GAN-based baselines. One of the strongest perceptual codecs is _HiFiC_[39], a GAN-based codec trained for a specific rate-distortion-perception tradeoff point. _MultiRealism_, a followup work [3], allows navigating the distortion-perception tradeoff by sharing decoder weights and conditioning the decoder on the tradeoff parameter [3] Additionally, _MS-ILLM_[43] show that better discriminator design can further improve perceptual scores [43]. Finally, Yang and Mandt [72] propose a codec where a conditional DDPM takes the role of the decoder, directly producing a reconstruction from a latent variable.
In the JPEG restoration setting, we compare to _DDRM_[28], which recently outperformed the former state-of-the-art method QGAC [15]. They use a pre-trained image-to-image diffusion model and relax the diffusion process to nonlinear degradation, as introduced in [27], to enable JPEG restoration for low resolution images.
Other relevant diffusion-based baselines include _Palette_[54] and ITGDM [59], which explicitly train for JPEG restoration, yet only report perceptual quality on low resolution datasets. Finally, for VTM restoration, we consider _ArabicPerceptual_[67], however it is trained and evaluated on different datasets and metrics. Due to these differences, comparison to these methods can be found in the appendix.
**Our models.** Creating a DIRAC model consists of two stages: (1) training or defining a multi-rate base codec, and (2) training a diffusion model to enhance this base codec.
In the generative compression setting, _i.e_. when the base codec is neural, we use the SwinT-ChARM [77] model. It is a near-state-of-the-art compressive autoencoder based on the Swin Transformer architecture [35]. We adapt this codec to support multiple bitrates using a technique known as _latent scaling_, see details in appendix.
In the enhancement setting, we couple DIRAC with two standard codecs as base models: the intra codec of VTM 17.0 [9], as it is one of the best performing standard codecs in the low bitrate regime, and the widely used JPEG [66] codec. Later, we refer to SwinT-ChARM+DIRAC as just DIRAC, while we explicitly write VTM+DIRAC and JPEG+DIRAC.
**Metrics and evaluation.** We evaluate our method using both distortion metrics and perceptual quality metrics. We always evaluate on full resolution RGB images: we replicate-pad the network input so that all sides are a multiple of the total downsampling factor of the network, and crop the output back to the original resolution. We repeat scores as reported in the respective publications.
To measure distortion, we use the common PSNR metric. We also include the full-reference LPIPS [76] metric as it has been shown to align well with human judgment of visual quality. As a perceptual quality metric, we primarily use a variation of the Fréchet Inception Distance (FID) [18], which measures the distance between the target distribution \(p(\mathbf{x})\) and the distribution of reconstructions \(p(\mathbf{\hat{x}})\). FID requires resizing of input images to a fixed resolution, which for high-resolution images will destroy generated details. We therefore follow the procedure of previous compression work and use half-overlapping \(256\times 256\) crops [39, 3, 43] for high resolution datasets; this metric is referred to as FID/256.
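For reference, half-overlapping crops can be extracted as in the following sketch (stride 128 for \(256\times 256\) patches); the exact border handling used in prior work may differ, so this is only an approximation of the protocol.

```python
import numpy as np

def half_overlapping_crops(img, size=256):
    """Extract size x size crops on a grid with stride size // 2 (half overlap)."""
    h, w = img.shape[:2]
    stride = size // 2
    crops = []
    for top in range(0, max(h - size, 0) + 1, stride):
        for left in range(0, max(w - size, 0) + 1, stride):
            crops.append(img[top:top + size, left:left + size])
    return crops
```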
When we report bitrates, we perform entropy coding and take the file size. This leads to no more than \(0.5\%\) overhead compared to the theoretical bitrate given by the prior.
**Datasets.** To train the SwinT-ChARM base model and residual diffusion models, we use the training split of the high-resolution CLIC2020 dataset [64] (1633 images of varying resolutions). For DDPM training, we follow the three-step preprocessing pipeline of HiFiC [39]: for each image, randomly resize according to a scale factor uniformly sampled from the range \([0.5,1.0]\), then take a random \(256\times 256\) crop, then perform a horizontal flip with probability 0.5. For validation and model selection, we use the CLIC 2020 validation set (102 images).
We evaluate on two common image compression benchmark datasets: the CLIC2020 test set (428 images) and the Kodak dataset [31] (24 images). To enable comparison with enhancement literature, we evaluate on the low resolution ImageNet-val1k [12, 45]. We follow the preprocessing procedure from Kawar _et al_. [28] where images are center cropped along the long edge and then resized to 256.
**Implementation details.** For the base codec, we implement SwinT-ChARM as described in the original paper. We first train a single-rate model for 2M iterations on \(256\times 256\) CLIC 2020 train crops, then finetune it for multiple bitrates for 0.5M iterations. The standard base codecs are evaluated using the VTM reference software, the CompressAI framework [6], and libjpeg in Pillow. We provide full details on the implementation and hyperparameters in the appendix.
The diffusion residual model is based off DDPM's official open source implementation [13], and we base most of our default architecture settings on the \(256\times 256\) DDPM from Preechakul _et al_. [48], which uses a U-Net architecture [53]. Conditioning on the base codec reconstruction \(\mathbf{\tilde{x}}\) is achieved by concatenating \(\mathbf{\tilde{x}}\) and the DDPM latent \(\mathbf{r_{t}}\) in each step. Our model has 108.4 million parameters. For context, the HiFiC baseline has 181.5 million. As HiFiC requires only one forward pass to create a reconstruction, it is typically less expensive than DIRAC. We provide more detail on computational cost in the appendix.
Finally, the DIRAC and VTM+DIRAC diffusion models are trained for 650k steps, using the Adam optimizer [29] with a learning rate of \(10^{-4}\) and no learning rate decay. The JPEG+DIRAC model is trained for 1M iterations, as JPEG degradations are much more severe than those of VTM and SwinT-ChARM.
## 5 Results
### Generative compression
We visualize the rate-distortion and rate-perception tradeoffs in Fig. 3. We show our model in two configurations: DIRAC-100 (100 sampling steps) has maximum perceptual quality, and DIRAC-1 (single sampling step) has minimal distortion.
Along the distortion axis, _i.e_. PSNR, DIRAC-1 is close to VTM and _MultiRealism_ [3] at \(\beta=0\), which in turn is competitive with the state of the art. On the perceptual quality side, HiFiC is the current state of the art among peer-reviewed works. DIRAC-100 matches HiFiC in FID/256 with better PSNR on both test datasets. Likewise, we match _MultiRealism_ at \(\beta=2.56\) in FID/256. Note that between DIRAC and _MultiRealism_, no model is strictly better than the other, _i.e_. better on both the distortion and perception axes at the same time. This is reflected in the examples in Fig. 4, where DIRAC-1, our high-fidelity model, looks a bit sharper than [3], while the examples with high perceptual quality are hardly distinguishable.
MS-ILLM [43] achieves a new state of the art in FID/256 and is unmatched by all other methods, but upon qualitative comparison in Fig. 5 we observe that even at a lower bitrate compared to MS-ILLM, DIRAC-100 is able to generate perceptually relevant details in a more meaningful manner. Of course, this is only a single datapoint, and stronger claims require a thorough perceptual comparison. Similar to Multi-realism [3], our model can target a wide range of distortion-perception tradeoffs at test time, indicated by the shaded area in Fig. 3. Finally, we compare to the diffusion-based codec of Yang and Mandt [72] on Kodak. Our model outperforms theirs by a large margin in terms of LPIPS. More visual examples can be found in the appendix.
Figure 4: CLIC 2020 test reconstructions comparing our model to _MultiRealism[3]_. We show original (top left), Swint-ChARM base codec (bottom left), DIRAC-1 (high fidelity) and DIRAC-100 (high perceptual quality) in center column, _MultiRealism_ counterparts in right column. Shown scores are for full image. Best viewed electronically.
Figure 3: Rate-distortion (left) and rate-perception (right) curves for the CLIC2020 test set (top) and Kodak dataset (bottom). The Kodak dataset has too few samples for FID/256 evaluation, instead we evaluate LPIPS, a perceptual distortion metric.
### Enhancement of standard codecs
We evaluate enhancement of two standard codecs: JPEG and VTM. In Fig. 6 we compare JPEG+DIRAC to literature on the low-resolution dataset ImageNet-1K (left panels) and evaluate JPEG+DIRAC and VTM+DIRAC on the high-resolution dataset CLIC test 2020 (right panels).
When comparing to enhancement literature (left panels in Fig. 6), we compare to QGAC and DDRM, specifically their scores resulting from averaging 8 independent samples, denoted DDRM (A). JPEG+DIRAC-1 slightly outperforms the competing methods in the low rate regime in terms of PSNR, while improving LPIPS by a large margin. Further sampling allows JPEG+DIRAC-100 to improve LPIPS, at the cost of PSNR. While the difference in LPIPS seem small, qualitatively the textures in JPEG+DIRAC-100 are much better than in JPEG+DIRAC-1, as can be seen in the appendix.
When evaluating JPEG+DIRAC and VTM+DIRAC on the high-resolution dataset CLIC test 2020 (right panels in Fig. 6), we can see that both VTM+DIRAC-1 and JPEG+DIRAC-1 outperform their base codec in fidelity. In the perceptual enhancement setting, both VTM+DIRAC-100 and JPEG+DIRAC-100 far outperform their base codec in FID/256, specifically at the lowest rate, with an 81% and 78% improvement, respectively. DIRAC offers a consistent boost in perceptual quality, even as one improves the base codec from JPEG to VTM. We show visual examples of both systems in the middle panels, showing a drastic improvement in visual quality. Notice the lack of texture on the top samples, which are from distortion-optimized codecs. For VTM, the bottom sample has higher distortion (i.e. lower PSNR), yet looks far better to the human observer.
### Sampling analysis
Reverse sampling in diffusion models is equivalent to integrating a stochastic differential equation [61]. The error incurred in the numerical approximation of the true solution trajectory will generally be proportional to its curvature [26], meaning parts with low curvature can be integrated with few and large update steps.
In Fig. 7 (left panel) we show the average curvature of sampling trajectories for our model on the CLIC 2020 val dataset, using 100 DDIM steps [58]. Because computing the Hessian is not feasible for the number of dimensions our model operates in, we approximate it with the angle between consecutive update vectors \(c=\cos^{-1}\!\left(\mathbf{u_{t}}\cdot\mathbf{u_{t-1}}/(||\mathbf{u_{t}}||\,||\mathbf{u_{t-1}}||)\right)\), where \(\mathbf{u_{t}}\propto(\mathbf{r_{0}}^{\prime}(t)-\mathbf{r_{t}})\) points from the current diffusion latent to the current prediction of the residual. We find that the curvature is small along the vast majority of steps in the sampling trajectory, meaning it is indeed possible to take a single large update step and only incur a small error.
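The angle proxy can be computed directly from consecutive update vectors, as in this small helper (our own sketch, not from the paper):

```python
import numpy as np

def update_angle(u_t, u_prev):
    """Angle (radians) between consecutive update vectors as a curvature proxy."""
    cos = np.dot(u_t.ravel(), u_prev.ravel()) / (np.linalg.norm(u_t) * np.linalg.norm(u_prev))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```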
Moreover, we find that instead of starting from standard normal noise at time \(T\) and taking a large integration step to time \(t<<T\), it is sufficient to start directly at \(t\), using noise as input to the model that is scaled according to the diffusion model's noise schedule. In our experiments, starting sampling at \(t=20\) was a good tradeoff, with final performance almost identical to the full 100 steps (as seen in the center and right panels of Fig. 7), but saving \(80\%\) of required compute. One might suspect that the above is due to a suboptimal noise schedule (we use the popular _linear_ schedule), but we explored several different schedules as well as noise schedule learning [30] and found no improvement in performance.
Besides showing that our model can work efficiently using at most 20 sampling steps, in the generative compression setting we also introduce a concept we call _rate-dependent thresholding_, which we detail in Section 3.3. By clipping each intermediate residual prediction to a percentile-range obtained from the training data (we define the range to include \(95\%\) of the data at a given rate), we find that we can improve PSNR while not affecting FID. This can be seen in the center and right panels of Fig. 7, which also shows how our model performs a smooth traversal between high fidelity (high PSNR) and high perceptual quality (low FID/256).
Figure 5: CLIC 2020 test reconstruction by DIRAC-100 and MS-ILLM, crop location chosen based on [43].
## 6 Discussion and Limitations
In this work, we propose a new neural image compression method called Diffusion-based Residual Augmentation Codec (DIRAC). Our approach uses a variable bitrate base codec to transmit an initial reconstruction with high fidelity to the original input, and then uses a diffusion probabilistic model to improve its perceptual quality. We show that this design choice enables fine control over the rate-distortion-perception tradeoff at test time, which for example enables users to choose if an image should be decoded with high fidelity or high perceptual quality. Paired with a strong neural codec as base model, we can smoothly interpolate between performance that is competitive with the state of the art in either fidelity or perceptual quality. Our model can also work as a receiver-side enhancement model for traditional codecs, drastically improving perceptual quality at sometimes no cost in PSNR. Finally, we demonstrate that our model can work with 20 sampling steps or less, and propose _rate-dependent thresholding_, which improves PSNR of the diffusion model without affecting perceptual quality in the multi-rate setting.
**Limitations.** Although our model gives the user control over the amount of hallucinated content, we currently do not control _where_ such hallucinations occur. Similar to GAN-based codecs, we observe that increasing perception sometimes harms fidelity in small regions with semantically important content, such as faces and text. Addressing this limitation is an important next step for generative codecs. Additionally, it is fairly expensive to use a DDPM on the receiver side. Although we drastically reduce the number of sampling steps, HiFiC and its variations [39, 3] are less expensive to run. We provide more details on computational cost in the appendix. On the other hand, sampling efficiency of diffusion models is a major research direction, and we expect our approach to benefit from these advances.
#### Acknowledgments
We thank Johann Brehmer, Taco Cohen, Yunfan Zhang, and Hoang Le for useful discussions and reviews of early drafts of the paper. Thanks to Fabian Mentzer for instructions on reproducing HiFiC, and to Matthew Muckley for providing the MS-ILLM reconstructions and scores.
Figure 6: Quantitative results for JPEG+ and VTM+DIRAC on ImageNet-val1k (left) and CLIC test 2020 (right) respectively. We show rate-distortion (top) and rate-perception (bottom) curves. Qualitative sample is image “3f273e” in CLIC 2020 test.
Figure 7: Analysis of the curvature of the sampling trajectory (approximated by the angle between update vectors), as well as the change in PSNR and FID/256 during sampling. All evaluations done on the CLIC 2020 val subset. |
2301.04797 | Faithful tropicalization and Skeleton of $\overline{M}_{0,n}$ | We propose a comparison between the Berkovich skeleton of Berkovich
analytification of $(\overline{\textsf{M}}_{0,n},{\overline{\textsf{M}}_{0,n}
\setminus \textsf{M}_{0,n}})$ and faithful tropicalization of
$\textsf{M}_{0,n}$ over a complete discrete valued field. In particular, we
prove that the two combinatorial structures are the same in terms of valuation in
$\overline{\textsf{M}}^{\textsf{an}}_{0,n}$. | Jiachang Xu | 2023-01-12T03:57:52Z | http://arxiv.org/abs/2301.04797v4 | # Faithful tropicalization and skeleton of \(\mathsf{M}_{0,n}\)
###### Abstract.
We propose a comparison between the Berkovich skeleton of the Berkovich analytification of \((\overline{\mathsf{M}}_{0,n},\overline{\mathsf{M}}_{0,n}\smallsetminus\mathsf{M}_{0,n})\) and the faithful tropicalization of \(\mathsf{M}_{0,n}\) over a complete discrete valued field. In particular, we prove that the two combinatorial structures are the same in terms of valuation in \(\overline{\mathsf{M}}^{\mathsf{an}}_{0,n}\).
**Fact 0.2**.: The Berkovich skeleton \(\mathsf{Sk}(\mathcal{X}^{+}_{0,n})\) is independent of the model we choose.
To see this, let us denote by \(\mathcal{D}^{=1}\) the dual complex of the coefficient-\(1\) part of a dlt pair, and by \(\mathcal{D}^{=1}_{0}\) the open subset of \(\mathcal{D}^{=1}\) corresponding to the strata supported on the special fiber. The fact above follows from the following proposition:
**Proposition 0.3**.: _[_4_, Proposition 5.1.7]_ _Let \((X,\Delta_{X})\) be a dlt pair with \(K_{X}+\Delta_{X}\) semiample and \((\mathcal{X},\Delta_{\mathcal{X}})\) is a good dlt minimal model of \((X,\Delta_{X})\) over \(K^{\circ}\), then \(\mathcal{D}^{=1}_{0}(\mathcal{X},\Delta_{\mathcal{X}})=\mathsf{Sk}^{\mathsf{ ess}}(X,\Delta_{X})\)._
**Lemma 0.4**.: _Let \(X^{+}\) be a log regular scheme over log trait \(S^{+}\), then_
\[\mathsf{Sk}(X^{+})\cong\Delta^{1}_{F(X^{+})}\]
_as compact conical polyhedral complexes, where \(\Delta^{1}_{F(X^{+})}\) is a conical polyhedral complex with an integral structure associated to a toroidal embedding without self-intersection [9]._
**Remark 0.5**.: By [15], the tropicalization \(\mathscr{T}X\) of the interior \(X\) of \(X^{+}\) coincides with \(\Delta^{1}_{F(X^{+})}\).
### Relation to the faithful tropicalization
M. A. Cueto, M. Habich and A. Werner [5] proved that the tropical Grassmannian \(\mathscr{T}\mathsf{Gr}(2,n)\) with respect to the Plucker embedding is homeomorphic to a closed subset of \(\mathsf{Gr}(2,n)^{\mathsf{an}}\); we can easily generalize this result to the tropicalization of \(\mathsf{M}_{0,n}\) by the Gelfand-MacPherson correspondence [6]. In [14], Speyer and Sturmfels show that \(\mathscr{T}\mathsf{M}_{0,n}\) coincides with the moduli space of \(n\)-marked stable tropical curves \(\mathsf{M}^{\mathsf{trop}}_{0,n}\); thus we have a faithful tropicalization map \(\mathsf{trop}:\mathsf{M}^{\mathsf{an}}_{0,n}\to\mathsf{M}^{\mathsf{trop}}_{0,n}\), in other words, \(\mathsf{M}^{\mathsf{trop}}_{0,n}\) is homeomorphic to a closed subset of \(\mathsf{M}^{\mathsf{an}}_{0,n}\). Since we have two cone complexes \(\mathsf{Sk}(\mathcal{X}^{+}_{0,n})\) and \(\mathscr{T}\mathsf{M}_{0,n}\) in \(\overline{\mathsf{M}}^{\mathsf{an}}_{0,n}\), we ask the following question.
**Question 0.7**.: How do the skeleton \(\mathsf{Sk}(\mathcal{X}^{+}_{0,n})\) and \(\mathscr{T}\mathsf{M}_{0,n}\) compare in terms of valuation?
The direct approach to this question is to give explicit descriptions of each side. Using Kapranov's description of \(\overline{\mathsf{M}}_{0,n}\), we give a complete explicit description of \(\mathsf{Sk}(\mathcal{X}^{+}_{0,n})\) for \(n\leqslant 5\) and prove \(\mathsf{Sk}(\mathcal{X}^{+}_{0,n})=\sigma(\mathscr{T}\mathsf{M}_{0,n})\) for \(n\leqslant 5\), where \(\sigma(\mathscr{T}\mathsf{M}_{0,n})\) is the image of the section map of the tropicalization map. For \(n\geqslant 6\), the complete explicit description of \(\mathsf{Sk}(\mathcal{X}^{+}_{0,n})\) becomes quite complicated. Note, however, that the forgetful map \(\pi_{n+1}:\overline{\mathsf{M}}_{0,n+1}\to\overline{\mathsf{M}}_{0,n}\) relates the boundary divisors on each side and that its fiber is isomorphic to the curve itself, so our question can be analyzed on the fiber of the forgetful map. We first study the behaviour of the skeleton and of the faithful tropicalization under the forgetful map; more precisely, we have the following theorem:
**Theorem 0.8**.: _The universal curve diagram is commutative:_
(0.9)
**Remark 0.10**.: Part of the commutativity of this diagram has been proved in [1] in the case where the base field \(K\) is trivially valued. The proof we present below relies on the combinatorial description of the tropical Grassmannian in [5] and [14].
We first show that the Berkovich skeleton of \(\mathcal{X}_{0,n+1}^{+}\) restricted to the fiber of the analytic forgetful map is equal to the faithful tropicalization of the fiber restricted to \(\mathscr{T}\mathsf{M}_{0,n}\). Then, using Theorem 0.8 and induction, we deduce that the two cone complexes associated to \(\overline{\mathsf{M}}_{0,n+1}\) coincide in terms of valuation and obtain the main theorem:
**Theorem 0.11**.: _Let \(\sigma(\mathscr{T}\mathsf{M}_{0,n})\) be the image of \(\mathscr{T}\mathsf{M}_{0,n}\) under the section map of tropicalization, then we have \(\mathsf{Sk}(\mathcal{X}_{0,n}^{+})=\sigma(\mathscr{T}\mathsf{M}_{0,n})\)._
### Plan
In Section 1 we recall the necessary notations and results from log geometry and tropical geometry. In Section 2 we explain how to use the faithful tropicalization of the Grassmannian developed in [5] to explicitly construct the faithful tropicalization of \(\mathsf{M}_{0,n}\), and give the explicit description of \(\mathscr{T}\mathsf{M}_{0,n}\) for \(n=4,5\). In Section 3 we give an explicit description of \(\mathsf{Sk}(\mathcal{X}_{0,n}^{+})\) for the cases \(n=4,5\). In Section 4, we discuss the relations among forgetful maps, skeletons and sections of tropicalization (see Theorem 4.4). Finally, we prove the comparison Theorem 4.12.
**Acknowledgment**.: The author is grateful to Morgan Brown for introducing him to the work of [4] and [10], for suggesting the comparison question, and for discussions and comments on an earlier draft of this work. The author also would like to thank Phillip Griffiths and Ludmil Katzarkov for their comments on and interest in this project.
## 1. Preliminaries
In this section, we review the notations and results on non-archimedean analytification in the sense of Berkovich, log regular log schemes, and moduli spaces of tropical curves that will be used later.
### Notation
* Let \(K\) be a complete discrete valued field with the normalized valuation \(v_{K}\), and let \(K^{\circ}\) and \(K^{\circ\circ}\) be the corresponding valuation ring and maximal ideal. We define \(|-|_{K}:=\mathsf{exp}(-v_{K})\), the absolute value on \(K\) corresponding to \(v_{K}\).
* All monoids are assumed to be commutative with units, and maps of monoids carry the unit to the unit. The group \(P^{\mathsf{gp}}\) is the group generated by \(P\), that is, the image of \(P\) under the left adjoint of the inclusion functor from abelian groups to monoids. A monoid \(P\) is called _integral_ if the canonical map \(P\to P^{\mathsf{gp}}\) is injective, and _saturated_ if it is integral and for any \(a\in P^{\mathsf{gp}}\), there exists \(n\geqslant 1\) such that \(a^{n}\in P\).
* We write \((X,\mathscr{M}_{X})\) for a log scheme. All log schemes in this paper are Zariski fs log schemes; for details we refer to [12].
### Topological description of Berkovich analytification
In [3], Berkovich constructs a non-archimedean analytification functor from the category of \(K\)-varieties to \(K\)-analytic spaces which has properties similar to the classical complex \(\mathsf{GAGA}\) functor. In this paper, we will only use its topological description for a given \(K\)-variety \(X\), as follows:
**Definition 1.3**.: \[X^{\mathsf{an}}=\big{\{}(x,\left|-\right|_{x})\,\big{|}\,x\in X,\ \left|-\right|_{x}\text{ a valuation on }\kappa(x)\text{ extending the valuation }v_{K}\text{ on }K\big{\}}.\]
where \(\kappa(x)\) is the residue field of \(x\).
### Log regular log scheme
Log regularity was introduced by K. Kato in [7], and Zariski log regular fs log schemes correspond to toroidal embeddings without self-intersection. See 3.10 for the details.
**Definition 1.5**.: Let \((X,\mathscr{M}_{X})\) be an fs log scheme, \((X,\mathscr{M}_{X})\) is called _log regular_ at \(x\in X\) if:
1. \(\mathscr{O}_{X,x}/\mathscr{I}_{X,x}\) is a regular local ring.
2. \(\mathsf{dim}(\mathscr{O}_{X,x})=\mathsf{dim}(\mathscr{O}_{X,x}/\mathscr{I}_{X,x})+\mathsf{rank}_{\mathbf{Z}}(\overline{\mathscr{M}_{X,x}^{\mathsf{gp}}})\).
where \(\mathscr{I}_{X,x}\) is the ideal generated by the image of \(\mathscr{M}_{X,x}\smallsetminus\mathscr{O}_{X,x}^{*}\) in \(\mathscr{O}_{X,x}\). \(X\) is _log regular_ if \(X\) is log regular at every point \(x\in X\).
**Definition 1.6**.: Let \(K\) be a complete discrete valued field, \(X\) be a scheme locally finitely presented over \(K^{\circ}\), and \(U\) be an open subscheme of \(X\). The pair \((X,U)\) is called strictly _semi-stable_ over \(K^{\circ}\) of relative dimension \(n\) if there exists a Zariski covering \(\mathscr{V}=\{V\}\) such that for each \(V\):
1. A diagram of etale morphisms (1.7).
**Remark 1.8**.: For the base field \(K\), if the residue field \(\kappa\) is perfect, a scheme locally of finite presentation over \(K^{\circ}\) is semi-stable if and only if the following conditions are satisfied:
1. X is regular and flat over \(K^{\circ}\), and \(U\) is the complement of a divisor \(D\) with normal crossing.
2. The generic fiber \(X_{\eta}\) is smooth over \(K\), and \(D_{K}\) is a divisor of \(X_{K}\) with normal crossing relative to \(K\).
3. The special fiber \(X_{s}\) is reduced.
**Lemma 1.9**.: _[_13_]_ _If \((X,U)\) is strictly semi-stable over \(K^{\circ}\), it is log smooth over \(K^{\circ}\)._
**Remark 1.10**.: Note that \((K^{\circ},(\varpi))\) is a log regular scheme; then by [7, Theorem 8.2], \((X,U)\) is log regular.
**Example 1.11**.: Let \((\overline{\mathsf{M}}_{0,n},D)\) be the moduli space of \(n\)-marked stable rational curves over a complete discrete valued field \(K\), where \(D=\overline{\mathsf{M}}_{0,n}\smallsetminus\mathsf{M}_{0,n}\) and \(\mathcal{X}_{0,n}:=\overline{\mathcal{M}}_{0,n}\otimes_{\mathbf{Z}}K^{\circ}\); then \((\mathcal{X}_{0,n},\overline{D}+(\mathcal{X}_{0,n})_{s})\) is strictly semi-stable, so it is a log regular log scheme. More details are discussed in section 3.9.
### Moduli space of tropical curves
**Definition 1.13**.: _Dual graph of a stable curve_. Let \((C;x_{1},x_{2},\cdots,x_{n})\) be an \(n\)-marked stable curve over an algebraically closed field \(k\). The _dual graph_ of \((C;x_{1},x_{2},\cdots,x_{n})\) is a vertex-weighted, marked graph \((G,m,w)\) defined as follows:
1. The vertices \(v_{i}\) of \(G\) are in correspondence with the irreducible components \(X_{i}\) of \(C\), with weight function \(w(v_{i})=\mathsf{dim}\,\mathsf{H}^{1}(X_{i},\mathscr{O}_{X_{i}})\).
2. For every node \(p\) of \(C\), there is an edge \(e_{p}\) between \(v_{i}\) and \(v_{j}\) if \(p\) is in both the components \(X_{i}\) and \(X_{j}\).
3. The \(n\)-marking function \(m:\{1,2,\cdots,n\}\to V(G)\) sends \(j\) to the vertex of \(G\) corresponding to the component of \(C\) supporting \(x_{j}\).
4. \(2w(v)-2+n_{v}>0\) where \(n_{v}\) is the number of half-edges and marked points at \(v\).
**Definition 1.14**.: A _tropical curve_ is a graph \((G,m,w)\) equipped with a _length function_ \(\ell:E(G)\to\mathbf{R}_{>0}\). For a given tropical curve \((G,m,w)\), let \(\mathsf{Aut}(G,m,w)\) be the set of all permutations \(\varphi:E(G)\to E(G)\) that arise from automorphisms of \(G\) that preserve \(m\) and \(w\). We define
\[\overline{C(G,m,w)}=\mathbf{R}_{\geqslant 0}^{E(G)}/\mathsf{Aut}(G,m,w)\]
**Definition 1.15**.: _The moduli space of \(n\)-marked stable tropical curves \(\mathsf{M}_{g,n}^{\mathrm{trop}}\) is the complex_
\[\mathsf{M}_{g,n}^{\mathrm{trop}}=\bigsqcup\overline{C(G,m,w)}/\sim\]
where the disjoint union is over all stable graphs with type \((g,n)\), and for two points \(x\) and \(x^{\prime}\), \(x\sim x^{\prime}\) if they are equal after contracting all edges with length \(0\).
The tropical moduli space \(\mathsf{M}_{0,n}^{\mathsf{trop}}\) can be considered as a space of phylogenetic trees in the following sense:
**Definition 1.16**.: A phylogenetic tree on \(n\) leaves is a real weighted tree \((T,w)\), where \(T\) is a finite connected graph with no cycles and with no degree-two vertices, together with a labeling of its leaves in bijection with \([n]\). The weight function \(w:E(T)\to\mathbf{R}\) is defined on the set of edges of \(T\). We write \(\delta_{I}\) for the tropical curve in \(\mathsf{M}^{\mathsf{trop}}_{0,n}\) with one edge, one of whose vertices carries the legs indexed by \(I\).
In order to give an explicit description of the local sections of the tropicalization of \(\mathsf{M}^{\mathsf{an}}_{0,n}\) with respect to the Plucker embedding, we record the combinatorial types of phylogenetic trees developed in [5]. We can use them to classify tropical curves of genus \(0\) with \(n\) marked points.
**Definition 1.17**.: A tree \(T\) on \(n\) leaves with \(i,j\) as endpoint leaves is called of caterpillar type if all the vertices are within distance \(1\) of the central path.
**Definition 1.18**.: [5, Definition 4.6] Let \(i,j\) be a pair of indices, and let \(\leqslant\) be a partial order on the set \([n]\smallsetminus\{i,j\}\). Let \(T\) be a tree on \(n\) leaves arranged as in the right of Figure 1. We say that \(\leqslant\) has the _cherry property on \(T\)_ with respect to \(i\) and \(j\) if the following conditions hold:
1. Two leaves of different subtrees \(T_{a}\) and \(T_{b}\) cannot be compared by \(\leqslant\).
2. The partial order \(\leqslant\) restricts to a total order on the leaf set of each \(T_{a}\), \(a=1,\ldots,m\).
3. If \(k<l<v\), then either \(\{k,l\}\) or \(\{l,v\}\) is a cherry of the quartet \(\{i,k,l,v\}\) (and hence also of \(\{j,k,l,v\}\)).
The following lemma guarantees the existence of partial orders with the cherry property for a given tree \(T\).
**Lemma 1.19**.: _[_5_, Lemma 4.7]_ _Fix a pair of indices \(i,j\) and let \(T\) be a tree with \(n\) leaves. Then there exists a partial order \(\leqslant\) on the set \([n]\smallsetminus\{i,j\}\) that has the cherry property on \(T\) with respect to \(i,j\)._
**Example 1.20**.: The tropical moduli spaces \(\mathsf{M}^{\mathsf{trop}}_{0,4}\) and \(\mathsf{M}^{\mathsf{trop}}_{0,5}\) are identified with spaces of phylogenetic trees in which all combinatorial types of trees are of caterpillar type. For \(n\geqslant 6\), combinatorial types of trees need not be of caterpillar type.
We give an embedding similar to the one defined in [14]:
**Definition 1.21**.: The embedding of \(\mathsf{M}^{\mathsf{trop}}_{0,n}\) into \(\mathbf{R}^{\binom{n}{2}}\) is defined as follows:
\[P:\mathsf{M}^{\mathsf{trop}}_{0,n}\to\mathbf{R}^{\binom{n}{2}}\]
\[x\mapsto(-\frac{1}{2}d(i,j))_{(i,j)}\]
Figure 1. Trees with \(n\) labelled endpoints
where \(d(i,j)\) denotes the distance between the \(i\)-th and \(j\)-th leaf of the tropical curve by the length function. We define a linear map as follows:
\[L:\mathbf{R}^{n} \to\mathbf{R}^{\binom{n}{2}}\] \[(a_{1},\cdots,a_{n}) \mapsto(a_{i}+a_{j})_{(i,j)}\]
Now consider the composition of maps:
\[\mathsf{M}_{0,n}^{\text{\rm trop}}\xrightarrow{P}\mathbf{R}^{\binom{n}{2}} \xrightarrow{\pi}\mathbf{R}^{\binom{n}{2}}/\mathsf{im}(L)\]
Then we can see that the image of \(\pi\circ P\) coincides with the tropicalization of \(\mathsf{M}_{0,n}^{\text{\rm an}}\) through the Plucker embedding and becomes _a space of phylogenetic trees_ [14]. In the rest of this chapter, we will use \(\mathscr{T}\mathsf{M}_{0,n}\) to denote the image of \(\pi\circ P\).
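The map \(\pi\circ P\) is easy to compute in examples. The sketch below (our own illustration, not part of any cited implementation) takes a phylogenetic tree with edge lengths, computes the pairwise leaf distances \(d(i,j)\), and returns the point \((-\tfrac{1}{2}d(i,j))_{i<j}\); the result is only meaningful modulo \(\mathsf{im}(L)\).

```python
import itertools

def tree_to_point(edges, leaves):
    """Return the point (-1/2 * d(i,j))_{i<j} of R^{n choose 2} for a weighted tree.

    `edges` is a list of (u, v, length); leaf edges can be given length 0 since
    only the class modulo im(L) (adding a_i + a_j) matters.
    """
    nodes = sorted({u for u, _, _ in edges} | {v for _, v, _ in edges})
    inf = float("inf")
    d = {u: {v: (0.0 if u == v else inf) for v in nodes} for u in nodes}
    for u, v, length in edges:
        d[u][v] = d[v][u] = min(d[u][v], length)
    for k in nodes:            # Floyd-Warshall; the trees here are tiny
        for u in nodes:
            for v in nodes:
                if d[u][k] + d[k][v] < d[u][v]:
                    d[u][v] = d[u][k] + d[k][v]
    return {(i, j): -0.5 * d[i][j] for i, j in itertools.combinations(leaves, 2)}

# The tree (12 | 34) with one internal edge of length l gives
# (0, -l/2, -l/2, -l/2, -l/2, 0) in the ordering (12, 13, 14, 23, 24, 34),
# matching the description of M_{0,4}^trop in Section 2.
```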
## 2. Faithful tropicalization of \(\mathsf{M}_{0,n}\)
In this section, we describe the faithful tropicalization of \(\mathsf{M}_{0,n}\) following the faithful tropicalization of \(\mathsf{Gr}(2,n)\) in [5]. Specifically, there exists a section \(\sigma\) of the map \(\mathsf{trop}:\mathsf{M}^{\mathsf{an}}_{0,n}\to\mathscr{T}\mathsf{M}_{0,n}\), and we give the explicit description of \(\sigma(\mathscr{T}\mathsf{M}_{0,n})\) for the cases \(n=4,5\).
**2.1**.: In the paper [5], the authors construct a section of the tropicalization map \(\mathsf{trop}:\mathsf{Gr}(2,n)^{\mathsf{an}}\to\mathscr{T}\mathsf{Gr}(2,n)\); that is to say, we have a continuous section \(\sigma:\mathscr{T}\mathsf{Gr}(2,n)\to\mathsf{Gr}(2,n)^{\mathsf{an}}\), so we can consider the cone complex \(\mathscr{T}\mathsf{Gr}(2,n)\) as a closed subset of the \(K\)-analytic space \(\mathsf{Gr}(2,n)^{\mathsf{an}}\). Now by the Gelfand-MacPherson correspondence, we have \(\mathsf{Gr}_{0}(2,n)/\mathsf{G}^{n}_{m,K}\cong\mathsf{M}_{0,n}\), where \(\mathsf{Gr}_{0}(2,n)\) is the affine open subvariety of \(\mathsf{Gr}(2,n)\) with non-vanishing Plucker coordinates and the action of the torus is defined by the following morphism:
\[\mathsf{Gr}_{0}(2,n)\times_{K}\mathbf{G}^{n}_{m,K}\to\mathsf{Gr}_{0}(2,n)\]
\[((p_{kl})_{kl},(t_{i})_{i\in[n]})\mapsto(t_{k}t_{l}p_{kl})_{kl}\]
For \(\mathsf{M}_{0,n}\), we have the following result:
**Lemma 2.2**.: _[_5_, Corollary 4]_ _The tropicalization map \(\mathsf{trop}:\mathsf{M}^{\mathsf{an}}_{0,n}\to\mathscr{T}\mathsf{M}_{0,n} \cong\mathscr{T}\mathsf{Gr}_{0}(2,n)/\overline{L}\) is faithful, the section \(\sigma^{\prime}\) is induced by the section \(\sigma\) for the tropical Grassmannian and \(\overline{L}\) is the linearity space of \(\mathscr{T}\mathsf{Gr}_{0}(2,n)\)._
**Remark 2.3**.: In [14], Speyer and Sturmfels show that \(\mathscr{T}\mathsf{M}_{0,n}\) coincides with \(\mathsf{M}^{\mathsf{trop}}_{0,n}\); thus we have a faithful tropicalization map \(\mathsf{trop}:\mathsf{M}^{\mathsf{an}}_{0,n}\to\mathsf{M}^{\mathsf{trop}}_{0,n}\).
**2.4**.: **Local section map of \(\mathsf{trop}:\mathsf{M}^{\mathsf{an}}_{0,n}\to\mathsf{M}^{\mathsf{trop}}_{0,n}\).**
Recall the construction in [5] for the Grassmannian of planes: let \(\varphi:\mathsf{Gr}(2,n)\hookrightarrow\mathbf{P}^{\binom{n}{2}-1}_{K}=\mathsf{Proj}K[p_{ij}\,|\,ij\in\binom{[n]}{2}]\) be the _Plucker embedding_ and \(\{U_{ij}\}_{ij}\) be an affine open covering of \(\mathsf{Gr}(2,n)\), where \(U_{ij}:=\varphi^{-1}(D_{+}(p_{ij}))\). Let \(\mathsf{Spec}\,R(ij)=U_{ij}\); then we have:
\[R(ij)=K[u_{kl}\,|\,kl\in I(ij)] \tag{2.5}\]
where \(I(ij)=\{il,jl\,|\,l\neq i,j\}\) and \(u_{kl}=p_{kl}/p_{ij}\) for every \(kl\neq ij\). We have \(u_{kl}=u_{ik}u_{jl}-u_{il}u_{jk}\) for \(k,l\notin\{i,j\}\) by the Plucker relations.
Considering \(\mathscr{T}U_{ij}\) as the image of \(U^{\mathsf{an}}_{ij}\subseteq\mathsf{Gr}(2,n)^{\mathsf{an}}\) under the tropicalization of projective varieties, we get:
\[\mathscr{T}U_{ij}=\{x\in\mathscr{T}\mathsf{Gr}(2,n)\,|\,x_{ij}\neq-\infty\}\]
Thus for any \(x\in\mathscr{T}\mathsf{Gr}_{0}(2,n)\), we have \(x_{ij}\neq-\infty\) for each component \(x_{ij}\) by \(\mathscr{T}\mathsf{Gr}_{0}(2,n)\subseteq\bigcap_{ij}\mathscr{T}U_{ij}\).
Note that \(\mathscr{T}\mathsf{Gr}_{0}(2,n)\) can be identified with the space of phylogenetic trees, so we have \(\mathscr{T}\mathsf{Gr}_{0}(2,n)=\bigcup_{T}\mathscr{C}_{T}\), where \(\mathscr{C}_{T}\) is the cone in \(\mathscr{T}\mathsf{Gr}_{0}(2,n)\) associated to a combinatorial type of a phylogenetic tree \(T\) on \(n\) leaves. Correspondingly, we have \(\mathsf{M}^{\mathsf{trop}}_{0,n}=\bigcup_{T}(\mathscr{C}_{T}/\overline{L})\); we denote \(\mathscr{C}_{T}/\overline{L}:=\mathscr{C}_{T}^{\prime}\), and for a _caterpillar type_ tree \(T\) we can construct a local section on \(\mathscr{C}_{T}^{\prime}\) via lifting a local section on \(\overline{\mathscr{C}_{T}}\cap\mathscr{T}U_{ij}\); for a tree \(T\) of arbitrary type, we can lift local sections via a stratification \(\{\mathscr{C}_{T,J}^{(ij)}\}_{T,J}\) of \(\mathscr{T}U_{ij}\), given by the following data:
* \(J(x):=\{kl\in\binom{[n]}{2}\,|\,x_{kl}=-\infty\}\), for any \(x\in\mathscr{T}U_{ij}\). \(J(ij):=J(x)\cap I(ij)\).
* monomial prime ideals \(\mathfrak{a}_{J(ij)}=\langle u_{kl}\,|\,kl\in J(ij)\rangle\) of \(R(ij)\).
* \(Y_{J(ij)}=\mathsf{Spec}(\frac{R(ij)}{\mathfrak{a}_{J(ij)}})\hookrightarrow \mathbf{P}^{\binom{n}{2}-1}_{K}\).
* \(\mathscr{C}_{T,J}^{(ij)}:=\overline{\mathscr{C}_{T}}\cap\mathscr{T}Y_{J(ij)}\cap \bigcap_{kl\in I(ij)\smallsetminus J}\mathscr{T}U_{kl}\hookrightarrow\mathscr{T}U _{ij}\).
**Definition 2.6**.: Let \(I=I(ij)\), we first define the projection:
\[\pi_{I}:\mathscr{T}U_{ij}\to\overline{\mathbf{R}}^{I}\]
\[[(x_{kl})_{kl\in\binom{[n]}{2}}]\mapsto(x_{kl}-x_{ij})_{kl\in I}\]
and define a _skeleton_ map for affine \(n\)-space:
\[\delta_{n}:\overline{\mathbf{R}}^{n}\to\mathbf{A}_{k}^{n,\mathsf{an}}\]
\[\rho\mapsto\delta_{n}(\rho)(\sum_{\alpha}c_{\alpha}x^{\alpha})=\mathsf{max}\{|c _{\alpha}|\mathsf{exp}(\sum_{i=1}^{n}\rho_{i}\alpha_{i})\}\]
Now we define a map:
\[\sigma_{I}^{(ij)}:=\delta_{I}\circ\pi_{I}\]
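To make the formula for \(\delta_{n}\) concrete, the following sketch (ours, for illustration only) evaluates \(\delta_{n}(\rho)\) on a polynomial given as a dictionary of coefficients indexed by exponent tuples; the valuation on \(K\) is passed in as a function.

```python
import math

def delta_point(rho, poly, val_K):
    """Evaluate delta_n(rho) on the polynomial sum_alpha c_alpha x^alpha.

    `poly` maps exponent tuples alpha to coefficients c_alpha; |c| = exp(-val_K(c)).
    Returns max_alpha |c_alpha| * exp(sum_i rho_i * alpha_i).
    """
    return max(
        math.exp(-val_K(c)) * math.exp(sum(r * a for r, a in zip(rho, alpha)))
        for alpha, c in poly.items()
    )
```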
**Lemma 2.7**.: _[_5_, Proposition 1]_ _Let \(T\) be the caterpillar tree on \(n\) leaves with endpoints \(i\) and \(j\), then_
\[\sigma_{T,I}^{(ij)}:=\sigma_{I}^{(ij)}:\overline{\mathscr{C}_{T}}\cap\mathscr{ T}U_{ij}\hookrightarrow\mathscr{T}U_{ij}\to U_{ij}^{\mathsf{an}} \tag{2.8}\]
_is a section of the tropicalization map over \(\overline{\mathscr{C}_{T}}\cap\mathscr{T}U_{ij}\)._
**Lemma 2.9**.: _[_5_, Theorem 4.16]_ _There is a local section of tropicalization over \(\mathscr{C}_{T,J}^{(ij)}\) for arbitrary type tree \(T\)._
In the following proposition, we construct the local sections on \(\mathscr{C}_{T}^{\prime}\) for \(T\) a caterpillar type tree; for a tree of arbitrary type, the construction is obtained by replacing \(I(ij)\) by \(I=I(ij,T,J)\), which is a set of size \(2(n-2)\) defined in [5, Proposition 4.10].
**Proposition 2.10**.: _Let \(T\) be the caterpillar tree on \(n\) leaves with endpoints \(i\) and \(j\). Then the lifted section \(\sigma^{\prime}|_{\mathscr{C}^{\prime}_{T}}:=\sigma^{\prime}_{(ij),T,I}\) restricted to \(\mathscr{C}_{T}^{\prime}\) is the local lifting of \(\sigma_{T,I}^{(ij)}|_{\mathscr{C}_{T}}\) on \(\mathscr{C}_{T}\) in the following way:_
(2.11)
Proof.: Notice that \(\Gamma(\mathsf{Gr}_{0}(2,n))\cong R(ij)_{S_{A_{0}}}\), where \(S_{A_{0}}\) is the multiplicatively closed subset of \(R(ij)\) generated by \(\{u_{kl}\,|\,kl\in A_{0}=\binom{[n]}{2}\smallsetminus\{i,j\}\}\), and for any \(x\in\mathscr{C}_{T}\subseteq\mathscr{T}\mathsf{Gr}_{0}(2,n)\) we have \(\sigma_{T,I}^{(ij)}(x)(u_{kl})=\mathsf{exp}(x_{kl}-x_{ij})\neq 0\) for \(kl\in I(ij)\) and \(\sigma_{T,I}^{(ij)}(x)(u_{kl})=\mathsf{max}\{\mathsf{exp}(x_{ik}+x_{jl}-2x_{ij}),\mathsf{exp}(x_{il}+x_{jk}-2x_{ij})\}\neq 0\) for \(k,l\notin\{i,j\}\). Hence the multiplicative seminorm can be extended to \(R(ij)_{S_{A_{0}}}\) uniquely, and we get a seminorm on \(\Gamma(\mathsf{Gr}_{0}(2,n))^{\mathbf{G}_{m,K}^{n}}\) which extends the valuation on \(K\); this seminorm can be defined as the image of the lifted section \(\sigma^{\prime}_{(ij,T,I)}:\mathscr{C}_{T}^{\prime}\hookrightarrow\mathbf{R}^{\binom{n}{2}}/\mathsf{im}L\to\mathbf{R}^{I}\to(\mathbf{A}_{K}^{I})^{\mathsf{an}}\).
### Explicit description of \(\sigma(\mathscr{T}\mathsf{M}_{0,4})\)
By Example 1.20, we know that \(\mathsf{M}_{0,4}^{\mathsf{trop}}\) consists of three half rays emanating from the origin (the picture of a tropical line); more precisely, we have \(\mathsf{M}_{0,4}^{\mathsf{trop}}=\mathscr{C}_{T_{0}}^{\prime}\cup\mathscr{C}_{T_{1}}^{\prime}\cup\mathscr{C}_{T_{2}}^{\prime}\cup\mathscr{C}_{T_{3}}^{\prime}\), where \(T_{0}\) is the star tree \((1234)\), \(T_{1}=(12\,|\,34)\), \(T_{2}=(13\,|\,24)\) and \(T_{3}=(14\,|\,23)\).
Then the images of the \(\mathscr{C}_{T_{i}}^{\prime}\) under the map \(\pi\circ P\) of Definition 1.21 are as follows:
1. \(\pi\circ P(\mathscr{C}_{T_{1}}^{\prime})=\{\overline{(0,-\frac{1}{2}l_{1},- \frac{1}{2}l_{1},-\frac{1}{2}l_{1},-\frac{1}{2}l_{1},0)}\in\mathbf{R}^{\binom {4}{2}}/\mathsf{im}L\,|\,l_{1}\in\mathbf{R}_{\geqslant 0}\}\)
2. \(\pi\circ P(\mathscr{C}_{T_{2}}^{\prime})=\{\overline{(-\frac{1}{2}l_{2},0,- \frac{1}{2}l_{2},-\frac{1}{2}l_{2},0,-\frac{1}{2}l_{2})}\in\mathbf{R}^{\binom {4}{2}}/\mathsf{im}L\,|\,l_{2}\in\mathbf{R}_{\geqslant 0}\}\)
3. \(\pi\circ P(\mathscr{C}_{T_{3}}^{\prime})=\{\overline{(-\frac{1}{2}l_{3},-\frac{ 1}{2}l_{3},0,0,-\frac{1}{2}l_{3},-\frac{1}{2}l_{3})}\in\mathbf{R}^{\binom{4}{ 2}}/\mathsf{im}L\,|\,l_{3}\in\mathbf{R}_{\geqslant 0}\}\)
Now we consider \(T_{1}\) as a caterpillar tree with \(4\) leaves with end points \(1\) and \(4\), then we can use the section \(\sigma^{\prime}_{(14,T_{1},I(14))}\) in proposition 2.10, where \(I(14)=\{12,13,24,34\}\), so we have:
1. \(\pi^{\prime}_{I(14)}(\mathscr{C}_{T_{1}}^{\prime})=\{(\frac{1}{2}l_{1},0,0, \frac{1}{2}l_{1})\in\mathbf{R}^{4}\}\)
2. \(\pi^{\prime}_{I(14)}(\mathscr{C}_{T_{2}}^{\prime})=\{(0,\frac{1}{2}l_{2},\frac {1}{2}l_{2},0)\in\mathbf{R}^{4}\}\)
3. \(\pi^{\prime}_{I(13)}(\mathscr{C}_{T_{3}}^{\prime})=\{(0,\frac{1}{2}l_{3},\frac {1}{2}l_{3},0)\in\mathbf{R}^{4}\}\)
Thus, for any \(x\in\mathscr{C}_{T_{1}}^{\prime}\), the seminorm \(\sigma^{\prime}_{(14,T_{1},I(14))}(x)\) on the ring \(K[u_{12},u_{13},u_{24},u_{34}]\) is defined as
\[\sum_{\alpha}c_{\alpha}u_{12}^{\alpha_{12}}u_{13}^{\alpha_{13}}u_{24}^{\alpha_{24}}u_{34}^{\alpha_{34}}\mapsto\mathsf{max}_{\alpha}\{|c_{\alpha}|\,\mathsf{exp}(\alpha_{12}(\tfrac{1}{2}l_{1})+\alpha_{34}(\tfrac{1}{2}l_{1}))\} \tag{2.13}\]
Thus we have:
* \(\sigma^{\prime}_{(14,T_{1},I(14))}(x)(u_{12})=\mathsf{exp}(\frac{1}{2}l_{1})\)
* \(\sigma^{\prime}_{(14,T_{1},I(14))}(x)(u_{13})=1\)
* \(\sigma^{\prime}_{(14,T_{1},I(14))}(x)(u_{24})=1\)
* \(\sigma^{\prime}_{(14,T_{1},I(14))}(x)(u_{34})=\mathsf{exp}(\frac{1}{2}l_{1})\)
Meanwhile we have \(\Gamma(\mathsf{Gr}_{0}(2,4))^{\mathsf{G}_{m,K}^{4}}\cong K[(\frac{u_{23}}{u_{ 12}u_{34}})^{\pm 1},(\frac{u_{23}}{u_{13}u_{24}})^{\pm 1}]\), but \(u_{23}=u_{12}u_{34}-u_{13}u_{24}\), thus \(\Gamma(\mathsf{Gr}_{0}(2,4))^{\mathsf{G}_{m,K}^{4}}\cong K[u,u^{-1},(u-1)^{-1}] \cong K[x_{1}^{\pm 1},x_{2}^{\pm 1}]/(x_{1}-x_{2}+1)\).
By letting \(u=\frac{u_{13}u_{24}}{u_{12}u_{34}}\), we have \(\sigma^{\prime}_{(14,T_{1},I(14))}(x)(u)=\mathsf{exp}(-l_{1})\). Thus for \(f=\sum_{n=1}^{m}a_{n}u^{n}\),
\[\sigma^{\prime}_{(14,T_{1},I(14))}(x)(f)=\mathsf{max}_{n}(|a_{n}|_{K}\mathsf{ exp}(-nl_{1})) \tag{2.14}\]
By comparing 3.15 and 2.14, we can see they are the same valuation on the function field of \(\overline{\mathsf{M}}_{0,4}\).
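The displayed values can also be checked numerically; the small sketch below (an illustration of the computation above, with names chosen by us) evaluates the section on \(u=\frac{u_{13}u_{24}}{u_{12}u_{34}}\) by multiplicativity and confirms \(\sigma^{\prime}_{(14,T_{1},I(14))}(x)(u)=\mathsf{exp}(-l_{1})\).

```python
import math

def section_on_u(l1):
    """sigma'_{(14,T_1,I(14))}(x)(u) for u = u13*u24/(u12*u34), from the values above."""
    abs_u12 = abs_u34 = math.exp(0.5 * l1)   # |u12| = |u34| = exp(l_1 / 2)
    abs_u13 = abs_u24 = 1.0                  # |u13| = |u24| = 1
    return (abs_u13 * abs_u24) / (abs_u12 * abs_u34)

assert abs(section_on_u(1.3) - math.exp(-1.3)) < 1e-12
```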
### Explicit description of \(\sigma(\mathscr{T}\mathsf{M}_{0,5})\)
By Example 1.20 and 3.16, we know that \(\mathsf{M}_{0,5}^{\mathsf{trop}}\) is a union of \(15\) quadrants \(\mathbf{R}_{\geqslant 0}^{2}\). These quadrants correspond to the combinatorial types of trees of type \((**\,|\,*\,|\,**)\) and are attached along the rays corresponding to the combinatorial types of trees of type \((***\,|\,**)\), as we describe in Figure 4. So we have \(\mathsf{M}_{0,5}^{\mathsf{trop}}=\bigcup\mathscr{C}_{T_{(ij\,|\,m\,|\,kl)}}\).
Without loss of generality, we only do the computation for the quadrants of the trees with combinatorial types \((15\,|\,2\,|\,34)\), \((15\,|\,3\,|\,24)\) and \((15\,|\,4\,|\,23)\), as described in Figure 4.
Let us use \(l_{ij}\) to denote the length function defined by the tropical curve \(\delta_{ij}\); then the distance function defined by the combinatorial tree associated to the quadrant connecting the two rays \(\delta_{ij}\) and \(\delta_{kl}\) is determined by \(l_{ij}+l_{kl}\). Thus the images of \(\mathscr{C}^{\prime}_{T_{(15\,|\,2\,|\,34)}}\), \(\mathscr{C}^{\prime}_{T_{(15\,|\,3\,|\,24)}}\) and \(\mathscr{C}^{\prime}_{T_{(15\,|\,4\,|\,23)}}\) under the map \(\pi\circ P\) of Definition 1.21 are as follows:
1. \[\pi\circ P(\mathscr{C}^{\prime}_{T_{(15\,|\,2\,|\,34)}})=\{\overline{(-\tfrac{1}{2}l_{15},-\tfrac{1}{2}l_{15}-\tfrac{1}{2}l_{34},-\tfrac{1}{2}l_{15}-\tfrac{1}{2}l_{34},0,-\tfrac{1}{2}l_{34},-\tfrac{1}{2}l_{34},-\tfrac{1}{2}l_{15},0,-\tfrac{1}{2}l_{15}-\tfrac{1}{2}l_{34},-\tfrac{1}{2}l_{15}-\tfrac{1}{2}l_{34})}\in\mathbf{R}^{\binom{5}{2}}/\mathsf{im}\,L\,|\,l_{15},l_{34}\in\mathbf{R}_{\geqslant 0}\}\]
2. \[\pi\circ P(\mathscr{C}^{\prime}_{T_{(15\,|\,3\,|\,24)}})=\{\overline{(-\tfrac{1}{2}l_{15}-\tfrac{1}{2}l_{24},-\tfrac{1}{2}l_{15},-\tfrac{1}{2}l_{15}-\tfrac{1}{2}l_{24},0,-\tfrac{1}{2}l_{24},0,-\tfrac{1}{2}l_{15}-\tfrac{1}{2}l_{24},-\tfrac{1}{2}l_{24},-\tfrac{1}{2}l_{15},-\tfrac{1}{2}l_{15}-\tfrac{1}{2}l_{24})}\in\mathbf{R}^{\binom{5}{2}}/\mathsf{im}\,L\,|\,l_{15},l_{24}\in\mathbf{R}_{\geqslant 0}\}\]
3. \[\pi\circ P(\mathscr{C}^{\prime}_{T_{(15\,|\,4\,|\,23)}})=\{\overline{(-\tfrac{1}{2}l_{15}-\tfrac{1}{2}l_{23},-\tfrac{1}{2}l_{15}-\tfrac{1}{2}l_{23},-\tfrac{1}{2}l_{15},0,0,-\tfrac{1}{2}l_{23},-\tfrac{1}{2}l_{15}-\tfrac{1}{2}l_{23},-\tfrac{1}{2}l_{23},-\tfrac{1}{2}l_{15}-\tfrac{1}{2}l_{23},-\tfrac{1}{2}l_{15})}\in\mathbf{R}^{\binom{5}{2}}/\mathsf{im}\,L\,|\,l_{15},l_{23}\in\mathbf{R}_{\geqslant 0}\}\]
Now we consider \(T_{(15\,|\,2\,|\,34)}\) and \(T_{(15\,|\,3\,|\,24)}\) as caterpillar trees with \(5\) leaves with endpoints \(1\) and \(4\), and \(T_{(15\,|\,4\,|\,23)}\) as a caterpillar tree with \(5\) leaves with endpoints \(1\) and \(3\); then we can use the sections \(\sigma^{\prime}_{(14,T_{(15\,|\,2\,|\,34)},I(14))}\), \(\sigma^{\prime}_{(14,T_{(15\,|\,3\,|\,24)},I(14))}\), and \(\sigma^{\prime}_{(13,T_{(15\,|\,4\,|\,23)},I(13))}\) of Proposition 2.10, where \(I(14)=\{12,13,15,24,34,45\}\) and \(I(13)=\{12,14,15,23,34,35\}\). So we have:
1. \(\pi^{\prime}_{I(14)}(\mathscr{C}^{\prime}_{T_{(15\,|\,2\,|\,34)}})=\{(\frac{1 }{2}l_{34},0,\frac{1}{2}l_{15}+\frac{1}{2}l_{34},\frac{1}{2}l_{15},\frac{1}{2 }l_{15}+\frac{1}{2}l_{34},0)\in\mathbf{R}^{6}\}\)
2. \(\pi^{\prime}_{I(14)}(\mathscr{C}^{\prime}_{T_{(15\,|\,3\,|\,24)}})=\{(0,\frac {1}{2}l_{24},\frac{1}{2}l_{15}+\frac{1}{2}l_{24},\frac{1}{2}l_{15}+\frac{1}{2 }l_{24},\frac{1}{2}l_{15},0)\in\mathbf{R}^{6}\}\)
3. \(\pi^{\prime}_{I(13)}(\mathscr{C}^{\prime}_{T_{(15\,|\,4\,|\,23)}})=\{(0,\frac {1}{2}l_{23},\frac{1}{2}l_{15}+\frac{1}{2}l_{23},\frac{1}{2}l_{15}+\frac{1}{2 }l_{23},\frac{1}{2}l_{15},0)\in\mathbf{R}^{6}\}\)
Thus, for any \(x\in\mathscr{C}^{\prime}_{T_{(15\,|\,2\,|\,34)}}\), the seminorm \(\sigma^{\prime}_{(14,T_{(15\,|\,2\,|\,34)},I(14))}(x)\) on the ring \(K[u_{12},u_{13},u_{15},u_{24},u_{34},u_{45}]\) is defined as
\[\sum_{\alpha}c_{\alpha}u_{12}^{\alpha_{12}}u_{13}^{\alpha_{13}}u_{15}^{\alpha_{15}}u_{24}^{\alpha_{24}}u_{34}^{\alpha_{34}}u_{45}^{\alpha_{45}}\mapsto\mathsf{max}_{\alpha}\{|c_{\alpha}|\,\mathsf{exp}(\alpha_{12}(\tfrac{1}{2}l_{34})+(\tfrac{1}{2}l_{15}+\tfrac{1}{2}l_{34})\alpha_{15}+(\tfrac{1}{2}l_{15})\alpha_{24}+(\tfrac{1}{2}l_{15}+\tfrac{1}{2}l_{34})\alpha_{34})\}\]
Thus we have:
* \(\sigma^{\prime}_{(14,T_{(15\,|\,2\,|\,34)},I(14))}(x)(u_{12})=\mathsf{exp}( \frac{1}{2}l_{34})\)
* \(\sigma^{\prime}_{(14,T_{(15\,|\,2\,|\,34)},I(14))}(x)(u_{13})=1\)
* \(\sigma^{\prime}_{(14,T_{(15\,|\,2\,|\,34)},I(14))}(x)(u_{15})=\mathsf{exp}( \frac{1}{2}l_{15}+\frac{1}{2}l_{34})\)
* \(\sigma^{\prime}_{(14,T_{(15\,|\,2\,|\,34)},I(14))}(x)(u_{24})=\mathsf{exp}( \frac{1}{2}l_{15})\)
* \(\sigma^{\prime}_{(14,T_{(15\,|\,2\,|\,34)},I(14))}(x)(u_{34})=\mathsf{exp}( \frac{1}{2}l_{15}+\frac{1}{2}l_{34})\)
* \(\sigma^{\prime}_{(14,T_{(15\,|\,2\,|\,34)},I(14))}(x)(u_{45})=1\)
Meanwhile, we have:
\[\Gamma(\mathsf{Gr}_{0}(2,5))^{\mathbf{G}_{m,K}^{5}} \cong K[(\tfrac{u_{23}}{u_{12}u_{34}})^{\pm 1},(\tfrac{u_{23}}{u_{13}u_{2 4}})^{\pm 1},(\tfrac{u_{25}}{u_{12}u_{45}})^{\pm 1},(\tfrac{u_{25}}{u_{15}u_{24}})^{\pm 1},( \tfrac{u_{35}}{u_{13}u_{45}})^{\pm 1},(\tfrac{u_{35}}{u_{15}u_{34}})^{\pm 1}] \tag{2.17}\] \[\cong K[u^{\pm 1},v^{\pm 1},(u-1)^{-1},(v-1)^{-1},(u-v)^{-1}]\] (2.18) \[\cong K[x_{1}^{\pm 1},x_{2}^{\pm 1},x_{3}^{\pm 1},x_{4}^{\pm 1},x_{5}^ {\pm 1}]/(x_{3}-x_{1}+1,x_{4}-x_{2}+1,x_{5}-x_{2}+x_{1}) \tag{2.16}\]
**Remark 2.19**.: For 2.17, by the Plucker relations, we have \(u_{13}u_{25}=u_{12}u_{35}+u_{15}u_{23}\); thus \(\tfrac{u_{13}u_{45}}{u_{15}u_{34}}-\tfrac{u_{13}u_{24}}{u_{12}u_{34}}=\tfrac{u_{13}u_{45}}{u_{15}u_{34}}(1-\tfrac{u_{15}u_{24}}{u_{12}u_{45}})\). By letting \(\tfrac{u_{13}u_{45}}{u_{15}u_{34}}:=v\), \(\tfrac{u_{13}u_{24}}{u_{12}u_{34}}:=u\) and \(\tfrac{u_{15}u_{24}}{u_{12}u_{45}}:=w\), we can get the results above.
Thus we have:
1. \(\sigma^{\prime}_{(14,T_{(15\,|\,2\,|\,34)},I(14))}(x)(u)=\mathsf{exp}(-l_{34})\)
2. \(\sigma^{\prime}_{(14,T_{(15\,|\,2\,|\,34)},I(14))}(x)(v)=\mathsf{exp}(-l_{15}- l_{34})\)
3. \(\sigma^{\prime}_{(14,T_{(15\,|\,2\,|\,34)},I(14))}(x)(\tfrac{v}{u})=\mathsf{ exp}(-l_{15})\)
Thus for any polynomial \(f=\sum_{\beta}a_{\beta}u^{\beta_{1}}v^{\beta_{2}}\) in \(K[u,v]\), we have:
\[\sigma^{\prime}_{(14,T_{(15\,|\,2\,|\,34)},I(14))}(x)(f)=\mathsf{max}_{\beta}(|a_{\beta}|_{K}\mathsf{exp}(-\beta_{1}l_{34}-\beta_{2}(l_{15}+l_{34}))) \tag{2.20}\]
**Remark 2.21**.: It's not hard to verify that 2.20 and 2.14 are independent of the change of variables in remark 2.19, by permuting the new variables.
## 3. Skeleton of \((\overline{\mathsf{M}}_{0,n},\mathscr{M}_{\overline{\mathsf{M}}_{0,n}\smallsetminus\mathsf{M}_{0,n}})\)
**Remark 3.3**.:
1. \((X,\mathscr{M}_{X})\) is sharp if for any point \(x\in X\), the group of units \(\mathscr{M}_{X,x}^{*}=\{1\}\). For any monoidal space \((X,\mathscr{M}_{X})\), there is an associated sharp monoidal space \((X,\overline{\mathscr{M}}_{X})\), where \(\overline{\mathscr{M}}_{X}:=\mathscr{M}_{X}/\mathscr{M}_{X}^{*}\).
**3.4**.: Fix a monoid \(P\); we denote by \(\mathsf{Spec}\,P\) the set of prime ideals of \(P\). The Zariski topology of \(\mathsf{Spec}\,P\) is generated by \(\mathsf{D}(f):=\{\mathfrak{p}\,|\,f\notin\mathfrak{p}\}\) for \(f\in P\). We associate to it a sheaf of sharp monoids \(\overline{\mathscr{M}}_{P}\) as:
\[\overline{\mathscr{M}}_{P}(\mathsf{D}(f)):=\frac{S^{-1}P}{(S^{-1}P)^{*}}\]
where \(S=\{f^{n}\}_{n\geqslant 0}\) is the face of \(P\) generated by \(f\). The pair \((\mathsf{Spec}\,P,\overline{\mathscr{M}}_{P})\) is called an _affine Kato fan_. A monoidal space \((X,\mathscr{M}_{X})\) is called a _Kato fan_ if it has an open covering by affine Kato fans.
**Theorem 3.5**.: _[_7_, Proposition 10.2]_ _Kato fans associated to log-regular log scheme. Let \((X,\mathscr{M}_{X})\) be a log regular log scheme. Then there is an initial strict morphism \((X,\overline{\mathscr{M}}_{X})\to F(X)\) in the category of monoidal spaces, where \(F(X)\) is a Kato fan. Explicitly, there exist a Kato fan \(F(X)\) and a morphism \(\rho:(X,\overline{\mathscr{M}}_{X})\to F(X)\) such that \(\rho^{-1}(\mathscr{M}_{F})\cong\overline{\mathscr{M}}_{X}\) and any other morphism from \((X,\overline{\mathscr{M}}_{X})\) to a Kato fan factors through \(\rho\)._
**Lemma 3.6**.: _[_4_, Lemma 2.2.3]_ _Let \(X\) be a log regular log scheme. Then the associated Kato fan \(F(X)\) consists of the generic points of intersections of irreducible components of \(D_{X}\)._
**3.7**.: Let \(X^{+}\) be a log regular log scheme over the log trait \(S^{+}\), \(x\in F(X^{+})\), and let \(\overline{g_{1}},\ldots,\overline{g_{n}}\) be the generators of the monoid \(\overline{\mathscr{M}}_{X^{+},x}\); notice that \(g_{1},\ldots,g_{n}\) is a system of generators of \(\mathfrak{m}_{x}\subseteq\mathscr{O}_{X^{+},x}\). For any \(f\in\mathscr{O}_{X^{+},x}\),
\[f=\sum_{\beta\in\mathbf{Z}_{\geqslant 0}^{n}}c_{\beta}g^{\beta}\]
in \(\widehat{\mathscr{O}_{X^{+},x}}\), where \(c_{\beta}\in\widehat{\mathscr{O}_{X^{+},x}}^{*}\cup\{0\}\).
**Proposition 3.8**.: _[_4_, Proposition 3.2.10]_ _Let_
\[\sigma_{x}:=\{\alpha\in\mathsf{Hom}_{\mathsf{Mon}}(\overline{\mathscr{M}}_{X^ {+},x},\mathbf{R}_{\geqslant 0})\,|\,\alpha(\varpi)=1\}\]
_then there exists a unique minimal semivaluation \(v_{\alpha}:\mathscr{O}_{X^{+},x}\smallsetminus\{0\}\to\mathbf{R}_{\geqslant 0}\) associated to each \(\alpha\in\sigma_{x}\) such that the following properties are satisfied:_
1. _For any_ \(f\in\overline{\mathscr{M}}_{X^{+},x}\)_, we have_ \(v_{\alpha}(f)=\alpha(\overline{f})\)_._
2. _For any_ \(f\in\mathscr{O}_{X^{+},x}\) _and any admissible expansion_ \(f=\sum_{\beta\in\mathbf{Z}_{\geqslant 0}^{n}}c_{\beta}g^{\beta}\)_, we have:_ \[v_{\alpha}(f)=\mathsf{min}_{\beta}\{v_{K}(c_{\beta})+\alpha(\overline{g}^{ \beta})\}\] _where_ \(v_{K}\) _is the valuation on the base field_ \(K\)_._
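As an illustration (not taken from [4]), the minimal semivaluation of Proposition 3.8 can be evaluated on a finite admissible expansion as follows; the expansion is passed as a dictionary of coefficients indexed by exponent tuples, and \(v_{K}\) as a function.

```python
def v_alpha(expansion, alpha, val_K):
    """v_alpha(f) = min_beta { v_K(c_beta) + alpha(g bar)^beta } on a finite expansion.

    `expansion` maps exponent tuples beta to coefficients c_beta (units of the
    completed local ring, or 0); `alpha` lists the weights alpha(g_i bar).
    """
    return min(
        val_K(c) + sum(a * b for a, b in zip(alpha, beta))
        for beta, c in expansion.items()
        if c != 0
    )
```

Note that this is the additive (min-plus) counterpart of the multiplicative seminorm used for the tropical side in Section 2, via \(|\cdot|_{\alpha}=\mathsf{exp}(-v_{\alpha}(\cdot))\); this is exactly the shape of the comparison proved below.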
**Definition 3.9**.: Let \(\widetilde{\sigma_{x}}:=\{v_{\alpha}\,|\,\alpha\in\sigma_{x}\}\); we define the _skeleton of a log-regular log scheme_ \(X^{+}\) over \(S^{+}\) as \(\mathsf{Sk}(X^{+}):=\bigsqcup\widetilde{\sigma_{x}}/\sim\), where the equivalence relation \(\sim\) is generated by couples of the form \((v_{\alpha},v_{\alpha\circ\tau_{x,y}})\).
**3.10**.: For a log regular scheme \(X^{+}\) over log trait \(S^{+}\), the associated Kato fan \(F(X)\) is vertical and saturated over \(\mathsf{Spec}\,\mathbf{N}=\{\emptyset,\mathbf{N}_{\geqslant 1}\}\). We can construct \(\Delta_{F(X)}\) a conical polyhedral complex with an integral structure and \(\Delta^{1}_{F(X)}\) a compact conical polyhedral complex with an integral structure associated with \(F(X)\) which were defined in [9] for toroidal embedding without self-intersection. Let \(\{U_{\alpha}\}_{\alpha}\) be an affine covering of \((F(X),\mathscr{M}_{F(X)})\). We have the following datum of \(\Delta_{F(X)}\):
* \(P_{\alpha}=\Gamma(U_{\alpha},\mathscr{M}_{F(X)})\).
* \(\sigma_{\alpha}=\mathsf{Hom}_{\mathsf{Mon}}(P_{\alpha},\mathbf{R}_{\geqslant 0 })\subseteq(P_{\alpha}^{\mathsf{gp}}\otimes_{\mathbf{Z}}\mathbf{R})^{\vee}:=V _{\alpha}^{\vee}\).
* \(\Delta_{F(X)}=\bigcup_{\alpha}\sigma_{\alpha}\).
* the integral structure is the family \((N_{\alpha})_{\alpha}\) where \(N_{\alpha}=\mathsf{Hom}_{\mathsf{gp}}(P_{\alpha}^{\mathsf{gp}},\mathbf{Z})\).
The datum of \(\Delta^{1}_{F(X)}\):
Let \(\pi\) be the image of the generator of \(\mathbf{N}\) under the map \(\mathbf{N}\to P_{\alpha}\), for \(P_{\alpha}\neq\{1\}\).
* \(V_{\alpha}^{\vee,1}:=\{x\in V_{\alpha}^{\vee}\,|\,x(\pi)=1\}\).
* \(\sigma_{\alpha}^{1}:=\sigma_{\alpha}\cap V_{\alpha}^{\vee,1}\).
* \(\Delta^{1}_{F(X)}=\bigcup_{\alpha}\sigma_{\alpha}^{1}\).
* the integral structure is the family \((N^{1}_{\alpha})_{\alpha}\), where \(N^{1}_{\alpha}=N_{\alpha}\cap V_{\alpha}^{\vee,1}\).
**Lemma 3.11**.: _Let \(X^{+}\) be a log regular scheme over log trait \(S^{+}\), then_
\[\mathsf{Sk}(X^{+})\cong\Delta^{1}_{F(X^{+})}\]
_as compact conical polyhedral complexes._
Proof.: This follows directly from the definitions above.
### Explicit description of \(\mathsf{Sk}(\mathcal{X}^{+}_{0,n})\) for \(n=4,5\)
To study the essential skeleton \(\mathsf{Sk}^{\mathsf{ess}}\!\left(\overline{\mathsf{M}}_{0,n},\overline{\mathsf{M}}_{0,n}\smallsetminus\mathsf{M}_{0,n}\right)\) for \(n\geqslant 3\), by Proposition 0.3 we take a good dlt minimal model \(\mathcal{X}^{+}_{0,n}=(\mathcal{X}_{0,n},\overline{D}_{\overline{\mathsf{M}}_{0,n}}+(\mathcal{X}_{0,n})_{s,\mathsf{red}})\) of \((\overline{\mathsf{M}}_{0,n},\overline{\mathsf{M}}_{0,n}\smallsetminus\mathsf{M}_{0,n})\), obtained by base change to the valuation ring \(K^{\circ}\), and study the Berkovich skeleton \(\mathsf{Sk}(\mathcal{X}^{+}_{0,n})\). In general, to describe \(\mathsf{Sk}(\mathcal{X}^{+}_{0,n})\), we need Kapranov's blow-up construction of \(\overline{\mathsf{M}}_{0,n}\) [6] in order to get the local equations of the boundary divisors, and then study the intersections of boundary divisors of \(\overline{\mathsf{M}}_{0,n}\).
**3.12**.: **For \(n=4\)**
For the log regular log scheme \(\overline{\mathsf{M}}_{0,4}\), we have \(\overline{\mathsf{M}}_{0,4}\cong\mathbf{P}^{1}_{K}\), and it is equipped with the divisorial log structure associated to the effective divisor \(\{0,1,\infty\}\). So we can take \(\mathcal{X}^{+}_{0,4}:=(\mathbf{P}^{1}_{K^{\circ}},\mathscr{M}_{D_{\mathcal{X}^{+}_{0,4}}})\) as the log model of \(\overline{\mathsf{M}}_{0,4}\), where \(D_{\mathcal{X}^{+}_{0,4}}=\mathbf{P}^{1}_{k}+\overline{[0:1]}+\overline{[1:0]}+\overline{[1:1]}\). Writing \(\mathbf{P}^{1}_{K^{\circ}}=\mathsf{Proj}\,K^{\circ}[T_{0},T_{1}]\), we have \((\mathcal{X}^{+}_{0,4})_{s}=\mathbf{P}^{1}_{k}=V_{+}(\varpi)\), \(E_{1}=V_{+}(T_{0})=\overline{[0:1]}\), \(E_{2}=V_{+}(T_{1})=\overline{[1:0]}\), \(E_{3}=V_{+}(T_{1}-T_{0})=\overline{[1:1]}\). Now it is easy to see that \(D_{\mathcal{X}^{+}_{0,4}}\) is a divisor with strict normal crossings in \(\mathcal{X}^{+}_{0,4}=\mathbf{P}^{1}_{K^{\circ}}\) and \((D_{\mathcal{X}^{+}_{0,4}})_{s}\) is a divisor with normal crossings relative to \(K\) in \(\overline{\mathsf{M}}_{0,4}\). Thus \((\mathbf{P}^{1}_{K^{\circ}},D_{\mathcal{X}^{+}_{0,4}})\) is log smooth over \((\mathsf{Spec}\,K^{\circ},(\varpi))\) by Lemma 1.9. Similarly, \((\overline{\mathsf{M}}_{0,4},\mathscr{M}_{\overline{\mathsf{M}}_{0,4}})\) is log regular (a toroidal embedding without self-intersection) by [7, Proposition 8.3]. Let \(F(\mathcal{X}^{+}_{0,4})\) be the Kato fan associated to the log scheme \(\mathcal{X}^{+}_{0,4}\); then we have:
\[\mathsf{M}^{\mathsf{trop}}_{0,4}\cong\Delta^{1}_{F(\mathcal{X}^{+}_{0,4})}\cong \mathsf{Sk}(\mathcal{X}^{+}_{0,4}) \tag{3.13}\]
### Explicit description for \(\mathsf{Sk}(\mathcal{X}_{0,4}^{+})\)
Let us denote by \(\eta_{1},\eta_{2},\eta_{3}\) the generic points of the intersections \(\overline{E}_{1}\cap(\mathcal{X}_{0,4})_{s}\), \(\overline{E}_{2}\cap(\mathcal{X}_{0,4})_{s}\), \(\overline{E}_{3}\cap(\mathcal{X}_{0,4})_{s}\) respectively. Then we have:
1. \(\mathscr{O}_{\mathcal{X}_{0,4},\eta_{1}}\cong K^{\circ}[u]_{(\varpi,u)}\)
2. \(\mathscr{O}_{\mathcal{X}_{0,4},\eta_{2}}\cong K^{\circ}[u]_{(\varpi,u)}\)
3. \(\mathscr{O}_{\mathcal{X}_{0,4},\eta_{3}}\cong K^{\circ}[u]_{(\varpi,u-1)}\)
Thus we have:
1. \(\widehat{\mathscr{O}_{\mathcal{X}_{0,4},\eta_{1}}}\cong K^{\circ}\langle u \rangle\,\llbracket u\rrbracket\)
2. \(\widehat{\mathscr{O}_{\mathcal{X}_{0,4},\eta_{2}}}\cong K^{\circ}\langle u \rangle\,\llbracket u\rrbracket\)
3. \(\widehat{\mathscr{O}_{\mathcal{X}_{0,4},\eta_{3}}}\cong K^{\circ}\langle u \rangle\,\llbracket u-1\rrbracket\)
For \(f\in\mathscr{O}_{\mathcal{X}_{0,4},\eta_{1}}\), suppose \(f\) is a polynomial in \(K^{\circ}[u]\). Since \(\left|\cdot\right|_{\alpha}=\mathsf{exp}(-v_{\alpha}(\cdot))\) and \(\alpha(\varpi)=1\), we have \(|\lambda|_{\alpha}=|\lambda|_{K}\) for any \(\lambda\in K^{\circ}\), and for \(f=\sum_{n=1}^{m}a_{n}u^{n}\),
\[|f|_{\alpha}=\mathsf{exp}(-\mathsf{min}_{n}(v_{K}(a_{n})+\alpha(\overline{u} ^{n})))=\mathsf{max}_{n}(|a_{n}|_{K}\mathsf{exp}(-n\alpha(\overline{u}))). \tag{3.15}\]
### For \(n=5\)
By Definition 3.20, we know that \(\overline{\mathsf{M}}_{0,5}\cong\mathsf{Bl}_{p_{1},p_{2},p_{3},p_{4}}\mathbf{P}_{K}^{2}\), where the four points \(p_{1},p_{2},p_{3},p_{4}\) are in general position, that is, no three of them lie on a projective line. By a linear transformation, we can assume these points are given by
\[p_{1}=[1:0:0],\;p_{2}=[0:1:0],\;p_{3}=[0:0:1],\;p_{4}=[1:1:1]\]
The irreducible components of the boundary divisor are the exceptional curves on \(\overline{\mathsf{M}}_{0,5}\). Let \(E_{i}\) be the class of the exceptional curve corresponding to the total transform of \(p_{i}\) and \(H\) be the class of the hyperplane corresponding to the total transform of a line. Then \(\{E_{1},E_{2},E_{3},E_{4},H\}\) is a basis of \(\mathsf{Pic}(\overline{\mathsf{M}}_{0,5})\) and
\[(E_{i},E_{j})=-\delta_{ij},\;(E_{i},H)=0,\;(H,H)=1\]
Thus, by the basic properties of exceptional curves, we get that the 10 exceptional curves are:
1. \(\{E_{i}\}_{1\leqslant i\leqslant 4}\)
2. \(\{H-E_{i}-E_{j}\}_{i\neq j}\)
Each \(H-E_{i}-E_{j}\) can be considered as the strict transform of the line \(L_{ij}\) through the points \(p_{i},p_{j}\). We can use the following Petersen graph to represent the intersection pattern among these exceptional curves:
where \(E_{ij}:=H-E_{i}-E_{j}\), and we send the tropical curves \(\{\delta_{i5}\}_{1\leqslant i\leqslant 4}\) to the classes of exceptional curves \(\{E_{i}\}_{1\leqslant i\leqslant 4}\) and send \(\{\delta_{ij}\}\) to \(\{H-E_{k}-E_{l}\}\), where \(\{k,l\}\) is disjoint from \(\{i,j,5\}\).
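One can verify the Petersen graph structure directly from the intersection form; the sketch below (our own check, using only the intersection numbers listed above) builds the 10 classes and confirms that each exceptional curve meets exactly three others.

```python
import itertools

def exceptional_intersection_graph():
    """Adjacency of the 10 (-1)-curves on Bl_4 P^2, from the intersection form.

    Classes are written in the basis (H, E1, E2, E3, E4) with (H,H)=1,
    (Ei,Ej)=-delta_ij and (H,Ei)=0; two distinct curves meet iff their
    intersection number is positive.
    """
    def pairing(a, b):
        return a[0] * b[0] - sum(a[k] * b[k] for k in range(1, 5))

    curves = {f"E{i}": [0] + [int(k == i) for k in range(1, 5)] for i in range(1, 5)}
    for i, j in itertools.combinations(range(1, 5), 2):
        curves[f"E{i}{j}"] = [1] + [-int(k in (i, j)) for k in range(1, 5)]

    adj = {c: {d for d in curves if d != c and pairing(curves[c], curves[d]) > 0}
           for c in curves}
    assert all(len(nbrs) == 3 for nbrs in adj.values())  # 3-regular graph on 10 vertices
    return adj
```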
Now let \(\mathcal{X}_{0,5}:=\overline{\mathcal{M}}_{0,5}\otimes_{\mathbf{Z}}K^{\circ}\), and \(D_{\mathcal{X}_{0,5}}:=\sum\overline{E}_{i}+\sum\overline{E}_{ij}+(\mathcal{X }_{0,5})_{s}\), then \(\mathcal{X}_{0,5}^{+}:=(\mathcal{X}_{0,5},D_{\mathcal{X}_{0,5}})\) is a log regular model of the log regular scheme \(\overline{\mathsf{M}}_{0,5}\), and it's easy to see:
* \(\overline{E}\cap\overline{E^{\prime}}\neq\emptyset\) if and only if \(E\cap E^{\prime}\neq\emptyset\) for any exceptional curve \(E,E^{\prime}\).
Without loss of generality, let us consider the collection \(\{\overline{E}_{1},\overline{E}_{12},\overline{E}_{13},\overline{E}_{14}\}\) and denote by \(\eta_{1},\eta_{12},\eta_{13},\eta_{14},\eta_{112},\eta_{113},\eta_{114}\) the generic points of
\[\overline{E}_{1}\cap(\mathcal{X}_{0,5})_{s},\ \overline{E}_{12}\cap( \mathcal{X}_{0,5})_{s},\ \overline{E}_{13}\cap(\mathcal{X}_{0,5})_{s},\ \overline{E}_{14}\cap(\mathcal{X}_{0,5})_{s},\] \[\overline{E}_{1}\cap\overline{E}_{12}\cap(\mathcal{X}_{0,5})_{s}, \ \overline{E}_{1}\cap\overline{E}_{13}\cap(\mathcal{X}_{0,5})_{s},\ \overline{E}_{1}\cap\overline{E}_{14}\cap(\mathcal{X}_{0,5})_{s}\]
respectively. Thus we have:
* \(\sigma_{\eta_{112}}=\{\alpha\in\mathsf{Hom}_{\mathsf{Mon}}(\overline{ \mathcal{M}}_{\mathcal{X}_{0,5}^{+},\eta_{112}},\mathbf{R}_{\geqslant 0})\,|\, \alpha(\varpi)=1\}\cong\mathbf{R}_{\geqslant 0}^{2}\)
* \(\sigma_{\eta_{113}}=\{\alpha\in\mathsf{Hom}_{\mathsf{Mon}}(\overline{ \mathcal{M}}_{\mathcal{X}_{0,5}^{+},\eta_{113}},\mathbf{R}_{\geqslant 0})\,|\, \alpha(\varpi)=1\}\cong\mathbf{R}_{\geqslant 0}^{2}\)
* \(\sigma_{\eta_{114}}=\{\alpha\in\mathsf{Hom}_{\mathsf{Mon}}(\overline{ \mathcal{M}}_{\mathcal{X}_{0,5}^{+},\eta_{114}},\mathbf{R}_{\geqslant 0})\,|\, \alpha(\varpi)=1\}\cong\mathbf{R}_{\geqslant 0}^{2}\)
* \(\sigma_{\eta_{1}}=\{\alpha\in\mathsf{Hom}_{\mathsf{Mon}}(\overline{ \mathcal{M}}_{\mathcal{X}_{0,5}^{+},\eta_{1}},\mathbf{R}_{\geqslant 0})\,|\, \alpha(\varpi)=1\}\cong\mathbf{R}_{\geqslant 0}\)
* \(\sigma_{\eta_{12}}=\{\alpha\in\mathsf{Hom}_{\mathsf{Mon}}(\overline{ \mathcal{M}}_{\mathcal{X}_{0,5}^{+},\eta_{12}},\mathbf{R}_{\geqslant 0})\,|\, \alpha(\varpi)=1\}\cong\mathbf{R}_{\geqslant 0}\)
* \(\sigma_{\eta_{13}}=\{\alpha\in\mathsf{Hom}_{\mathsf{Mon}}(\overline{ \mathcal{M}}_{\mathcal{X}_{0,5}^{+},\eta_{13}},\mathbf{R}_{\geqslant 0})\,|\, \alpha(\varpi)=1\}\cong\mathbf{R}_{\geqslant 0}\)
* \(\sigma_{\eta_{14}}=\{\alpha\in\mathsf{Hom}_{\mathsf{Mon}}(\overline{ \mathcal{M}}_{\mathcal{X}_{0,5}^{+},\eta_{14}},\mathbf{R}_{\geqslant 0})\,|\, \alpha(\varpi)=1\}\cong\mathbf{R}_{\geqslant 0}\)
Note that \(\sigma_{\eta_{1}},\sigma_{\eta_{12}},\sigma_{\eta_{13}},\sigma_{\eta_{14}}\) can be embedded into \(\sigma_{\eta_{112}},\sigma_{\eta_{113}},\sigma_{\eta_{114}}\) as faces of cones via the cospecialization maps. Thus, by Proposition 3.8, we have \(\mathsf{Sk}(\mathcal{X}_{0,5}^{+})\cong\mathsf{M}_{0,5}^{\mathsf{trop}}\).
### Explicit description for \(\mathsf{Sk}(\mathcal{X}_{0,5}^{+})\)
By Proposition 3.8, it suffices to determine the local equations of each exceptional divisor and their intersections in order to get the explicit description of each valuation \(v_{\alpha}\). For \(\mathbf{P}_{K}^{2}=\mathsf{Proj}\,K[T_{0},T_{1},T_{2}]\), we have \([1:0:0]=(T_{1},T_{2})\), \([0:1:0]=(T_{0},T_{2})\), \([0:0:1]=(T_{0},T_{1})\), \([1:1:1]=(T_{1}-T_{0},T_{2}-T_{0})\), \(L_{12}=(T_{2})\).
Without loss of generality, we take the collection \(\{E_{1},E_{12},E_{13},E_{14}\}\). In order to study the local equation of \(E_{1}\cap E_{1i}\), it suffices to take an open affine subscheme \(U\subseteq X\) such that \(U\cap Z=\{p_{i}\}\) and study the blow-up restricted to \(U\):
\[\pi_{U}:\pi^{-1}(U)\cong\mathsf{Bl}_{p_{i}}U\to U\]
Let's do the computation for \(E_{1}\cap E_{12}\) in detail:
Consider \(D_{+}(T_{0})=\mathsf{Spec}\,K[\frac{T_{1}}{T_{0}},\frac{T_{2}}{T_{0}}]\cong\mathsf{Spec}\,K[x_{1},x_{2}]\); then we have \(p_{1},p_{4}\in D_{+}(T_{0})\), defined by \((x_{1},x_{2})\) and \((x_{1}-1,x_{2}-1)\) respectively, and \(L_{12}=V(x_{2})\). Take \(U=\mathsf{Spec}\,K[x_{1},x_{2},(x_{1}-1)^{-1}]\); we have \(\mathsf{Bl}_{p_{1}}U=U_{1}\cup U_{2}\) with \(U_{1}=\mathsf{Spec}\,K[x_{1},\frac{x_{2}}{x_{1}},(x_{1}-1)^{-1}]\) and \(U_{2}=\mathsf{Spec}\,K[x_{2},\frac{x_{1}}{x_{2}},(x_{1}-1)^{-1}]\). Now in \(U_{1}\) the exceptional curve is \(E_{1}|_{U_{1}}=V(x_{1})\) and the strict transform is \(\widetilde{L_{12}}|_{U_{1}}=E_{12}|_{U_{1}}=V(\frac{x_{2}}{x_{1}})\).
Thus, by the argument above, we have \(\eta_{112}=(\varpi,x_{1},\frac{x_{2}}{x_{1}})\) in
\[\mathsf{Spec}\,K^{\circ}[x_{1},\frac{x_{2}}{x_{1}},(x_{1}-1)^{-1}]\]
and
\[\mathscr{O}_{\mathcal{X}_{0,5},\eta_{112}}\cong K^{\circ}[x_{1}, \frac{x_{2}}{x_{1}},(x_{1}-1)^{-1}]_{(\varpi,x_{1},\frac{x_{2}}{x_{1}})}\] \[\widehat{\mathscr{O}_{\mathcal{X}_{0,5},\eta_{112}}}\cong K^{ \circ}\langle x_{1},\frac{x_{2}}{x_{1}},(x_{1}-1)^{-1}\rangle\left[\left[x_{1},\frac{x_{2}}{x_{1}}\right]\right]\]
By the same argument as in the case \(n=4\), for any polynomial \(f=\sum_{\beta}a_{\beta}x_{1}^{\beta_{1}}x_{2}^{\beta_{2}}\) in \(K^{\circ}[x_{1},x_{2}]\), we have:
\[|f|_{\alpha}=\mathsf{max}_{\beta}(|a_{\beta}|_{K}\mathsf{exp}(-\beta_{1}\alpha (\overline{x}_{1})-\beta_{2}(\alpha(\overline{x_{2}})+\alpha(\overline{x}_{1}))) \tag{3.18}\]
**3.19**.: **For \(n\geqslant 6\)**
**Definition 3.20**.: (Kapranov) Let \(p_{1},\ldots,p_{n-1}\) be points in general position in \(\mathbf{P}^{n-3}\). Then \(\overline{\mathsf{M}}_{0,n}\) is the iterated blow-up of \(\mathbf{P}^{n-3}\) along the points \(p_{1},\ldots,p_{n-1}\), then along the strict transforms \(\widetilde{l}_{ij}\) of the lines \(l_{ij}=\overline{p_{i}p_{j}}\) for \(i\neq j\), and so on, up to the strict transforms \(\widetilde{l}_{j_{1}j_{2}\cdots j_{n-4}}\) of the linear spaces \(l_{j_{1}j_{2}\cdots j_{n-4}}\) containing \(\{p_{j_{1}},p_{j_{2}},\ldots,p_{j_{n-4}}\}\). Thus we have:
\[\overline{\mathsf{M}}_{0,n}\cong\mathsf{Bl}_{\{\widetilde{l}_{j_{1}j_{2} \cdots j_{n-4}}\}}\cdots\mathsf{Bl}_{\{\widetilde{l}_{ij}\}_{i\neq j}}( \mathsf{Bl}_{p_{1},\ldots,p_{n-1}}\mathbf{P}^{n-3})\]
Let \(\left\{E_{j_{1}j_{2}\cdots j_{n-k}}\right\}_{4\leqslant k\leqslant n-1}\) be the exceptional divisors corresponding to
\[\left\{l_{j_{1}j_{2}\cdots j_{n-k}}\right\}_{4\leqslant k\leqslant n-1}\]
respectively. Let \(H\) be the hyperplane class on \(\overline{\mathsf{M}}_{0,n}\). Then there are \(2^{n-1}-n-1\) boundary divisors, which are the following:
1. \(\{E_{i}\}_{1\leqslant i\leqslant n-1}\)
2. \(\{E_{ij}\}_{i\neq j}\)
3. \(\{E_{j_{1}j_{2}\cdots j_{n-k}}\}_{j_{1}j_{2}\cdots j_{n-k}\in[n-1]}\)
4. \(\{H-\sum_{t=1}^{n-3}E_{j_{t}}-\sum E_{j_{t}j_{s}}-\cdots-\sum E_{j_{t_{1}} \cdots j_{t_{n-4}}}\}_{j_{1}j_{2}\cdots j_{n-3}\in[n-1]}\)
Each \(H-\sum_{t=1}^{n-3}E_{j_{t}}-\sum E_{j_{t}j_{s}}-\cdots-\sum E_{j_{t_{1}}\cdots j_{t_{n-4}}}\) can be considered as the strict transform of the linear space \(l_{j_{t_{1}}\cdots j_{t_{n-3}}}\) through the points \(p_{j_{1}},p_{j_{2}},\ldots,p_{j_{n-3}}\).
**Remark 3.21**.: We can send the tropical curves \(\{\delta_{in}\}_{1\leqslant i\leqslant n}\) to the exceptional divisors \(E_{i}\), tropical curves \(\{\delta_{ijn}\}_{i\neq j}\) to the exceptional divisors \(E_{ij}\), tropical curves \(\{\delta_{ij}\}\) to \(H-\sum_{t=1}^{n-3}E_{j_{t}}-\sum E_{j_{t}j_{s}}-\cdots-\sum E_{j_{t_{1}} \cdots j_{t_{n-4}}}\) where \(\{j_{1},\ldots,j_{n-3}\}\) is disjoint from \(\{i,j,n\}\).
**Proposition 3.22**.: _Let \(\mathcal{X}_{0,n}^{+}\) be a log regular model of \(\mathsf{M}_{0,n}\), then we have_
\[\mathsf{M}_{0,n}^{\mathsf{trop}}\cong\mathsf{Sk}(\mathcal{X}_{0,n}^{+})\cong\Delta_{F(\mathcal{X}_{0,n}^{+})}^{1}\]
Proof.: Let \(D_{J}^{I}\) and \(D_{J^{\prime}}^{I^{\prime}}\) be two irreducible boundary divisors of \(\overline{\mathsf{M}}_{0,n}\), where
\[|I|,|J|,|I^{\prime}|,|J^{\prime}|\geqslant 2\]
and \(I\cup J=[n],I^{\prime}\cup J^{\prime}=[n]\); then by [8], \(D_{J}^{I}\cap D_{J^{\prime}}^{I^{\prime}}=\emptyset\) if and only if there are no inclusions among any two of \(I,J,I^{\prime},J^{\prime}\). The polyhedron \(\sigma_{\eta}\) associated to the generic point \(\eta\) of a top intersection of boundary divisors is isomorphic to \(\mathbf{R}_{\geqslant 0}^{n-3}\); thus \(\sigma_{\eta}\) is equal to the polyhedron associated to the stable tropical curve determined by the intersection.
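The disjointness criterion used in this proof is easy to test mechanically; the helper below is our transcription of the cited fact for illustration, not code from [8].

```python
def boundary_divisors_meet(I, I_prime, n):
    """Whether the boundary divisors indexed by I and I' intersect in Mbar_{0,n}.

    With J = [n] \\ I and J' = [n] \\ I', the divisors are disjoint exactly when
    no inclusion holds among any two of I, J, I', J'.
    """
    I, I_prime = frozenset(I), frozenset(I_prime)
    full = frozenset(range(1, n + 1))
    sets = [I, full - I, I_prime, full - I_prime]
    return any(sets[a] <= sets[b] for a in range(4) for b in range(4) if a != b)

# Example: in Mbar_{0,6}, boundary_divisors_meet({1, 2}, {1, 2, 3}, 6) is True
# (the index sets are nested), while boundary_divisors_meet({1, 2}, {2, 3}, 6) is False.
```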
## 4. Comparison for faithful tropicalization and skeleton for \(n\geqslant 3\)
In this section, we prove the comparison result between the faithful tropicalization and the skeleton of \(\overline{\mathsf{M}}_{0,n}\) for \(n\geqslant 3\).
**4.1**.: _For \(n\geqslant 3\)._
**Example 4.2**.: For \(n=3,4,5\), let \(\sigma(\mathscr{T}\mathsf{M}_{0,n})\) be the image of \(\mathscr{T}\mathsf{M}_{0,n}\) under the section map \(\sigma\) of tropicalization, then we have \(\mathsf{Sk}(\mathcal{X}_{0,n}^{+})=\sigma(\mathscr{T}\mathsf{M}_{0,n})\).
Proof.:
1. For \(n=3\), since \(\overline{\mathsf{M}}_{0,3}=\mathsf{Spec}\,K\), \(\overline{\mathsf{M}}_{0,3}^{\mathsf{an}}=\{v_{K}\}\). The claim is obvious.
2. For \(n=4,5\), comparing 3.15 with 2.14 and 3.18 with 2.20, we get the results.
**Lemma 4.3**.: _Let \([C]\) be a point of \(\overline{\mathsf{M}}_{0,n}^{\mathsf{an}}\), then \([C]\) is represented by a pair_
\[(\mathsf{val}_{C}:L_{[C]}\to\mathbf{R}\cup\{\infty\},\,\mu_{C}:\mathsf{Spec}\, R_{[C]}\to\mathcal{X}_{0,n}^{+})\]
_where \(L_{[C]}\) is a field extension of \(K\) and \(R_{[C]}\) is the corresponding valuation ring of \(L_{[C]}\). In particular, the dual graph \(G_{[C]}\) of the special fiber of the family of stable curves \(\mu_{C}\) coincides with the tropical curve associated to \(\mathsf{trop}([C])\)._
Proof.: Take the model of \(\overline{\mathsf{M}}_{0,n}\) to be \(\mathcal{X}_{0,n}^{+}\); then for any point \([C]\in\overline{\mathsf{M}}_{0,n}^{\mathsf{an}}\), there exists a unique morphism \(\phi:\mathsf{Spec}\,R_{[C]}\to\mathcal{X}_{0,n}^{+}\) such that the following diagram is commutative, by the valuative criterion of properness:
where \(R_{[C]}\) is the valuation subring of the valued field \(L_{[C]}\) with respect to the valuation \(\mathsf{val}_{C}\). This finishes the proof.
**Theorem 4.4**.: _The universal curve diagram is commutative:_
(4.5)
Proof.:
1. Let us first prove \(\pi^{\mathsf{an}}_{n+1}(\mathsf{Sk}(\mathcal{X}^{+}_{0,n+1}))\subseteq\mathsf{Sk}(\mathcal{X}^{+}_{0,n})\). For any \([C]\in\mathsf{Sk}(\mathcal{X}^{+}_{0,n+1})\), by Definition 3.9, \([C]\) is a point corresponding to a pair \((\eta,\alpha)\), where \(\eta\) is a generic point of intersections of irreducible components of \(D_{\mathcal{X}_{0,n+1}}\) and \(\alpha:\overline{\mathscr{M}}_{\mathcal{X}^{+}_{0,n+1},\eta}\to\mathbf{R}_{\geqslant 0}\) is a morphism of monoids such that \(\alpha(\overline{m})=\mathsf{val}_{C}(m)\) for any \(m\in\mathscr{M}_{\mathcal{X}_{0,n+1},\eta}\) and \(\alpha(\varpi)=\mathsf{val}_{C}(\varpi)=1\) for any uniformizer \(\varpi\in K^{\circ}\). We claim that \(\pi^{\mathsf{an}}_{n+1}([C]):=[C^{\prime}]\in\mathsf{Sk}(\mathcal{X}^{+}_{0,n})\). By the structure of \(\mathsf{Sk}(\mathcal{X}^{+}_{0,n+1})\), we can assume \(\eta\) is a generic point of a top intersection of irreducible boundary divisors; more precisely, let \(\eta\in\bigcap_{I,J,I^{\prime},J^{\prime}}\overline{D^{I}_{J}}\cap\overline{D^{I^{\prime}}_{J^{\prime}}}\cap(\mathcal{X}_{0,n+1})_{s}\), where \(D^{I}_{J}\) and \(D^{I^{\prime}}_{J^{\prime}}\) are irreducible boundary divisors on \(\overline{\mathsf{M}}_{0,n+1}\) and \(I,J,I^{\prime},J^{\prime}\) are index sets satisfying the conditions described in [8, Fact 4]. Then \(\pi_{n+1}(\eta)\in\bigcap\overline{\pi_{n+1}(D^{I}_{J})}\cap\overline{\pi_{n+1}(D^{I^{\prime}}_{J^{\prime}})}\cap(\mathcal{X}_{0,n})_{s}\), which is a dimension zero subscheme of \(\mathcal{X}_{0,n}\); note that \(\overline{\pi_{n+1}(D^{I}_{J})}\) or \(\overline{\pi_{n+1}(D^{I^{\prime}}_{J^{\prime}})}\) is either \(\mathcal{X}_{0,n}\) or an irreducible boundary divisor of \(\mathcal{X}_{0,n}\), so \(\pi_{n+1}(\eta)\) is a generic point of an intersection of irreducible boundary divisors. Meanwhile, since we have \(\pi_{n+1}(\mathsf{M}_{0,n+1})\subseteq\mathsf{M}_{0,n}\), \(\pi_{n+1}\) induces a morphism of compactifying log schemes \(\pi_{n+1}:(\mathcal{X}_{0,n+1},\mathscr{M}_{\mathcal{X}^{+}_{0,n+1}})\to(\mathcal{X}_{0,n},\mathscr{M}_{\mathcal{X}^{+}_{0,n}})\); in particular, we have the following commutative diagram of characteristic charts at stalks: (4.6) \[\begin{CD}\overline{\mathscr{M}}_{\mathcal{X}^{+}_{0,n},\pi_{n+1}(\eta)}@>{\theta}>{}>\overline{\mathscr{M}}_{\mathcal{X}^{+}_{0,n+1},\eta}\\ @V{}V{c_{n}}V@V{}V{c_{n+1}}V\\ \mathscr{O}_{\mathcal{X}_{0,n},\pi_{n+1}(\eta)}@>{\pi^{\sharp}_{n+1,\eta}}>{}>\mathscr{O}_{\mathcal{X}_{0,n+1},\eta}\end{CD}\]
More precisely, we have:
(4.7)
Note that \(\mathsf{val}_{C^{\prime}}=\mathsf{val}_{C}\circ\pi^{\sharp}_{n+1,\eta}\), now let \(\alpha^{\prime}=\theta\circ\alpha:\overline{\mathscr{M}}_{\mathcal{X}^{+}_{0,n},\pi_{n+1}(\eta)}\to\mathbf{R}_{\geqslant 0}\), we have \(\alpha^{\prime}(\overline{m})=\mathsf{val}_{C^{\prime}}(m)\) for any \(m\in\mathscr{M}_{\mathcal{X}_{0,n},\pi_{n+1}(\eta)}\) and \(\alpha^{\prime}(\varpi)=\mathsf{val}_{C^{\prime}}(\varpi)=1\) for any uniformizer \(\varpi\in K^{\circ}\), thus \([C^{\prime}]\in\mathsf{Sk}(\mathcal{X}^{+}_{0,n})\). For the surjectivity, Let \([C^{\prime}]=(\varrho,\beta)\in\mathsf{Sk}(\mathcal{X}^{+}_{0,n})\), assume \(\varrho\) is the generic point of the
top intersection of irreducible boundary divisors of \(\mathcal{X}^{+}_{0,n}\) and let \(\mathsf{val}_{C^{\prime}}\) be the valuation associated to \([C^{\prime}]\) such that \(\beta(\overline{m})=\mathsf{val}_{C^{\prime}}(m)\) for any \(m\in\mathscr{M}_{\mathcal{X}^{+}_{0,n},\varrho}\). Then by [8, Fact 3], there exists a generic point \(\vartheta\) of the top intersection of irreducible boundary divisors of \(\mathcal{X}^{+}_{0,n+1}\) such that \(\pi_{n+1}(\vartheta)=\varrho\), meanwhile, for \(\beta:\overline{\mathscr{M}}_{\mathcal{X}^{+}_{0,n},\varrho}\to\mathbf{R}_{\geq 0}\), then it can be lifted through \(\varsigma:\overline{\mathscr{M}}_{\mathcal{X}^{+}_{0,n+1,\vartheta}}\to \mathbf{R}_{\geq 0}\) by taking \(\varsigma=\beta+\beta^{\prime}\), where \(\beta^{\prime}:\mathbf{N}\to\mathbf{R}_{\geq 0}\) is any map of monoids. Thus we have \(\pi^{\mathsf{an}}_{n+1}(\vartheta,\varsigma)=(\varrho,\beta)\).
2. Now let's prove \(\pi^{\mathsf{an}}_{n+1}\big{(}\sigma(\mathsf{M}^{\mathsf{trop}}_{0,n+1})\big{)} \subseteq\sigma(\mathsf{M}^{\mathsf{trop}}_{0,n})\), let \(x\) be a point in \(\mathsf{M}^{\mathsf{trop}}_{0,n+1}\), then \(x\in\mathscr{C}^{\prime}_{T}\), assume \(T\) is an arbitrary stable tropical curve with \(n+1\) leaves with endpoints leaves \(i,j\) such that \(n+1\notin\{i,j\}\). Let \(\leqslant\) be a partial order on \([n+1]\smallsetminus\{i,j\}\) that has the cherry property on \(T\) with respect to \(i\) and \(j\), assume \(n+1\) is the maximal leaf in a subtree \(T_{a}\). Consider \(x\) as a lift an element \(x^{\prime}\in\overline{\mathscr{C}_{T}}\cap\mathscr{T}\mathsf{Gr}_{0}(2,n+1) \subseteq\overline{\mathscr{C}_{T}}\cap\mathscr{T}U_{ij}\), then the associated vanishing set \(J(x^{\prime})=\emptyset\), by [5, Proposition 3; Theorem 3], there exists a compatible set \(I\) of size \(2(n-1)\) and a local section \(\sigma^{(ij)}_{T,I,\emptyset}:\mathscr{C}^{(ij)}_{T,\emptyset}\to(\mathsf{Spec }\,K[u_{kl}\,|\,kl\in I])^{\mathsf{an}}\) by \(\sigma^{(ij)}_{T,I,\emptyset}(x^{\prime})=\sigma^{(ij)}_{I}(x^{\prime})\). Now consider \(\pi^{\mathsf{trop}}_{n+1}(x)\in\mathscr{C}^{\prime}_{T_{0}}\), then \(T_{0}\) is a stable tropical curve with \(n\) leaves and \(i,j\) as endpoints leaves, take \[\leqslant_{0}:=\leqslant\smallsetminus\{(k,l)\,|\,k\,,\,l=n+1\}\] Then \(\leqslant_{0}\) has a cherry property on \(T_{0}\) with respect to \(i\) and \(j\). Let \(I_{0}\) be the set which is compatible with \(\leqslant_{0}\) and \(\emptyset\), then we take \(I\) above as \(I:=I_{0}\cup\{i(n+1),t(n+1)\}\) or \(I:=I_{0}\cup\{j(n+1),t(n+1)\}\) as the set compatible with \(\leqslant\) and \(\emptyset\), where \(t\in[n]\). Note that we have: \[\Gamma(\mathsf{Gr}_{0}(2,n))^{\mathbf{G}^{n}_{m}}\cong K\Big{[}\big{(}\frac{u_ {kl}}{u_{ik}u_{jl}}\big{)}^{\pm 1},\big{(}\frac{u_{kl}}{u_{jk}u_{il}}\big{)}^{\pm 1} \,|\,k,l\neq i,j\Big{]}_{kl\in\binom{[n]}{2}}\] In order to see \(\pi^{\mathsf{an}}_{n+1}\circ\sigma(x)=\sigma(x^{\prime})\), it's sufficient to these two valuation coincide on \(\big{(}\frac{u_{il}}{u_{ik}u_{jl}}\big{)}^{\pm 1},\big{(}\frac{u_{il}}{u_{jk}u_{il}} \big{)}^{\pm 1}\), we only check this on \(\frac{u_{il}}{u_{ik}u_{jl}}\), the argument for the rest of cases are similar. Note that for the index pairs \(\{kl,ik,il,jk,jl\}\), \(4\) of them are in \(I_{0}\), assume \(ik,il,jk,jl\in I_{0}\), since we have \(u_{kl}=u_{ik}u_{jl}-u_{il}u_{jk}\), we only check the valuations on \(\frac{u_{il}u_{ik}}{u_{ik}u_{jl}}\): (4.8) \[\pi^{\mathsf{an}}_{n+1}\circ\sigma(x)\big{(}\frac{u_{il}u_{jk}}{u_{ik}u_{jl}} \big{)}=\frac{\mathsf{exp}(d_{il}-d_{ij})\mathsf{exp}(d_{jk}-d_{ij})}{\mathsf{ exp}(d_{ik}-d_{ij})\mathsf{exp}(d_{jl}-d_{ij})}.\] (4.9) \[\sigma(x^{\prime})\big{(}\frac{u_{il}u_{jk}}{u_{ik}u_{jl}}\big{)}=\frac{ \mathsf{exp}(d^{\prime}_{il}-d^{\prime}_{ij})\mathsf{exp}(d^{\prime}_{jk}-d^{ \prime}_{ij})}{\mathsf{exp}(d^{\prime}_{ik}-d^{\prime}_{ij})\mathsf{exp}(d^{ \prime}_{jl}-d^{\prime}_{ij})}.\] where \(d_{**},d^{\prime}_{**}\) are distance between two different leaves on the tropical curves \(T\) and \(T_{0}\). Let's discuss the relation between \(d_{**}\) and \(d^{\prime}_{**}\) in following cases:
S1. If \(n+1\) is adjacent to an edge and a leaf \(m\): let \(d_{0}\neq 0\) be the nearest non-zero distance between \(n+1\) and another leaf. (A) If \(d_{i(n+1)},d_{j(n+1)}\geqslant d_{0}\): (1) if \(k,l\neq m\), then \(d_{**}=d^{\prime}_{**}\); (2) if \(k=m\), then \(d^{\prime}_{*k}=d_{**}-d_{0}\), \(d^{\prime}_{ij}=d_{ij}\), \(d_{jl}=d^{\prime}_{jl}\), \(d^{\prime}_{il}=d_{il}\).
(B) If \(d_{i(n+1)}=0\), then \(d^{\prime}_{i*}=d_{i*}-d_{0}\). S2. If \(n+1\) is adjacent to an edge and at least two leaves, then \(d_{**}=d^{\prime}_{**}\). S3. If \(n+1\) is the only leaf adjacent to two edges, then \(d_{**}=d^{\prime}_{**}\). Thus we have \(\pi_{n+1}^{\mathsf{an}}\circ\sigma(x)=\sigma(x^{\prime})\).
3. For the commutativity of the middle diagram, By theorem 4.12 for \((\eta,\alpha)=[C]\in\overline{\mathsf{M}}_{0,n}^{\mathsf{an}}\), we can assume \([C]\in\mathsf{Sk}(\mathcal{X}^{+}_{0,n})\), then we have: \[\mathsf{trop}([C])=(G_{[C]},\ell_{E(G_{[C]})})\] where \(G_{[C]}\) is dual graph determined by Lemma 4.3 and \(\ell_{E(G_{[C]})}\) is the length function on the edges of \(G_{[C]}\) determined by \(\alpha\). More precisely, assume \(\eta\in\bigcap_{i=1}^{n-3}\overline{D^{J_{i}}_{I_{i}}}\cap(\mathcal{X}_{0,n}) _{s}\), \(\alpha=(\alpha_{i})_{i}^{n-3}\in\mathbf{R}_{\geqslant 0}^{n-3}\), then \(G_{[C]}\) is stable dual graph with \(\#E(G_{[C]})=n-3\) and \(\ell_{E(G_{[C]})}\) is determined by \(d(k,l)\) which is the distance between two leaves \(k,l\) in \(G_{[C]}\). \(d(k,l)\) is determined by the intersection of \(\{D^{J_{i}}_{I_{i}}\}_{i}\) and \(\alpha=(\alpha_{i})_{i}\) as following: For each \(D^{J_{i}}_{I_{i}}\), \[d_{i}(k,l)=\begin{cases}0&\{k,l\}\subseteq I_{i}\text{ or }J_{i}\\ \alpha_{i}&\text{else}\end{cases}\] For \(D^{J_{i}}_{I_{i}}\cap D^{J_{j}}_{I_{j}}\), assume \(J_{i}\subseteq J_{j}\), then \(I_{j}\subseteq I_{i}\). assume \(\#J_{j}\smallsetminus J_{i}\geqslant 2\) \[d_{ij}(k,l)=\begin{cases}0&\{k,l\}\subseteq J_{i}\text{ or }J_{j}\smallsetminus J_{i}\text{ or }I_{j}\\ \alpha_{i}+\alpha_{j}&k\in J_{i},l\in I_{j}\\ \alpha_{i}&k\in J_{i},l\in J_{j}\smallsetminus J_{i}\\ \alpha_{j}&k\in J_{j}\smallsetminus J_{i},l\in I_{j}\end{cases}\] For the rest cases, we can use similar methods to get a unique \(d(k,l):=d_{1\cdots(n-3)}(k,l)\) and \(\mathsf{trop}(\pi_{n+1}^{\mathsf{an}}([C]))=\pi_{n+1}^{\mathsf{trop}}(\mathsf{ trop}([C]))\) by 1. This finishes the proof.
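As an illustrative aside (not part of the original argument), the distance recipe above reduces to a simple rule: each boundary divisor \(D^{J_{i}}_{I_{i}}\) contributes its length \(\alpha_{i}\) to \(d(k,l)\) exactly when the partition \((I_{i},J_{i})\) separates the leaves \(k\) and \(l\), and the case-by-case values listed above are the sums of these contributions. The short Python sketch below encodes this rule; the variable names and the toy example are ours.

```python
def leaf_distance(k, l, partitions, lengths):
    """d(k, l) = sum of alpha_i over all boundary divisors D^{J_i}_{I_i}
    whose partition (I_i, J_i) separates the leaves k and l."""
    total = 0.0
    for (I, J), alpha in zip(partitions, lengths):
        separated = (k in I) != (l in I)   # k and l lie on opposite sides
        if separated:
            total += alpha
    return total

# Toy example with n = 5 leaves and two nested divisors (lengths alpha_1, alpha_2):
partitions = [({1, 2}, {3, 4, 5}), ({1, 2, 3}, {4, 5})]
lengths = [0.7, 0.3]
print(leaf_distance(1, 4, partitions, lengths))  # 0.7 + 0.3 = 1.0
print(leaf_distance(1, 3, partitions, lengths))  # 0.7
print(leaf_distance(4, 5, partitions, lengths))  # 0.0
```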
### Comparison Theorem
**Lemma 4.11**.: _Let \(x\) be a point in \(\mathscr{T}\mathsf{M}_{0,n}\) parameterized by a stable tropical curve \(T\) with \(n\) leaves and endpoints leaves \(i,j\) such that \(n\notin\{i,j\}\). Consider \(\mathsf{M}_{0,n}=\mathsf{Spec}\left(\Gamma(\mathsf{Gr}_{0}(2,n))^{\mathbf{G}_{m}^{n}}\right)\) in Plücker coordinates for the given \(T\); specifically, for \(\Gamma(\mathsf{Gr}_{0}(2,n))^{\mathbf{G}_{m}^{n}}\cong K\Big[\big(\frac{u_{il}u_{jk}}{u_{ik}u_{jl}}\big)\,|\,k,l\neq i,j\Big]_{kl\in\binom{[n]}{2}}\), we have:_
1. \(K(\overline{\mathsf{M}}_{0,n})\cong K\Big(\frac{u_{il}u_{jk}}{u_{ik}u_{jl}}\Big)\) _such that, for any three of the_ \(\frac{u_{il}u_{jk}}{u_{ik}u_{jl}}\)_, the cardinality of the union of their index sets satisfies_ \(\#(\{ij\}\cup\{i^{\prime}j^{\prime}\}\cup\{i^{\prime\prime}j^{\prime\prime}\})\geqslant 4\)_._
2. _Every fixed_ \(\frac{u_{il}u_{jk}}{u_{ik}u_{jl}}\) _above can be taken as a local generator at the intersections of boundary divisors of_ \(\overline{\mathsf{M}}_{0,n}\)_._
Proof.:
1. This follows from a direct computation with the Plücker relations: there are \(n-3\) algebraically independent \(\frac{u_{il}u_{jk}}{u_{ik}u_{jl}}\) in \(\Gamma(\mathsf{Gr}_{0}(2,n))^{\mathbf{G}_{m}^{n}}\).
2. Without loss of generality, we can set \(\frac{u_{il}u_{jk}}{u_{ik}u_{jl}}=x_{1}\), write \(K\big[\frac{u_{il}u_{jk}}{u_{ik}u_{jl}}\big]\) as \(K[x_{1},x_{2},\ldots,x_{n-3}]\), and regard \(\mathsf{Spec}\,K[x_{i}]_{1\leqslant i\leqslant n-3}\) as \(D^{+}(T_{0})\) of \(\mathbf{P}^{n-3}\). We can then blow up
an affine open subscheme \(U\) of \(D^{+}(T_{0})\) along \(p_{1}\) and repeat this process as in 3.20; the local equation of the exceptional divisor \(E_{1}\) is \(x_{1}\).
**Theorem 4.12**.: _Let \(\sigma(\mathscr{T}\mathsf{M}_{0,n})\) be the image of \(\mathscr{T}\mathsf{M}_{0,n}\) under the section map \(\sigma\) of tropicalization, then we have \(\mathsf{Sk}(\mathcal{X}_{0,n}^{+})=\sigma(\mathscr{T}\mathsf{M}_{0,n})\)._
Proof.: Assume by induction that \(\mathsf{Sk}(\mathcal{X}_{0,n}^{+})=\sigma(\mathscr{T}\mathsf{M}_{0,n})\) holds for \(n\). Let \([C]\) be a point in \(\sigma(\mathscr{T}\mathsf{M}_{0,n+1})\); since \([C]\) is a birational point, it is a pair
\[(\xi_{n+1},\mathsf{val}_{C}:K(\overline{\mathsf{M}}_{0,n+1})\to\mathbf{R}\cup \{\infty\})\]
where \(\xi_{n+1}\) is the generic point of \(\overline{\mathsf{M}}_{0,n+1}\) and the valuation \(\mathsf{val}_{C}\) extends the valuation of the base field \(K\). Consider the point \(\pi_{n+1}^{\mathsf{an}}([C]):=[C^{\prime}]=(\xi_{n},\mathsf{val}_{C^{\prime}})\); then for the fiber over \([C^{\prime}]\), we have:
\[(\overline{\mathsf{M}}_{0,n+1}^{\mathsf{an}})_{[C^{\prime}]}\cong\Big{(}( \overline{\mathsf{M}}_{0,n+1})_{\xi_{n}}\otimes_{\kappa(\xi_{n})}\mathscr{H}( \xi_{n})\Big{)}^{\mathsf{an}}. \tag{4.13}\]
For the fiber \((\overline{\mathsf{M}}_{0,n+1})_{\xi_{n}}\), we have
\[\mathsf{M}_{0,n+1}\times_{\overline{\mathsf{M}}_{0,n}}\mathsf{Spec}\,\kappa( \xi_{n})\hookrightarrow\overline{\mathsf{M}}_{0,n+1}\times_{\overline{\mathsf{ M}}_{0,n}}\mathsf{Spec}\,\kappa(\xi_{n}). \tag{4.14}\]
Note that \(\mathsf{M}_{0,n+1}\times_{\overline{\mathsf{M}}_{0,n}}\mathsf{Spec}\,\kappa( \xi_{n})\cong\mathbf{P}^{1}_{\kappa(\xi_{n})}\smallsetminus\{p_{1},p_{2},\dots, p_{n}\}\), thus \((\overline{\mathsf{M}}_{0,n+1})_{\xi_{n}}\) is a compactification of the curve \(\mathbf{P}^{1}_{\kappa(\xi_{n})}\smallsetminus\{p_{1},p_{2},\dots,p_{n}\}\). We have a proper surjective morphism \(\mathbf{P}^{1}_{\kappa(\xi_{n})}\to(\overline{\mathsf{M}}_{0,n+1})_{\xi_{n}}\), thus we have \(\mathbf{P}^{1}_{\kappa(\xi_{n})}\cong(\overline{\mathsf{M}}_{0,n+1})_{\xi_{n}}\). Let \(\Big{(}(\mathsf{M}_{0,n+1})_{\xi_{n}}\otimes_{\kappa(\xi_{n})}\mathscr{H}( \xi_{n})\Big{)}^{\mathsf{trop}}\) be the image of tropicalization map restricted on \(\Big{(}(\mathsf{M}_{0,n+1})_{\xi_{n}}\otimes_{\kappa(\xi_{n})}\mathscr{H}( \xi_{n})\Big{)}^{\mathsf{an}}\) and \(\mathsf{Sk}(\mathbf{P}^{1,+}_{\mathscr{H}(\xi_{n})^{0}}):=\mathsf{Sk}(\mathcal{ X}_{0,n+1}^{+})\cap(\overline{\mathsf{M}}_{0,n+1}^{\mathsf{an}})_{[C^{\prime}]}\)
Now we claim that
\[\sigma\bigg{(}\Big{(}(\mathsf{M}_{0,n+1})_{\xi_{n}}\otimes_{\kappa(\xi_{n})} \mathscr{H}(\xi_{n})\Big{)}^{\mathsf{trop}}\bigg{)}=\mathsf{Sk}(\mathbf{P}^{1,+}_{\mathscr{H}(\xi_{n})^{0}}). \tag{4.15}\]
**4.16**.: To see this, take \(x\in(\mathbf{P}^{1}_{\mathscr{H}(\xi_{n})}\smallsetminus\{p_{1},p_{2},\dots,p_{n}\})^{\mathsf{trop}}\); then \(\pi_{n+1}^{\mathsf{an}}(\sigma(x))=\xi_{n+1}:=\sigma(x^{\prime})\). Assume the combinatorial type trees and local sections associated with \(x\) and \(x^{\prime}\) are the same as in the proof of (2) in Theorem 4.4. Then there exists a point \(v_{x}\) in \(\mathsf{Sk}(\mathbf{P}^{1,+}_{\mathscr{H}(\xi_{n})^{0}})\) whose associated combinatorial type tree is the same as that of \(x\). Thus it is sufficient to show \(v_{x}=\sigma(x)\) in \(K(\overline{\mathsf{M}}_{0,n+1})_{[C^{\prime}]}:=\mathscr{H}(\xi_{n})(u)\). Without loss of generality, assume the \((n+1)\)-th leaf of \(T\) satisfies condition S1 above (see Figure 4.16) and that \(u=\frac{u_{i(n+1)}u_{jk}}{u_{ik}u_{j(n+1)}}\), where \(1\leqslant k\leqslant n\). Then there exists \(1\leqslant l\leqslant n\) such that
\[\sigma(x)(\frac{u_{i(n+1)}u_{jl}}{u_{il}u_{j(n+1)}})=\mathsf{exp}(-d_{0}).\]
Note that :
\[u=\frac{u_{i(n+1)}u_{jk}}{u_{ik}u_{j(n+1)}}=\bigg{(}\frac{u_{i(n+1)}u_{jl}}{u_{ il}u_{j(n+1)}}\bigg{)}\cdot\bigg{(}\frac{u_{il}u_{jk}}{u_{ik}u_{jl}}\bigg{)}. \tag{4.17}\]
\[\sigma(x)(u)=\mathsf{exp}(-d_{0})\cdot\sigma(x)\bigg{(}\frac{u_{il}u_{jk}}{u_{ ik}u_{jl}}\bigg{)}. \tag{4.18}\]
On the other hand, the tree \(T\) associated to \(v_{x}\) corresponds to \((\eta_{x},\alpha_{x})\), where \(\eta_{x}\in\overline{D^{j(n+1)}}\bigcap_{I,J}\overline{D_{J}^{I}}\bigcap(\mathcal{X}_{0,n+1})_{s}\), and by Lemma 4.11 the element \(\frac{u_{i(n+1)}u_{jl}}{u_{il}u_{j(n+1)}}:=u^{\prime}\) can be taken as part of a system of generators of \(\mathfrak{m}_{x}\). Then we have:
\[v_{x}(u)=\exp(-\alpha_{x}(\overline{u^{\prime}}))\cdot v_{x}\bigg{(}\frac{u_{ il}u_{jk}}{u_{ik}u_{jl}}\bigg{)}. \tag{4.18}\]
By the induction hypothesis, we have \(\sigma(x)\bigg(\frac{u_{il}u_{jk}}{u_{ik}u_{jl}}\bigg)=v_{x}\bigg(\frac{u_{il}u_{jk}}{u_{ik}u_{jl}}\bigg)\); thus \(v_{x}=\sigma(x)\).
By theorem 4.4, we have:
\[\sigma(\mathscr{T}\mathsf{M}_{0,n+1})=\bigcup_{[C^{\prime}]}\sigma\bigg(\Big((\mathsf{M}_{0,n+1})_{\xi_{n}}\otimes_{\kappa(\xi_{n})}\mathscr{H}(\xi_{n})\Big)^{\mathsf{trop}}\bigg). \tag{4.19}\]
Meanwhile we have
\[\bigcup_{[C^{\prime}]}\mathsf{Sk}(\mathbf{P}^{\mathbf{1},+}_{\mathscr{H}(\xi_ {n})^{\circ}})=\bigcup_{[C^{\prime}]}\mathsf{Sk}(\mathcal{X}_{0,n+1}^{+}) \cap(\overline{\mathsf{M}}^{\mathsf{an}}_{0,n+1})_{[C^{\prime}]}=\mathsf{Sk}( \mathcal{X}_{0,n+1}^{+}). \tag{4.20}\]
Finally, we have \(\mathsf{Sk}(\mathcal{X}_{0,n+1}^{+})=\sigma(\mathscr{T}\mathsf{M}_{0,n+1})\). This finishes the proof.
|
2302.13963 | Two-photon production in low-velocity shocks | The Galactic interstellar medium abounds in low-velocity shocks with
velocities less than, say, about 70 km/s. Some are descendants of higher
velocity shocks, while others start off at low velocity (e.g., stellar bow
shocks, intermediate velocity clouds, spiral density waves). Low-velocity
shocks cool primarily via Ly-alpha, two-photon continuum, optical recombination
lines (e.g., H-alpha), free-bound emission, free-free emission and forbidden
lines of metals. The dark far-ultraviolet (FUV) sky, aided by the fact that the
two-photon continuum peaks at 1400 angstroms, makes the FUV band an ideal
tracer of low-velocity shocks. Recent GALEX FUV images reaffirm this
expectation, discovering faint and large interstellar structure in old
supernova remnants and thin arcs stretching across the sky. Interstellar bow
shocks are expected from fast stars from the Galactic disk passing through the
numerous gas clouds in the local interstellar medium within 15 pc of the Sun.
Using the best atomic data available to date, we present convenient fitting
formulae for yields of Ly$\alpha$, two-photon continuum and H$\alpha$ for pure
hydrogen plasma in the temperature range of 10^4 K to 10^5 K. The formulae
presented here can be readily incorporated into time-dependent cooling models
as well as collisional ionization equilibrium models. | S. R. Kulkarni, J. Michael Shull | 2023-02-27T17:05:20Z | http://arxiv.org/abs/2302.13963v1 | # Two-photon production in low-velocity shocks
###### Abstract
The Galactic interstellar medium abounds in low-velocity shocks with velocities \(v_{s}\lesssim 70\) km s\({}^{-1}\). Some are descendants of higher velocity shocks, while others start off at low velocity (e.g., stellar bow shocks, intermediate velocity clouds, spiral density waves). Low-velocity shocks cool primarily via Ly\(\alpha\), two-photon continuum, optical recombination lines (e.g., H\(\alpha\)), free-bound emission, free-free emission and forbidden lines of metals. The dark far-ultraviolet (FUV) sky, aided by the fact that the two-photon continuum peaks at 1400 A, makes the FUV band an ideal tracer of low-velocity shocks. Recent _GALEX_ FUV images reaffirm this expectation, discovering faint and large interstellar structure in old supernova remnants and thin arcs stretching across the sky. Interstellar bow shocks are expected from fast stars from the Galactic disk passing through the numerous gas clouds in the local interstellar medium within 15 pc of the Sun. Using the best atomic data available to date, we present convenient fitting formulae for yields of Ly\(\alpha\), two-photon continuum and H\(\alpha\) for pure hydrogen plasma in the temperature range of \(10^{4}\) K to \(10^{5}\) K. The formulae presented here can be readily incorporated into time-dependent cooling models as well as collisional ionization equilibrium models.
S. R. Kulkarni
## 1 Motivation
Supernova remnants and stellar wind bubbles are iconic examples of shocks in the interstellar medium (ISM). These shocks, with the passage of time, descend to lower velocities. Our interest here is shocks with velocities less than 70 km s\({}^{-1}\). The post-shock temperature depends on the mean molecular mass, but we adopt a fiducial value of \(T_{s}\leq 10^{5}\) K and investigate the cooling of such shock-heated hydrogen gas. These shocks cool primarily via Ly\(\alpha\) (whose photons are trapped within the shocked region and eventually die on a dust particle) and two-photon continuum. The latter can be detected by Far Ultra-Violet (FUV) imagers. Low-velocity shocks can also arise on Galactic length scales: intermediate-velocity and high-velocity clouds raining down from the lower halo into the disk and gas that is shocked as it enters a spiral arm. Vallee (2017) provides a good description of the Milky Way's spiral arms, and Kim et al. (2008) discuss Galactic interstellar shocks.
Stellar bow shocks are another major source of low-velocity shocks. For instance, consider our own Sun, a generic G5V star with a weak stellar wind (\(2\times 10^{-14}\,M_{\odot}\) yr\({}^{-1}\)) moving into a warm (\(\sim 7,000\) K) and partially ionized cloud (ionization fraction, \(x\approx 1/3\)) at a relative speed of 23-26 km s\({}^{-1}\)(Frisch et al., 2011; McComas et al., 2012; Zank et al., 2013; Gry & Jenkins, 2014). Because this velocity is not larger than the magnetosonic velocity of the interstellar cloud, there is only a "bow wake" instead of a bow shock (McComas et al., 2012). In the Galactic disk, interstellar space is occupied by the Warm Neutral Medium (WNM; \(10^{3}\) K to \(8\times 10^{3}\) K), the Warm Ionized Medium (WIM; \(8\times 10^{3}\) K), and the Hot Ionized Medium (HIM; \(10^{5}\) K to \(10^{6}\) K), in roughly equal proportions.
From studies with SDSS-Apogee + _Gaia_-DR2 (Anguiano et al., 2020), the 3D velocity dispersion of the typical (\(\alpha\)-abundance tagged) thin-disk star is 48 km s\({}^{-1}\), whereas those belonging to the thick disk have dispersion of 87 km s\({}^{-1}\). The majority of these local stars reside in the thin disk with a density ratio \(n_{\rm thin}/n_{\rm thick}=2.1\pm 0.2\). As discussed in a previous study (Shull & Kulkarni, 2023), a sizeable number of stars should be moving supersonically through ambient gas in the WNM
and WIM.1 The sizes of the resulting bow shocks will be determined by the stellar velocity and the magnitude of the stellar wind.
Footnote 1: Only a few stars are likely transiting the Cold Neutral Medium (CNM; 100 K), given its small volume filling factor, \(\sim 1\%\).
Separately, recent developments warrant a closer look at low-velocity shocks. We draw attention to the discoveries of three large-diameter supernova remnants (Fesen et al., 2021) and a 30-degree long, thin arc in Ursa Major (Bracco et al., 2020). In large part, these findings were made possible with a new diagnostic - _GALEX_ FUV continuum imaging. The detection of such faint, extended features demonstrates simultaneously the value of the dark FUV sky (O'Connell, 1987) as well as the value of the FUV band in detecting two-photon emission, a distinct diagnostic of warm (\(T\lesssim 10^{5}\) K) shocked gas (Kulkarni, 2022).
The primary goal of this paper is to develop accurate hydrogen plasma cooling models, paying attention to the production of the two-photon continuum in warm plasma, \(T\lesssim 10^{5}\) K, the temperature range of interest to low velocity shocks. To this end, we first derive the probability of Ly\(\alpha\), two-photon continuum, and H\(\alpha\) resulting from excitation of the ground state of hydrogen to all \(n\ell\) levels for \(n\leq 5\) (SS2). Next, we review rate coefficients for line excitation by collisions with electrons (SS3), followed by a review of collisional ionization (SS4). The results are combined to construct a cooling curve for warm hydrogen plasma (SS5). We then present a comprehensive (isobaric and isochoric) cooling framework and apply it to gas shock heated to \(10^{5}\) K (SS6). In SS7 we summarize our results and discuss future prospects. Unless otherwise stated, the atomic line data (A-coefficients, term values) were obtained from the NIST Atomic Spectra Database2 and basic formulae are from Draine (2011).
Footnote 2: [https://physics.nist.gov/PhysRefData/ASD/lines_form.html](https://physics.nist.gov/PhysRefData/ASD/lines_form.html)
## 2 Two-photon production
Colliding electrons excite hydrogen atoms to various levels and, if sufficiently energetic, ionize H i to H ii. Excited levels are also populated by radiative recombination. Excited hydrogen atoms return to the ground state, some by emitting a Lyman-series photon and others via a cascade of optical/IR recombination lines and ending with Ly\(\alpha\) emission. Atoms that find themselves in the metastable _2s_\({}^{2}\)S\({}_{1/2}\) level, if undisturbed over a timescale of \(A_{2s\to 1s}^{-1}\approx 0.12\) s, return to the ground state by emitting a two-photon continuum. Here, \(A_{2s\to 1s}\) is the Einstein A-coefficient for the _2s-1s_ transition (Drake, 1986). Its value should be compared to those for allowed transitions (e.g., \(6.26\times 10^{8}\) s\({}^{-1}\) for Ly\(\alpha\) and\((1-5)\times 10^{7}\) s\({}^{-1}\) for H\(\alpha\), depending on the upper levels, _3s, 3p, 3d_, involved.)
The goal of this section is to compute the production of Ly\(\alpha\) photons, two-photon continuum and H\(\alpha\) resulting from electronic excitation of H atoms. We consider excitations to 15 \(n\ell\) levels; see Table 1 for term values and index scheme. We make the following assumptions. (1) The proton density in the plasma is less than the "_2s_ critical density" of \(1.5\times 10^{4}\) cm\({}^{-3}\) (see Chapter 14 of Draine, 2011). This ensures that atoms in the _2s_ level are not collisionally mixed to the _2p_ level over a timescale of \(A_{2s\to 1s}^{-1}\) and thus relax by emitting a two-photon continuum. (2) The cooling plasma is optically thick to Lyman lines (case B), so that Lyman photons are absorbed in the vicinity of where they are emitted.
\begin{table}
\begin{tabular}{r r r r} \hline \hline \multicolumn{1}{c}{\(i\)} & \multicolumn{1}{c}{level} & \multicolumn{1}{c}{\(L_{k}\,(\)cm\({}^{-1})\)} & \multicolumn{1}{c}{\(k\)} \\ \hline
1 & 1s & 0 & - \\
2 & 2s & 82303 & 1 \\
3 & 2p & " & 2 \\
4 & 3s & 97544 & 3 \\
5 & 3p & " & 4 \\
6 & 3d & " & 5 \\
7 & 4s & 102879 & 6 \\
8 & 4p & " & 7 \\
9 & 4d & " & 8 \\
10 & 4f & " & 9 \\
11 & 5s & 105348 & 10 \\
12 & 5p & " & 11 \\
13 & 5d & " & 12 \\
14 & 5f & " & 13 \\
15 & 5g & " & 14 \\ \hline \end{tabular} Note. – In constructing this table we follow the notation and term values of Anderson et al. (2000), where \(i\) is the index assigned to levels, and \(k\) is the index for upper levels excited in transitions from the ground state (1s). The energy for transition \(k\) is \(hcL_{k}\), where \(L_{k}\) is the wavenumber (in cm\({}^{-1}\)). The symbol " is equivalent to “ditto”. As can be gathered from the entries for \(L_{k}\), the small differences in energy due to fine structure effects are ignored.
\end{table}
Table 1: The indexing scheme and spectroscopic terms
Thus, when computing branching ratios, all allowed Lyman series recombinations can be ignored.
### Photon Yields
Consider, for example, an atom excited to one of the \(n=3\) levels. An atom excited to _3s_ or _3d_ will decay to _2p_ by emitting H\(\alpha\) followed by Ly\(\alpha\). (We ignore forbidden transitions such as _ns-1s_ two-photon decays; see Chluba & Sunyaev 2008.) An atom excited to _3p_ can decay by emitting Ly\(\beta\) or decay to _2s_ by emitting H\(\alpha\) followed by two-photon decay. For the latter, the branching fraction \({\cal B}_{\beta}\) for Ly\(\beta\) emission is \(A_{3p\to 1s}/(A_{3p\to 1s}+A_{3p\to 2s})\approx 88\%\). However, under case B, the Ly\(\beta\) photon will be absorbed elsewhere in the nebula, and the situation will be repeated until de-excitation ends with emission of H\(\alpha\)+Ly\(\alpha\).
For a fiducial value of optical depth (\(\tau_{0,\alpha}=1000\)) of Ly\(\alpha\), Table 2 lists the corresponding optical depths for the Lyman series. The branching ratio \({\cal B}_{\gamma}\) to emit a Ly\(\gamma\) line is slightly smaller than that for Ly\(\beta\). As with Ly\(\beta\) under case-B conditions, Ly\(\gamma\) will also be converted to some combination of Ly\(\alpha\), optical/IR recombination lines, and a two-photon continuum. The oscillator strength scales as \(f\propto n^{-3}\), where \(n\) is the principal quantum number of the excited state. Thus the Lyman-line optical depths decrease rapidly with increasing \(n\) (up the series). In contrast, the branching factors \({\cal B}\) decrease slowly with \(n\).
Each state other than _4s_ and _5s_ has two fine-structure levels. For example, the _4p_ state has two levels, \(P_{1/2}\) and \(P_{3/2}\), with very little energy difference between the fine structure levels. However, the electron collisional excitation rate coefficients presented below (SS3) refer to the sum of transitions to the entire level, e.g., _1s\(\rightarrow\)4p_. The excitation coefficient is divided in proportion to the number of levels of the excited state, \(g_{u}=2J+1\) where \(J\) is the total angular momentum of the excited state. The photon yields for Ly\(\alpha\), \(2\gamma\) continuum, and H\(\alpha\) are given in Table 3.
## 3 Electron Collisional Excitation
The excitation of lines of hydrogen due to collisions with electrons is a venerable topic in ISM studies. The classic review by Dalgarno & McCray (1972) summarizes the atomic physics of the 1960s. Scholz & Walters (1991) undertook detailed calculations of the \(n=1\to 2\) excitations and also provided an estimate for the cooling rate coefficient, \(\Lambda_{\rm HI}\). Anderson et al. (2002; see also Anderson et al. 2000) present close-coupling R-matrix calculations. We adopt these rates since they offer improved accuracy over previous studies (Scholz et al. 1990). The Anderson et al. (2002) theoretical cross sections were constructed with 15 physical energy states
\begin{table}
\begin{tabular}{l r r r r r} \hline \hline line & \(\lambda\) (Å) & \(f\) & \(\tau_{0}\) & \(n\ell\) & \({\cal B}\) \\ \hline Ly\(\alpha\) & 1215.67 & 0.4164 & 1000 & _2p_ & 1 \\ Ly\(\beta\) & 1025.73 & 0.07912 & 160 & _3p_ & 0.881 \\ Ly\(\gamma\) & 972.54 & 0.02901 & 56 & _4p_ & 0.839 \\ Ly\(\delta\) & 949.74 & 0.01394 & 26 & _5p_ & 0.819 \\ \hline \end{tabular} Note. – Columns 1–4 give the name, wavelength, absorption oscillator strength, and central optical depth of the line. The column density of the nebula is assumed to provide a line-center optical depth of \(\tau_{0}=1000\) for Ly\(\alpha\), from which \(\tau_{0}\) for other Lyman lines follow. \({\cal B}\) (column 6) is the branching ratio for an atom excited to an _np_ level (column 5) to relax by emitting the appropriate Lyman series line, as opposed to a multi-decay cascade.
\end{table}
Table 2: Lyman lines: Optical depths and scatterings
\begin{table}
\begin{tabular}{l r r r} \hline \hline \(k\) & \(p_{k}({\rm Ly}\alpha)\) & \(p_{k}({\rm H}\alpha)\) & \(p_{k}(2\gamma)\) \\ \hline
1 & 0 & 0 & 1 \\
2 & 1 & 0 & 0 \\
3 & 1 & 1 & 0 \\
4 & 0 & 1 & 1 \\
5 & 1 & 1 & 0 \\
6 & 0.585 & 0.415 & 0.415 \\
7 & 0.261 & 0.261 & 0.739 \\
8 & 0.813 & 0.187 & 0.187 \\
9 & 1 & 1 & 0 \\
10 & 0.513 & 0.378 & 0.487 \\
11 & 0.305 & 0.265 & 0.695 \\
12 & 0.687 & 0.267 & 0.313 \\
13 & 0.936 & 0.702 & 0.064 \\
14 & 1 & 1 & 0 \\ \hline \end{tabular} Note. – Photon production yields \(p_{k}\) upon excitation to level with index “\(k\)”, under case B conditions; see Table 1 for the definition of \(k\). For instance, an H atom excited to _3s_ (\(k=3\)) relaxes by emitting one H\(\alpha\) photon and one Ly\(\alpha\) photon.
\end{table}
Table 3: Photon yields for Ly\(\alpha\), H\(\alpha\), and \(2\gamma\) continuum
up to \(n=5\) (_1s_ to _5g_) supplemented by 24 pseudo-states described by orbitals (\(\overline{n},\overline{\ell}\)) with \(\overline{n}=6-9\) and \(\overline{\ell}=0-5\).
Anderson et al. (2002) present collision strengths, \(\overline{\Omega}_{ij}\), for excitation from levels \(i\) to \(j\), averaged over a Maxwellian velocity distribution at electron temperatures, \(E_{T}\equiv k_{B}T\), ranging from 0.5-25 eV. The collisional excitation rate coefficients (in cm\({}^{3}\) s\({}^{-1}\)) are then given by:
\[q_{i\to j} = \frac{2\sqrt{\pi}\ \alpha a_{0}^{2}}{g_{i}}\sqrt{\frac{I_{\rm H}}{k _{B}T}}\,\overline{\Omega}_{ij}(T)\,\exp(-E_{ij}/kT) \tag{1}\] \[= \frac{8.629\times 10^{-6}}{g_{i}}\frac{\overline{\Omega}_{ij}}{ \sqrt{T}}\exp(-E_{ij}/k_{B}T)\ \,\]
where \(a_{0}\) is the Bohr radius, \(\alpha\) is the fine structure constant, \(g_{i}\) is the degeneracy of level \(i\), and \(E_{ij}\) is the energy difference between level \(i\) and \(j\). Since we are only interested in excitations from the ground state, we assume \(g_{i}=2\) for _1s_ (\({}^{2}\)S\({}_{1/2}\)).
Given our focus on warm plasma, we limit the model fits to 1 eV \(\leq E_{T}\leq 15\) eV. After some experimentation, we found that a second-order (quadratic) polynomial provides an adequate fit3:
Footnote 3: A first-order fit would have been sufficient for excitations to all states but 1s-np and 1s-nd. For simplicity, we elected to use the same number of coefficients for all transitions.
\[\overline{\Omega}_{ij}(T)=a_{0}+a_{1}x+a_{2}x^{2}\;, \tag{2}\]
where \(x=\ln(T/10^{6}\,{\rm K})\). The fit is precise to about 1% for all levels except 5\(\ell\) levels, for which the fitting errors approach 5% (see Figure 1). The fitting coefficients can be found in Table 4.
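As a concrete illustration of the bookkeeping (not code from the paper), the quadratic fit of Equation 2 and a row of Table 4 can be combined with Equation 1 to evaluate an excitation rate coefficient. The sketch below uses the 1s-2p coefficients from Table 4 and the \(n=2\) wavenumber from Table 1; the function names are ours.

```python
import numpy as np

# (a0, a1, a2) for the 1s-2p transition from Table 4; x = ln(T / 1e6 K) as in Eq. 2.
A_1S_2P = (5.4261, 2.2029, 0.2481)

def omega_bar(T, coeffs):
    """Maxwellian-averaged collision strength from the quadratic fit (Eq. 2)."""
    x = np.log(T / 1.0e6)
    a0, a1, a2 = coeffs
    return a0 + a1 * x + a2 * x**2

def q_exc(T, coeffs, wavenumber_cm, g_i=2):
    """Excitation rate coefficient (cm^3 s^-1) from Eq. 1; E_ij = h c L_k."""
    E_over_kT = 1.4388 * wavenumber_cm / T     # hc/k_B = 1.4388 cm K
    return 8.629e-6 / g_i * omega_bar(T, coeffs) / np.sqrt(T) * np.exp(-E_over_kT)

# 1s -> 2p (L_k = 82303 cm^-1, Table 1) at T = 2e4 K
print(q_exc(2.0e4, A_1S_2P, 82303.0))
```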
The line cooling rate per unit volume is given by \(n_{e}n_{\rm HI}\Lambda_{\rm HI}\) where \(n_{e}=n_{p}\) is the electron (and proton) density and \(n_{\rm HI}\) is the density of H atoms. The total particle density is \(n_{t}=n_{\rm HI}+n_{e}+n_{p}=n_{\rm H}(1+x)\) with \(n_{\rm H}=n_{p}+n_{\rm HI}\) and \(x=n_{e}/n_{\rm H}\).
\begin{table}
\begin{tabular}{c c r r r} \hline \hline \(k\) & trans & \(a_{0}\) & \(a_{1}\) & \(a_{2}\) \\ \hline
1 & 1s-2s & 0.5532 & 0.1044 & 0.0105 \\
2 & 1s-2p & 5.4261 & 2.2029 & 0.2481 \\
3 & 1s-3s & 0.1121 & 0.0131 & 0.0008 \\
4 & 1s-3p & 0.9355 & 0.3518 & 0.0382 \\
5 & 1s-3d & 0.1957 & 0.0517 & 0.0050 \\
6 & 1s-4s & 0.0390 & \(-0.0005\) & \(-0.0008\) \\
7 & 1s-4p & 0.3224 & 0.1124 & 0.0114 \\
8 & 1s-4d & 0.0944 & 0.0213 & 0.0016 \\
9 & 1s-4f & 0.0117 & 0.0011 & 0.0002 \\
10 & 1s-5s & 0.0175 & \(-0.0019\) & \(-0.0004\) \\
11 & 1s-5p & 0.1464 & 0.0501 & 0.0055 \\
12 & 1s-5d & 0.0471 & 0.0094 & 0.0008 \\
13 & 1s-5f & 0.0108 & 0.0003 & \(-0.0000\) \\
14 & 1s-5g & 0.0005 & \(-0.0004\) & 0.0001 \\ \hline \end{tabular}
\end{table}
Table 4: Polynomial fits to collision strengths
Figure 1: The electron-impact collision strengths, \(\overline{\Omega}_{ij}\), for _1s_ to \((n,\ell)\) excitations of hydrogen as a function of temperature for \(n=2,3,4,5\). Open circles are model data from Anderson et al. (2002), and the lines are second-order polynomial fits (see Equation 2 and Table 4).
We used the fitting model to compute the run of collisional rate coefficients, \(q_{i\to j}\), with temperature (Figure 2). With the cooling coefficients in hand, we computed the sum of the luminosity radiated in lines up to \(n=5\). We consider this sum to be an adequate representation of \(\Lambda_{\rm HI}(T)\) for warm hydrogen. The cooling rate coefficient is
\[\Lambda_{\rm HI}(T)=\sum_{k=1}^{14}q_{k}(T)E_{k}\;, \tag{3}\]
where the energy of transition with index \(k\) is \(E_{k}=hcL_{k}\); see Table 1 for definition of \(k\) and the adopted values for the wavenumbers, \(L_{k}\) (in cm\({}^{-1}\)). Separately, in SSA, we compare this cooling coefficient to previously published coefficients (Spitzer, 1978; Scholz & Walters, 1991; Dere et al., 1997).
The coefficient for energy loss through line \(X\) (where, for instance, \(X\) denotes Ly\(\alpha\), H\(\alpha\), \(2\gamma\)) is given by
\[\Lambda_{X}(T)=\sum_{k=1}^{14}q_{k}(T)p_{k}(X)E_{X}\;,\]
where \(E_{X}\) is line energy and \(p_{k}(X)\) is given in Table 3.
### Simple Fits to Line cooling and Collision rates
The collisional excitation rate coefficient is the sum over all hydrogen levels,
\[Q(T)=\sum_{k=1}^{14}q_{k}(T)\;.\]
Both \(Q(T)\) and the hydrogen cooling rate (from excitation to \(n=2\)) scale with temperature as \(\exp(-T_{12}/T)\), where \(k_{B}T_{12}=3I_{\rm H}/4\) is the energy difference between the \(n=1\) and \(n=2\) levels. We fit the collision rate and \(\Lambda_{\rm HI}\) over two temperature ranges: "hot" (\(10^{4}\) K \(<T<1.5\times 10^{5}\) K) and "warm" (\(10^{4}\) K \(<T<1.5\times 10^{4}\) K),
\[Q_{\rm HI}(T)=A\exp(-T_{12}/T)\sum_{i=0}^{n}a_{i}z^{i}\;, \tag{4}\]
where \(z=\log T_{4}\) with \(T_{4}=(T/10^{4}\) K). A similar expression was derived for \(\Lambda_{\rm HI}(T)\). The fitting parameters for \(Q_{\rm HI}\) and \(\Lambda_{\rm HI}\) are given in Table 5, and the quality of the fit is displayed in Figure 3.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Quantity & \(A\) & \(a_{0}\) & \(a_{1}\) & \(a_{2}\) & \(a_{3}\) \\ \hline \(\Lambda_{\rm HI}\):hot & \(6.0\times 10^{-19}\) & 1.018 & \(-\)0.771 & 1.537 & \(-\)0.716 \\ \(\Lambda_{\rm HI}\):warm & \(6.0\times 10^{-19}\) & 1.032 & \(-\)1.138 & 3.376 & \\ \hline \(Q_{\rm HI}\):hot & \(1.0\times 10^{-7}\) & 0.371 & \(-\)0.304 & 0.560 & \(-\)0.255 \\ \(Q_{\rm HI}\):warm & \(1.0\times 10^{-7}\) & 0.376 & \(-\)0.433 & 1.220 & \\ \hline \end{tabular} Note. – “Quantity” refers to the cooling coefficient (\(\Lambda_{\rm HI}\) in erg cm\({}^{3}\) s\({}^{-1}\)) or collisional excitation rate coefficient (\(Q_{\rm HI}\) in cm\({}^{3}\) s\({}^{-1}\)). These quantities are fitted to the model displayed in Equation 4 over two temperature ranges: “hot” (\(10^{4}\) K \(<T<1.5\times 10^{5}\) K) and “warm” (\(10^{4}\) K \(<T<1.5\times 10^{4}\) K).
\end{table}
Table 5: Fits to Cooling and Collisional Coefficients
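For readers who want to evaluate the Table 5 fits directly, the following minimal Python sketch implements Equation 4 with the "hot" coefficients; it assumes \(T_{12}=3I_{\rm H}/4k_{B}\approx 1.18\times 10^{5}\) K and is our own illustration rather than code from the paper.

```python
import numpy as np

T12 = 0.75 * 13.598 / 8.617e-5      # 3 I_H / (4 k_B), ~1.18e5 K

# (A, [a_0, a_1, ...]) from Table 5, "hot" fits (1e4 K < T < 1.5e5 K)
LAMBDA_HOT = (6.0e-19, [1.018, -0.771, 1.537, -0.716])
Q_HOT      = (1.0e-7,  [0.371, -0.304, 0.560, -0.255])

def table5_fit(T, fit):
    """Evaluate Eq. 4: A exp(-T12/T) * sum_i a_i z^i with z = log10(T/1e4 K)."""
    A, coeffs = fit
    z = np.log10(T / 1.0e4)
    return A * np.exp(-T12 / T) * sum(a * z**i for i, a in enumerate(coeffs))

T = 3.0e4
print(table5_fit(T, LAMBDA_HOT))   # Lambda_HI in erg cm^3 s^-1
print(table5_fit(T, Q_HOT))        # Q_HI in cm^3 s^-1
```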
Figure 2: Electron collisional excitation rate coefficients, \(q_{ij}\), for _1s\(\rightarrow(n,\ell)\)_ transitions of hydrogen derived from collision strengths provided by Anderson et al. (2002). The curves are coded by color (\(n=2,3,4,5\) as labeled) and by line type (dash-dash 1s-_n_s, continuous for _1s-np_, dash-dot for _1s-nd_, dotted for _1s-nf_, and back to dash-dash for _1s-ng_.)
In Figure 4 we plot the line production efficiency4. Consistent with the collisional coefficients displayed in Figure 2, we see that Ly\(\alpha\) has the highest efficiency, approximately 2/3, followed by two-photon emission at about 1/3. H\(\alpha\) is quite weak, even when measured in photons emitted: H\(\alpha\) emission requires excitation to the \(n=3\) level, whereas two-photon and Ly\(\alpha\) emission are obtained by excitation to \(n=2\) (and cascade from higher states). However, H\(\alpha\) has a major advantage: it can be observed with existing ground-based observatories. For this reason, we provide a fitting formula for \(f_{\rm H\alpha}\), the fraction of collisions that yield an H\(\alpha\) photon,
Footnote 4: The fraction of photon (e.g. H\(\alpha\), Ly\(\alpha\)) emitted per collision. Each two-photon emission is regarded as one event.
\[f_{\rm H\alpha}=\sum_{k=0}^{2}a_{k}z^{k}\;, \tag{5}\]
where \(z=\log T_{4}\), as before. The model fit for H\(\alpha\) is shown in Figure 5, and the values for the model coefficients are given in Table 6, as well as those for Ly\(\alpha\) and two-photon emission.
## 4 Electron collisional ionization
The collisional ionization rate coefficient is derived as the integral,
\[k_{ci}(T)=\int_{I_{\rm H}}^{\infty}\sigma_{ci}(E)v\,f(E)\,dE\;,\]
where \(f(E)\) is the Maxwellian energy distribution, \(I_{\rm H}=13.598\) eV is the ionization energy of hydrogen, and \(\sigma_{ci}(E)\) is the collisional ionization cross section as a function of electron energy in the center-of-mass frame, \(E=\sfrac{1}{2}\mu v^{2}\) with \(\mu=m_{e}m_{\rm H}/(m_{e}+m_{\rm H})\approx m_{e}\). Here, \(m_{e}\) and \(m_{\rm H}\) are the mass of the electron and hydrogen atom, respectively. The collisional ionization rate coefficient can be sensibly written as
\[k_{ci}(T)=A(T)\exp(-I_{\rm H}/k_{B}T)\;. \tag{6}\]
Black (1981) provided an approximate form for \(k_{ci}(T)\), based on ionization cross sections tabulated by Lotz (1967),
\[k_{ci}(T)=5.85\times 10^{-11}T^{\sfrac{1}{2}}\exp(-I_{\rm H}/k_{B}T)\,{\rm cm}^{3}\,{\rm s}^{-1}\;. \tag{7}\]
This expression is consistent with the approximation, \(\sigma_{ci}\propto(1-I_{\rm H}/E)\), quoted in Draine (2011), but only valid at low collision energies (\(I_{\rm H}\leq E\leq 3I_{\rm H}\)). As shown by Lotz (1967), the high-energy behavior is \(\sigma_{ci}\propto\ln E/E\). Scholz & Walters (1991) provided a better approximation (for \(10^{4}\) K to \(2\times 10^{5}\) K) using a sixth-order polynomial,
\[A(T)=\exp\Big{(}\sum_{i=0}^{6}a_{i}y^{i}\Big{)}\,\,{\rm cm}^{3}\,{\rm s}^{-1} \tag{8}\]
where \(y=\ln T\). A comparison (Figure 6) between Equation 7 (Black 1981 fit) and the more accurate Scholz &
\begin{table}
\begin{tabular}{l l l l} \hline \hline \multicolumn{1}{c}{ phot} & \multicolumn{1}{c}{\(a_{0}\)} & \multicolumn{1}{c}{\(a_{1}\)} & \multicolumn{1}{c}{\(a_{2}\)} \\ \hline H\(\alpha\) & 0.031 & 0.302 & \(-\)0.149 \\ \(2\gamma\) & 0.377 & \(-\)0.095 & \\ Ly\(\alpha\) & 0.623 & 0.095 & \\ \hline \end{tabular} Note. – Electron collisions with hydrogen atoms produce Ly\(\alpha\), two-photon continuum (\(2\gamma\)), H\(\alpha\), and other lines. For each of these categories, the efficiency of photon production, \(f\), depends on temperature and is fitted to a model given by Equation 5. The model fits are accurate to 2%.
\end{table}
Table 6: Efficiency of photon per collision
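A minimal sketch evaluating Equation 5 with the Table 6 coefficients (our own illustration; the dictionary keys are arbitrary labels):

```python
import math

# Eq. 5 with the Table 6 coefficients; z = log10(T / 1e4 K).
COEFFS = {
    "Halpha":  [0.031, 0.302, -0.149],
    "2photon": [0.377, -0.095],
    "Lyalpha": [0.623, 0.095],
}

def photon_fraction(T, species):
    """Per-collision photon production fraction for the given channel."""
    z = math.log10(T / 1.0e4)
    return sum(a * z**k for k, a in enumerate(COEFFS[species]))

T = 3.0e4
print({s: round(photon_fraction(T, s), 3) for s in COEFFS})
```

At this temperature the Ly\(\alpha\) and two-photon fractions sum to unity, as they must from Table 3, while H\(\alpha\) remains comparatively small.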
Figure 3: (Top Panel.) Left axis shows the line cooling coefficient, \(\Lambda_{\rm HI}(T)\), for hydrogen (black line), where the total cooling rate is \(n_{e}n_{\rm HI}\Lambda_{\rm HI}(T)\). Right axis shows the percent residuals (red-dashed lines) in the form of \([1-(\Lambda_{\rm HI}/{\rm fit})]\). The model form is given by Equation 4, and model parameters are given in Table 5. (Bottom Panel.) The same, but for \(Q\), the collisional coefficient. The rate of collisions per unit volume is \(n_{e}n_{\rm HI}Q\).
Walters fit (Equation 8) shows that the former breaks down at high temperatures. We offer a modified formula with the correct asymptotic behavior (\(k_{B}T>I_{\rm H}\)) as used in the shock models of Shull and McKee (1979),
\[k_{ci}(T)=\frac{5.85\times 10^{-11}T^{\sfrac{1}{2}}}{\left[1+0.1k_{B}T/I_{\rm H} \right]}\exp\Big{(}-\frac{I_{\rm H}}{k_{B}T}\Big{)}\,{\rm cm}^{3}\,{\rm s}^{-1}\;. \tag{9}\]
As can be seen from Figure 6, this modified formula provides a good fit at both low and high temperatures. For quick estimates we use Equation 9, but the polynomial formulation of Scholz and Walters (1991) is preferred when precision is needed (e.g., in numerical integration of differential equations). The resulting ionization power loss per unit volume is \(n_{e}n_{\rm HI}\Lambda_{ci}\) where \(\Lambda_{ci}=k_{ci}I_{\rm H}\) is the collisional ionization energy loss coefficient.
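Equation 9 is simple enough to drop into a script; a minimal sketch (ours, not the paper's code) is:

```python
import numpy as np

I_H_EV = 13.598     # ionization energy of hydrogen (eV)
K_B_EV = 8.617e-5   # Boltzmann constant (eV/K)

def k_ci(T):
    """Modified fit of Eq. 9 for the collisional ionization rate (cm^3 s^-1)."""
    kT = K_B_EV * T
    return 5.85e-11 * np.sqrt(T) / (1.0 + 0.1 * kT / I_H_EV) * np.exp(-I_H_EV / kT)

print(k_ci(1.0e5))  # a few times 1e-9 cm^3 s^-1 at 1e5 K
```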
## 5 The Hydrogen Cooling Curve
The goal in this section is to construct a cooling curve for "warm" (\(T\lesssim 10^{5}\,{\rm K}\)) hydrogen plasma. In SS3 we formulated the cooling coefficient due to line cooling, while in SS4 we presented the same for ionization losses. Here, we summarize the cooling coefficients for radiative recombination (free-bound) and free-free losses. Armed thus, we formulate the cooling curve for hydrogen in the temperature range \(10^{4}\,{\rm K}\) to \(10^{5}\,{\rm K}\).
The kinetic energy of recombining electrons is a loss to the thermal pool. The model fits for \(\alpha_{k}\), where \(k=1,A,B\) (corresponding to recombinations to the \(1s\) level, case A, and case B), can be found in Table 7 (SSB). The radiative recombination power loss per unit volume is \(n_{e}n_{p}\Lambda_{\rm rr}\), where the recombination coefficient \(\alpha\) is either case A or case B, as appropriate. Here, the radiative recombination energy rate coefficient is \(\Lambda_{\rm rr}=\alpha\langle E_{\rm rr}\rangle\), where \(\langle E_{\rm rr}\rangle\) is the mean thermal energy lost by an electron upon recombination. Following Draine (2011) we let \(\langle E_{\rm rr}\rangle=f_{\rm rr}k_{B}T\).
The free-free emission rate per unit volume is \(n_{e}n_{p}\Lambda_{\rm ff}\), where \(\Lambda_{\rm ff}\) is the free-free emissivity. The free-free power per electron is \(n_{p}\Lambda_{\rm ff}\). The mean time for an electron to
Figure 4: (_Left_) Case B photon production per collision as a function of temperature, \(T\) for Ly\(\alpha\), H\(\alpha\) and two-photon continuum. (_Right_) The ratio of yields of two-photon decays to that of H\(\alpha\) photons.
Figure 5: (Left axis) The fraction, \(f_{\rm H\alpha}\), of electron collisions with an H i atom that result in emission of an H\(\alpha\) photon as a function of temperature. A polynomial model for \(f_{\rm H\alpha}\) is described in Equation 5. The ratio of this model to the calculations is displayed by the dashed red line.
Figure 6: (Left axis): The run of collisional ionization coefficient (black line) of Scholz and Walters (1991). (Right axis): In red, we plot the ratio of the ionization rate coefficients of Black (1981) from Equation 7 and our handy formula (Equation 9) to that of Scholz and Walters (1991) labeled \(k_{ci}^{S}\).
recombine is \((n_{p}\alpha)^{-1}\). Thus, the free-free energy lost up to the point of recombination is \(\Lambda_{\rm ff}/\alpha\), which we equate to \(f_{\rm ff}k_{B}T\). The combined recombination and free-free cooling rate coefficient is then
\[\Lambda_{\rm rf}=\Lambda_{\rm rr}+\Lambda_{\rm ff}=\alpha f_{\rm rf}(T)k_{B}T. \tag{10}\]
where \(f_{\rm rf}=f_{\rm rr}+f_{\rm ff}\). The run of \(f_{\rm rf}(T)\) with temperature is displayed in Figure 15 (SSB), and the model fits are presented in Table 7 (SSB).
We now have all the elements to formulate the cooling rate per unit volume, \({\cal C}(T)\), expressed as a negative value (for energy losses):
\[{\cal C}(T)=-n_{e}n_{\rm HI}\big{[}\Lambda_{\rm HI}(T)+k_{ci}(T)I_{\rm H} \big{]}-n_{e}n_{p}\Lambda_{\rm rf}. \tag{11}\]
The three RHS terms are given by Equations 3, 6, and 10, respectively.
## 6 Low-velocity shocks: a simple cooling model
The investigation of time-dependent cooling of gas heated to \(T\approx 10^{5}\,\)K is a classic endeavor, constituting the Ph. D. thesis topics of Michael Jura (Jura and Dalgarno, 1972) and Minas Kafatos (Kafatos, 1973). The motivation in the 1970s seems to have been ambient gas heated by an FUV shock-breakout pulse. Separately, Draine and Salpeter (1978) investigated the production of Ly\(\alpha\) from SN shocks, and Shull and Silk (1979) computed UV emission from SNRs in primeval galaxies.
In unmagnetized plasma, the post-shock temperature of an adiabatic shock is given by
\[T_{s}=\frac{2(\gamma-1)}{(\gamma+1)^{2}}\frac{\mu v_{s}^{2}}{k_{B}}=(1.12\times 10^{5}\ {\rm K})\frac{\mu}{m_{\rm H}}\Big(\frac{v_{s}}{70\,{\rm km\,s^{-1}}}\Big)^{2}\ . \tag{12}\]
Here, \(\mu\) is the mean mass per particle and \(\gamma\) is the ratio of specific heats at constant pressure and constant volume; \(\gamma=5/3\) for mono-atomic gas. If \(y\) is the number density of helium relative to that of hydrogen, the mean molecular mass for H\({}^{0}\) and He\({}^{0}\) is \(\mu=[(1+4y)/(1+y)]m_{\rm H}=1.23m_{\rm H}\) for \(y=0.0819\)(Planck Collaboration et al., 2020). For H\({}^{+}\) and He\({}^{0}\), \(\mu=0.64m_{\rm H}\). For H\({}^{+}\) and He\({}^{+}\), \(\mu=0.61m_{\rm H}\), and for H\({}^{+}\) and He\({}^{+2}\), \(\mu=0.59m_{\rm H}\).
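For quick estimates, Equation 12 can be evaluated as below (a sketch with our own function name; \(\mu/m_{\rm H}\) is passed in explicitly so any of the ionization states quoted above can be used):

```python
def post_shock_temperature(v_s_kms, mu_over_mH, gamma=5.0/3.0):
    """Adiabatic post-shock temperature (K) from Eq. 12; v_s in km/s."""
    m_H = 1.6735e-24   # g
    k_B = 1.3807e-16   # erg/K
    v = v_s_kms * 1.0e5
    return 2.0 * (gamma - 1.0) / (gamma + 1.0)**2 * mu_over_mH * m_H * v**2 / k_B

print(post_shock_temperature(70.0, 1.0))    # ~1.1e5 K for mu = m_H
print(post_shock_temperature(70.0, 1.23))   # neutral H + He mixture
```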
Three timescales come into play for post-shocked gas: \(\tau_{r}\), the recombination timescale; \(\tau_{ci}\), the collisional ionization timescale; and \(\tau_{c}\), the cooling timescale. For gas around \(10^{5}\,\)K, we have \(\tau_{ci}\ll\tau_{r}\). With this inequality, the cooling gas does not obey the conditions for collisional ionization equilibrium. Thus, it is often essential to undertake a full time-dependent calculation.
### Electron-Proton Equilibration
At the collisionless shock front, the electrons and protons receive similar amounts of random motion. Being more massive, the protons acquire more energy and are initially much hotter than the electrons. The equilibration timescale for electrons to be heated up to the temperature of the protons via electron-proton encounters is approximately
\[t_{\rm loss}=14\ \Big{(}\frac{T}{10^{5}\,{\rm K}}\Big{)}^{3/2}\Big{(}\frac{{ \rm cm}^{-3}}{n_{e}}\Big{)}\Big{(}\frac{25}{{\rm ln}\Lambda}\Big{)}\ {\rm yr}\,\]
where \({\rm ln}\Lambda\) is the Coulomb logarithmic factor accounting for distant encounters (Spitzer, 1978; Chapter 2). The current view (Laming et al., 1996; Ghavamian et al., 2007) is that plasma instabilities and electromagnetic waves drive electron-proton equilibration faster than two-body interactions. We assume that equipartition occurs before collisional ionization sets in (SS6.2).
### Collisional Ionization
The rate equation for the number density of electrons is
\[\frac{dn_{e}}{dt}=n_{e}n_{\rm HI}k_{ci}(T)-n_{e}n_{p}\alpha(T)\, \tag{13}\]
where \(\alpha=\alpha_{A},\alpha_{B}\) as needed. Given our assumption of hydrogen plasma, the number density of protons is \(n_{p}=n_{e}\). As noted in SSC, a solution to this equation at constant \(T\) is
\[x(t)^{-1}=x_{0}^{-1}\exp(-t/\tau_{ci})+x_{eq}^{-1}\Big{[}1-\exp(-t/\tau_{ci}) \Big{]}\.\]
Here, \(x_{0}=x(t=0)\), \(\tau_{r}\equiv(\alpha n_{\rm H})^{-1}\) and \(\tau_{ci}\equiv(k_{ci}n_{\rm H})^{-1}\) are the characteristic time scales for recombination and collisional ionization, respectively, and
\[x_{eq}(T)=\frac{k_{ci}(T)}{\alpha+k_{ci}(T)}=\frac{\tau_{r}}{\tau_{r}+\tau_{ci} }. \tag{14}\]
The lowest probable value for the ionization fraction in a realistic diffuse atomic medium, before heating commences, is \(x_{0}\approx 2\times 10^{-4}\). In this case the electrons come from stellar FUV photoionization of C, S, Mg, Si, Fe and other trace metals. The timescale for electron ionization to reach fraction \(x\) is \(\tau_{ci}\ln(x/x_{0})\).
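To make the timescale comparison concrete, the sketch below evaluates \(x_{eq}\) (Equation 14) and \(\tau_{ci}\) using Equation 9 for \(k_{ci}\). Because the appendix fit for the recombination coefficient (Table 7) is not reproduced here, a generic power-law stand-in for the case-B coefficient is used and clearly labeled; swap in the paper's fit for quantitative work.

```python
import numpy as np

I_H_EV, K_B_EV = 13.598, 8.617e-5      # eV, eV/K

def k_ci(T):
    """Collisional ionization rate coefficient, Eq. 9 (cm^3 s^-1)."""
    kT = K_B_EV * T
    return 5.85e-11 * np.sqrt(T) / (1.0 + 0.1 * kT / I_H_EV) * np.exp(-I_H_EV / kT)

def alpha_B(T):
    """Generic power-law stand-in for the case-B coefficient (cm^3 s^-1);
    replace with the paper's Table 7 fit for quantitative work."""
    return 2.54e-13 * (T / 1.0e4)**-0.82

def x_eq(T):
    """Equilibrium ionization fraction, Eq. 14."""
    return k_ci(T) / (alpha_B(T) + k_ci(T))

def tau_ci_yr(T, n_H):
    """Collisional ionization timescale (yr) for hydrogen density n_H (cm^-3)."""
    return 1.0 / (k_ci(T) * n_H * 3.156e7)

print(x_eq(1.5e4))            # ~0.5 near 15,000 K (cf. Figure 7)
print(tau_ci_yr(1.0e5, 1.0))  # ~10 yr at 1e5 K for n_H = 1 cm^-3
```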
### Recombination
As can be seen from Figure 7, collisional ionization is a strong function of temperature. At late times, when the plasma has cooled, collisional ionization can be ignored and Equation 13 simplifies to \(dx/dt=-x^{2}/\tau_{r}\), with the solution
\[\frac{1}{x}-\frac{1}{x_{0}}=\frac{t}{\tau_{r}}.\]
The ionization fraction decreases from \(x_{0}\) to \(x_{0}/m\) on a timescale of \(t=(m-1)\tau_{r}/x_{0}\). The run of recombination time scale as a function of temperature is shown in Figure 7.
### Basic Cooling and Recombining Framework
The path in the phase diagram of density, ionization fraction, and temperature along which the gas cools depends on the circumstances. For planar radiative shocks, the pressure behind the shock is \(P_{0}+(3/4)\rho_{0}v_{0}^{2}\), rising to \(P_{0}+\rho_{0}v_{0}^{2}\) downstream when \(\rho\gg\rho_{0}\). Here, the pre-shock parameters have subscript 0. Thus, radiative shocks are good examples of cooling at nearly constant pressure ("isobaric"). As the gas cools, its density rises to maintain the pressure. A second possibility is cooling at constant density ("isochoric"). The decrease in temperature, following cooling, leads to lower pressure. Pressure changes are conveyed at the speed of sound, \(c_{s}\). The time scale for adiabatic sound waves to cross a nebula of length \(L\) is \(\tau_{a}=L/c_{s}\). Isochoric cooling will take place when the cooling time is short, \(\tau_{c}\ll\tau_{a}\).
The first law of thermodynamics states that the gain in the internal energy (\(U\)) of the system equals the heat added minus the work done by the gas: \(dU=dQ-PdV\). For a mono-atomic gas, the internal energy of the nebula per unit volume is \(U=(3/2)nk_{B}T\), while the pressure is given by the ideal gas law, \(P=nk_{B}T\). Here, \(N=nV\) is the total number of particles in the nebula, whose volume is \(V\). Let \(N_{\rm H}=n_{\rm H}V\) be the total number of hydrogen nuclei. Ionization can produce changes in \(N\), whereas \(N_{\rm H}\) is fixed.
The three physical parameters governing the cooling hydrogen plasma are \(n_{e}\), \(T\), and \(n_{\rm H}\). We have two differential equations, one for ionization balance (\(n_{e}\); Equation 13) and one for energy loss (\(T\); discussed below). A third differential equation follows from the assumed framework: \(dP/dt=0\) (isobaric cooling) or \(dV/dt=0\) (isochoric cooling).
For an isochoric system, no work is done by or on the nebula. Adopting the case B framework, the energy balance equation becomes
\[q\frac{d}{dt}(nk_{B}T)=-n_{e}n_{\rm H}\big{[}\Lambda_{\rm HI}+k_{ci}I_{\rm H} \big{]}-n_{e}^{2}\alpha_{B}f_{\rm rf}k_{B}T\;,\]
where \(q=3/2\) and the RHS gives the total cooling rate. Note that \(n_{\rm H}\) remains constant, whereas \(n=n_{\rm H}+n_{e}\) varies as the ionization fraction changes. The ionization-recombination equation (Equation 13) can be restated
\[\frac{dn}{dt}=n_{e}n_{\rm HI}k_{ci}-n_{e}^{2}\alpha_{B}.\]
We combine the above two equations to obtain
\[\begin{split} qnk_{B}\frac{dT}{dt}=-n_{e}n_{\rm HI}\big{[}& \Lambda_{\rm HI}+k_{ci}I_{\rm H}+k_{ci}qk_{B}T\big{]}\\ &-n_{e}^{2}\alpha_{B}\big{(}f_{\rm rf}-q\big{)}k_{B}T\;,\end{split}\]
which we deliberately recast as
\[\begin{split} qnk_{B}\frac{dT}{dt}=-n_{e}n_{\rm HI}\big{[}& \Lambda_{\rm HI}+k_{ci}I_{\rm H}\big{]}-n_{e}^{2}\alpha_{B}f_{\rm rf }k_{B}T\\ +qk_{B}T\big{[}& n_{e}^{2}\alpha_{B}-n_{e}n_{\rm HI}k _{ci}\big{]}\;.\end{split} \tag{15}\]
In this formulation, the meaning of Equation 15 is clear. The LHS arises from the loss of internal energy. The first term on the RHS represents energy loss from H i collisional line excitation (\(\Lambda_{\rm HI}\)) and collisional ionization (loss of \(I_{\rm H}\) per collision). The loss of kinetic energy per recombination (including the free-free radiation up until the recombination event) is given by the second term. The final term accounts for losses/gains to the thermal pool of electrons during recombination and ionization. For plasma in collisional ionization equilibrium this term vanishes, as expected.
For isobaric cooling, the pressure, \(P=(n_{\rm H}+n_{e})k_{B}T\), is fixed. In this case, we compute \(n_{e}\) and \(T\) and then deduce \(n_{\rm H}\) through the pressure equation. As the nebula cools, the ambient gas, in order to maintain the pressure \(P_{a}\), does work on the nebula by compressing it. The work done by the medium on the nebula is \(PdV\). However, since \(P\) is constant, \(d(PV)=P_{a}dV\). The relevant energy content of the nebula is then the enthalpy, \(HV\), where \(H=U+P_{a}\). Going forward, we drop the subscript on \(P\). It is this store of enthalpy that powers the nebular cooling, \(\mathcal{C}V\). Since \((U+P)V=(5/2)Nk_{B}T\), we see that Equation 15 still applies but with \(q=5/2\).
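To show how Equations 13 and 15 can be integrated in practice, here is a minimal isochoric sketch (\(q=3/2\)). It reuses the fits from the previous sections for \(\Lambda_{\rm HI}\) and \(k_{ci}\), but substitutes a generic power-law for the case-B recombination coefficient and a constant placeholder for \(f_{\rm rf}(T)\), since the appendix fits are not reproduced here; solver choice, tolerances, and variable names are ours.

```python
import numpy as np
from scipy.integrate import solve_ivp

K_B  = 1.3807e-16                  # erg/K
I_H  = 13.598 * 1.602e-12          # erg
EV_K = 8.617e-5                    # Boltzmann constant in eV/K
T12  = 0.75 * 13.598 / EV_K        # 3 I_H / (4 k_B), ~1.18e5 K

def lam_HI(T):
    """Line-cooling coefficient, Eq. 4 'hot' fit (erg cm^3 s^-1)."""
    z = np.log10(T / 1.0e4)
    return 6.0e-19 * np.exp(-T12 / T) * (1.018 - 0.771*z + 1.537*z**2 - 0.716*z**3)

def k_ci(T):
    """Collisional ionization rate coefficient, Eq. 9 (cm^3 s^-1)."""
    kT = EV_K * T
    return 5.85e-11 * np.sqrt(T) / (1.0 + 0.1*kT/13.598) * np.exp(-13.598/kT)

def alpha_B(T):
    """Placeholder case-B recombination coefficient; NOT the paper's Table 7 fit."""
    return 2.54e-13 * (T / 1.0e4)**-0.82

F_RF = 1.0   # placeholder for f_rf(T); see the appendix fits

def rhs(t, y, n_H, q=1.5):
    """Isochoric version of Eqs. 13 and 15; y = [x, T]."""
    x, T = y
    x = min(max(x, 1.0e-8), 1.0)
    n_e, n_HI = n_H * x, n_H * (1.0 - x)
    dxdt = x * (n_HI * k_ci(T) - n_e * alpha_B(T))
    cool = (n_e * n_HI * (lam_HI(T) + k_ci(T) * I_H)
            + n_e**2 * alpha_B(T) * F_RF * K_B * T
            - q * K_B * T * (n_e**2 * alpha_B(T) - n_e * n_HI * k_ci(T)))
    dTdt = -cool / (q * n_H * (1.0 + x) * K_B)
    return [dxdt, dTdt]

n_H = 1.0                                      # cm^-3
sol = solve_ivp(rhs, (0.0, 3.0e13), [2.0e-4, 1.0e5],
                args=(n_H,), method="Radau", rtol=1e-6)
print(sol.t[-1] / 3.156e7, sol.y[:, -1])       # elapsed yr, final [x, T]
```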
### A shock heated nebula
Consider a nebula composed of hydrogen which has been shock heated to, say, \(T_{s}=10^{5}\,\)K. The electron-proton equilibration timescale will be shorter than
Figure 7: Ionization time (\(\tau_{ci}\); see Equation 9) and recombination time (\(\tau_{r}\); case B) as a function of temperature. The two timescales cross at about 15,000 K, at which point the ionization fraction would be 50%, if collisional ionization equilibrium were to hold (see Equation 14).
\(14n_{e}^{-1}\) yr (see SS6.1). The collisional ionization time, \(\tau_{\rm ci}\), is short, \(10n_{\rm H}^{-1}\) yr at \(T=10^{5}\) K, rising to \(60n_{\rm H}^{-1}\) yr at 50,000 K. Thus, even if the pre-shocked gas has minimal ionization (\(x_{0}\approx 2\times 10^{-4}\)), it will take a time \(\tau_{\rm ci}\ln(x/x_{0})\approx 7.8\,\tau_{\rm ci}\) for the ionization fraction to reach \(x=0.5\). The initial losses are large, owing to both collisional ionization and collisional excitation by the newly liberated electrons and subsequent radiation. An exception is if the pre-shocked gas is pre-ionized. If the shock is strong, pre-ionization (H\({}^{+}\) and He\({}^{+}\)) will be achieved, which diminishes the hydrogen Ly\(\alpha\) emission. (There will still be cooling from He II Ly\(\alpha\)\(\lambda\)304 and lines from metal ions.) As the gas cools, recombination becomes more efficient. Once the gas reaches \(10^{4}\) K, cooling by forbidden lines of metals will occur. The gas will eventually settle down at \(T_{1}\approx 5000\)-\(8000\) K, the temperature of the stable WNM phase (see Heiles and Troland, 2003; Kanekar et al., 2003; Patra et al., 2018; Murray et al., 2018).
Ignoring "metals", the mean particle mass is \(\mu=m_{\rm H}(1+4y)/(1+y+x_{0})\). The shock velocity and \(\mu\) determine the post-shock temperature, \(T_{s}\) through Equation 12. Recall that, in our simplified model, the losses from the shocked nebula are only those associated with hydrogen (line radiation, ionization, free-bound, and free-free). In particular, while we include helium in computing the reduced mass, we do not include losses due to helium. In short, we treat helium as a silent and inactive partner. The energy per H-nucleus and associated electron is \(E_{0}=qk_{B}T_{s}(1+x_{0})\). The end state is when hydrogen has largely recombined and thus the energy per H-atom is \(E_{1}=qk_{B}T_{1}\).
For our fiducial temperature of \(10^{5}\) K, we have \(E_{0}\approx[12.9,21.5](1+x_{0})\) eV energy per H atom for \(q=3/2,5/2\). Thus, on simple grounds, we can see that low-velocity shocks will not significantly affect the ionization of the incoming particles. More precise radiative shock models show that fast shocks, \(v_{s}>110\) km s\({}^{-1}\), can pre-ionize (H\({}^{+}\), He\({}^{+}\)) the incoming medium (Shull and McKee, 1979; Raymond, 1979; Dopita and Sutherland, 1996).
Example runs are shown in Figures 8 and 9. In addition to the run of physical quantities (\(T\), \(x\), \(n_{\rm H}\)) we also plot the total number of recombinations per H nucleus,
\[N_{r}=\int\frac{1}{n_{\rm H}}n_{\rm H}^{2}x^{2}\alpha_{B}(T)dt\;,\]
and the total number of collisions per atom
\[N_{c}=\int n_{\rm H}x(1-x)\sum_{k=1}^{14}q_{k}(T)dt\;.\]
## 7 Conclusion & prospects
Low velocity shocks with velocities near 70 km s\({}^{-1}\) abound in our Galaxy. Some descend from higher velocity shocks (e.g., supernova remnants) while others start at low velocity (e.g., stellar bow shocks, high velocity cloud shocks). These shocks do not have strong precursor ionization fronts, and as such the post-shocked gas is partially neutral. Such shocks cool primarily through Ly\(\alpha\), two-photon continuum, H\(\alpha\), and metal emission lines. Ly\(\alpha\) is the brightest line, although resonant scattering usually traps these photons within the plasma, resulting in absorption by dust grains. H\(\alpha\) is weak but has the great advantage of being observable from the ground.
Two-photon continuum emission is about 50% of Ly\(\alpha\) emission (see Figure 4). It is several times brighter than H\(\alpha\), even when one compares photon fluxes rather than energy fluxes. Fortunately, two-photon emission can be observed with space-based observatories. Furthermore, the two-photon continuum has a distinct FUV/NUV ratio. In fact, _GALEX_ FUV and NUV imagery has led to the recent discovery of large middle-aged supernova
Figure 8: Run of temperature and density of a pure hydrogen plasma suddenly heated to \(10^{5}\) K and subsequently cooling down via an isobaric process. The hydrogen density, \(n_{H}\) and ionization, \(x_{0}\) at \(t=0\) are given in the legend in the top-most panel.
remnants (Fesen et al., 2021) and exotic stellar bow shocks with angular scales of hundreds of degrees (Bracco et al., 2020). The Ultraviolet Explorer (UVEX) is a NASA Explorer mission currently under development (Kulkarni et al., 2021). Amongst other goals, UVEX aims to undertake FUV and NUV imaging of the entire sky with higher sensitivity and finer spatial resolution, relative to _GALEX_. The aforementioned successes with _GALEX_ imagery show great promise of identifying and studying low velocity shocks in a future all-sky survey with UVEX.
With this motivation, and using the best available atomic physics data and atomic calculations, we computed the collisional and cooling coefficients for warm hydrogen (\(T\lesssim 10^{5}\,\)K). The primary application of our results is in computing two-photon continuum from bow shocks and old supernova remnants. We allow for pre-ionization by keeping the ionization fraction of the pre-shocked gas as a free parameter that can be set to values computed from more sophisticated shock models (_ibid_; Dopita & Sutherland, 1996). Our expectation is that the accurate H-cooling developed here can be incorporated into time-dependent models (e.g., Gnat & Sternberg, 2007).
For completeness, we discuss two-photon emission from photoionized gas (e.g., H II regions, the Warm Ionized Medium). Draine (2011; Table 14.2) provides the recombination coefficient to the 2s level, \(\alpha_{2s}\), and the recombination coefficient for H\(\alpha\) emission. From these we find the ratios

\[\frac{\alpha_{2s}}{\alpha_{B}}\approx 0.328T_{4}^{0.115},\ \frac{\alpha_{\rm H\alpha}}{\alpha_{B}}\approx 0.450T_{4}^{-0.11}\;. \tag{16}\]
Thus, at typical temperatures of photoionized gas, recombination processes result in similar diffuse emission for the two-photon continuum and H\(\alpha\). However, while H\(\alpha\) emission is concentrated in a narrow line, the two-photon continuum is distributed over the FUV band. Offsetting this dilution, the FUV sky is remarkably dark relative to the optical band (see Kulkarni, 2022 for detailed analysis of the FUV background).
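For a quick numerical illustration of Equation 16, the two branching fractions can be tabulated over temperatures typical of photoionized gas; the snippet below simply evaluates the two fits, with the temperature grid chosen arbitrarily.

```python
import numpy as np

T4 = np.array([0.5, 0.8, 1.0, 1.5, 2.0])      # T / 10^4 K
two_photon = 0.328 * T4**0.115                # alpha_2s / alpha_B
h_alpha    = 0.450 * T4**(-0.11)              # alpha_Halpha / alpha_B

for t4, a2s, aha in zip(T4, two_photon, h_alpha):
    print(f"T4={t4:4.1f}  alpha_2s/alpha_B={a2s:.3f}  alpha_Ha/alpha_B={aha:.3f}")
```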
In the Galactic plane and at low latitudes, two-photon emission will be attenuated by dust in the intervening neutral ISM and contaminated by reflected light from dust grains. In practice, this means that the use of two-photon continuum as a diagnostic will be restricted to high Galactic latitudes and will require careful modeling of reflected light. However, the early success with _GALEX_ promises rich returns from the all-sky survey in both the FUV and NUV planned with UVEX.
We thank Nikolaus Zen Prusinski, California Institute of Technology, for help with CHIANTI, a collaborative project involving George Mason University, the University of Michigan (USA), University of Cambridge (UK) and NASA Goddard Space Flight Center (USA).
|
2306.01890 | Mixed-type Distance Shrinkage and Selection for Clustering via Kernel
Metric Learning | Distance-based clustering and classification are widely used in various
fields to group mixed numeric and categorical data. In many algorithms, a
predefined distance measurement is used to cluster data points based on their
dissimilarity. While there exist numerous distance-based measures for data with
pure numerical attributes and several ordered and unordered categorical
metrics, an efficient and accurate distance for mixed-type data that utilizes
the continuous and discrete properties simultaneously is an open problem. Many
metrics convert numerical attributes to categorical ones or vice versa. They
handle the data points as a single attribute type or calculate a distance
between each attribute separately and add them up. We propose a metric called
KDSUM that uses mixed kernels to measure dissimilarity, with cross-validated
optimal bandwidth selection. We demonstrate that KDSUM is a shrinkage method
from existing mixed-type metrics to a uniform dissimilarity metric, and
improves clustering accuracy when utilized in existing distance-based
clustering algorithms on simulated and real-world datasets containing
continuous-only, categorical-only, and mixed-type data. | Jesse S. Ghashti, John R. J. Thompson | 2023-06-02T19:51:48Z | http://arxiv.org/abs/2306.01890v2 | # Mixed-type Distance Shrinkage and Selection for Clustering via Kernel Metric Learning
###### Abstract
Distance-based clustering and classification are widely used in various fields to group mixed numeric and categorical data. In many algorithms, a predefined distance measurement is used to cluster data points based on their dissimilarity. While there exist numerous distance-based measures for data with pure numerical attributes and several ordered and unordered categorical metrics, an efficient and accurate distance for mixed-type data that utilizes the continuous and discrete properties simultaneously is an open problem. Many metrics convert numerical attributes to categorical ones or vice versa. They handle the data points as a single attribute type or calculate a distance between each attribute separately and add them up. We propose a metric called KDSUM that uses mixed kernels to measure dissimilarity, with cross-validated optimal bandwidth selection. We demonstrate that KDSUM is a shrinkage method from existing mixed-type metrics to a uniform dissimilarity metric, and improves clustering accuracy when utilized in existing distance-based clustering algorithms on simulated and real-world datasets containing continuous-only, categorical-only, and mixed-type data.
Mixed-type data, metric learning, clustering, smoothing, kernel methods, similarity measures
## 1 Introduction
Datasets comprising continuous, ordered categorical, and unordered categorical data are known as mixed-type data; they are prevalent across various disciplines, and the availability of such heterogeneous data continues to increase. Although several approaches have been employed to calculate the distance between mixed-type data points, there is no broadly accepted definition of distance. The challenge of quantifying distance is balancing the contributions of each variable, particularly between discrete and continuous variables, to the overall difference between data entries. In this paper, we develop a data-driven distance method that estimates the importance of discrete and continuous variables to the difference between entries using a shrinkage approach.
Many existing distances homogenize mixed-type data to single-type by projecting all data to either discrete or continuous, through methods such as discretization or dummy coding before calculating distance (see, for example, Guha et al., 2000; Dougherty et al., 1995). While these distances are computationally efficient and well-known, they can inaccurately calculate the meaningful differences between data points and overweight variables in the continuous or discrete domains. This overweighting can severely affect the clustering outcome of any methodology that requires distances through a significant loss of information on the homogenized data types.
Clustering is a fundamental technique in data analysis that involves grouping similar data points together based on distance. When clustering mixed-type data, choosing an appropriate distance metric that can handle the heterogeneity of the data types and scales is crucial. The metric should utilize and balance each data type appropriately to provide a meaningful distance between data points that accurately represents the similarity and dissimilarity between data points within a particular dataset. The choice of metric can have a significant impact on the accuracy, reliability, and interpretability of distance-based clustering results, and it is essential to carefully consider its performance using the standard statistics of clustering accuracy (CA) and Adjusted Rand Index (ARI) (Hubert & Arabie, 1985).
We propose a novel kernel distance for mixed-type data that balances the contribution of each variable type to within-dataset similarity for clustering applications. We prove that kernel similarity functions can be used to construct a distance metric and that our specific kernel distance metric (called KDSUM) is a shrinkage method
between maximized dissimilarity and uniform similarity between all points. We demonstrate that maximum similarity cross-validation chooses optimal bandwidths. The advantage of this method is that the importance of each variable to the similarity between data points is balanced by the magnitudes of the cross-validated bandwidths.
We apply the kernel distance to agglomerative hierarchical clustering and demonstrate the utility of our metric for clustering both simulated and real-world datasets. We find that the kernel distance almost universally improves clustering performance compared to other common mixed-type distances, such as Gower's distance (Gower 1971). The kernel distance provides researchers and practitioners with a unified, robust, effective, and efficient distance for mixed-type data, aiding informed decision-making from the more accurately characterized clusters. We compare KDSUM with agglomerative clustering to competing clustering approaches. For continuous data, we compare to hierarchical clustering techniques with standard linkage methods and Partitioning Around Medoids with Euclidean distance, \(k-\)means, and Gaussian Mixture models. For categorical data, we compare to hierarchical clustering techniques with standard linkage methods and Partitioning Around Medoids with Gower's distance, \(k-\)modes, and Robust Clustering using Links (ROCK). For mixed-type data, we compare to hierarchical clustering techniques with standard linkage methods and Partitioning Around Medoids with Gower's distance, \(k-\)prototypes, and model-based clustering for mixed-type data (clustMD).
The paper is structured as follows: Section 2 discusses existing homogenized and non-homogenized approaches for mixed-type distances. Section 3 describes the kernel methods derived from probability density estimation used in this paper. Section 4 contains the methodology for the kernel distance metric (KDSUM). Sections 5 and 6 describe the clustering algorithms, evaluation metrics, and the simulated and real data. Finally, Section 7 presents the conclusions and insights for future work.
## 2 Mixed-type distances
Consider an \(n\times p\) mixed-type data matrix \(X\) consisting of \(n\) observations of \(p\) variables that are a combination of continuous, unordered categorical, and ordered categorical variables. Assume that the \(p\) variables are arranged so that the \(p_{c}\) continuous variables come first, followed by the \(p_{u}\) unordered categorical variables and then the \(p_{o}\) ordered categorical variables, such that \(p=p_{c}+p_{u}+p_{o}\). For mixed-type distances,
consider that the rows of \(X\) are observation vectors \(\mathbf{x}_{j}\), and the dissimilarity or distance between any two observations \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\) is denoted \(d(\mathbf{x}_{i},\mathbf{x}_{j})\), whereas the similarity between the observations is denoted \(s(\mathbf{x}_{i},\mathbf{x}_{j})\).
Typical methodologies for calculating mixed-type distance require data to be homogenized to either numerical or categorical type. Discretization is the process of converting continuous variables into discrete categories or intervals; an example of this is binning which divides the range of a continuous variable into intervals and assigns each observation to the corresponding interval (Dougherty et al., 1995). For example, a person's age can be binned into discrete ordered categories such as "0-18", "19-30", "31-50", and so on.
Let \(\mathbf{x}_{i}\) be an observation with mixed-type variables. To calculate distance between observations using a categorical-only metric, we first homogenize the data using discretization. The \(k^{\text{th}}\) continuous variable of \(\mathbf{x}_{i}\) is divided into \(C=\{1,2,\ldots,c_{k}\}\) ordered categories of disjoint intervals \(\mathcal{Z}_{1},\mathcal{Z}_{2},\ldots,\mathcal{Z}_{c_{k}}\). Then, we define a new ordered categorical variable such that \(\mathbf{z}_{k}=\{z_{i,k}=c|x_{i,k}\in\mathcal{Z}_{c}\}\) and replace each value \(x_{i,k}\) with \(z_{i,k}\). Discretization is often useful in cases where the data is highly skewed. However, it leads to a loss of information, and choosing an optimal interval width can be challenging and affect the analysis results.
Another method for coercing continuous data to categorical is dummy-coding that involves representing binned categorical variables as binary (0 or 1) variables. For each category in \(C\), a corresponding binary variable is created \(x_{i,h,m}\), where the binary variable assumes value 1 if \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\) are both in category \(c_{h,m}\), and 0 otherwise. The distance between any two dummy coded observations \(d(\mathbf{x}_{i},\mathbf{x}_{j})\) can be calculated using any binary distance metric (for more information, see Choi et al., 2010). Dummy-coding has the advantage of preserving all the information in the categorical variable and ease of interpretation. However, such an approach increases the dimensions of the feature space and ignores the ordered values of continuous scales. Foss et al. (2019) illustrated the inadequacy of dummy coding, noting that the expectation of the interval scale variable is always greater than 1, while the expectation from the categorical is always less than 1. This means that the choice of coding can lead to different interpretations of the data and may affect the analysis results.
The coercion of categorical data to continuous is typically conducted through an ordered numerical assignment, such as converting Likert scale data to numerical values. Then, the differences are calculated using any continuous metric, such as Euclidean distance. Additionally, continuous data may need to be scaled so as not to over- or under-weight the contribution of individual continuous variables, and the choice of scaling also affects distance calculations and thus clustering performance. Hennig et al. (2015) noted that distance-based clustering methods are not invariant to affine transformations, and Foss et al. (2016) showed that the choice of scaling can affect clustering performance.
Various mixed-type distance metrics do not require the homogenization or scaling of the data. The quadratic distance proposed by Lindsay et al. (2008) extends the chi-squared measures of distance between two distributions and requires the choice of a nonnegative definite kernel. De Leon & Carriere (2005) use a general mixed-data model to define the distance between two populations of mixed unordered categorical, ordered categorical, and interval scale data. Krzanowski (1983) proposes a distance based on Matusita's distance, as mixtures of these location models and generalizations are not identifiable without further conditions on some of the parameters. Recently, van de Velden et al. (2023) introduced a framework that allows for an implementation of distances between observations that can be extended to many existing distances for categorical variables. Modha and Spangler (2003) propose a method similar to \(k-\)prototypes that includes estimating a suitable weight that scales the relative contribution of the interval and categorical variables. However, the brute-force search to cluster repeatedly for a range of values for the weight that minimizes its objective function is computationally expensive.
The metrics shown in Table 1 will be used as benchmarks for the analysis of metrics herein. For any arbitrary variable \(l\), denote \(w_{i,j,l}=0\) if variable \(l\) has missing data, otherwise \(w_{i,j,l}=1\). Denote \(\mathds{1}_{i,j,l}\equiv\mathds{1}_{c}(x_{i,l}=x_{j,l})\) for categorical variables, where \(\mathds{1}_{i,j,l}=1\) if the two observations for the \(l\)th variables are the same, and 0 otherwise. Gower's distance (Gower, 1971) is a common hybrid distance function that calculates the distance between two vectors of the same length. It uses a weighted combination of interval and categorical distances, where the categorical distance is based on whether the categories match or not, and the interval distance is scaled based on the range of the variable. The user-specified weights for each variable may lead to intractable solutions and varying results based on the data. The \(k-\)prototypes algorithm (Huang, 1998) utilizes another hybrid distance technique that uses
a similar approach to Gower's distance, except the squared Euclidean distance is used for the interval scale variables. Unlike Gower's distance, it does not require variable-specific weights, but rather a single weight used for the entire categorical contribution of the distance function. The Podani distance metric (1999) extends Gower's general coefficient of similarity to ordinal variables, while the Wishart (2003) metric is similar to the Podani metric, except it makes use of the sample standard deviation for continuous variables, rather than the range of the continuous variables.
## 3 Kernel Density Estimation and Bandwidth Selection Procedures
Kernel functions are weighting functions that can map data points from a high-dimensional sample space to a low-dimensional space. Kernel functions are non-negative real-valued functions that integrate or sum to 1 and are often assumed to be symmetric. We denote the kernel functions specific to data types as \(k\), \(L\), and \(l\) for continuous, unordered, and ordered categorical variables, respectively. For each kernel, we denote the bandwidths associated with the kernel functions as \(\boldsymbol{\lambda}\equiv\{\boldsymbol{\lambda}^{c},\boldsymbol{\lambda}^{u},\boldsymbol{\lambda}^{o}\}\), where \(\{\boldsymbol{\lambda}^{c}\}\equiv\{\lambda_{i}\}_{i=1}^{p_{c}}\), \(\{\boldsymbol{\lambda}^{u}\}\equiv\{\lambda_{i}\}_{i=p_{c}+1}^{p_{c}+p_{u}}\), and \(\{\boldsymbol{\lambda}^{o}\}\equiv\{\lambda_{i}\}_{i=p_{c}+p_{u}+1}^{p}\).
There exist many common kernel functions used in the smoothing literature, such as the Gaussian kernel
\begin{table}
\begin{tabular}{c c} Metric & Definition \\ \hline \hline Gower (1971): & \(d(\mathbf{x}_{i},\mathbf{x}_{j})=1-s_{i,j}\); \\ & \(s_{i,j}=\frac{\sum_{l=1}^{p}w_{i,j,l}\,s_{i,j,l}}{\sum_{l=1}^{p}w_{i,j,l}}\) \\ & \(s_{i,j,l}=1-\frac{|x_{i,l}-x_{j,l}|}{\max_{k}(x_{k,l})-\min_{k}(x_{k,l})}\) \\ \hline Huang (1998): & \(d(\mathbf{x}_{i},\mathbf{x}_{j})=\sum_{l=1}^{p_{c}}(x_{i,l}-x_{j,l})^{2}\) \\ & \(+\gamma\sum_{l=p_{c}+1}^{p}\mathds{1}_{c}(x_{i,l}=x_{j,l})\); \\ & \(\gamma=\frac{\sum_{r=1}^{p_{c}}s_{r}^{2}}{p_{c}}\) \\ \hline Podani (1999): & \(d(\mathbf{x}_{i},\mathbf{x}_{j})=\sqrt{\sum_{l=1}^{p}w_{i,j,l}\left(\frac{x_{i,l}-x_{j,l}}{s_{l}}\right)^{2}}\); \\ & \(s_{l}=\max_{k}(x_{k,l})-\min_{k}(x_{k,l})\) (if \(l\) is continuous) \\ \hline Wishart (2003): & \(d(\mathbf{x}_{i},\mathbf{x}_{j})=\sqrt{\sum_{r=1}^{p}w_{i,j,r}\left(\frac{x_{i,r}-x_{j,r}}{s_{r}}\right)^{2}}\); \\ & \(s_{r}:=\) sample standard deviation of the \(r\)th variable (if \(r\) is continuous) \\ \end{tabular}
\end{table}
Table 1: Common mixed-type distance metrics.
for continuous variables given by
\[k(x_{i},x,\lambda^{c})=\frac{1}{\sqrt{2\pi}}e^{\frac{-(x_{i}-x)^{2}}{2(\lambda^{c })^{2}}}, \tag{1}\]
where \(\lambda^{c}>0\). The Epanechnikov kernel (Epanechnikov, 1969) is given by
\[k(z)=k(x_{i},x,\lambda^{c})=\frac{3}{4}\left(1-\frac{(x_{i}-x)^{2}}{(\lambda^{ c})^{2}}\right)\mathds{1}\left(\left|\frac{x_{i}-x}{\lambda^{c}}\right| \leq 1\right), \tag{2}\]
where \(\lambda^{c}>0\). For unordered categorical variables, we use an Aitken kernel (Li and Racine, 2023) given by
\[L(x_{i},x,\lambda^{u})=\begin{cases}1,&x_{i}=x,\\ \lambda^{u},&x_{i}\neq x,\end{cases} \tag{3}\]
where \(\lambda^{u}\in[0,1]\). For ordered categorical variables, we use a Wang & van Ryzin kernel (Wang & van Ryzin, 2008), given by
\[l(x_{i},x,\lambda^{o})=\begin{cases}1-\lambda^{o},&x_{i}=x,\\ \frac{1}{2}\left(1-\lambda^{o}\right)(\lambda^{o})^{|x_{i}-x|},&x_{i}\neq x,\end{cases} \tag{4}\]
where \(\lambda^{o}\in[0,1]\). A mixed-type joint kernel function between a random vector \(\mathbf{x}_{j}\equiv\{\mathbf{x}_{j}^{c},\mathbf{x}_{j}^{u},\mathbf{x}_{j}^{ o}\}\) and an arbitrary point \(\mathbf{x}\) is written as
\[K(\mathbf{x}_{i},\mathbf{x})=\prod_{k=1}^{p_{c}}\frac{1}{\lambda_{k}}k\left(\frac{x_{i,k}^{c}-x_{k}^{c}}{\lambda_{k}}\right)\prod_{k=p_{c}+1}^{p_{c}+p_{u}}L\left(x_{i,k}^{u},x_{k}^{u},\lambda_{k}\right)\prod_{k=p_{c}+p_{u}+1}^{p}l\left(x_{i,k}^{o},x_{k}^{o},\lambda_{k}\right). \tag{5}\]
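As a concrete reference, the kernels in Equations (1), (3), and (4) and the product kernel of Equation (5) can be written compactly as below. This is a minimal sketch with our own function names, assuming the variables are numerically coded and ordered as continuous, unordered, then ordered.

```python
import numpy as np

def gauss(xi, xj, lam):
    """Gaussian kernel of Equation (1), lam > 0."""
    return np.exp(-0.5 * ((xi - xj) / lam) ** 2) / np.sqrt(2.0 * np.pi)

def aitken(xi, xj, lam):
    """Aitken unordered-categorical kernel of Equation (3), lam in [0, 1]."""
    return 1.0 if xi == xj else lam

def wang_van_ryzin(xi, xj, lam):
    """Wang & van Ryzin ordered-categorical kernel of Equation (4), lam in [0, 1]."""
    return 1.0 - lam if xi == xj else 0.5 * (1.0 - lam) * lam ** abs(xi - xj)

def joint_kernel(xi, xj, lam, p_c, p_u):
    """Product kernel of Equation (5) for one pair of mixed-type rows."""
    out = 1.0
    for k in range(p_c):                      # continuous block
        out *= gauss(xi[k], xj[k], lam[k]) / lam[k]
    for k in range(p_c, p_c + p_u):           # unordered block
        out *= aitken(xi[k], xj[k], lam[k])
    for k in range(p_c + p_u, len(xi)):       # ordered block
        out *= wang_van_ryzin(xi[k], xj[k], lam[k])
    return out
```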
These kernel functions can be used for probability density estimation; for example, the Rosenblatt-Parzen density estimator (Rosenblatt, 1956; Parzen, 1962) \(\widehat{p}(\mathbf{x})=\frac{1}{n\lambda_{1}\lambda_{2}\cdots\lambda_{p_{c}}}\sum_{i=1}^{n}K(\mathbf{x}_{i},\mathbf{x})\) converges in probability to the underlying density function \(p(\mathbf{x})\) under the assumption that as \(n\rightarrow\infty\), \(\boldsymbol{\lambda}\rightarrow\mathbf{0}\) and \(n\lambda_{1}\lambda_{2}\cdots\lambda_{p_{c}}\rightarrow\infty\). Optimal bandwidth selection methods are designed to preserve estimator convergence while having several other desirable properties, including smoothing out irrelevant variables (Loader, 1999). There is a wide range of methods for optimal bandwidth selection, including the Akaike Information Criterion (Hurvich et al., 1998), Least Squares Cross-Validation (Sain et al., 1994), Rule of Thumb (Silverman, 1986), and Maximum-Likelihood Cross-Validation (MLCV) (Hall, 1981). The MLCV objective function to be maximized is
\[CV(\boldsymbol{\lambda})=\sum_{i=1}^{n}\ln\left(\frac{1}{(n-1)}\sum_{j=1,j\neq i}^{n}\mathcal{L}_{\boldsymbol{\lambda}}(\mathbf{x}_{i},\mathbf{x}_{j})\right)=\sum_{i=1}^{n}\ln\left(\hat{\mathcal{L}}_{-i}(\mathbf{x}_{i})\right), \tag{6}\]
where \(\hat{\mathcal{L}}_{-i}(\mathbf{x}_{i})\) is the leave-one-out estimator of \(\mathcal{L}_{\boldsymbol{\lambda}}(\cdot)\) in Equation (5). These kernels and cross-validation approaches can be adapted to similarity functions, which are used for the kernel distance metric in this paper.
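A direct, if slow, transcription of the leave-one-out criterion in Equation (6) follows; it reuses the `joint_kernel` sketch above, and the guard on the logarithm is an implementation detail of ours rather than part of the definition.

```python
import numpy as np

def mlcv(X, lam, p_c, p_u):
    """Leave-one-out log-similarity criterion of Equation (6);
    X is an n x p array ordered continuous / unordered / ordered."""
    n = len(X)
    total = 0.0
    for i in range(n):
        loo = sum(joint_kernel(X[i], X[j], lam, p_c, p_u)
                  for j in range(n) if j != i) / (n - 1)
        total += np.log(max(loo, 1e-300))   # guard against log(0)
    return total
```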
## 4 Similarity Functions and Mixed-type Kernel Distances
Consider a real-valued function \(\mathcal{L}(\mathbf{x}_{i},\mathbf{x}_{j})\) on the Cartesian product \(X\times X\). \(\mathcal{L}\) is a similarity function if, for any points \(\mathbf{x}_{i},\mathbf{x}_{j},\mathbf{x}_{k}\in X\), it satisfies four conditions (Chen et al., 2009):
* Symmetry: \(\mathcal{L}(\mathbf{x}_{i},\mathbf{x}_{j})=\mathcal{L}(\mathbf{x}_{j}, \mathbf{x}_{i})\),
* Indiscernible: \(\mathcal{L}(\mathbf{x}_{i},\mathbf{x}_{j})=\mathcal{L}(\mathbf{x}_{i}, \mathbf{x}_{i})=\mathcal{L}(\mathbf{x}_{j},\mathbf{x}_{j})\iff\mathbf{x}_{i} =\mathbf{x}_{j}\),
* Nonnegative self-similarity: \(\mathcal{L}(\mathbf{x}_{i},\mathbf{x}_{i})\geq\mathcal{L}(\mathbf{x}_{i}, \mathbf{x}_{j})\geq 0\),
* Similarity triangle inequality: \(\mathcal{L}(\mathbf{x}_{i},\mathbf{x}_{j})+\mathcal{L}(\mathbf{x}_{j}, \mathbf{x}_{k})\leq\mathcal{L}(\mathbf{x}_{i},\mathbf{x}_{k})+\mathcal{L}( \mathbf{x}_{j},\mathbf{x}_{j})\).
Let \(\mathcal{L}(\cdot)\) be a similarity function that maps two \(p\)-dimensional data vectors to the real numbers with the additional property that as the difference between two observations \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\) increases, \(\mathcal{L}(\mathbf{x}_{i},\mathbf{x}_{j})\) decreases.
The definition of norm and distance between two vectors is defined by the choice of kernel similarity function. For example, a Euclidean norm is positive, definite, and symmetric, and yields the similarity function \(\mathcal{L}_{L_{2}}(\mathbf{x}_{i},\mathbf{x}_{j})=1-\sum_{k=1}^{p}(x_{i,k}-x _{j,k})^{2}\). The Gaussian kernel function defined in Equation (1) can be written in terms of a non-linear transformation of \(\mathcal{L}_{L_{2}}\) as \(\mathcal{L}_{gauss}(x_{i},x_{j}):=k\left(\frac{x_{i}-x_{j}}{\lambda_{k}} \right)=\frac{1}{\sqrt{2\pi}}\text{exp}\left(-\frac{1}{2}\left(\frac{x_{i}-x_{ j}}{\lambda_{k}}\right)^{2}\right)\). Each of these kernels satisfy the necessary conditions of positive, definite and symmetric kernel functions (Bekka, de la Harpe, and Valette 2008) and satisfy the properties (S1)-(S4) to be similarity functions (Joshi et al., 2011; Phillips & Venkatasubramanian, 2011; Jakel et al., 2008). The Epanechnikov kernel function in Equation (2) can be viewed as a scaled and truncated version of \(\mathcal{L}_{L_{2}}\) by using \(z=\frac{x_{i,k}-x_{j,k}}{h}\). As an example for continuous kernels, we show that the Gaussian is a similarity
function.
**Lemma 1**: _The Gaussian kernel is a similarity function._
_Proof_: We demonstrate that the Gaussian kernel \(k(x_{i},x_{j},\lambda^{c})=\frac{1}{\sqrt{2\pi}}e^{\frac{-(x_{i}-x_{j})^{2}}{2( \lambda^{c})^{2}}}\) satisfies properties (S1)-(S4) of a similarity function \(\forall\,\lambda^{c}>0\).
(S1): Since \((x_{i}-x_{j})^{2}=(x_{j}-x_{i})^{2}\), then \(k(x_{i},x_{j},\lambda^{c})=k(x_{j},x_{i},\lambda^{c})\).
(S2): \((\Rightarrow)\) Let \(k(x_{i},x_{j},\lambda^{c})=k(x_{i},x_{i},\lambda^{c})=k(x_{j},x_{j},\lambda^{c})=\frac{1}{\sqrt{2\pi}}\) but assume that \(x_{i}\neq x_{j}\). Since \(k(x_{i},x_{j},\lambda^{c})=\frac{1}{\sqrt{2\pi}}\), it follows that \((x_{i}-x_{j})^{2}=0\). However, \((x_{i}-x_{j})^{2}>0\) by the assumption \(x_{i}\neq x_{j}\), which is a contradiction. Thus, \(k(x_{i},x_{j},\lambda^{c})=k(x_{i},x_{i},\lambda^{c})=k(x_{j},x_{j},\lambda^{c})=\frac{1}{\sqrt{2\pi}}\Rightarrow x_{i}=x_{j}\).
\((\Leftarrow)\) Let \(x_{i}=x_{j}\), then by (S1) \(k(x_{i},x_{j},\lambda^{c})=k(x_{i},x_{i},\lambda^{c})=k(x_{j},x_{j},\lambda^{ c})=\frac{1}{\sqrt{2\pi}}\).
(S3): First, \(k(x_{i},x_{i},\lambda^{c})=\frac{1}{\sqrt{2\pi}}>0\). Assuming \(x_{i}\neq x_{j}\), then \(\frac{-(x_{i}-x_{j})^{2}}{2(\lambda^{c})^{2}}<0\) and \(e^{\frac{-(x_{i}-x_{j})^{2}}{2(\lambda^{c})^{2}}}<1\), and \(k(x_{i},x_{i},\lambda^{c})>k(x_{i},x_{j},\lambda^{c})\). If \(x_{i}=x_{j}\), then by (S2) \(k(x_{i},x_{i},\lambda^{c})=k(x_{i},x_{j},\lambda^{c})\). Thus, \(k(x_{i},x_{i},\lambda^{c})\geq k(x_{i},x_{j},\lambda^{c})\geq 0\).
(S4): If either \(x_{i}=x_{j}\) or \(x_{j}=x_{k}\), then the inequality holds by (S1)-(S3). Consider the case when \(x_{i}\neq x_{j}\) and \(x_{j}\neq x_{k}\). The Euclidean norm triangle inequality gives
\[(x_{i}-x_{k})^{2}\leq(x_{i}-x_{j})^{2}+(x_{j}-x_{k})^{2},\] \[\implies 2-\left(\frac{1}{2(\lambda^{c})^{2}}(x_{i}-x_{j})^{2}+\frac{1}{2 (\lambda^{c})^{2}}(x_{j}-x_{k})^{2}\right)\leq 2-\frac{1}{2(\lambda^{c})^{2}}(x _{i}-x_{k})^{2} \tag{7}\]
Consider \(e^{-x}\geq 1-x\), thus \(e^{\frac{-(x_{i}-x_{j})^{2}}{2(\lambda^{c})^{2}}}\geq 1-\frac{1}{2(\lambda^{c})^{2}} (x_{i}-x_{j})^{2}\). Using this inequality gives
\[e^{\frac{-(x_{i}-x_{j})^{2}}{2(\lambda^{c})^{2}}}+e^{\frac{-(x_{j}-x_{k})^{2} }{2(\lambda^{c})^{2}}}\geq 2-\left(\frac{1}{2(\lambda^{c})^{2}}(x_{i}-x_{j})^{2}+ \frac{1}{2(\lambda^{c})^{2}}(x_{j}-x_{k})^{2}\right) \tag{8}\]
and
\[1+e^{\frac{-(x_{i}-x_{k})^{2}}{2(\lambda^{c})^{2}}}\geq 2-\frac{1}{2(\lambda^{c}) ^{2}}(x_{i}-x_{k})^{2} \tag{9}\]
Inserting Equations (8) and (9) into (7) yields
\[e^{\frac{-(x_{i}-x_{j})^{2}}{2(\lambda^{c})^{2}}}+e^{\frac{-(x_{j}-x_{k})^{2}}{2(\lambda^{c})^{2}}}\leq e^{\frac{-(x_{i}-x_{k})^{2}}{2(\lambda^{c})^{2}}}+1\] \[\Longrightarrow \frac{1}{\sqrt{2\pi}}e^{\frac{-(x_{i}-x_{j})^{2}}{2(\lambda^{c})^{2}}}+\frac{1}{\sqrt{2\pi}}e^{\frac{-(x_{j}-x_{k})^{2}}{2(\lambda^{c})^{2}}}\leq\frac{1}{\sqrt{2\pi}}e^{\frac{-(x_{i}-x_{k})^{2}}{2(\lambda^{c})^{2}}}+\frac{1}{\sqrt{2\pi}}\] \[\Longrightarrow k(x_{i},x_{j},\lambda^{c})+k(x_{j},x_{k},\lambda^{c})\leq k(x_{i},x_{k},\lambda^{c})+k(x_{j},x_{j},\lambda^{c})\]
which is (S4) and thus the Gaussian kernel is a similarity function.
\(\square\)

Similar arguments can be made to show that an Epanechnikov kernel function is a similarity function, which we omit for brevity. For categorical kernels, there is far less literature support for their usage as similarity functions. We show that Equations (3) and (4) are similarity functions.

_Lemma 2:_ The Aitken unordered kernel is a similarity function.
_Proof_: We demonstrate that the Aitken Kernel satisfies all properties of a similarity function (S1)-(S4).
(S1): By the definition of symmetry for this kernel function, (S1) is satisfied.
(S2): Suppose \(L(x_{i},x_{j},\lambda^{u})=L(x_{i},x_{i},\lambda^{u})=L(x_{j},x_{j},\lambda^{u})\). Then \(L(\cdot)=1\), by definition. Conversely, if \(x_{i}=x_{j}\), then \(L(x_{i},x_{j},\lambda^{u})=L(x_{i},x_{i},\lambda^{u})=L(x_{j},x_{j},\lambda^{u})=1\).
(S3): First, \(L(x_{i},x_{i},\lambda^{u})=1\geq 0\). Then, \(L(x_{i},x_{j},\lambda^{u})=\lambda^{u}\leq 1\), based on the bounds of \(\lambda^{u}\). Thus, \(L(x_{i},x_{i},\lambda^{u})\geq L(x_{i},x_{j},\lambda^{u})\geq 0\).
(S4): If \(x_{i}=x_{j}\) or \(x_{j}=x_{k}\), the result is trivial. Assume \(x_{i}\neq x_{j}\) and \(x_{j}\neq x_{k}\). Then,
\[L(x_{i},x_{j},\lambda^{u})+L(x_{j},x_{k},\lambda^{u}) \leq L(x_{i},x_{k},\lambda^{u})+L(x_{j},x_{j},\lambda^{u})\] \[\Longrightarrow \lambda^{u}+\lambda^{u}\leq\lambda^{u}+1\] \[\implies\lambda^{u}\leq 1,\]
which is true based on the bounds of \(\lambda^{u}\).
\(\square\)
_Lemma 3:_ The Wang & van Ryzin ordered kernel is a similarity function.
_Proof_: We demonstrate that the Wang & van Ryzin Kernel satisfies all properties of a similarity function (S1)-(S4). By definition, (S1) and (S2) are satisfied.
(S3): First, \(l(x_{i},x_{i},\lambda^{o})=1-\lambda^{o}\in[0,1]\). Then, \(l(x_{i},x_{j},\lambda^{o})=\frac{1}{2}(1-\lambda^{o})(\lambda^{o})^{|x_{i}-x _{j}|}\). To show \(1-\lambda^{o}\geq 1\), we have \(l(x_{i},x_{j},\lambda^{o})=1-\lambda^{o}\in[0,1]\). Then, \(l(x_{i},x_{j},\lambda^{o})=\frac{1}{2}(1-\lambda^{o})(\lambda^{o})^{|x_{i}- x_{j}|}\). To show \(1-\lambda^{o}\geq 1\), we have \(l(x_{i},x_{j},\lambda^{o})=1-\lambda^{o}\in[0,1]\). Then, \(l(x_{i},x_{j},\lambda^{o})=\frac{1}{2}(1-\lambda^{o})(\lambda^{o})^{|x_{i}- x_{j}|}\). To show \(1-\lambda^{o}\geq 1\), we have \(l(x_{i},x_{j},\lambda^{o})=1-\lambda^{o}\in[0,1]\). Then, \(l(x_{i},x_{j},\lambda^{o})=\frac{1}{2}(1-\lambda^{o})(\lambda^{o})^{|x_{i}- x_{j}|}\). To show \(1-\lambda^{o}\geq 1\), we have \(l(x_{i},x_{j},\lambda^{o})=1-\lambda^{o}\in[0,1]\). Then, \(l(x_{i},x_{j},\lambda^{o})=\frac{1}{2}(1-\lambda^{o})(\lambda^{o})^{|x_{i}- x_{j}|}\). To show \(1-\lambda^{o}\geq 1\), we have \(l(x_{i},x_{j},\lambda^{o})=1-\lambda^{o}\in[0,1]\). Then, \(l(x_{i},x_{j},\lambda^{o})=\frac{1}{2}(1-\lambda^{o})(\lambda^{o})^{|x_{i}- x_{j}|}\). To show \(1-\lambda^{o}\geq 1\), we have \(l(x_{i},x_{j},\lambda^{o})=1-\lambda^{o}\in[0,1]\). Then, \(l(x_{i},x_{j},\lambda^{o})=\frac{1}{2}(1-\lambda^{o})(\lambda^{o})^{|x_{i}- x_{j}|}\). To show \(1-\lambda^{o}\geq 1\), we have \(l(x_{i},x_{j},\lambda^{o})=1-\lambda^{o}\in[0,1]\). Then, \(l(x_{i},x_{j},\lambda^{o})=\frac{1}{2}(1-\lambda^{o})(\lambda^{o})^{|x_{i}- x_{j}|}\). To show \(1-\lambda^{o}\geq 1\), we have \(l(x_{i},x_{j},\lambda^{o})=1-\lambda^{o}\in[0,1]\). Then, \(l(x_{i},x_{j},\lambda^{o})=\frac{1}{2}(1-\lambda^{o})(\lambda^{o})^{|x_{i}- x_{j}|}\). To show \(1-\lambda^{o}\geq 1\), we have \(l(x_{i},x_{j},\lambda^{o})=1-\lambda^{o}\in[0,1]\). Then, \(l(x_{i},x_{j},\lambda^{o})=1-\lambda^{o}\in[0,1]\).
\(\frac{1}{2}(1-\lambda^{o})(\lambda^{o})^{|x_{i}-x_{j}|}\), divide both sides by \(1-\lambda^{o}\) to see \(1\geq\frac{1}{2}(\lambda^{o})^{|x_{i}-x_{j}|}\) (note that we can divide by \(1-\lambda^{o}\), since if \(\lambda^{o}=1\), we have \(0\geq 0\) which holds). Since \(|x_{i}-x_{j}|\) is a positive integer, and since \(\lambda^{o}\in[0,1]\), \((\lambda^{o})^{|x_{i}-x_{j}|}\in[0,1]\), and the inequality follows.
(S4): If \(x_{i}=x_{j}\) or \(x_{j}=x_{k}\), the result is trivial. Assume \(x_{i}\neq x_{j}\) and \(x_{j}\neq x_{k}\). Then,
\[l(x_{i},x_{j},\lambda^{o})+l(x_{j},x_{k},\lambda^{o}) \leq l(x_{i},x_{k},\lambda^{o})+l(x_{j},x_{j},\lambda^{o})\] \[\implies\frac{1}{2}(1-\lambda^{o})(\lambda^{o})^{|x_{i}-x_{j}|}+ \frac{1}{2}(1-\lambda^{o})(\lambda^{o})^{|x_{j}-x_{k}|} \leq\frac{1}{2}(1-\lambda^{o})(\lambda^{o})^{|x_{i}-x_{k}|}+(1- \lambda^{o})\] \[\implies\frac{1}{2}\left((\lambda^{o})^{|x_{i}-x_{j}|}+(\lambda^{ o})^{|x_{j}-x_{k}|}\right) \leq\frac{1}{2}(\lambda^{o})^{|x_{i}-x_{k}|}+1.\]
Now, \(\frac{1}{2}\left((\lambda^{o})^{|x_{i}-x_{j}|}+(\lambda^{o})^{|x_{j}-x_{k}|} \right)\leq\max\left\{(\lambda^{o})^{|x_{i}-x_{j}|},(\lambda^{o})^{|x_{j}-x_{ k}|}\right\}\), so it will be sufficient to show
\[\max\left\{(\lambda^{o})^{|x_{i}-x_{j}|},(\lambda^{o})^{|x_{j}-x_{k}|}\right\} \leq\frac{1}{2}(\lambda^{o})^{|x_{i}-x_{k}|}+1.\]
If \(|x_{i}-x_{j}|\leq|x_{j}-x_{k}|\), then, since \(\lambda^{o}\in[0,1]\), \(\max\left\{(\lambda^{o})^{|x_{i}-x_{j}|},(\lambda^{o})^{|x_{j}-x_{k}|}\right\}=(\lambda^{o})^{|x_{i}-x_{j}|}\), so we show \((\lambda^{o})^{|x_{i}-x_{j}|}\leq\frac{1}{2}(\lambda^{o})^{|x_{i}-x_{k}|}+1\), which is clear since \((\lambda^{o})^{|x_{i}-x_{j}|}\leq 1\). A similar argument holds if \(|x_{i}-x_{j}|>|x_{j}-x_{k}|\), and the result follows immediately.
\(\Box\)
To transform kernel similarities into distances, we extend the metric described in Phillips and Venkatasubramanian (2011) to the multivariate setting, which uses a well-defined kernel function to measure similarity between points \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\). The distance between \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\) is then defined as
\[d(\mathbf{x}_{i},\mathbf{x}_{j})=\mathcal{L}(\mathbf{x}_{i},\mathbf{x}_{i})+ \mathcal{L}(\mathbf{x}_{j},\mathbf{x}_{j})-\mathcal{L}(\mathbf{x}_{i}, \mathbf{x}_{j})-\mathcal{L}(\mathbf{x}_{j},\mathbf{x}_{i}). \tag{10}\]
If a symmetric kernel function is selected, the formula reduces to \(d(\mathbf{x}_{i},\mathbf{x}_{j})=\mathcal{L}(\mathbf{x}_{i},\mathbf{x}_{i})+\mathcal{L}(\mathbf{x}_{j},\mathbf{x}_{j})-2\mathcal{L}(\mathbf{x}_{i},\mathbf{x}_{j})\). Equation (10) represents the difference between the self-similarities of the two points and their cross-similarity. The multiplicative factor of two ensures that the distance between an object and itself equals zero and satisfies the identity of indiscernibles.
_Theorem 1:_ Equation (10), with a similarity function satisfying (S1)-(S4), is a well-defined distance metric, satisfying the following distance metric properties (Chen et al., 2009):
_Proof_: (D1) Nonnegativity (\(d({\bf x}_{i},{\bf x}_{j})\geq 0\)): note by (S3) that \({\cal L}({\bf x}_{i},{\bf x}_{i})-{\cal L}({\bf x}_{i},{\bf x}_{j})\geq 0\) and \({\cal L}({\bf x}_{j},{\bf x}_{j})-{\cal L}({\bf x}_{j},{\bf x}_{i})\geq 0\). Adding yields \({\cal L}({\bf x}_{i},{\bf x}_{i})+{\cal L}({\bf x}_{j},{\bf x}_{j})-{\cal L}({ \bf x}_{i},{\bf x}_{j})-{\cal L}({\bf x}_{j},{\bf x}_{i})\geq 0\) and thus, \(d({\bf x}_{i},{\bf x}_{j})\geq 0\).
(D2) Symmetry (\(d({\bf x}_{i},{\bf x}_{j})=d({\bf x}_{j},{\bf x}_{i})\)): note \(d({\bf x}_{i},{\bf x}_{j})={\cal L}({\bf x}_{i},{\bf x}_{i})+{\cal L}({\bf x}_{ j},{\bf x}_{j})-{\cal L}({\bf x}_{i},{\bf x}_{j})-{\cal L}({\bf x}_{j},{\bf x}_{i})=\)
\({\cal L}({\bf x}_{j},{\bf x}_{j})+{\cal L}({\bf x}_{i},{\bf x}_{i})-{\cal L}({ \bf x}_{j},{\bf x}_{i})-{\cal L}({\bf x}_{i},{\bf x}_{j})=d({\bf x}_{j},{\bf x} _{i})\)
(D3) Identity of indescernibles (\(d({\bf x}_{i},{\bf x}_{j})=0\iff{\bf x}_{i}={\bf x}_{j}\)): suppose that \(d({\bf x}_{i},{\bf x}_{j})=0\), thus \({\cal L}({\bf x}_{i},{\bf x}_{i})+{\cal L}({\bf x}_{j},{\bf x}_{j})-{\cal L}({ \bf x}_{i},{\bf x}_{j})-{\cal L}({\bf x}_{j},{\bf x}_{i})=0\), implying \({\cal L}({\bf x}_{i},{\bf x}_{i})+{\cal L}({\bf x}_{j},{\bf x}_{j})={\cal L}({ \bf x}_{i},{\bf x}_{j})+{\cal L}({\bf x}_{j},{\bf x}_{i})\) which is true if and only if \({\bf x}_{i}={\bf x}_{j}\) or \({\bf x}_{j}={\bf x}_{i}\) by (S2). Conversely, suppose \({\bf x}_{j}={\bf x}_{i}\), then \(d({\bf x}_{i},{\bf x}_{i})={\cal L}({\bf x}_{i},{\bf x}_{i})+{\cal L}({\bf x}_ {i},{\bf x}_{i})-{\cal L}({\bf x}_{i},{\bf x}_{i})-{\cal L}({\bf x}_{i},{\bf x} _{i})=2{\cal L}({\bf x}_{i},{\bf x}_{i})-2{\cal L}({\bf x}_{i},{\bf x}_{i})=0\)
(D4) Triangle inequality (\(d({\bf x}_{i},{\bf x}_{k})\leq d({\bf x}_{i},{\bf x}_{j})+d({\bf x}_{j},{\bf x }_{k})\)): note by (S4) that \({\cal L}({\bf x}_{i},{\bf x}_{j})+{\cal L}({\bf x}_{j},{\bf x}_{k})\leq{\cal L }({\bf x}_{i},{\bf x}_{k})+{\cal L}({\bf x}_{j},{\bf x}_{j})\), and \({\cal L}({\bf x}_{k},{\bf x}_{j})+{\cal L}({\bf x}_{j},{\bf x}_{i})\leq{\cal L }({\bf x}_{k},{\bf x}_{i})+{\cal L}({\bf x}_{j},{\bf x}_{j})\). Then,
\(d({\bf x}_{i},{\bf x}_{k})={\cal L}({\bf x}_{i},{\bf x}_{i})+{\cal L}({\bf x}_ {k},{\bf x}_{k})-{\cal L}({\bf x}_{i},{\bf x}_{k})-{\cal L}({\bf x}_{k},{\bf x }_{i})\leq{\cal L}({\bf x}_{i},{\bf x}_{i})+{\cal L}({\bf x}_{k},{\bf x}_{k})- {\cal L}({\bf x}_{i},{\bf x}_{j})-{\cal L}({\bf x}_{j},{\bf x}_{k})+\)
\({\cal L}({\bf x}_{j},{\bf x}_{j})-{\cal L}({\bf x}_{j},{\bf x}_{i})-{\cal L}({ \bf x}_{k},{\bf x}_{j})+{\cal L}({\bf x}_{j},{\bf x}_{j})=d({\bf x}_{i},{\bf x }_{j})+d({\bf x}_{j},{\bf x}_{k})\)
\(\Box\)
### 4.1 KDSUM: Kernel Dissimilarity Metric for Mixed-Type Data
The pairwise similarity between two observations \({\bf x}_{i}\) and \({\bf x}_{j}\) is
\[\psi(\mathbf{x}_{i},\mathbf{x}_{j}|\boldsymbol{\lambda})=\prod_{k=1}^{p_{c}}\frac{1}{\lambda_{k}}k\left(\frac{x_{i,k}-x_{j,k}}{\lambda_{k}}\right)+\sum_{k=p_{c}+1}^{p_{c}+p_{u}}L(x_{i,k},x_{j,k},\lambda_{k})+\sum_{k=p_{c}+p_{u}+1}^{p}l(x_{i,k},x_{j,k},\lambda_{k}). \tag{11}\]
Using the positive, definite, symmetric kernel functions defined in Section 3, \(\psi(\cdot)\) satisfies the similarity properties (S1)-(S4) and is a similarity function, thus we can set \({\cal L}(\cdot):=\psi(\cdot)\). We demonstrate that the sum of two similarity functions satisfies the rules (S1)-(S4) of being a similarity function, which can be extended to any number of sums of similarity functions. The results are similar for the product of similarity functions. We also note that since our similarity functions are kernel functions, the sum and product of multiple kernel functions is also a kernel function (see Bishop, 2006, pp 296).
_Lemma 4:_ The sum of similarity functions is also a similarity function.
_Proof_: Let \(\mathcal{L}^{(n)}(x_{i},x_{j})=\mathcal{L}^{(n)}_{1}(x_{i},x_{j})+\ldots+\mathcal{L}^{(n)}_{n}(x_{i},x_{j})\) be the sum of \(n\) similarity functions. We show each of the properties (S1)-(S4) through induction or directly:
(S1): For the base case, \({\cal L}^{(2)}(x_{i},x_{j})={\cal L}^{(2)}_{1}(x_{i},x_{j})+{\cal L}^{(2)}_{2}(x_ {i},x_{j})={\cal L}^{(2)}_{1}(x_{j},x_{i})+{\cal L}^{(2)}_{2}(x_{j},x_{i})={ \cal L}^{(2)}(x_{j},x_{i})\). Then, assuming \({\cal L}^{(n)}(x_{i},x_{j})={\cal L}^{(n)}_{1}(x_{i},x_{j})+\ldots+{\cal L}^{(n )}_{n}(x_{i},x_{j})={\cal L}^{(n)}_{1}(x_{j},x_{i})+\ldots+{\cal L}^{(n)}_{n}(x _{j},x_{i})=\)
\(\mathcal{L}^{(n)}(x_{j},x_{i})\) is true, we have \(\mathcal{L}^{(n+1)}(x_{i},x_{j})=\mathcal{L}^{(n+1)}_{1}(x_{i},x_{j})+\ldots+\mathcal{L}^{(n+1)}_{n}(x_{i},x_{j})+\mathcal{L}^{(n+1)}_{n+1}(x_{i},x_{j})=\mathcal{L}^{(n+1)}_{1}(x_{j},x_{i})+\ldots+\mathcal{L}^{(n+1)}_{n}(x_{j},x_{i})+\mathcal{L}^{(n+1)}_{n+1}(x_{j},x_{i})=\mathcal{L}^{(n+1)}(x_{j},x_{i})\)
(S2): If \(x_{i}=x_{j}\), then their self-similarity values are maximum, and the similarity values between them is also maximum:
\[\mathcal{L}^{(n)}(x_{i},x_{i}) =\text{maximum self-similarity value for the sum of $n$ similarity functions}\] \[\mathcal{L}^{(n)}(x_{i},x_{j}) =\text{similarity value between $x_{i}$ and $x_{j}$ for the sum of $n$ similarity functions}\] \[\mathcal{L}^{(n)}(x_{j},x_{j}) =\text{maximum self-similarity value for the sum of $n$ similarity functions}\]
Since both \(\mathcal{L}^{(n)}(x_{i},x_{i})\) and \(\mathcal{L}^{(n)}(x_{j},x_{j})\) are maximum values, and the only way for \(\mathcal{L}^{(n)}(x_{i},x_{j})\) to be the same as both \(\mathcal{L}^{(n)}(x_{i},x_{i})\) and \(\mathcal{L}^{(n)}(x_{j},x_{j})\) is if \(x_{i}=x_{j}\). Conversely, if \(x_{i}\neq x_{j}\), then their self-similarity values are not maximum, and the similarity value between them should be less than the self-similarity values:
\[\mathcal{L}^{(n)}(x_{i},x_{i})=\text{self-similarity value of $x_{i}$ for the sum of $n$ similarity functions (not maximum)}\] \[\mathcal{L}^{(n)}(x_{i},x_{j})=\text{similarity value between $x_{i}$ and $x_{j}$ for the sum of $n$ similarity functions}\] \[\mathcal{L}^{(n)}(x_{j},x_{j})=\text{self-similarity value of $x_{j}$ for the sum of $n$ similarity functions (not maximum)}\]
Since \(\mathcal{L}^{(n)}(x_{i},x_{i})\neq\mathcal{L}^{(n)}(x_{i},x_{j})\), and \(\mathcal{L}^{(n)}(x_{j},x_{j})\neq\mathcal{L}^{(n)}(x_{i},x_{j})\), the only way for \(\mathcal{L}^{(n)}(x_{i},x_{j})\) to be the same as both \(\mathcal{L}^{(n)}(x_{i},x_{i})\) and \(\mathcal{L}^{(n)}(x_{j},x_{j})\) is if \(x_{i}=x_{j}\). In both cases, the property holds.
(S3): For the base case, since based on properties of similarity, \(\mathcal{L}^{(2)}_{1}(x_{i},x_{i})\geq\mathcal{L}^{(2)}_{1}(x_{i},x_{j})\geq 0\) and \(\mathcal{L}^{(2)}_{2}(x_{i},x_{i})\geq\mathcal{L}^{(2)}_{2}(x_{i},x_{j})\geq 0\), then \(\mathcal{L}^{(2)}_{1}(x_{i},x_{i})+\mathcal{L}^{(2)}_{2}(x_{i},x_{i})\geq \mathcal{L}^{(2)}_{1}(x_{i},x_{j})+\mathcal{L}^{(2)}_{2}(x_{i},x_{j})\geq 0+0\), and thus \(\mathcal{L}^{(2)}(x_{i},x_{i})\geq\mathcal{L}^{(2)}(x_{i},x_{j})\geq 0\).
Now, assuming \(\mathcal{L}^{(n)}(x_{i},x_{i})\geq\mathcal{L}^{(n)}(x_{i},x_{j})\geq 0\), we note \(\mathcal{L}^{(n+1)}_{k}(x_{i},x_{i})\geq\mathcal{L}^{(n+1)}_{k}(x_{i},x_{j})\geq 0\) for all \(k\in\{1,2,\ldots,n+1\}\). Adding these \(n+1\) inequalities gives \(\mathcal{L}^{(n+1)}(x_{i},x_{i})=\mathcal{L}^{(n+1)}_{1}(x_{i},x_{i})+\ldots+\mathcal{L}^{(n+1)}_{n+1}(x_{i},x_{i})\geq\mathcal{L}^{(n+1)}_{1}(x_{i},x_{j})+\ldots+\mathcal{L}^{(n+1)}_{n+1}(x_{i},x_{j})=\mathcal{L}^{(n+1)}(x_{i},x_{j})\) and \(\mathcal{L}^{(n+1)}(x_{i},x_{j})=\mathcal{L}^{(n+1)}_{1}(x_{i},x_{j})+\ldots+\mathcal{L}^{(n+1)}_{n+1}(x_{i},x_{j})\geq(n+1)\times 0=0\).
(S4): For the base case, since based on properties of similarity, \(\mathcal{L}^{(2)}_{1}(x_{i},x_{j})+\mathcal{L}^{(2)}_{1}(x_{j},x_{k})\leq\mathcal{L}^{(2)}_{1}(x_{i},x_{k})+\mathcal{L}^{(2)}_{1}(x_{j},x_{j})\), and \(\mathcal{L}^{(2)}_{2}(x_{i},x_{j})+\mathcal{L}^{(2)}_{2}(x_{j},x_{k})\leq\mathcal{L}^{(2)}_{2}(x_{i},x_{k})+\mathcal{L}^{(2)}_{2}(x_{j},x_{j})\), then \(\mathcal{L}^{(2)}_{1}(x_{i},x_{j})+\mathcal{L}^{(2)}_{1}(x_{j},x_{k})+\mathcal{L}^{(2)}_{2}(x_{i},x_{j})+\mathcal{L}^{(2)}_{2}(x_{j},x_{k})\leq\mathcal{L}^{(2)}_{1}(x_{i},x_{k})+\mathcal{L}^{(2)}_{1}(x_{j},x_{j})+\mathcal{L}^{(2)}_{2}(x_{i},x_{k})+\mathcal{L}^{(2)}_{2}(x_{j},x_{j})\), which implies \(\mathcal{L}^{(2)}(x_{i},x_{j})+\mathcal{L}^{(2)}(x_{j},x_{k})\leq\mathcal{L}^{(2)}(x_{i},x_{k})+\mathcal{L}^{(2)}(x_{j},x_{j})\).
Now, assuming \(\sum_{N=1}^{n}\mathscr{L}_{N}^{(n)}(x_{i},x_{j})+\sum_{N=1}^{n}\mathscr{L}_{N}^{(n )}(x_{j},x_{k})\leq\sum_{N=1}^{n}\mathscr{L}_{N}^{(n)}(x_{i},x_{k})+\sum_{N=1}^{ n}\mathscr{L}_{N}^{(n)}(x_{j},x_{j})\), then
\[\mathscr{L}^{(n+1)}(x_{i},x_{j})+\mathscr{L}^{(n+1)}(x_{j},x_{k})\] \[=\sum_{N=1}^{n}\mathscr{L}_{N}^{(n+1)}(x_{i},x_{j})+\mathscr{L}_{ n+1}^{(n+1)}(x_{i},x_{j})+\sum_{N=1}^{n}\mathscr{L}_{N}^{(n+1)}(x_{j},x_{k})+ \mathscr{L}_{n+1}^{(n+1)}(x_{j},x_{k})\] \[\leq\sum_{N=1}^{n}\mathscr{L}_{N}^{(n+1)}(x_{i},x_{k})+\mathscr{L }_{n+1}^{(n+1)}(x_{i},x_{k})+\sum_{N=1}^{n}\mathscr{L}_{N}^{(n+1)}(x_{j},x_{j} )+\mathscr{L}_{n+1}^{(n+1)}(x_{j},x_{j})\] \[=\mathscr{L}^{(n+1)}(x_{i},x_{k})+\mathscr{L}^{(n+1)}(x_{j},x_{j}),\]
which holds true by the inductive hypothesis
\(\sum_{N=1}^{n}\mathscr{L}_{N}^{(n+1)}(x_{i},x_{j})+\sum_{N=1}^{n}\mathscr{L}_{N}^{(n+1)}(x_{j},x_{k})\leq\sum_{N=1}^{n}\mathscr{L}_{N}^{(n+1)}(x_{i},x_{k})+\sum_{N=1}^{n}\mathscr{L}_{N}^{(n+1)}(x_{j},x_{j})\) and since \(\mathscr{L}_{n+1}^{(n+1)}(x_{i},x_{j})+\mathscr{L}_{n+1}^{(n+1)}(x_{j},x_{k})\leq\mathscr{L}_{n+1}^{(n+1)}(x_{i},x_{k})+\mathscr{L}_{n+1}^{(n+1)}(x_{j},x_{j})\). \(\Box\)
Combining the similarity properties (S1)-(S4) and adapting the kernel distance described by Phillips and Venkatasubramanian (2011) to the multivariate setting, we define the kernel distance summation (KDSUM) metric between any two data points \(\mathbf{x}_{i}\), \(\mathbf{x}_{j}\) of the dataset \(X\) as
\[d(\mathbf{x}_{i},\mathbf{x}_{j}|\boldsymbol{\lambda})=\psi(\mathbf{x}_{i}, \mathbf{x}_{i}|\boldsymbol{\lambda})+\psi(\mathbf{x}_{j},\mathbf{x}_{j}| \boldsymbol{\lambda})-2\psi(\mathbf{x}_{i},\mathbf{x}_{j}|\boldsymbol{\lambda}). \tag{12}\]
where \(d(\mathbf{x}_{i},\mathbf{x}_{j}|\boldsymbol{\lambda})=2\left(\psi(\mathbf{x}_{ i},\mathbf{x}_{i}|\boldsymbol{\lambda})-\psi(\mathbf{x}_{i},\mathbf{x}_{j}| \boldsymbol{\lambda})\right)\) as \(\psi(\mathbf{x}_{i},\mathbf{x}_{j}|\boldsymbol{\lambda})\) is symmetric. Investigating the KDSUM metric asymptotics reveals that the bandwidth selection methodology described above is a shrinkage methodology between maximized dissimilarity and a fixed quantity of uniform dissimilarity between all points. To see KDSUM as a shrinkage method, consider that when \(\mathbf{x_{i}}=\mathbf{x_{j}}\), then \(\psi(\mathbf{x}_{i},\mathbf{x}_{j}|\boldsymbol{\lambda})\) is maximized and \(d(\mathbf{x}_{i},\mathbf{x}_{j})=0\) at any value of \(\boldsymbol{\lambda}\). Consider for continuous kernel types that:
\[\lim_{\lambda_{k}\to 0}\frac{1}{\lambda_{k}}k\left(\frac{x_{i,k}-x_{j,k }}{\lambda_{k}}\right)=\begin{cases}\infty,&x_{i,k}=x_{j,k},\\ 0,&x_{i,k}\neq x_{j,k},\end{cases}\] \[\lim_{\lambda_{k}\to\infty}\frac{1}{\lambda_{k}}k\left(\frac{x_{i,k }-x_{j,k}}{\lambda_{k}}\right)=0,\,\forall\,x_{i,k},x_{j,k}.\]
where \(x_{i,k},x_{j,k}\in(-\infty,\infty)\). For the Aitken unordered kernel in Equation (3) with bandwidth support \(\lambda_{k}\in[0,1]\), we have \(L(x_{i,k},x_{j,k},0)=\mathds{1}(x_{i,k}=x_{j,k})\) and \(L\left(x_{i,k},x_{j,k},1\right)=1\). For the Wang & van Ryzin ordered kernel in Equation (4) with bandwidth support \(\lambda_{k}\in[0,1]\), we have \(l(x_{i,k},x_{j,k},0)=\mathds{1}(x_{i,k}=x_{j,k})\) and
\(l(x_{i,k},x_{j,k},1)=0\). Thus, the asymptotics for \(d(\mathbf{x}_{i},\mathbf{x}_{j}|\boldsymbol{\lambda})\) when \(x_{i,k}\neq x_{j,k}\), \(\forall\,k\) are:
\[\lim_{\boldsymbol{\lambda}\rightarrow\boldsymbol{0}}d(\mathbf{x}_{i},\mathbf{x}_{j}|\boldsymbol{\lambda})=2\left(\prod_{k=1}^{p_{c}}\infty+\sum_{k=p_{c}+1}^{p_{c}+p_{u}}1+\sum_{k=p_{c}+p_{u}+1}^{p}1-\left(\prod_{k=1}^{p_{c}}0+\sum_{k=p_{c}+1}^{p_{c}+p_{u}}0+\sum_{k=p_{c}+p_{u}+1}^{p}0\right)\right)=\infty,\]
where the distance is clearly maximized. Alternatively, when all bandwidths are large, \(\boldsymbol{\lambda}_{\infty}=\left(\infty,\ldots,\infty,\frac{\varrho_{1}-1}{\varrho_{1}},\ldots,\frac{\varrho_{p_{u}}-1}{\varrho_{p_{u}}},1,\ldots,1\right)\), we find that
\[\lim_{\boldsymbol{\lambda}\rightarrow\boldsymbol{\lambda}_{\infty}}d(\mathbf{x}_{i},\mathbf{x}_{j}|\boldsymbol{\lambda})=2\left(\prod_{k=1}^{p_{c}}0+\sum_{k=p_{c}+1}^{p_{c}+p_{u}}0+\sum_{k=p_{c}+p_{u}+1}^{p}0-\left(\prod_{k=1}^{p_{c}}0+\sum_{k=p_{c}+1}^{p_{c}+p_{u}}0+\sum_{k=p_{c}+p_{u}+1}^{p}0\right)\right)=0,\]
where all distances are equal to zero. Thus, this methodology is a shrinkage method between maximized differences and zero difference, depending on the choice of bandwidth. To select bandwidths, we employ a maximized similarity cross-validation (MSCV) approach similar to maximum likelihood cross-validation (MLCV) (Stone, 1974; Geisser, 1975). The main difference is that we replace the leave-one-out likelihood function \(\mathcal{L}_{(-i)}(\mathbf{x}_{i})\) with the leave-one-out similarity function \(\psi_{(-i)}(\mathbf{x}_{i}|\boldsymbol{\lambda})=\frac{1}{n-1}\sum_{j=1,j\neq i}^{n}\psi(\mathbf{x}_{i},\mathbf{x}_{j}|\boldsymbol{\lambda})\), yielding \(CV(\boldsymbol{\lambda})=\sum_{i=1}^{n}\ln\left(\psi_{(-i)}(\mathbf{x}_{i}|\boldsymbol{\lambda})\right).\) The advantage of this methodology is that we are minimizing the dissimilarity between data points for continuous and categorical variables simultaneously, which allows for the clustering of similar points, while smoothing out irrelevant variables not important for similarity or clustering.
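A compact, self-contained sketch of Equations (11) and (12) together with the MSCV criterion is given below. The function names, starting bandwidths, and the bounded L-BFGS-B call are our own illustrative choices (not necessarily the optimization routine used by the authors), and the data matrix is assumed to be numerically coded with continuous columns first.

```python
import numpy as np
from scipy.optimize import minimize

# Univariate kernels (Equations (1), (3), (4)); categorical levels are
# assumed to be integer-coded so that equality tests are exact.
gauss = lambda a, b, lam: np.exp(-0.5 * ((a - b) / lam) ** 2) / np.sqrt(2 * np.pi)
aitken = lambda a, b, lam: 1.0 if a == b else lam
wvr = lambda a, b, lam: (1 - lam) if a == b else 0.5 * (1 - lam) * lam ** abs(a - b)

def psi(xi, xj, lam, p_c, p_u):
    """Pairwise similarity of Equation (11)."""
    p = len(xi)
    cont = 1.0
    for k in range(p_c):
        cont *= gauss(xi[k], xj[k], lam[k]) / lam[k]
    unord = sum(aitken(xi[k], xj[k], lam[k]) for k in range(p_c, p_c + p_u))
    order = sum(wvr(xi[k], xj[k], lam[k]) for k in range(p_c + p_u, p))
    return cont + unord + order

def kdsum(xi, xj, lam, p_c, p_u):
    """KDSUM distance of Equation (12)."""
    return (psi(xi, xi, lam, p_c, p_u) + psi(xj, xj, lam, p_c, p_u)
            - 2.0 * psi(xi, xj, lam, p_c, p_u))

def neg_mscv(lam, X, p_c, p_u):
    """Negative of the MSCV objective CV(lambda) above."""
    n = len(X)
    total = 0.0
    for i in range(n):
        loo = sum(psi(X[i], X[j], lam, p_c, p_u)
                  for j in range(n) if j != i) / (n - 1)
        total += np.log(max(loo, 1e-300))     # guard against log(0)
    return -total                              # minimize the negative

def kdsum_matrix(X, p_c, p_u):
    """Cross-validated bandwidths and the n x n KDSUM dissimilarity matrix."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    lam0 = np.r_[np.ones(p_c), np.full(p - p_c, 0.5)]        # crude start
    bounds = [(1e-3, None)] * p_c + [(0.0, 1.0)] * (p - p_c)  # kernel supports
    lam = minimize(neg_mscv, lam0, args=(X, p_c, p_u),
                   bounds=bounds, method="L-BFGS-B").x
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = kdsum(X[i], X[j], lam, p_c, p_u)
    return lam, D
```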
The algorithm to calculate the KDSUM metric is:
```
1: Given a dataset \(X\), reorder the variables as continuous, unordered categorical, then ordered categorical, and ensure that each variable is cast accordingly.
2: Select symmetric kernel functions \(k,L,l\) from Equations (1)-(4), or any symmetric kernels of choice.
3: Calculate optimal bandwidths for each variable using the cross-validation procedure outlined in Section 3:
   i. Define \(\psi_{-i}(\mathbf{x}_{i}\mid\boldsymbol{\lambda})\), the leave-one-out pairwise similarity built from Equation (11), which replaces \(\hat{\mathcal{L}}_{-i}\) in Equation (6), ensuring the kernels selected in Step 2 are used consistently.
   ii. Optimize the function in Equation (6) to obtain the optimal bandwidths: remove one observation at a time and use the average similarity to the remaining observations to obtain \(\boldsymbol{\hat{\lambda}}\). Optimization is achieved through quadratic optimization subject to the bandwidth range constraints.
   iii. The obtained \(\boldsymbol{\hat{\lambda}}\) is the optimal bandwidth vector, the minimal set of parameters that maximizes the separation of observations for each relevant variable.
4: Calculate the pairwise distance between all observations \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\) using Equation (12), the kernels selected in Step 2, and the optimal bandwidths from Step 3, to obtain the \(n\times n\) dissimilarity matrix.
```
**Algorithm 1** KDSUM
Consider the following toy example of a simulated mixed-type data matrix to illustrate how distances are
smoothed through bandwidth selection:
\[X=\begin{array}{c|ccc} & p_{c_{1}} & p_{u_{1}} & p_{o_{1}}\\ \hline \mathbf{x}_{1} & 1.5 & 1 & 3\\ \mathbf{x}_{2} & 1.5 & 1 & 3\\ \mathbf{x}_{3} & 1.5 & 0 & 0\\ \mathbf{x}_{4} & 0 & 1 & 0\\ \mathbf{x}_{5} & 0 & 0 & 3\end{array}\]
The columns are the continuous (\(p_{c_{1}}\)), unordered (\(p_{u_{1}}\)), and ordered (\(p_{o_{1}}\)) variables. The observed vectors \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) are identical, and thus the distance between them is 0. The vectors \(\mathbf{x}_{3}\), \(\mathbf{x}_{4}\), and \(\mathbf{x}_{5}\) each have one value in common with both \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) while their remaining entries are 0, and the variable in common with \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) is different for each of the three vectors.
Consider three cases for the bandwidth vectors: (1) a specified small bandwidth vector \(\boldsymbol{\lambda}_{1}=[0.01,0,0]\), (2) a specified large bandwidth vector \(\boldsymbol{\lambda}_{2}=[10,1,1]\), and (3) bandwidths selected using maximum similarity cross-validation, \(\boldsymbol{\lambda}_{3}=[1.027,0.591,4.94\times 10^{-32}]\). We observe that the bandwidth for \(p_{u_{1}}\) has moved toward its upper bound, so this variable will contribute little to the overall distance based on the data, while \(p_{c_{1}}\) and \(p_{o_{1}}\) will contribute more heavily; this is reflected in the case 3 distances below.
case 1: \(d(X\mid\boldsymbol{\lambda}_{1})\)
\[\begin{array}{c|ccccc} & \mathbf{x}_{1} & \mathbf{x}_{2} & \mathbf{x}_{3} & \mathbf{x}_{4} & \mathbf{x}_{5}\\ \hline \mathbf{x}_{1} & 0 & 0 & 4.000 & 81.788 & 81.788\\ \mathbf{x}_{2} & 0 & 0 & 4.000 & 81.788 & 81.788\\ \mathbf{x}_{3} & 4.000 & 4.000 & 0 & 81.788 & 81.788\\ \mathbf{x}_{4} & 81.788 & 81.788 & 81.788 & 0 & 4.000\\ \mathbf{x}_{5} & 81.788 & 81.788 & 81.788 & 4.000 & 0\end{array}\]
case 2: \(d(X\mid\boldsymbol{\lambda}_{2})\)
\[\begin{array}{c|ccccc} & \mathbf{x}_{1} & \mathbf{x}_{2} & \mathbf{x}_{3} & \mathbf{x}_{4} & \mathbf{x}_{5}\\ \hline \mathbf{x}_{1} & 0 & 0 & 0 & 0.001 & 0.001\\ \mathbf{x}_{2} & 0 & 0 & 0 & 0.001 & 0.001\\ \mathbf{x}_{3} & 0 & 0 & 0 & 0.001 & 0.001\\ \mathbf{x}_{4} & 0.001 & 0.001 & 0.001 & 0 & 0\\ \mathbf{x}_{5} & 0.001 & 0.001 & 0.001 & 0 & 0\end{array}\]
case 3: \(d(X\mid\mathbf{\lambda}_{3})\)
\[\begin{array}{c|ccccc} & \mathbf{x}_{1} & \mathbf{x}_{2} & \mathbf{x}_{3} & \mathbf{x}_{4} & \mathbf{x}_{5}\\ \hline \mathbf{x}_{1} & 0 & 0 & 2.819 & 2.510 & 1.328\\ \mathbf{x}_{2} & 0 & 0 & 2.819 & 2.510 & 1.328\\ \mathbf{x}_{3} & 2.819 & 2.819 & 0 & 1.328 & 2.510\\ \mathbf{x}_{4} & 2.510 & 2.510 & 1.328 & 0 & 2.819\\ \mathbf{x}_{5} & 1.328 & 1.328 & 2.510 & 2.819 & 0\end{array}\]
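As a check, the case 3 matrix above can be reproduced (to within rounding of the reported bandwidths) by evaluating Equation (12) directly at \(\boldsymbol{\lambda}_{3}\); the snippet below is our own illustrative calculation, not code from the original study.

```python
import numpy as np

gauss = lambda a, b, lam: np.exp(-0.5 * ((a - b) / lam) ** 2) / np.sqrt(2 * np.pi)
aitken = lambda a, b, lam: 1.0 if a == b else lam
wvr = lambda a, b, lam: (1 - lam) if a == b else 0.5 * (1 - lam) * lam ** abs(a - b)

X = np.array([[1.5, 1, 3], [1.5, 1, 3], [1.5, 0, 0], [0, 1, 0], [0, 0, 3]], float)
lam = [1.027, 0.591, 4.94e-32]                 # reported lambda_3

def psi(a, b):
    # one continuous, one unordered, one ordered variable, as in the toy data
    return (gauss(a[0], b[0], lam[0]) / lam[0]
            + aitken(a[1], b[1], lam[1]) + wvr(a[2], b[2], lam[2]))

D = np.array([[psi(a, a) + psi(b, b) - 2 * psi(a, b) for b in X] for a in X])
print(np.round(D, 3))
```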
## 5 Study Descriptions and Results
To evaluate the performance of the KDSUM metric in comparison to established metrics for mixed-type data distance-based clustering, we analyzed simulated and real datasets of continuous, categorical, and mixed-type attributes using agglomerative hierarchical clustering techniques. We establish the performance of the KDSUM metric relative to existing metrics for handling mixed-type data and demonstrate the potential of the KDSUM metric to enhance CA. By comparing the KDSUM metric to these advanced clustering techniques, we demonstrate the flexibility of the KDSUM metric for usage in the clustering of mixed datasets.
### 5.1 Clustering algorithms
Mixed-type approaches offer a solution to the challenge of clustering datasets that contain both continuous and categorical variables. One approach involves selecting a distance metric that can handle both types of variables, and then clustering the data using methods that depend on the distance function.
#### 5.1.1 Clustering with Distance Metrics
A kernel distance metric can be utilized in any clustering algorithm that accepts a dissimilarity metric. Additionally, this metric can be adapted to centroid, medoid, or prototype-based methods. To test the KDSUM metric for clustering, we follow previous literature that uses agglomerative hierarchical clustering algorithms designed to cluster based on dissimilarity metrics (see, for example, Day & Edelsbrunner, 1984; Murtagh & Contreras, 2012; Bouguettaya et al., 2015; Sasirekha & Baby, 2013; Nielsen, 2016).
Single-linkage calculates the distance between two clusters as the shortest distance between any two points in the two clusters. Similarly, Complete-linkage (e.g., Macnaughton-Smith, 1965) calculates the distance between two clusters as the maximum distance between any two points in the two clusters.
Average-linkage (e.g., Lance & Williams, 1967), on the other hand, considers the average distance between all pairs of points in the two clusters. Ward's method (Ward, 1963) seeks to minimize the total variance within each cluster as the criterion for merging clusters. Median linkage employs the median distance between all pairs of points in the two clusters, while centroid linkage (e.g., Sokal & Michener, 1958) relies on the distance between the centroids of the two clusters.
For the simulated and empirical applications that follow, the KDSUM metric with Gaussian, Aitken, and Wang & van Ryzin kernels is used in a modified \(k\)-means clustering algorithm and in competing hierarchical clustering methods, including single-, average-, and complete-linkage along with Ward's method. Only the method with the highest CA and ARI is reported when utilizing KDSUM to compare to competing methods.
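As a usage illustration (not the authors' code), any precomputed KDSUM dissimilarity matrix can be handed directly to standard agglomerative routines; the sketch below builds a small hypothetical symmetric matrix and passes its condensed form to SciPy's linkage and fcluster.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical symmetric dissimilarity matrix standing in for the output of
# kdsum_matrix(); squareform() converts it to the condensed vector linkage expects.
rng = np.random.default_rng(0)
A = rng.random((6, 6))
D = (A + A.T) / 2.0
np.fill_diagonal(D, 0.0)

Z = linkage(squareform(D, checks=False), method="average")  # or single/complete/ward
labels = fcluster(Z, t=2, criterion="maxclust")              # cut into two clusters
print(labels)
```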
#### 5.1.2 Clustering Evaluation
When evaluating and comparing the effectiveness and accuracy of clustering and classification techniques, we use the two commonly used metrics of CA and the ARI. We chose to evaluate our clustering results using both ARI and CA to provide a comprehensive and well-rounded assessment of our proposed clustering algorithm. While CA offers a straightforward measure of correct assignments, the ARI considers both pairwise agreements and disagreements, normalized for chance. This enables us to better understand the clustering performance in scenarios where clusters may be of varying sizes and complexities, where CA results may become more difficult to interpret.
The ARI is a chance-corrected version of the Rand index (Rand, 1971), a statistic that quantifies the similarity between the true classification of the data and the classification obtained by a given method. The ARI is defined as
\[ARI=\frac{\sum_{ij}{n_{ij}\choose 2}-[\sum_{i}{a_{i}\choose 2}\sum_{j}{b_{j} \choose 2}]/{n\choose 2}}{\frac{1}{2}[\sum_{i}{a_{i}\choose 2}+\sum_{j}{b_{j} \choose 2}]-[\sum_{i}{a_{i}\choose 2}\sum_{j}{b_{j}\choose 2}]/{n\choose 2}},\]
where \(n_{ij}\) is the \((i,j)\) entry of the clustering contingency table, and \(a_{i}\), \(b_{j}\) correspond to the row sums and column sums of the contingency table, respectively. The contingency table summarizes agreement and disagreement between the true class labels and the classification class labels. The index considers the number of pairs of data points that are labelled identically in both sets and labelled differently in both sets. The ARI then adjusts for chance agreement based on the expected agreement between the two sets under a null model. The resulting ARI value is bounded above by 1, where values near 0 indicate agreement no better than chance and 1 indicates perfect agreement in classification.
CA is also used to measure the percentage of data points correctly assigned to their corresponding clusters. It is calculated by comparing the true classification labels with those generated by the clustering algorithm, defined as
\[CA(y,\hat{y})=\frac{\sum_{i=1}^{n}\mathds{1}(\hat{y}_{i}=y_{i})}{n},\]
where the indicator function \(\mathds{1}(\hat{y}_{i}=y_{i})=1\) if the class label \(\hat{y}_{i}=y_{i}\), and 0 otherwise. The CA ranges from 0 to 1, where 0 indicates that none of the data points are assigned to the correct clusters, and 1 indicates that all data points are assigned to the correct clusters.
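As a concrete illustration, the R sketch below computes the ARI from the contingency table exactly as in the formula above, and the CA as the proportion of matching labels. Note that cluster labels are arbitrary, so in practice \(\hat{y}\) must first be aligned with the true labels (e.g., by the label permutation that maximizes agreement); the sketch assumes that alignment has already been done.

```r
# ARI computed from the contingency table, following the formula above.
ari <- function(y, yhat) {
  tab <- table(y, yhat)
  n <- sum(tab)
  comb2 <- function(x) x * (x - 1) / 2          # "n choose 2" for each count
  sum_ij <- sum(comb2(tab))
  sum_a  <- sum(comb2(rowSums(tab)))
  sum_b  <- sum(comb2(colSums(tab)))
  expected <- sum_a * sum_b / comb2(n)
  (sum_ij - expected) / ((sum_a + sum_b) / 2 - expected)
}

# CA as defined above; assumes cluster labels have already been aligned
# to the true class labels.
ca <- function(y, yhat) mean(yhat == y)
```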
### Simulation Studies
This section describes the simulated data used to investigate the performance of the KDSUM metric for clustering using the methodologies described in Section 5.1. We analyze Monte Carlo simulations for each clustering algorithm using the mixed-type metrics on continuous-only, categorical-only, and mixed-type simulated datasets.
#### 5.2.1 Continuous data
The first four continuous simulated datasets were adapted from Morbieu (2018) to evaluate the ability of KDSUM to effectively handle data that exhibit linear and nonlinear clustering patterns. In all instances, the simulated data comprised two variables and two known classes. Specifically, the first dataset (Sim 1) consisted of 373 observations simulated as in Figure 1, where each Monte Carlo iteration allowed each observation to shift in four directions. The shift of each observation was drawn from a uniform distribution between \(\pm\)0.5. The second dataset (Sim 2) contained 2050 observations that were simulated using a well-defined large cluster of 2000 observations with low variance and a small cluster of 50 observations with high variance. The third dataset (Sim 3) consisted of 200 observations that were simulated with one dense spherical cluster contained inside a sparse spherical cluster, with both clusters having equal numbers of observations. Lastly, the fourth dataset (Sim 4) consisted of two equally-sized clusters that were spiralled within each other. A visualization of the four simulated continuous datasets
is presented in Figure 1.
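The exact generating code for these datasets is not reproduced in the text; the following R sketch gives an illustrative construction of the two non-linear patterns (Sim 3: a dense cluster nested inside a sparse surrounding ring; Sim 4: two interleaved spirals). All numeric parameters below are assumptions chosen for illustration only.

```r
# Illustrative construction of the two non-linear patterns; the parameter
# values below are assumptions, not the paper's exact settings.
set.seed(42)

# Sim 3 style: a dense cluster nested inside a sparse surrounding ring
core <- matrix(rnorm(100 * 2, sd = 0.3), ncol = 2)
ang  <- runif(100, 0, 2 * pi)
ring <- cbind(3 * cos(ang), 3 * sin(ang)) + matrix(rnorm(100 * 2, sd = 0.3), ncol = 2)
sim3 <- data.frame(rbind(core, ring), class = rep(1:2, each = 100))

# Sim 4 style: two equally sized spirals interleaved with each other
tt <- seq(0.5, 3 * pi, length.out = 100)
s1 <- cbind(tt * cos(tt), tt * sin(tt)) + matrix(rnorm(100 * 2, sd = 0.1), ncol = 2)
s2 <- cbind(tt * cos(tt + pi), tt * sin(tt + pi)) + matrix(rnorm(100 * 2, sd = 0.1), ncol = 2)
sim4 <- data.frame(rbind(s1, s2), class = rep(1:2, each = 100))
```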
#### 5.2.2 Categorical data
A categorical-only dataset (Sim 5) consisted of 200 observations with 5 unordered categorical variables and 3 clusters. Two of the variables were random binary noise variables. The remaining three unordered categorical variables took randomly selected integer values in the interval [0,30): the first cluster consisted only of values in [0,10) for each of the three variables, while clusters two and three consisted of values in [10,20) and [20,30), respectively. Figure 2 shows a single simulation from this dataset, in which there is some overlap in the cluster assignments.
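A minimal R sketch of this generating process is given below; the per-cluster sizes and the exact sampling scheme are assumptions, since only the value ranges are specified in the text.

```r
# Sketch of the Sim 5 generating process; the cluster sizes and sampling
# details are assumptions (only the value ranges are given in the text).
set.seed(42)
cluster <- rep(1:3, times = c(67, 67, 66))      # roughly 200 observations in 3 clusters
lo <- c(0, 10, 20)[cluster]                     # cluster-specific lower bound of the integer range

sim5 <- data.frame(
  X1 = factor(sample(0:1, length(cluster), replace = TRUE)),  # binary noise
  X2 = factor(sample(0:1, length(cluster), replace = TRUE)),  # binary noise
  X3 = factor(lo + sample(0:9, length(cluster), replace = TRUE)),
  X4 = factor(lo + sample(0:9, length(cluster), replace = TRUE)),
  X5 = factor(lo + sample(0:9, length(cluster), replace = TRUE)),
  class = cluster
)
```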
#### 5.2.3 Mixed-type data
Sim 6 was constructed with 373 observations and 5 variables, where 2 continuous variables followed the same distribution as Sim 1, with the same variation at each Monte Carlo iteration. One unordered categorical variable, \(X_{3}\), was binary and generated randomly as noise at each iteration. The remaining two unordered categorical variables ranged from 0 to 2 for the first cluster and from 3 to 4 for the second, randomly drawn at each iteration from a uniform distribution and then rounded to an integer. Cluster 1 contained 97 observations, while cluster 2 contained 276.

Figure 1: Variable distribution with respect to cluster assignment for four continuous simulated datasets. From left to right: Sim 1, Sim 2, Sim 3, Sim 4.

Figure 2: Variable distribution with respect to cluster assignment for Sim 5. \(X_{1}\) and \(X_{2}\) represent the binary noise variables, and \(X_{3}\), \(X_{4}\) and \(X_{5}\) are the meaningful categorical variables, grouped in a 10 unit interval for ease of interpretation.
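The following R sketch mimics the Sim 6 construction described above; the two continuous variables are simple placeholders rather than the exact Sim 1 distribution, and the remaining sampling details are assumptions.

```r
# Sketch of the Sim 6 construction; X1c and X2c are placeholders for the Sim 1
# continuous pattern (not reproduced here) and other details are assumptions.
set.seed(42)
cluster <- rep(1:2, times = c(97, 276))
n <- length(cluster)

X1c <- rnorm(n, mean = 2 * cluster)             # placeholder continuous variables
X2c <- rnorm(n, mean = 2 * cluster)
X3  <- factor(sample(0:1, n, replace = TRUE))   # binary noise variable

# informative unordered categorical variables: uniform on [0, 2] for cluster 1
# and on [3, 4] for cluster 2, rounded to integers
draw_cat <- function(cl) round(runif(1, min = c(0, 3)[cl], max = c(2, 4)[cl]))
X4 <- factor(vapply(cluster, draw_cat, numeric(1)))
X5 <- factor(vapply(cluster, draw_cat, numeric(1)))

sim6 <- data.frame(X1c, X2c, X3, X4, X5, class = cluster)
```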
#### 5.2.4 Results
For each of the six data generating processes described in the previous sections, we conducted 1000 Monte Carlo simulations and analyzed them using KDSUM with hierarchical clustering and average-linkage, compared to the clustering algorithms in Section 5.1.1. Distributions and average values of the CA and ARI for the Monte Carlo simulations are shown in Figure 3 and Table 2, respectively. For Sim 1, KDSUM hierarchical clustering outperformed every other method and achieved nearly perfect CA. For Sim 2, GMM had the highest CA and ARI, slightly higher than KDSUM hierarchical clustering; this is likely because the two clusters are circular with some overlap, a setting for which a Gaussian mixture model is a correctly specified clustering algorithm. For Sim 3 and Sim 4, KDSUM clusters well, demonstrating that the KDSUM metric can recover non-linear cluster structure. For Sim 5 and Sim 6, KDSUM hierarchical clustering attains high CA and ARI despite the presence of categorical and continuous noise variables. We note for Sim 6 that Gower's distance with hierarchical clustering combines Euclidean distance with a simple matching coefficient, weighting the two components in the distance calculation. Gower's distance consequently tends to place more emphasis on the categorical variables, and it therefore performs well here because there is little overlap between the clusters in the categorical variables. The bandwidths selected via MSCV for KDSUM were very small, so the KDSUM metric behaved similarly to Gower's distance in the metric calculation.
### Bandwidth Grid Search
In this section, we examine the effect of bandwidth selection on clustering with the KDSUM metric. The choice of bandwidths affects the distance metric calculation, and we show that MSCV selects ideal bandwidths that are small for variables relevant to clustering and large for variables that are irrelevant. Clustering for this section was conducted using agglomerative hierarchical clustering with single-linkage. To examine the influence of bandwidth selection on clustering through CA and ARI, we analyze simulated continuous-only, categorical-only, and mixed-type data.
\begin{table}
\begin{tabular}{c c c c|c c c c} \hline \hline Data & Model & CA & ARI & Data & Model & CA & ARI \\ \hline \hline Sim 1 & KDSUM (Average) & 0.990 & 0.960 & Sim 4 & KDSUM (Average) & 0.983 & 0.962 \\ (cont.) & HC-E (Average) & 0.880 & 0.604 & & HC-E (Average) & 0.580 & 0.033 \\ & PAM-E & 0.752 & 0.253 & (cont.) & PAM-E & 0.599 & 0.038 \\ & \(k-\)means & 0.780 & 0.312 & & \(k-\)means & 0.599 & 0.038 \\ & GMM & 0.577 & 0.038 & & GMM & 0.592 & 0.035 \\ \hline Sim 2 & KDSUM (Average) & 0.989 & 0.929 & Sim 5 & KDSUM (Average) & 0.938 & 0.860 \\ (cont.) & HC-E (Average) & 0.998 & 0.974 & & HC-G (Average) & 0.594 & 0.311 \\ & PAM-E & 0.817 & 0.543 & (cat.) & PAM-G & 0.648 & 0.230 \\ & \(k-\)means & 0.997 & 0.963 & & \(k-\)modes & 0.439 & 0.034 \\ & GMM & 0.999 & 0.989 & & ROCK & 0.787 & 0.498 \\ \hline Sim 3 & KDSUM (Average) & 0.913 & 0.811 & Sim 6 & KDSUM (Average) & 1.000 & 1.000 \\ (cont.) & HC-E (Average) & 0.820 & 0.444 & & HC-G (Average) & 1.000 & 1.000 \\ & PAM-E & 0.869 & 0.545 & (mix.) & PAM-G & 0.523 & 0.000 \\ & \(k-\)means & 0.877 & 0.569 & & \(k-\)proto & 0.828 & 0.459 \\ & GMM & 0.854 & 0.507 & & clustMD & 0.802 & 0.363 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Classification results on Monte Carlo simulated data. The KDSUM metric with agglomerative hierarchical clustering was compared to Partitioning around Medoids (PAM-E / PAM-G), Euclidean or Gower’s distance with hierarchical clustering (HC-E / HC-G), \(k-\)means, \(k-\)modes, \(k-\)prototypes (\(k-\)proto), Gaussian Mixture Model (GMM), ROCK, and clustMD. For hierarchical clustering, the method is reported in brackets. The average ARI and CA of 1000 Monte Carlo simulations is reported for each clustering algorithm.
Figure 3: Boxplots of ARI and CA for KDSUM with hierarchical clustering and average-linkage, compared against competing clustering algorithms for simulated continuous, categorical, and mixed-type data.
#### 5.3.1 Continuous data
For the continuous-only data, we simulated two continuous variables and five distinct clusters. The cluster centres are selected at random from a two-dimensional uniform distribution spanning \([0,12]\times[0,12]\), with each cluster populated through a multivariate normal distribution and cluster sample sizes selected randomly from 50 to 200. A visual representation of the data is shown in the left panel of Figure 4. A systematic grid search was conducted, spanning bandwidth values from 0 to 10 in increments of 0.05; results from clustering using the modified \(k-\)means for distance matrices algorithm are shown in the right two panels of Figure 4. The MSCV-selected bandwidth values are \((0.443,0.483)\) for \(X_{1}\) and \(X_{2}\), respectively. The performance metrics remain high at elevated bandwidths; however, the two bandwidths must be balanced to allow KDSUM to accurately measure dissimilarity and improve the accuracy of the underlying clustering algorithm. This plot demonstrates that MSCV preferentially selects the smallest bandwidths for significant variables.
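The sketch below reproduces the scaffolding of such a grid search in R. The dissimilarity function here is a toy Gaussian-kernel construction that only mimics the flavour of KDSUM and is not the paper's definition; average-linkage hierarchical clustering stands in for the modified \(k\)-means for distance matrices, and the cluster sizes and bandwidth grid are smaller and coarser than in the paper, purely to keep the example fast.

```r
# Grid-search scaffolding only: toy_kernel_dist() is a Gaussian-kernel stand-in
# and is NOT the KDSUM metric; the setup is deliberately small and coarse.
set.seed(42)
centers <- matrix(runif(5 * 2, 0, 12), ncol = 2)
sizes   <- sample(30:60, 5, replace = TRUE)
true_class <- rep(1:5, times = sizes)
X <- centers[true_class, ] + matrix(rnorm(sum(sizes) * 2), ncol = 2)

toy_kernel_dist <- function(X, bw) {
  D <- matrix(0, nrow(X), nrow(X))
  for (j in seq_len(ncol(X))) {
    d <- outer(X[, j], X[, j], "-")
    D <- D + (dnorm(0, sd = bw[j]) - dnorm(d, sd = bw[j]))   # per-variable similarity deficit
  }
  D
}

grid <- seq(0.05, 10, by = 0.5)                 # coarser than the paper's 0.05 increments
results <- expand.grid(h1 = grid, h2 = grid)
results$ARI <- NA_real_
for (r in seq_len(nrow(results))) {
  D  <- toy_kernel_dist(X, bw = c(results$h1[r], results$h2[r]))
  hc <- hclust(as.dist(D), method = "average")
  results$ARI[r] <- mclust::adjustedRandIndex(true_class, cutree(hc, k = 5))  # needs the mclust package
}
results[which.max(results$ARI), ]               # bandwidth pair with the best ARI
```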
#### 5.3.2 Categorical data
Next, we investigate a simulated categorical-only dataset. The data are generated with two distinct clusters based on a binary noise term (\(X_{1}\)) and two categorical variables (\(X_{2}\) and \(X_{3}\)). The first cluster is generated with random integers 1 through 5 and the second with random integers 5 through 10; apart from the noise term, the overlap in these integer ranges allows for cluster overlap. The two clusters follow a 1:1 ratio, with 75 observations per cluster. All variables are treated as unordered categorical. A visual representation of this data is shown in the top three panels of Figure 5.
Figure 4: The left panel is the data generating process of two continuous variables \(X_{1}\) and \(X_{2}\), with \(k=5\) clusters. The middle and right panels depict the CA and ARI, respectively, for the continuous bandwidth grid search, where increments of bandwidths were 0.05 in the range \([0,10]\) for both variables. The red dot on each panel indicates the optimal bandwidth selected via MSCV for the KDSUM metric.
The resultant grid search entails increments of \(0.05\) for each of the three variables in the range \([0,\frac{3}{4}]\), resulting in \(4,096\) permutations. The results, presented in the bottom panel of Figure 5, show that clustering performance is best when small bandwidths are used for relevant variables and large bandwidths for irrelevant variables. The noise variable yields poor performance (small ARI) when its bandwidth is small, and its optimal bandwidth is a large value that effectively smooths the noise variable out of the distance calculations, rendering it irrelevant to the clustering. The important variables for clustering are the two categorical variables, for which MSCV selects small bandwidths, yielding the highest ARI of any set of bandwidths.
#### 5.3.3 Mixed-type data
Figure 5: The upper three panels are the data generating process with \(k=2\) clusters, with a binary noise term (\(X_{1}\)) and two unordered categorical variables (\(X_{2},X_{3}\)). The bottom plot is a parallel coordinates plot for the unordered categorical bandwidth grid search, where bandwidth increments were \(0.05\) in the range \([0,0.75]\) for all variables, coloured by the ARI for each possible combination. The red lines indicate the optimal bandwidth determined using maximum-likelihood cross-validation with the KDSUM metric.

We extend this simulation to a mixed-type dataset that combines elements of the previous two: 200 observations consisting of one continuous variable generated from a normal mixture model and two categorical variables (one unordered and one ordered) generated from a multinomial mixture model. The cluster overlap is set to \(5\%\) for the continuous variable and \(35\%\) for the two categorical variables. The 200 observations are partitioned in a \(40:60\) ratio between the two clusters. A visual representation of this data is shown in the top three panels of Figure 6.
The resultant grid search entails varied bandwidth assignments: the continuous variable \(X_{1}\) has its bandwidth incremented in steps of 0.05 over the interval \([0,10]\), the unordered categorical variable \(X_{2}\) ranges over \([0,\frac{3}{4}]\) in increments of 0.05, and the ordered categorical variable \(X_{3}\) over \([0,1]\), also in increments of 0.05, resulting in \(67,536\) permutations. Clustering for this simulation was completed with the modified \(k-\)means algorithm for distance matrices, and the results are presented in the bottom two panels of Figure 6. The optimal set of bandwidths selected by MSCV shows that each variable is relevant to the clustering algorithm, and large bandwidths lead to suboptimal ARI in clustering.
### Effect of Sample Size
Monte Carlo simulations were conducted to assess the performance of the KDSUM metric with the modified \(k\)-means algorithm for distance matrices as the cluster size varies. This setting encompasses five clusters generated from a multivariate normal distribution with fixed cluster centres spanning \([0,10]\); the same centres were maintained for all simulations. 500 simulations were conducted with sample sizes of 10, 25, 50, 100, 200, 500, and 1000 observations per cluster, with single simulation examples shown in the upper panels of Figure 7. We note that our implementation of the KDSUM metric methodology is approximately 10 times slower than typical mixed-type metrics; however, with a more optimized implementation, the execution time could be reduced.

Figure 6: The upper three panels are the data generating process with \(k=2\) clusters, with one continuous (\(X_{1}\)), one unordered categorical (\(X_{2}\)), and one ordered categorical (\(X_{3}\)) variable. Each of the two categorical variables has four levels. The parallel coordinates plots depicted below are the regular and standardized versions of the bandwidth values for each variable, for ease of interpretation. The red lines indicate the regular and standardized versions of the optimal bandwidth determined using maximum-likelihood cross-validation in conjunction with the KDSUM method.
ARI, CA, and execution time were calculated for each iteration at sample sizes of 10 to 1000 observations per cluster and are shown in Figure 7. The variability of CA decreases as the sample size increases, while the median value increases up to 500 observations. At 1000 observations, the median value decreases due to the increasing number of observations falling in the overlap between clusters. A similar pattern is seen in the ARI results. The execution time increases rapidly with sample size; the median execution time for the simulations with 1000 observations per cluster is approximately 390 seconds, or 6.5 minutes.
Figure 7: The top row of seven panels shows single simulations out of the 500 Monte Carlo simulations with five cluster centres, each drawn with between 10 and 1000 observations per cluster. The bottom left panel is a boxplot of the clustering accuracies. The bottom centre panel is a boxplot of the ARIs associated with each Monte Carlo simulation at each sample size. The bottom right panel is a boxplot of the execution time (in seconds) of each Monte Carlo replication at each sample size.

Having demonstrated the performance of this algorithm for varying sample sizes on continuous-only data, we conducted a simulation study on mixed-type simulated data. The continuous variables are drawn from a normal mixture model, whereas the categorical variables follow a multinomial mixture model. The overlap between two clusters for a continuous variable is the area of the overlapping region defined by their densities, and for a categorical variable it is the summed height of the overlapping segments defined by their point masses. The overlap for all variables is set to 20%, and the two cluster sizes follow a \(1:1\) ratio. The sample sizes span 25, 50, 100, 200, 500, and 1000 observations, and 500 Monte Carlo simulations were executed for each cluster size. The empirical marginal distributions of the variables for single simulations are shown in the upper panels of Figure 8.
The results of clustering the mixed-type Monte Carlo simulations are shown in the lower panels of Figure 8. The variability of CA and ARI decreases as the sample size increases, showing that the additional information improves the accuracy of clustering with KDSUM. As with the continuous-only data, there is a slight decrease in the median accuracy and ARI at the larger sample sizes, caused by hard partitioning of the overlapping clusters. Further, the execution time grows rapidly as the sample size increases; the total time for 1000 observations per cluster is approximately 225 seconds, or 3.75 minutes. This execution time is smaller than for the continuous-only data because there are only two clusters for the mixed-type data as opposed to five for the continuous-only data, so the continuous-only simulations contain approximately 2.5 times as many observations.
Figure 8: The top row of four panels are the marginal distributions of a single mixed-type data simulation out of the 500 Monte Carlo Simulations. The bottom left panel is a boxplot of the clustering accuracies. The bottom centre panel is a boxplot of the ARIs associated with each Monte Carlo simulation at each sample size. The bottom right panel is a boxplot of the execution of time (in seconds) of each Monte Carlo replication at each sample size.
## 6 Real Data Analysis
This study utilized a diverse range of data, including continuous, categorical, and mixed-type datasets, to evaluate the KDSUM metric for clustering algorithms. Unless otherwise noted, all datasets used in the study are publicly available through the UCI Machine Learning Repository (Dua & Graff, 2017). For each dataset, we removed any observation vectors containing at least one _NA_ value, as many of the clustering methods used (including KDSUM hierarchical) are not designed for missing data.
Continuous-only datasets include the Body dataset, with 24 continuous variables, 507 observations, and 2 classes, from the R package gclus (Hurley, 2019). Two additional continuous-only datasets are the Wine dataset, with 178 observations of 13 continuous variables and three classes, and the Iris dataset, with 150 observations of four continuous variables and a classification column containing three distinct classes.
Categorical-only datasets include the Soybean dataset (Soybean L.), which contains 307 observations with a mix of 35 ordered and unordered categorical variables and 18 classes, as well as its smaller version (Soybean S.), consisting of 45 observations and four classes. The categorical Breast Cancer (Breast) dataset, with nine ordered categorical variables and two classes, and the Congressional Vote dataset (Vote), with 435 observations, 15 variables, and two classes, were also used. Additionally, we use the Zoo dataset, with 101 observations consisting of twelve binary variables and one ordered categorical variable.
Mixed-type datasets used include the Australian Credit (Credit) and Auto MPG (Auto) datasets, which have a mix of continuous and categorical variables and two classes each. The Credit dataset consists of 690 observations and 14 variables, 8 of which are treated as unordered categorical and 6 as continuous. The Auto dataset consists of 398 observations and 7 variables (after the car name variable was removed), 1 of which is treated as unordered categorical, 1 as ordered categorical, and 5 as continuous. For the Auto MPG dataset, the predicted class was a continuous variable (miles per gallon) that was partitioned into 2 distinct classes with an approximately even dispersion of observations, namely miles per gallon \(<\) 22 and \(\geq\) 22.
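As an illustration of the preprocessing described above, the R sketch below drops incomplete observations and binarizes the Auto MPG response; the file name and column names are assumptions made for the example.

```r
# Preprocessing sketch for the Auto MPG data; file name and column names
# are illustrative assumptions.
auto <- read.csv("auto-mpg.csv", na.strings = c("NA", "?"))

auto <- auto[complete.cases(auto), ]            # drop observations with any missing value
auto$car.name <- NULL                           # remove the car name variable

# binarize the response: miles per gallon < 22 versus >= 22
auto$class <- factor(ifelse(auto$mpg < 22, "low", "high"))

# declare variable types before computing distances
auto$origin    <- factor(auto$origin)                     # unordered categorical
auto$cylinders <- factor(auto$cylinders, ordered = TRUE)  # ordered categorical
```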
### Results
We present the results for ten real datasets in Table 3. The results of the experiments demonstrate the improvements offered by the KDSUM metric in terms of CA and ARI. For the Body dataset, the Gaussian mixture model outperformed KDSUM by approximately 4.3% in terms of CA and 0.159 in terms of ARI, but KDSUM outperformed the remaining three methods by at least 6.3% in terms of CA and 0.204 for ARI. For the Wine dataset, KDSUM tied GMM for the highest CA and ARI and outperformed all other methods by at least 5.1% for CA and 0.139 for ARI. For the Iris dataset, KDSUM did comparably well to PAM and \(k-\)means but was outperformed by the two other methods, which we discuss below. For the Soybean small dataset, all methods performed equally well, with the exception of PAM with Gower's distance, which did slightly worse. For the large version of the Soybean dataset, KDSUM outperformed all other methods by at least 9.9% in terms of CA and 0.201 for ARI. For the Zoo dataset, KDSUM outperformed all others by at least 3.0% for CA and 0.082 for ARI, while for the Breast dataset, KDSUM performed just slightly worse than hierarchical clustering with Gower's distance and outperformed the other three methods by at least 0.2% for CA and 0.011 for ARI. For the Vote dataset, KDSUM outperformed all other methods by at least 0.4% for CA and 0.019 for ARI, while for the Auto dataset, KDSUM outperformed all other methods by at least 0.2% for CA and 0.008 for ARI. Lastly, for the Credit dataset, KDSUM outperformed all other methods by at least 2.3% for CA and 0.056 for ARI.
For the Iris dataset, the KDSUM method did not perform as well as the competing methods. It is worth mentioning that if we only consider one variable (petal width), KDSUM (Ward) achieved CA and ARI of 0.960 and 0.886, respectively, the highest values obtained from any combination of variables in this dataset. While the well-separated setosa species was correctly identified, the overlapping nature of the remaining two species, versicolor and virginica, led the KDSUM method to misclassify more observations than the other methods. Improving the effectiveness of the KDSUM metric for handling overlapping clusters is an active area of consideration.
## 7 Conclusion
In this study, we proposed a novel kernel distance metric for effectively handling mixed-type data. Specifically, we developed a metric based on using kernel functions as similarity functions, where we proved that the Gaussian, Aitken and Wang & van Ryzin kernels are similarity functions. To ensure the viability of our KDSUM metric, we rigorously proved that it satisfies all necessary properties of a distance metric, including non-negativity, symmetry, the triangle inequality, and the identity of indiscernibles.
\begin{table}
\begin{tabular}{c c c|c c|c c c} \hline \hline Data & Model & CA & ARI & Data & Model & CA & ARI \\ \hline \hline Body & KDSUM (Average) & 0.935 & 0.756 & Zoo & KDSUM (Complete) & 0.921 & 0.940 \\ (cont.) & PAM-E & 0.872 & 0.552 & (cat.) & PAM-G & 0.813 & 0.662 \\ & \(k-\)means & 0.864 & 0.529 & & \(k-\)modes & 0.800 & 0.647 \\ & HC-E (Ward) & 0.868 & 0.540 & & HC-G (Complete) & 0.881 & 0.847 \\ & GMM & 0.978 & 0.915 & & ROCK & 0.891 & 0.858 \\ \hline Wine & KDSUM (Ward) & 0.978 & 0.929 & Breast & KDSUM (Ward) & 0.957 & 0.836 \\ (cont.) & PAM-E & 0.708 & 0.371 & (cat.) & PAM-G & 0.955 & 0.825 \\ & \(k-\)means & 0.702 & 0.377 & & \(k-\)modes & 0.933 & 0.745 \\ & HC-E (Ward) & 0.927 & 0.790 & & HC-G (Ward) & 0.968 & 0.874 \\ & GMM & 0.978 & 0.929 & & ROCK & 0.686 & 0.041 \\ \hline Iris & KDSUM (Ward) & 0.887 & 0.718 & Vote & KDSUM (Ward) & 0.914 & 0.684 \\ (cont.) & PAM-E & 0.893 & 0.730 & (cat.) & PAM-G & 0.867 & 0.535 \\ & \(k-\)means & 0.893 & 0.730 & & \(k-\)modes & 0.867 & 0.535 \\ & HC-E (Ward) & 0.907 & 0.759 & & HC-G (Average) & 0.910 & 0.665 \\ & GMM & 0.967 & 0.904 & & ROCK & 0.828 & 0.504 \\ \hline Soybean S. & KDSUM (Average) & 1.000 & 1.000 & Auto & KDSUM (Average) & 0.913 & 0.682 \\ (cat.) & PAM-G & 0.936 & 0.820 & (mix.) & PAM-G & 0.829 & 0.431 \\ & \(k-\)modes & 1.000 & 1.000 & & \(k-\)Proto & 0.888 & 0.600 \\ & HC-G (Complete) & 1.000 & 1.000 & & HC-G (Average) & 0.911 & 0.674 \\ & ROCK & 1.000 & 1.000 & & clustMD & 0.880 & 0.557 \\ \hline Soybean L. & KDSUM (Ward) & 0.792 & 0.577 & Credit & KDSUM (Ward) & 0.817 & 0.401 \\ (cat.) & PAM-G & 0.693 & 0.376 & (mix.) & PAM-G & 0.794 & 0.345 \\ & \(k-\)modes & 0.673 & 0.320 & & \(k-\)Proto & 0.793 & 0.342 \\ & HC-G (Ward) & 0.628 & 0.315 & & HC-G (Ward) & 0.746 & 0.241 \\ & ROCK & 0.679 & 0.327 & & clustMD & 0.564 & 0.004 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Classification results on real data. The KDSUM metric with agglomerative hierarchical clustering was compared to Euclidean or Gower’s distance with hierarchical clustering (HC-E / HC-G) and Partitioning around Medoids (PAM-E / PAM-G), \(k-\)means, \(k-\)modes, \(k-\)prototypes (\(k-\)proto), Gaussian Mixture Model (GMM), ROCK, and clustMD. For hierarchical clustering, the linkage that provides the best results is reported in parentheses.
In doing so, we established the theoretical foundation for our KDSUM metric as a shrinkage methodology and demonstrated its potential for accurately capturing the distances between mixed-type data points.
We conducted extensive experiments on both simulated and real data to evaluate the effectiveness of our KDSUM metric compared to existing mixed-type data metrics and state-of-the-art clustering algorithms designed to handle mixed-type data. Using agglomerative hierarchical clustering techniques, we assessed the performance of our KDSUM metric in terms of CA and the ARI. The KDSUM metric in hierarchical clustering outperformed existing mixed-type data metrics and achieved competitive results compared to state-of-the-art clustering algorithms. Although most existing metrics employ an additive structure for each variable type, similar to the KDSUM method, none of the methods analyzed utilize kernels or kernel smoothing techniques to eliminate irrelevant variables for clustering. Instead, they rely either on parametric approaches, which require data transformations through importance weighting of the categorical variables, controlled directly by the user or estimated using optimization techniques, or on nonparametric approaches, which fail to adapt to the underlying data generating process in the metric calculation or clustering approach.
Early versions of this methodology used maximum-likelihood cross-validated bandwidths from a mixed-type joint kernel density estimate (Hall, Racine, and Li, 2004). This approach demonstrated good results, but further investigation showed that replacing the likelihood function with a similarity function yielded additional improvements. Thus, the results of the likelihood optimization approach were not included.
This paper demonstrates the first steps towards a generalized distance for mixed-type data. Some improvements to the methodology are possible. We have calculated similarities and distances orthogonally to the clustering algorithm, and an investigation of a wider range of bandwidth selection procedures for various distance-based clustering algorithms is future work. While agglomerative hierarchical clustering was preferred for this study, a new or existing algorithm that incorporates kernels into the clustering loss function may further enhance the classification and clustering of mixed-type data with a kernel distance metric. A detailed analysis of clustering algorithms that require dissimilarity matrices as input, and a determination of the optimal clustering algorithm to pair with kernel distance metrics, is also future work. Moreover, we identified several promising directions for future research, including investigating the effects of using various continuous and categorical kernels on the kernel metric calculations, developing numerical methods for determining the optimal number of clusters, and using kernel metrics for fuzzy clustering algorithms. By exploring these research directions, we can further assess the applicability and effectiveness of our KDSUM method for clustering mixed-type data.
## Data Availability
The datasets analyzed during the current study are publicly available in the UCI Learning Repository.
## Code Availability
All code is available upon request from the contact author. The software used for this research is described in Appendix A.
## Conflict of Interest
The authors declare they have no conflict of interest.
|
2306.07239 | Nonparametric empirical Bayes biomarker imputation and estimation | Biomarkers are often measured in bulk to diagnose patients, monitor patient
conditions, and research novel drug pathways. The measurement of these
biomarkers often suffers from detection limits that result in missing and
untrustworthy measurements. Frequently, missing biomarkers are imputed so that
down-stream analysis can be conducted with modern statistical methods that
cannot normally handle data subject to informative censoring. This work
develops an empirical Bayes $g$-modeling method for imputing and denoising
biomarker measurements. We establish superior estimation properties compared to
popular methods in simulations and demonstrate the utility of the estimated
biomarker measurements for down-stream analysis. | Alton Barbehenn, Sihai Dave Zhao | 2023-06-12T17:01:19Z | http://arxiv.org/abs/2306.07239v1 | # Nonparametric Empirical Bayes Biomarker Imputation and Estimation
###### Abstract
Biomarkers are often measured in bulk to diagnose patients, monitor patient conditions, and research novel drug pathways. The measurement of these biomarkers often suffers from detection limits that result in missing and untrustworthy measurements. Frequently, missing biomarkers are imputed so that down-stream analysis can be conducted with modern statistical methods that cannot normally handle data subject to informative censoring. This work develops an empirical Bayes \(g\)-modeling method for imputing and denoising biomarker measurements. We establish superior estimation properties compared to popular methods in simulations and demonstrate the utility of the estimated biomarker measurements for down-stream analysis.
## 1 Introduction
The measurement of biomarkers is a fundamental task in many modern clinical and biomedical studies. Biomarkers are measurable indicators of biological or pathological processes that can be used to provide important insights into disease diagnosis, monitoring, and treatment. However, the measurement of biomarkers is not without challenges. Medical studies often have small sample sizes until a phenomenon is well understood, so efficient data use is essential [16]. Beyond the usual measurement errors, limitations in laboratory collection and measurement procedures can result in detection limits for biomarker measurement. Detection limits often manifest as left-censoring, right-censoring, and, in cases such as rounding, interval-censoring. For example, left-censoring may occur when it is impossible to determine whether a biomarker is present in a small concentration or simply absent. Detection limits produce data that are missing not at random; such forms of missingness are non-ignorable, and failing to properly handle the missingness can introduce bias in statistical procedures [39, 37, 15]. Properly accounting for detection limits is important in many applications, such as the measurement of IL-6 and IL-10 cytokines for sepsis [16, 21] or CD4\({}^{+}\) T-lymphocytes for human immunodeficiency virus [26, 25].
Much of the work handling biomarker measurements suffering from detection limits can be classified as either directly estimating the missing biomarker or modifying the analysis to account for missing biomarker measurements. These approaches can often be thought of as regression problems that either treat the measured biomarker as a censored response
or predictor, respectively [21]. In this work, we focus on directly estimating the missing biomarkers so that complex downstream analysis can be easily conducted using modern machine learning and data mining methods. Because the data are missing not at random, the usual approach of only utilizing the observed data is not viable [1]; instead, we require an explicit model of the missingness mechanism.
Popular methods for estimating missing biomarker measurements span a wide range of complexities. The most basic approaches to handle low detection limits are the so-called "fill-in" methods. These methods estimate the missing measurement with some constant function of the detection limit based on the distribution of the censored tail [21]. For example, if a non-negative concentration falls below a limit-of-detection (LOD), it may be estimated as \(LOD\), \(LOD/2\), or \(LOD/\sqrt{2}\). These methods are easy to implement but ignore the relationships between biomarkers and lack variability that may be crucial for later analysis. Regression based approaches offer a natural extension to the fill-in methods; rather than relying on the censoring mechanism alone, these methods use every measurement of a biomarker to model the distribution of values [25]. Covariates, either demographic or fully observed biomarkers, can be included in the regression model to account for additional variability in the data [23]. Once the regression model is fit, samples can be conditionally drawn to recreate the full data variability [25, 23, 40]. Nearest neighbor methods offer a nonparametric regression alternative for estimating the missing biomarkers [33]. Once the measurements are standardized, the nearest neighbors can be computed as nearest biomarkers or nearest patients. Nearest patients is generally preferable for biomarker estimation because it can capture complex relationships between many biomarkers. Unfortunately, by construction, nearest neighbor methods cannot impute biomarkers whose values lie outside the observed range. Many other nonparametric methods such as random forests [36] and singular value decomposition [12] have been proposed, but they often struggle in the missing not at random setting that we are studying [20, 41]. Many of these methods modify a likelihood to handle the informative censoring. If necessary, a modified Box-Cox transformation can be employed to ensure that the data have nearly a Gaussian distribution, subject to any detection limits, before using any imputation method that builds on the Gaussian model [10].
We are motivated by applications where many biomarkers are measured simultaneously so that their combination can be used to diagnose and monitor one or more conditions [8, 42]. These data are often acquired with tools such as mass spectroscopy [37] or flow cytometry [27]. In these cases, the relationships between biomarkers can be leveraged to estimate missing measurements [33]; however, the introduction of additional censored biomarkers increases the difficulty of the estimation problem.
In this paper we propose addressing these difficulties by developing a nonparametric empirical Bayes method. Empirical Bayes methods estimate the Bayes optimal regression function for denoising biomarkers and, in doing so, provide a very powerful tool for simultaneous estimation problems [7, 14, 35]. The empirical Bayes approach assumes that the true biomarker values are drawn independently from some unknown prior, \(g\), and the corresponding observations are drawn from a known likelihood [31]. Under this Bayesian model, the posterior mean is usually used as the estimate for each biomarker, and the parameters required to compute the posterior mean are estimated from the observed marginal distribution [14, 6]. There are, of course, many other ways to regularize models to improve estimation, such as ridge and LASSO penalties [11]; however, we prefer empirical Bayes methods because they
are tuning-parameter free [18], easy to implement, and have strong theoretical guarantees [14, 35, 30, 34]. We note that empirical Bayes can be seen as a self-supervised regression problem [3]; as such, it bridges the conceptual gap between treating the biomarker as a response and as a predictor in the regression problems.
In this work we follow the nonparametric empirical Bayes \(g\)-modeling framework [18, 5]. This approach assumes no structure on the prior, \(g\), and produces an estimated prior, \(\hat{g}\), using nonparametric maximum marginal likelihood estimation [17]. The posterior mean is estimated using \(\hat{g}\) and the known likelihood. In cases where there is no corresponding biomarker measurement, for example when there is censoring due to a detection limit, we can still compute the posterior mean given that the biomarker measurement fell within a specific range.
Our key insight is that because popular biomarker estimation methods have established likelihoods for censored biomarker measurements [25, 10, 24], nonparametric empirical Bayes methods can be directly employed to improve the simultaneous estimation of biomarkers without requiring additional domain knowledge or tuning. Using the nonparametric empirical Bayes \(g\)-modeling formulation, we show superior estimation and imputation performance, compared to popular methods, in simulations based on real data. We provide an open-source R package ebTobit ([https://github.com/barbehenna/ebTobit](https://github.com/barbehenna/ebTobit)) for implementing our proposed methods.
## 2 Empirical Bayes Matrix Imputation
### Methodology
We are interested in estimating and imputing \(p\) biomarkers from each of \(n\) patients. Here, true biomarker values of patient \(i\) are denoted as independent samples \((\theta_{i1},\ldots\theta_{ip})\sim g\) on \(\mathbb{R}^{p}\). Assume we observe intervals \([L_{ij},R_{ij}]\) for each patient \(i=1,\ldots,n\) and biomarker \(j=1,\ldots,p\). When \(L_{ij}=R_{ij}\), a noisy observation of the \(\theta_{ij}\) is directly measured; we assume that the error is normally distributed so that the contribution to the likelihood is \(\phi_{\sigma_{ij}}(L_{ij}-\theta_{ij})\), where \(\phi_{\sigma}(\cdot)\) denotes the Gaussian density function with variance \(\sigma^{2}>0\). When \(L_{ij}<R_{ij}\), the observation is interval censored and the contribution to the likelihood is \(\Phi_{\sigma_{ij}}(R_{ij}-\theta_{ij})-\Phi_{\sigma_{ij}}(L_{ij}-\theta_{ij})\), where \(\Phi_{\sigma}(\cdot)\) denotes the Gaussian distribution function with variance \(\sigma^{2}>0\). For example, if a biomarker's concentration falls below a lower limit-of-detection (\(LOD\)), a direct measurement is not possible; however, because concentrations are non-negative, we observe the interval \([0,LOD]\). If a biomarker is successfully measured, \([L_{ij},R_{ij}]\) contains a single noisy point estimate of \(\theta_{ij}\). This data structure is sometimes referred to as general partly interval-censored data [13]. For most of our methodology we focus on the case where \(\sigma_{ij}^{2}\) are known; however, methods allowing for the joint estimation of \(\theta_{ij}\) and \(\sigma_{ij}^{2}\) are discussed below. We represent the full set of observations, in matrix form,
as:
\[\mathbf{L}=\begin{bmatrix}L_{11}&\ldots&L_{1p}\\ L_{21}&\ldots&L_{2p}\\ \vdots&\ddots&\vdots\\ L_{n1}&\ldots&L_{np}\end{bmatrix}\qquad\text{and}\qquad\mathbf{R}=\begin{bmatrix} R_{11}&\ldots&R_{1p}\\ R_{21}&\ldots&R_{2p}\\ \vdots&\ddots&\vdots\\ R_{n1}&\ldots&R_{np}\end{bmatrix}.\]
We use the notation \(L_{i\cdot}\) and \(L_{\cdot j}\) to denote the row vector \((L_{i1},\ldots,L_{ip})\) and the column vector \((L_{1j},\ldots,L_{nj})\), respectively.
Under our Bayesian model, a natural estimator of \(\theta_{i\cdot}\) is the posterior mean \(E(\theta_{i\cdot}\mid L_{i\cdot},R_{i\cdot})\). Observe that the posterior mean is given by
\[E(\theta_{i\cdot}\mid L_{i\cdot},R_{i\cdot})=\frac{\int_{\mathbb{R}^{p}}tP(L_ {i\cdot},R_{i\cdot}\mid\theta_{i\cdot}=t)\ dg(t)}{\int_{\mathbb{R}^{p}}P(L_ {i\cdot},R_{i\cdot}\mid\theta_{i\cdot}=t)\ dg(t)}, \tag{1}\]
where the likelihood \(P(L_{i\cdot},R_{i\cdot}\mid\theta_{i\cdot})\) is given by
\[P(L_{i\cdot},R_{i\cdot}\mid\theta_{i\cdot}) =\prod_{j=1}^{p}P(L_{ij},R_{ij}\mid\theta_{ij})\] \[=\prod_{j=1}^{p}\left\{\phi_{\sigma_{ij}}(L_{ij}-\theta_{ij}) \right\}^{1(L_{ij}=R_{ij})}\left\{\Phi_{\sigma_{ij}}(R_{ij}-\theta_{ij})-\Phi_ {\sigma_{ij}}(L_{ij}-\theta_{ij})\right\}^{1(L_{ij}<R_{ij})}. \tag{2}\]
Each term in the product (2) is a Tobit likelihood with \(\sigma_{ij}^{2}\) variance [23, 38, 29, 2]. We note that underlying physiological conditions may manifest as dependent biomarker expressions; accordingly, we will not impose any independence structures on the prior, \(g\), such as a mean field approximation. Empirical Bayes \(g\)-modeling suggests that estimating \(g\) from the data and plugging \(\hat{g}\) into (1) results in a good estimator [5].
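For concreteness, the following R sketch evaluates the per-patient Tobit likelihood contribution (2) at a single candidate mean vector; this is a from-scratch illustration rather than the implementation in the ebTobit package, and the example values are arbitrary.

```r
# From-scratch sketch of the per-patient Tobit likelihood (2); not the
# ebTobit package implementation.
tobit_row_lik <- function(L, R, theta, sigma) {
  obs <- (L == R)                               # directly measured entries
  lik <- numeric(length(L))
  lik[obs]  <- dnorm(L[obs], mean = theta[obs], sd = sigma[obs])
  lik[!obs] <- pnorm(R[!obs], mean = theta[!obs], sd = sigma[!obs]) -
               pnorm(L[!obs], mean = theta[!obs], sd = sigma[!obs])
  prod(lik)
}

# Example: p = 3 biomarkers for one patient, with the third left-censored on [0, 1]
L <- c(2.3, -0.4, 0)
R <- c(2.3, -0.4, 1)
tobit_row_lik(L, R, theta = c(2, 0, 0.5), sigma = rep(1, 3))
```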
When there are at least two measurements for every \(\theta_{ij}\), empirical Bayes modeling can be extended to estimate both means and variances [9]. Additional measurements of \(\theta_{ij}\) are often called technical replicates; including replicates adds extra overhead to the measurement process but, by allowing for the estimation of the noise levels, we make the results more robust to misspecified noise models. The simplest empirical Bayes approach is to assume a prior on the means and variances of each patient's biomarker measurements, \(g(\theta_{1},\ldots,\theta_{p},\sigma_{1}^{2},\ldots,\sigma_{p}^{2})\), then specify the appropriate likelihood and proceed as we have previously in this section. The increased dimensionality of the prior can make estimation more difficult [9]. Many simplifying assumptions can be made on the distribution to accommodate different physical models. For example, we could continue to assume that the biomarker mean values are arbitrarily related but also assume that the variance of each measurement only depends on the value of the biomarker being measured. This model results in the following Bayesian decomposition of the prior:
\[g(\theta_{1},\ldots,\theta_{p},\sigma_{1}^{2},\ldots,\sigma_{p}^ {2}) =g(\theta_{1},\ldots,\theta_{p})g(\sigma_{1}^{2},\ldots,\sigma_{ p}^{2}\mid\theta_{1},\ldots,\theta_{p})\] \[=g(\theta_{1},\ldots,\theta_{p})\prod_{j=1}^{p}g(\sigma_{j}^{2} \mid\theta_{j}).\]
In this Bayesian decomposition, we reduce the prior's complexity by arguing for conditional independence of the variances. We note that each of the \(g(\sigma_{j}^{2}\mid\theta_{j})\) can be learned as a regression problem in independent control assays or specified to match a physical model. We stress that modeling both location and scale parameters is not possible without measurement replicates and that the choice of model should reflect the needs of the specific assays used.
### Implementation
Estimating the prior, \(g\), can be done in many ways. Proceeding with standard nonparametric empirical Bayes \(g\)-modeling arguments, we model \(g\) in the space of all distributions on \(\mathbb{R}^{p}\) and estimate it using maximum marginal likelihood:
\[\hat{g} =\arg\max_{g}\sum_{i=1}^{n}\log P(L_{i\cdot},R_{i\cdot})\] \[=\arg\max_{g}\sum_{i=1}^{n}\log\int_{\mathbb{R}^{p}}P(L_{i\cdot}, R_{i\cdot}\mid\theta_{i\cdot}=t)\ dg(t). \tag{3}\]
This optimization problem is concave, but infinite-dimensional. Fortunately, Caratheodory's theorem of convex hulls [14, 4] ensures that there is a discrete distribution, \(g^{*}\), with at most \(n+1\) support points that solves (3). Accordingly, we simplify the infinite-dimensional optimization problem, (3), by focusing on distributions supported on a finite set of \(m>0\) support points \(t_{1},\ldots,t_{m}\in\mathbb{R}^{p}\). After fixing the \(m\) support points, \(g\) has the form \(g(t)=\sum_{k=1}^{m}w_{k}\delta_{t_{k}}(t)\), where each \(w_{k}\geq 0\) and \(\sum_{k=1}^{m}w_{k}=1\). The optimization problem is then [14, 18]:
\[\hat{g}=\arg\max_{\mathbf{w}:w_{k}\geq 0,\sum_{k=1}^{m}w_{k}=1}\sum_{i=1}^{n} \log\sum_{k=1}^{m}w_{k}P(L_{i\cdot},R_{i\cdot}\mid\theta_{i\cdot}=t_{k}) \tag{4}\]
With fixed support points, only \(w_{1},\ldots,w_{m}\) need to be estimated; this means that (4) is a finite-dimensional, convex optimization problem that can be solved by many optimization libraries [18]. It is possible to simultaneously estimate both \(t_{k}\) and \(w_{k}\); however, the resulting optimization problem is non-convex.
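As one simple illustration (not the convex-solver approach used in the paper), the weights in (4) can also be estimated by the classical EM fixed-point iteration for mixture proportions, after which the posterior means (1) follow directly. The R sketch below assumes that the Tobit likelihood matrix \(F\), with entries \(F_{ik}=P(L_{i\cdot},R_{i\cdot}\mid\theta_{i\cdot}=t_{k\cdot})\), and the \(m\times p\) support matrix have already been computed.

```r
# EM fixed-point sketch for the weights in (4) and the posterior means (1).
# `F` is the n x m Tobit likelihood matrix and `supp` the m x p support matrix.
npmle_weights <- function(F, n_iter = 500) {
  w <- rep(1 / ncol(F), ncol(F))
  for (it in seq_len(n_iter)) {
    post <- sweep(F, 2, w, `*`)                 # unnormalized posterior F[i, k] * w[k]
    post <- post / rowSums(post)                # responsibilities
    w <- colMeans(post)                         # EM update of the mixture weights
  }
  w
}

posterior_mean <- function(F, w, supp) {
  post <- sweep(F, 2, w, `*`)
  post <- post / rowSums(post)
  post %*% supp                                 # n x p matrix of E(theta_i. | L_i., R_i.)
}
```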
Selecting the support points for a multi-dimensional \(g\) is a nontrivial task for which there is no universally good solution. The optimal support points for the empirical Bayes problem are known to be the \(\theta_{i\cdot}\) themselves [14]; however, since the \(\theta_{i\cdot}\) are unknown in practice, another method must be employed to specify support points with minimal misspecification error. Most approaches to this problem either use a regular grid over the range of the observations [14, 18, 34] or the observations themselves [32] as support points for \(g\). The latter method is often referred to as the "exemplar method".
Standard methods for support point selection do not perform well for our problem. The regular grid method suffers from the curse of dimensionality: as \(p\) increases, exponentially more support points are required to ensure closeness to the optimal support points. In practice, a dense grid with hundreds of support points per axis is not computationally feasible when \(p\) is greater than 3 or 4. The exemplar method offers direct relief from the curse of dimensionality by using the observations as support points, thus avoiding the exponential dependence of the support size on
the dimension. Unfortunately, in our application, we do not have direct measurements of every \(\theta_{i\cdot}\) because of censoring, so we cannot directly apply the exemplar method.
Briefly, we note that the exemplar method can be generalized to handle our censored observations by using the maximum likelihood estimate of each \(\theta_{i\cdot}\) as the support points. Under the Tobit likelihood (2), when \(L_{ij}\) and \(R_{ij}\) are finite, the maximum likelihood estimate of \(\theta_{ij}\) is:
\[\hat{\theta}_{ij}=\frac{L_{ij}+R_{ij}}{2}. \tag{5}\]
Using \(\hat{\theta}_{i\cdot}\) as generalized exemplar support does not perform well in our simulations; see Appendix A. We note that when \(R_{ij}-L_{ij}\) is large compared to \(\sigma_{ij}\), the corresponding support point \(\hat{\theta}_{i\cdot}\) may be far from the optimal support point \(\theta_{i\cdot}\). For example, if \(L_{ij}=0\), \(R_{ij}=1000\), and \(\sigma_{ij}^{2}=1\), then, on average, \(\hat{\theta}_{ij}=500\) is a much worse estimate of \(\theta_{ij}\) than a sample from \(N(\hat{\theta}_{ij},\sigma_{ij}^{2}=1)\) for most \(\theta_{ij}\). Additionally, using (5) reduces to the usual exemplar support when there is no censoring. We finally note that when there is a common censoring interval, (5) is an example of a fill-in method [21].
The key insight of the exemplar method is that samples from the uncensored marginal distribution are likely to be close to the oracle support points [32]. This idea inspires us to develop support point selection methods that draw on sampling algorithms; samples from the uncensored marginal distribution are likely to be good support points. Sampling algorithms are not new to biomarker imputation; both Gibbs sampling [21] and bootstrap sampling [23] schemes have been used to impute missing values given fully observed covariates under the Tobit regression model.
We construct a novel, heuristic algorithm for empirical Bayes matrix estimation under a Tobit likelihood, called "EBM-Tobit". Our key insight is that if we know the prior, \(g\), then sampling from the uncensored marginal distribution according to our Bayesian model is easy. Additionally, the exemplar method suggests that we only need the number of samples from the uncensored marginal to grow like \(n\), thus avoiding the curse of dimensionality. Algorithm 1 illustrates our proposed fitting scheme, which alternates between estimating \(g\) and sampling support points from an approximate, uncensored marginal distribution. Many methods can be used to produce a final estimate; for example, one could simply use the final estimated prior along with (1). In Algorithm 1, we draw inspiration from standard sampling methods and average multiple estimated posterior means to form the final estimate.
```
Require: \(L,R\in\mathbb{R}^{n\times p}\)  \(\triangleright\) Observations
Require: \(t^{(0)}\in\mathbb{R}^{m\times p}\)  \(\triangleright\) Initial support points
1: for \(l\in\{1,\ldots,B\}\) do
2:     \(\hat{g}\leftarrow\arg\max_{\mathbf{w}\in\mathbb{R}_{+}^{m}:\mathbf{1}^{\prime}\mathbf{w}=1}\sum_{i=1}^{n}\log\sum_{k=1}^{m}w_{k}P(L_{i\cdot},R_{i\cdot}\mid\theta_{i\cdot}=t_{k\cdot}^{(l-1)})\)
3:     \(\hat{\theta}^{(l)}\leftarrow\hat{E}(\theta\mid L,R)\)
4:     \(\mu_{1}^{(l)},\ldots,\mu_{m}^{(l)}\sim_{iid}\hat{g}\)
5:     \(t_{k}^{(l)}\mid\mu_{k}^{(l)}\sim N_{p}(\mu_{k}^{(l)},\sigma^{2}I_{p})\)
6: end for
7: \(\hat{\theta}\leftarrow B^{-1}\sum_{l=1}^{B}\hat{\theta}^{(l)}\)
```
**Algorithm 1** An algorithm to perform support point selection and compute "EBM-Tobit".
## 3 Imputation Simulations
We compared the performance of our method, EBM-Tobit, to other popular methods for censored biomarker measurement in simulations. Our simulation is based on the simulations used in previous missing not at random studies [40] and a bile acid dataset [22] previously used to study censored proteomics. The bile acid dataset contains the log-normal measurements of 34 bile acids for 198 patients; no missing values are present in the data. For each simulation, we generate \(n=1000\) patient biomarker measurements by first log-transforming the bile acid dataset so that it approximately follows a multivariate normal distribution. Next, we sample the true means, \(\theta_{i\cdot}\), from a multivariate normal distribution whose mean and covariance match the empirical mean and covariance of \(p=25\) random bile acids in our dataset. Finally, for \(\theta_{ij}\) falling below a pre-specified biomarker-specific quantile, \(LOD_{j}\), an interval \([LB_{j},LOD_{j}]\), where \(LB_{j}=\min\theta_{\cdot j}-6\) sd\((\theta_{\cdot j})\) is observed. For \(\theta_{ij}\) that are not censored, we observe one independent sample from \(N(\theta_{ij},\sigma_{ij}^{2}=1)\). We use a finite lower bound, \(LB_{j}\), rather than \(-\infty\), to avoid numerical issues in some of the methods; the log-normal interpretation of \(LB_{j}\) is a very small, positive value. Note this simulation setting has at most one censoring interval per column, corresponding to the setting where each biomarker has a fixed lower detection limit.
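A rough R sketch of this data-generating scheme is given below; the bile acid matrix is replaced by a random stand-in, and the choice of censored columns and censoring quantile follow the configuration reported below (eight biomarkers censored at roughly the 10% level), so all specifics should be read as illustrative.

```r
# Illustrative version of the simulation scheme; `bile` is a random stand-in
# for the log-transformed bile acid matrix [22].
library(MASS)
set.seed(1)
bile <- matrix(rnorm(198 * 34), 198, 34)        # stand-in for the real 198 x 34 data

n <- 1000; p <- 25
cols  <- sample(ncol(bile), p)
theta <- mvrnorm(n, mu = colMeans(bile[, cols]), Sigma = cov(bile[, cols]))

L <- theta + matrix(rnorm(n * p), n, p)         # direct measurements for uncensored entries
R <- L
cens_cols <- sample(p, 8)                       # biomarkers subject to a detection limit
for (j in cens_cols) {
  LODj <- quantile(theta[, j], probs = 0.10)    # biomarker-specific detection limit
  LBj  <- min(theta[, j]) - 6 * sd(theta[, j])
  cens <- theta[, j] < LODj
  L[cens, j] <- LBj                             # censored entries observed as [LB_j, LOD_j]
  R[cens, j] <- LODj
}
```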
The performance of our empirical Bayes matrix imputation method is compared to other popular imputation methods for missing not at random, left-censored data. The "Tobit MLE" method is the maximum likelihood estimate defined in (5); we note both that this method is a fill-in method in our simulation setting and that it simplifies to the \(LOD/2\) fill-in method [21] when the observed interval is \([0,LOD]\). "QRILC" [19] imputes the missing values using random draws from the estimated truncated normal distribution for each bile acid measured. The "zCompositions" method [28] uses relative abundances to impute missing values. The default set-up of "GSimp" [40] imputes the missing values by iteratively fitting an elastic-net model to the fully observed data, starting from the QRILC values. The "trKNN" method [33] is a nearest neighbors method applied by patient, using the average of the nearest three patients' normalized bile acid measurements to impute the missing values. Additionally, we include "EB Oracle Support", which denotes the nonparametric empirical Bayes \(g\)-modeling estimator, (4), using the optimal support points. This estimator cannot be computed in practice, because the optimal support points, \(\theta_{i\cdot}\), are unknown, but it demonstrates that the methodology developed in Section 2 works well and that EBM-Tobit achieves performance reasonably close to optimal despite the difficulties with support point specification in this problem.
Figure 1 visualizes the marginal distributions produced by each of the imputation methods discussed above in one iteration of the simulation, where three of the ten columns have about 10% of values below the detection limit. We know from the data generation process that the marginal distribution should be normal, so it is easy to see that QRILC does the best job capturing the marginal distribution, followed by our method, EBM-Tobit, and zCompositions. Our method appears to place more mass in the center of the histogram than QRILC while maintaining some lower tail, illustrating the shrinkage induced by the posterior mean. Furthermore, it is straightforward to see that the trKNN method is biased towards the observed data, GSimp is over-dispersed, and the single-value fill-in method, Tobit MLE, lacks variability, which may make fitting down-stream methods difficult.
Figure 1: Each plot is the marginal histogram of a fixed, censored column. Three of ten columns are censored so that roughly 10% of values fall below the detection limit. The “EBM-Tobit” histogram illustrates our estimator from Algorithm 1 with \(B=50\) iterations; the “Tobit MLE” method is the maximum likelihood estimate defined in (5); “QRILC” is the typical QRILC method [19]; “zCompositions” is the log-normal zCompositions method [28]; “GSimp” is the recommended version of GSimp [40]; and “trKNN” is the truncated K-nearest neighbors method [33].

We empirically compare the performance of these imputation methods across 200 rounds of simulations. The dimension of the problem is fixed at \(n=1000\) samples and \(p=25\) bile acids, and eight of the bile acids have approximately 10% left-censored measurements. Simulations covering different numbers of censored columns and different levels of censoring are left to Appendix A. Because we are interested in both imputation performance and the ability to estimate the whole matrix, we measure root mean squared error and Spearman's correlation over just the censored values as well as over every value. The metrics are computed with respect to the simulated, true means. Results are visualized in Figure 2.
These simulation results demonstrate that our empirical Bayes matrix estimation method, EBM-Tobit, frequently matches the best imputation performance of popular methods for left-censored, missing not at random data. Moreover, EBM-Tobit greatly outperforms the other methods for whole matrix estimation. We note that zCompositions, which performs as well as EBM-Tobit in Figure 2 Plots A and C, is only applicable to left-censored problems. We additionally note that the oracle empirical Bayes method vastly outperforms popular imputation methods in all simulations, offering strong justification for our empirical Bayes approach.
Figure 2: Plots comparing the performance of popular imputation methods for left-censored, missing not at random data to our empirical Bayes matrix estimation method. Plot A compares the mean squared error (on a square-root scale) computed only over the \(\theta_{ij}\) that have censored observations (imputation performance), while Plot B compares the root mean squared error calculated over every \(\theta_{ij}\) (estimation performance). Plots C and D show Spearman’s correlation over the same \(\theta_{ij}\) as Plots A and B. The methods are as follows: “QRILC” is the typical QRILC method [19]; “GSimp” is the recommended version of GSimp [40]; “zCompositions” is the log-normal zCompositions method [28]; “trKNN” is the truncated K-nearest neighbors method [33]; “Tobit MLE” is the maximum likelihood estimate defined in (5); “EBM-Tobit” denotes our estimator from Algorithm 1 using \(B=50\) iterations; and “EB Oracle Support” is the nonparametric empirical Bayes \(g\)-modeling estimator using the optimal support points.
## 4 Discussion
One of the key advantages of empirical Bayes methods is their ability to induce shrinkage in the estimation problem. By leveraging a data-dependent prior distribution, empirical Bayes methods borrow information across multiple observations and produce more stable and reliable parameter estimates. Figure 2 illustrates that our empirical Bayes estimates are consistently close to the true means and capture variability that is likely to help improve down-stream analysis with tools designed for continuous inputs. We note that because EBM-Tobit is designed to estimate all of the true means, not just the censored ones, it is the only method to have a mean squared error less than one when estimating all of the means.
Our methodology has been focused on the class of all priors on \(\mathbb{R}^{p}\), allowing for arbitrary dependence between biomarker values. This dependence between biomarker values is different than modeling correlated measurement errors and is closer to learning the true physical model for the biological processes. However, in many applications there may be additional domain knowledge that can be incorporated as restrictions on the space of priors. For example, if various sets of biomarkers are known to be unrelated, a corresponding independence structure can be imposed on the class of priors. This allows the estimation problem to be bifurcated, both decreasing the difficulty of each sub-problem and allowing for parallelization of model fitting. Additionally, the support of the prior can be restricted to incorporate knowledge of the biomarker's support, such as non-negativity. By restricting the space of priors, we produce more efficient estimators.
Empirical Bayes models are often discussed in the context of shrinkage estimators. In this case, it is pertinent to ask "where are we shrinking to?" Since our application mainly concerns imputing left-censored means, a reasonable question is: should we shrink towards the global mean given that we know the observation was on the low end? This is Efron's relevance problem [6]. It is not necessary that \(\theta_{ij}\) lies in \([L_{ij},R_{ij}]\); however, in the case of detection limits, the fact that a measurement is censored is still somewhat informative. This suggests it may be good to include the information that the observation was censored in the estimation procedure. One simple solution is to define a known covariate to indicate whether the observation was censored. Including this binary covariate in the empirical Bayes model results in estimating two separate priors and corresponding posteriors. Because we are partitioning our data in this approach, the estimation of each prior becomes less efficient; for this reason, it may be better to bet on the flexibility of the nonparametric prior we are already using to adapt to these sub-populations, especially when the sub-populations are small or our domain expertise is limited.
|
2308.09960 | Towards Self-Adaptive Machine Learning-Enabled Systems Through QoS-Aware
Model Switching | Machine Learning (ML), particularly deep learning, has seen vast
advancements, leading to the rise of Machine Learning-Enabled Systems (MLS).
However, numerous software engineering challenges persist in propelling these
MLS into production, largely due to various run-time uncertainties that impact
the overall Quality of Service (QoS). These uncertainties emanate from ML
models, software components, and environmental factors. Self-adaptation
techniques present potential in managing run-time uncertainties, but their
application in MLS remains largely unexplored. As a solution, we propose the
concept of a Machine Learning Model Balancer, focusing on managing
uncertainties related to ML models by using multiple models. Subsequently, we
introduce AdaMLS, a novel self-adaptation approach that leverages this concept
and extends the traditional MAPE-K loop for continuous MLS adaptation. AdaMLS
employs lightweight unsupervised learning for dynamic model switching, thereby
ensuring consistent QoS. Through a self-adaptive object detection system
prototype, we demonstrate AdaMLS's effectiveness in balancing system and model
performance. Preliminary results suggest AdaMLS surpasses naive and single
state-of-the-art models in QoS guarantees, heralding the advancement towards
self-adaptive MLS with optimal QoS in dynamic environments. | Shubham Kulkarni, Arya Marda, Karthik Vaidhyanathan | 2023-08-19T09:33:51Z | http://arxiv.org/abs/2308.09960v1 | # Towards Self-Adaptive Machine Learning-Enabled Systems Through QoS-Aware Model Switching
###### Abstract
Machine Learning (ML), particularly deep learning, has seen vast advancements, leading to the rise of Machine Learning-Enabled Systems (MLS). However, numerous software engineering challenges persist in propelling these MLS into production, largely due to various run-time uncertainties that impact the overall Quality of Service (QoS). These uncertainties emanate from ML models, software components, and environmental factors. Self-adaptation techniques present potential in managing run-time uncertainties, but their application in MLS remains largely unexplored. As a solution, we propose the concept of a Machine Learning Model Balancer, focusing on managing uncertainties related to ML models by using multiple models. Subsequently, we introduce AdaMLS, a novel self-adaptation approach that leverages this concept and extends the traditional MAPE-K loop for continuous MLS adaptation. AdaMLS employs lightweight unsupervised learning for dynamic model switching, thereby ensuring consistent QoS. Through a self-adaptive object detection system prototype, we demonstrate AdaMLS's effectiveness in balancing system and model performance. Preliminary results suggest AdaMLS surpasses naive and single state-of-the-art models in QoS guarantees, heralding the advancement towards self-adaptive MLS with optimal QoS in dynamic environments.
Self Adaptation, Self-adaptive systems, Software Architecture, ML-Enabled Systems, ML4SA, Unsupervised Learning, Object Detection
## I Introduction
Recent advancements in machine learning, especially deep learning, have spurred the growth of Machine Learning-Enabled Systems (MLS) like ChatGPT [1], Google Bard [2], and DALLE-2 [3]. However, engineering MLS presents multi-faceted software engineering challenges, from the development and integration of ML components to model versioning and data quality [4, 5, 6]. Gartner's report notes that almost half of MLS projects don't make it to production, mainly because of unpredictable run-time challenges like varying model performance and unstable software components [7]. Additionally, environmental factors like infrastructure (cost, energy) and system usage (arrival rate etc.) significantly influence the system QoS. Over the years, self-adaptation techniques have emerged as promising solutions for managing run-time uncertainties [8, 9]. They enable systems to continuously adapt their structure and/or behaviour to satisfy different goals (in terms of QoS, functionalities, etc.). While effective in domains like CPS, IoT, and service-oriented systems [10, 11, 12], their application in MLS is largely unexplored [13]. In MLS, ML model performance can vary significantly due to factors like model architecture--layer count and algorithm type. Given identical input-output specifications, developers can devise a spectrum of models, each with its speed and accuracy trade-offs. Recognizing this variability, we introduce the concept of an ML Model Balancer. This notion encapsulates the idea of dynamically evaluating and switching between models to optimize QoS. For instance, high-traffic situations might favor a faster model, while quieter periods prioritize accuracy. AdaMLS, our novel self-adaptive approach, operationalizes this concept of the ML Model Balancer. Nevertheless, AdaMLS consistently excels in navigating the intricacies of online ML deployments, ensuring superior QoS. This includes: i) monitoring model and system parameters; ii) analyzing model and system quality for QoS violations; iii) using knowledge from lightweight unsupervised learning to dynamically switch models, ensuring QoS; and iv) executing system adaptation. Prioritizing ML model adaptability, AdaMLS shifts from conventional load balancing to QoS-aware dynamic ML model switching. By continuously tuning model selections in response to environmental cues and system demands, AdaMLS guarantees MLS QoS, promoting consistent MLS operation in live settings. This represents a stride towards future-ready self-adaptive MLS, designed to maintain an optimal performance equilibrium amidst changing data and user demands. We evaluate AdaMLS using an object detection use case through utility (refer section II) showcasing a self-adaptive prototype. Our preliminary findings indicate that the runtime model switching, facilitated by lightweight unsupervised learning, effectively manages both system and model performance. This enables AdaMLS to surpass both naive strategies and individual models in terms of Quality of Service (QoS). Our work innovatively adapts the MAPE-K loop to address the uncertainties inherent in MLS, emphasizing dynamic model-switching approach. Through AdaMLS's real-world application, we highlight our move toward self-adaptive MLS that can deftly switch between models based on data shifts and user demands, always maintaining optimal QoS. The paper is structured as follows: Section 2 provides motivation with a running example. Section 3 introduces the AdaMLS approach. Results from its application are in Section 4. 
Related work is discussed in Section 5, and Section 6 concludes.
## II Running Example
Our AdaMLS approach is showcased via an object detection system, a culmination of ML advancements over
decades [14]. The system consists of a _web service_ with a REST API, _model_repo_ as the repository, _message_broker_ for image streaming, and _obj_model_ using YOLO [15]. These components mirror services like Google Cloud Vision or Amazon Rekognition, emphasizing real-world applicability. In the example, we define a set \(M\) of available models. Each model \(m_{j}\), where \(j\) ranges from 1 to \(n\), is part of \(M\). Here this set includes YOLOv5 models (YOLOv5n, YOLOv5s, YOLOv5m, YOLOv5l, YOLOv5x) provided by Ultralytics [16], pretrained on the COCO 2017 training dataset [17]. Models in \(M\) are quantified by mAP, the model's effectiveness in detecting objects, symbolized by \(c\), and by performance. Performance is assessed using \(\tau^{\prime}\), \(\tau\), and \(r\). Here, \(\tau^{\prime}\) denotes the processing time per image by the system (i.e., individual processing without network or queuing delay), \(\tau\) is the model's processing time, and \(r\) is the system response time in real-world operations, encompassing network, queuing, and processing delays. For instance, YOLOv5n has 1.9M parameters, an mAP of 28, and a 45-ms \(\tau\), while YOLOv5x has 86M parameters, mAP 50.7, and a 766 ms \(\tau\) [16]. Different models vary in response time and confidence scores, with none achieving an optimal balance between both. Given this context, and given a set of thresholds including \(C_{\max}\) and \(C_{\min}\), denoting the maximum & minimum confidence score; \(R_{\max}\) and \(R_{\min}\), the maximum & minimum allowed response time; the goal is to maximize the utility function \(U\). This function evaluates the confidence score \(c_{i}\) and response time \(r_{i}\) for each image \(i\). Herein, \(p_{ev}\) and \(p_{dv}\) represent the penalties for violations relative to these thresholds. The total utility \(U\) of the system for all \(k\) unique image IDs processed, is given by \(U=\sum_{i=1}^{k}U_{i}\). For the \(i^{th}\) image the utility \(U_{i}\) is defined as \(U_{i}=w_{e}E_{r_{i}}+w_{d}T_{\tau_{i}}\), where \(w_{e}\) and \(w_{d}\) are weights, \(E_{r_{i}}\) and \(T_{\tau_{i}}\) are piece-wise functions that represent \(c\) and \(r\), respectively, defined as:
\[E_{r_{i}}=\begin{cases}c_{i}&\text{if }C_{\min}\leq c_{i}\leq C_{\max}\\ (c_{i}-C_{\max})\cdot p_{ev}&\text{if }c_{i}>C_{\max}\\ (C_{\min}-c_{i})\cdot p_{ev}&\text{if }c_{i}<C_{\min}\end{cases}\]
\[T_{\tau_{i}}=\begin{cases}r_{i}&\text{if }R_{\min}\leq r_{i}\leq R_{\max}\\ (R_{\max}-r_{i})\cdot p_{dv}&\text{if }r_{i}>R_{\max}\\ (r_{i}-R_{\min})\cdot p_{dv}&\text{if }r_{i}<R_{\min}\end{cases}\]
Given the thresholds and constraints, our approach aims to maximize the utility function, thereby improving the overall QoS.
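For concreteness, a minimal sketch of the per-image utility follows; the default parameter values mirror those reported in Section IV, and all function and argument names are illustrative rather than taken from the AdaMLS implementation.

```python
# Sketch of U_i = w_e * E_{r_i} + w_d * T_{tau_i} as defined above.
def confidence_term(c, c_min, c_max, p_ev):
    if c_min <= c <= c_max:
        return c
    if c > c_max:
        return (c - c_max) * p_ev
    return (c_min - c) * p_ev

def response_term(r, r_min, r_max, p_dv):
    if r_min <= r <= r_max:
        return r
    if r > r_max:
        return (r_max - r) * p_dv
    return (r - r_min) * p_dv

def image_utility(c, r, w_e=0.5, w_d=0.5,
                  c_min=0.5, c_max=1.0, r_min=0.1, r_max=1.0,
                  p_ev=1.0, p_dv=1.0):
    return w_e * confidence_term(c, c_min, c_max, p_ev) + \
           w_d * response_term(r, r_min, r_max, p_dv)

# Total utility over all processed images:
# U = sum(image_utility(c_i, r_i) for c_i, r_i in zip(confidences, response_times))
```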
## III AdaMLS Approach
AdaMLS provides a robust solution to two pivotal learning problems [18]: adaptation policy development and resource usage analysis. Leveraging unsupervised learning to identify unknown patterns in the MLS's runtime performance, the Learning Engine (LE) provides adaptation rules. As outlined in Table I and defined in [19] and [13], these rules, when executed by MAPE-K, effectively mitigate uncertainties.
### _Learning Engine_
The Learning Engine (LE) initializes with the _ML Models Executor_, operating all models \(m_{j}\in M\) on an evaluation dataset (e.g., COCO Test 2017) from the _Data Store_. It saves detection outcomes in dataset \(d_{j}\) for each model. Outputs include KPIs like \(c\), \(\tau\), \(r\), and \(s\), where \(s\) signifies CPU consumption (%), aiding in uncertainty mitigation as described in Table I. The _Unsupervised Model builder_ uses K-Means clustering on each \(d_{j}\) based on \(\tau^{\prime}\), grouping models by performance attributes, thereby hastening model selection during runtime adjustments. Clustering with \(\tau^{\prime}\) is strategic, given our hierarchical approach where response time is paramount. \(\tau^{\prime}\) isn't arbitrary but marks our primary metric. This ensures models first align with the vital response-time criteria before secondary metrics such as accuracy. This clustering navigates the model spectrum effectively, grouping models with akin performance, facilitating dynamic switches to select from relevant QoS subsets. Each image \(i\) of model \(m_{j}\) gets a cluster label \(l\).
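The clustering step can be sketched as follows; scikit-learn is used here as a stand-in for the PySpark MLlib pipeline mentioned in Section IV, and the data layout (a per-model table with a `tau_prime` column) and the fixed number of clusters are assumptions for illustration.

```python
# Illustrative sketch: for each model m_j, group per-image processing times
# tau' from the evaluation set with K-Means; the paper selects k via the
# elbow method, here k is fixed for simplicity.
from sklearn.cluster import KMeans

def cluster_by_tau_prime(per_model_kpis, k=4, seed=0):
    """per_model_kpis: {model_name: pandas.DataFrame with column 'tau_prime'}.
    Adds an integer 'cluster' column (label l) to each model's table."""
    for name, df in per_model_kpis.items():
        x = df[["tau_prime"]].to_numpy()          # cluster on tau' only
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(x)
        df["cluster"] = km.labels_                # label l for every image i
    return per_model_kpis
```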
The _Performance Evaluator_ constructs a performance matrix for each model \(m_{j}\in M\), by collating KPIs across models
Fig. 1: AdaMLS Approach
(\(m_{q}\in M\) where \(q\neq j\)) for the same request. Let's consider model \(m_{j}\), known as 'nano' (Yolov5n). For each image \(i\), KPIs are collated across all models using the Image ID, resulting in a comprehensive matrix for 'nano'. Each row in this matrix represents an image, while columns correspond to KPIs from all models and the assigned cluster number from 'nano'.
The _Adaptation Rule Creator_ calculates the 90% confidence intervals (CI) for each cluster \(l\) of every model \(m_{j}\). These upper and lower limit intervals provide a statistically likely range for KPIs, thereby reducing uncertainty in system adaptations as per Table I. To illustrate, consider model 'nano' as \(m_{j}\). For each cluster \(l\) in 'nano', CIs are calculated for all data points, using only those images within the same cluster. Simultaneously, the CIs for the same images are calculated from the performance metrics of all other models \(m_{k}\). This process results in a CI matrix for 'nano', encapsulating potential performance variations across models. Repeating this for all models, LE produces a set of CI matrices. Each matrix maps out performance variations within each cluster of the respective model \(m_{j}\). Through LE, executed periodically in batches, AdaMLS develops a holistic understanding of potential model performance shifts, enabling statistical predictions of system KPIs impacts due to model switching.
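A simple way to realize this rule-creation step is sketched below, using percentile-based 90% intervals; the exact interval construction and table layout in AdaMLS may differ, and all names are illustrative.

```python
# Sketch of the Adaptation Rule Creator: per model m_j and per cluster l,
# compute 90% intervals for each KPI column over the images in that cluster.
import numpy as np

def ci_90(values):
    values = np.asarray(values, dtype=float)
    return float(np.percentile(values, 5)), float(np.percentile(values, 95))

def build_ci_matrix(perf_matrix, kpi_columns, cluster_col="cluster"):
    """perf_matrix: per-image table for model m_j, holding KPI columns for all
    models (e.g. 'tau_nano', 'c_nano', 'tau_small', ...) plus m_j's cluster label.
    Returns {cluster_label: {kpi: (lower, upper)}}."""
    rules = {}
    for label, group in perf_matrix.groupby(cluster_col):
        rules[label] = {kpi: ci_90(group[kpi]) for kpi in kpi_columns}
    return rules
```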
### _MAPE-K Loop_
#### III-B1 Knowledge
As per the structure depicted in Figure 1, the Knowledge (K) base in our system is primarily a repository divided into three sections: the _Log Repository_, the _Adaptation Rule Repository_, and the _System Metrics Repository_. The _Log Repository_ stores system logs, including vital KPIs. For instance, in our running example, this would mean processing time per image by the system \(\tau\), the system response time \(r\), model confidence \(c\), and CPU consumption \(s\) for all processed requests \(k\), as recorded by the MLS. The _Adaptation Rule Repository_ houses the CI matrix generated by the LE, acting as a set of adaptation rules for the Planning phase. Lastly, the _System Metrics Repository_ keeps track of various system metrics such as real-time incoming request rate per second denoted as \(v\) and system logs if any.
#### III-B2 Monitor
The _ML Metric Monitor_ component continuously tracks system QoS and KPIs. In our use case, this includes the average number of detection boxes per processed image \(b\), as well as \(\tau\), \(r\), \(c\), and \(s\) from the last \(k^{\prime}\) processed requests; for the _System QoS Monitor_ component, it is the current model in use, denoted as \(m^{\prime}\), and \(v\), to mitigate environmental uncertainty as per Table I. The system also tracks the number of pending requests \(i_{w}\), which provides insight into the workload. The monitored data is sent to the Analyze function for potential adaptations, maintaining system self-awareness, and is also logged in the Knowledge base appropriately.
#### III-B3 Analyzer
In the Analyzer phase, the _Planner Initiator_ utilizes the _System Evaluator_ to analyze the data from the Monitor and determine if a system adaptation is necessary. The _System Evaluator_ identifies the closest cluster, \(l\), from the CI matrix for \(m^{\prime}\), to the current system state. This state is defined as the mean of the most recent \(k^{\prime}\) (e.g., 50) results for \(m^{\prime}\). This identification process is grounded in the two KPIs exhibiting the highest variance. This process mitigates data drift uncertainty as outlined in Table I. Upon identifying cluster \(l\), a feasible request rate range [\(v_{min},v_{max}\)] is determined from the CI matrix of \(m^{\prime}\); it is computed using the inverse of the upper and lower confidence interval bounds for \(\tau\) for \(m^{\prime}\). Subsequently, the _Load Calculator_ computes the adjusted request rate \(v_{adj}\) by adding \(i_{w}\) (those exceeding \(v_{max}\)) to \(v\).
If \(v_{adj}\) is not in [\(v_{min}\), \(v_{max}\)], the _Planner Initiator_ waits for a brief period \(t_{wait}\) (e.g., 0.25 sec) to avoid unnecessary system adaptations and then initiates the Planner with \(v_{adj}\), \(m^{\prime}\), and \(l\).
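The Analyzer logic described above can be sketched as follows; the structures (`ci_rules`, `recent`) follow the previous sketches, the cluster-matching distance and the way pending requests are folded into \(v_{adj}\) are simplifications, and all names are illustrative.

```python
# Sketch of the Analyzer: find the closest cluster, derive the feasible
# request-rate range from the tau CI bounds, and decide whether to adapt.
import numpy as np

def closest_cluster(ci_rules, recent, keys):
    """Pick the cluster whose CI mid-points are nearest to the recent state,
    using the two KPIs in `keys` (those with highest variance)."""
    def dist(label):
        mids = [np.mean(ci_rules[label][k]) for k in keys]
        return sum((recent[k] - m) ** 2 for k, m in zip(keys, mids))
    return min(ci_rules, key=dist)

def feasible_rate_range(ci_rules, label, tau_key="tau"):
    lo, hi = ci_rules[label][tau_key]          # CI bounds on processing time
    return 1.0 / hi, 1.0 / lo                  # invert to requests per second

def needs_adaptation(ci_rules, recent, keys, v, pending):
    label = closest_cluster(ci_rules, recent, keys)
    v_min, v_max = feasible_rate_range(ci_rules, label)
    v_adj = v + max(0, pending)                # fold queued work into the rate
    return (not v_min <= v_adj <= v_max), label, v_adj
```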
#### III-B4 Planner
During the Planning phase, the _Strategy Formulator_ uses the output from the Analyzer and Knowledge base to devise an adaptation strategy. The strategy identifies potential models from \(M\) that can accommodate \(v_{adj}\) and belong to cluster \(l\). The compatibility of a model is determined by comparing \(v_{adj}\) with the inverse of lower value of CI bound for \(\tau\); for the current model \(m^{\prime}\), the most recent \(n\) results are used, whereas for other models in \(M\), the CI matrix of \(m^{\prime}\) is referenced. The _Model Selector_ then picks \(m_{best}\), the model with the highest lower confidence interval value for \(c\), thereby mitigating goal and model drift uncertainties as per Table I. If \(m^{\prime}=m_{best}\), the Planner refrains from taking further action. Otherwise, the Planner signals a model switch to the Execution phase. The system persists with \(m^{\prime}\) if no suitable model is identified.
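A corresponding sketch of the Planner's selection rule is given below; it simplifies the handling of the current model's most recent results and uses the same illustrative CI structures as the earlier sketches.

```python
# Sketch of the Planner: keep models whose inverse lower tau bound can
# sustain v_adj within cluster l, then pick the highest lower CI on c.
def select_model(ci_matrices, current, label, v_adj,
                 tau_key="tau", conf_key="c"):
    candidates = []
    for name, rules in ci_matrices.items():
        if label not in rules:
            continue
        tau_lower, _ = rules[label][tau_key]
        if tau_lower > 0 and 1.0 / tau_lower >= v_adj:   # can accommodate load
            conf_lower, _ = rules[label][conf_key]
            candidates.append((conf_lower, name))
    if not candidates:
        return current                                   # keep m' if nothing fits
    return max(candidates)[1]                            # highest lower CI on c
```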
#### III-B5 Executor
In the Execution phase, the _Adaptation Executor_ carries out the plan. If the Planner signals a model switch, i.e., \(m_{best}\neq m^{\prime}\), the system transitions to \(m_{best}\). If no switch is signaled, the system persists with the current model. Both situations guarantee system autonomy, adaptability to changing conditions, and enhanced learning through updates logged in the Knowledge base.
## IV Preliminary Results
We implement AdaMLS on an object detection system (refer to Section II) using YOLOv5 variants alongside FastAPI. For testing, we emulate a FIFA98 situation [20], with up to 28 parallel requests/sec and a total of 25,000 requests. We employ the COCO 2017 unlabelled dataset [17] as our testing dataset, with the COCO 2017 test dataset as the evaluation dataset. Our data clustering is facilitated through Python and PySpark's MLlib [21], with the optimal clusters being defined by the elbow method [22]. The complete specifics of our implementation, parameters, and the ensuing results are detailed in [23]. We evaluate AdaMLS against both the naive approach and individual YOLOv5 models. The naive approach transitions between models based on preset \(v\) thresholds. Referring to the data in Table II, it is clear that AdaMLS effectively decreases the average response time \(r\) while simultaneously reducing the occurrence of penalties associated with response time and confidence. Parameters used for utility are: \(p_{ev}\) = 1, \(p_{dv}\) = 1, \(C_{\max}\) = 1, \(C_{\min}\) = 0.5, \(R_{\max}\) = 1s, \(R_{\min}\) = 0.1s, \(t_{wait}\) = 0.25s, and \(k^{\prime}\) = 50. A key aspect of our results is the utility metric, detailed in Table III. While utility, a measure quantifying the effectiveness and efficiency of a
model in various operational contexts (as detailed in Section II), offers an effective measure, it's not the sole criterion for evaluating QoS in ML systems. Although AdaMLS does not consistently lead in all individual metrics, it demonstrates unparalleled efficacy in terms of utility. Specifically, AdaMLS achieves an overall increased utility, particularly when equal emphasis is placed on response time and confidence score, surpassing the Yolov5n model by as much as 39%. This remarkable performance in utility, even when not consistently leading in every individual metric, is significant. It underscores our method's ability to integrate these metrics effectively for optimal outcomes. Therefore, utility acts as a measure of our system's proficiency in addressing challenges and maintaining high-quality performance. Moreover, our refined architecture reduces the time required for model transitions, ensuring it remains below the crucial threshold of 0.01 seconds, further solidifying AdaMLS's contribution to improving the QoS of ML-driven systems.
## V Related Work
The inception of self-adaptive systems dates back to IBM's autonomic computing vision [24]. The principle's application to ML Systems (MLS) was initiated by the seminal work [13], which doesn't consider model switching as an adaptive tactic despite studies on retraining ML components' cost-benefit trade-offs [25]. In addition, the concept of self-adaptation of ML was introduced as a primer in [5]; however, it wasn't realized or elaborated in detail. While Convolutional Neural Networks (CNNs) have enhanced object detection [26], the optimal architectural selection remains a real-world challenge. Recent advancements have primarily concentrated on enhancing individual models [27, 28, 29, 30, 31], neglecting system-wide adaptability. A recent survey summarizes the use of ML for engineering self-adaptive systems and also highlights the underutilization of unsupervised learning [18]. AdaMLS, our solution, fills this gap by combining unsupervised learning and model switching to boost MLS adaptability, and echoes the need for robust architectural practices for ML systems [32].
## VI Conclusion
To conclude, we introduced AdaMLS, an innovative solution that engineers self-adaptive Machine Learning Systems (MLS) by employing unsupervised learning for dynamic model switching. Preliminary evaluations, based on an object detection system example, indicate that AdaMLS can effectively mitigate run-time uncertainties and outperforms both traditional and standalone models, thereby offering significant QoS improvements. AdaMLS stands as a significant advancement, showcasing the potential of engineering MLS with self-adaptation capabilities. Importantly, it paves the way for MLS to execute seamless model switching to maintain optimal QoS under varying run-time uncertainties. Looking forward, we intend to explore a diverse range of learning techniques and model-switching strategies to further enhance the adaptability of AdaMLS. Emphasis will also be laid on broadening its applications to different domains, and on improving the environmental and economic sustainability of MLS, thereby revolutionizing the future of MLS implementation and design.
Fig. 3: Utility Function Over Requests processed
Fig. 2: Model Switching: Naive Vs. AdaMLS |
2303.01096 | Geometric Spanning Trees Minimizing the Wiener Index | The Wiener index of a network, introduced by the chemist Harry Wiener, is the
sum of distances between all pairs of nodes in the network. This index,
originally used in chemical graph representations of the non-hydrogen atoms of
a molecule, is considered to be a fundamental and useful network descriptor. We
study the problem of constructing geometric networks on point sets in Euclidean
space that minimize the Wiener index: given a set $P$ of $n$ points in
$\mathbb{R}^d$, the goal is to construct a network, spanning $P$ and satisfying
certain constraints, that minimizes the Wiener index among the allowable class
of spanning networks.
In this work, we focus mainly on spanning networks that are trees and we
focus on problems in the plane ($d=2$). We show that any spanning tree that
minimizes the Wiener index has non-crossing edges in the plane. Then, we use
this fact to devise an $O(n^4)$-time algorithm that constructs a spanning tree
of minimum Wiener index for points in convex position. We also prove that the
problem of computing a spanning tree on $P$ whose Wiener index is at most $W$,
while having total (Euclidean) weight at most $B$, is NP-hard.
Computing a tree that minimizes the Wiener index has been studied in the area
of communication networks, where it is known as the optimum communication
spanning tree problem. | A. Karim Abu-Affash, Paz Carmi, Ori Luwisch, Joseph S. B. Mitchell | 2023-03-02T09:30:09Z | http://arxiv.org/abs/2303.01096v1 | # Geometric Spanning Trees Minimizing the Wiener Index
###### Abstract
The Wiener index of a network, introduced by the chemist Harry Wiener [30], is the sum of distances between all pairs of nodes in the network. This index, originally used in chemical graph representations of the non-hydrogen atoms of a molecule, is considered to be a fundamental and useful network descriptor. We study the problem of constructing geometric networks on point sets in Euclidean space that minimize the Wiener index: given a set \(P\) of \(n\) points in \(\mathbb{R}^{d}\), the goal is to construct a network, spanning \(P\) and satisfying certain constraints, that minimizes the Wiener index among the allowable class of spanning networks.
In this work, we focus mainly on spanning networks that are trees and we focus on problems in the plane (\(d=2\)). We show that any spanning tree that minimizes the Wiener index has non-crossing edges in the plane. Then, we use this fact to devise an \(O(n^{4})\)-time algorithm that constructs a spanning tree of minimum Wiener index for points in convex position. We also prove that the problem of computing a spanning tree on \(P\) whose Wiener index is at most \(W\), while having total (Euclidean) weight at most \(B\), is NP-hard.
Computing a tree that minimizes the Wiener index has been studied in the area of communication networks, where it is known as the _optimum communication spanning tree problem_.
Keywords:Wiener Index Optimum communication spanning tree Minimum routing cost spanning tree.
## 1 Introduction
The _Wiener index_ of a weighted graph \(G=(V,E)\) is the sum, \(\sum_{u,v\in V}\delta_{G}(u,v)\), of the shortest path lengths in the graph between every pair of vertices, where \(\delta_{G}(u,v)\) is the weight of the shortest (minimum-weight) path between \(u\) and \(v\) in \(G\). The Wiener index was introduced by the chemist Harry Wiener in 1947 [30]. The Wiener index and its several variations have found applications in chemistry,
e.g., in predicting the antibacterial activity of drugs and modeling crystalline phenomena. It has also been used to give insight into various chemical and physical properties of molecules [28] and to correlate the structure of molecules with their biological activity [20]. The Wiener index has become part of the general scientific culture, and it is still the subject of intensive research [2, 10, 12, 32]. In its applications in chemistry, the Wiener index is most often studied in the context of unweighted graphs. The study of minimizing the sum of interpoint distances also arises naturally in the network design field, where the problem of computing a spanning tree of minimum Wiener index is known as the _Optimum Communication Spanning Tree_ (OCST) problem [15, 18].
Given a undirected graph \(G=(V,E)\) and a (nonnegative) weight function on the edges of \(G\), representing the delay on each edge, the routing cost \(c(T)\) of a spanning tree \(T\) of \(G\) is the sum of the weights (delays) of the paths in \(T\) between every pair of vertices: \(c(T)=\sum_{u,v\in V}\delta_{T}(u,v)\), where \(\delta_{T}(u,v)\) is the weight of the (unique) path between \(u\) and \(v\) in \(T\). The OCST problem aims to find a minimum routing cost spanning tree of a given weighted undirected graph \(G\), thereby seeking to minimize the expected cost of a path within the tree between two randomly chosen vertices. The OCST was originally introduced by Hu [18] and is known to be NP-complete in graphs, even if all edge weights are \(1\)[19]. Wu et al. [31] presented a polynomial time approximation scheme (PTAS) for the OCST problem. Specifically, they showed that the best \(k\)-star (a tree with at most \(k\) internal vertices) yields a \((\frac{k+3}{k+1})\)-approximation for the problem, resulting in a \((1+\varepsilon)\)-approximation algorithm of running time \(O\big{(}n^{2\lceil\frac{2}{\varepsilon}\rceil-2}\big{)}\).
While there is an abundance of research related to the Wiener index, e.g., computing and bounding the Wiener indexes of specific graphs or classes of graphs [16, 17, 24] and explicit formulas for the Wiener index for special classes of graphs [3, 23, 26, 29, 30], to the best of our knowledge, the Wiener index has not received much attention in geometric settings. In this work, we study the Wiener index and the optimum communication spanning tree problem in selected geometric settings, hoping to bring this important and highly applicable index to the attention of computational geometry researchers.
Our Contributions and Overview. Let \(P\) be a set of \(n\) points in the plane. We study the problem of computing a spanning tree on \(P\) that minimizes the Wiener index when the underlying graph is the complete graph on \(P\), with edge weights given by their Euclidean lengths. In Section 2, we prove that the optimal tree (that minimizes the Wiener index) has no crossing edges. As our main algorithmic result, in Section 3, we give a polynomial-time algorithm to solve the problem when the points \(P\) are in convex position; this result strongly utilizes the structural fact that the edges of an optimal tree do not cross, which enables us to devise a dynamic programming algorithm. Then, in Section 4, we prove that the "Euclidean Wiener Index Tree Problem", in which we seek a spanning tree on \(P\) whose Wiener index is at most \(W\), while having total (Euclidean) weight at most \(B\), is (weakly) NP-hard. Finally, in Section 5, we discuss the problem of finding a minimum Wiener index _path_ spanning \(P\).
Related Work. A problem related to ours is the minimum latency problem, also known as the traveling repairman problem TRP: Compute a path, starting at point \(s\), that visits all points, while minimizing the sum of the distances (the "latencies") along the path from \(s\) to every other point (versus between _all_ pairs of points, as in the Wiener index). There is a PTAS for TRP (and the \(k\)-TRP, with \(k\) repairmen) in the Euclidean plane and in weighted planar graphs [27].
Wiener index optimization also arises in the context of computing a non-contracting embedding of one metric space into another (e.g., a line metric or a tree metric) in order to minimize the average distortion of the embedding (defined to be the sum of all pairs distances in the new space, divided by the sum of all pairs distances in the original space). It is NP-hard to minimize average distortion when embedding a tree metric into a line metric; there is a constant-factor approximation (based on the \(k\)-TRP) for minimizing the average distortion in embedding a metric onto a line (i.e., finding a spanning path of minimum Wiener index) [11], which, using [27], gives a \((2+\varepsilon)\)-approximation in the Euclidean plane.
A related problem that has recently been examined in a geometric setting is the computation of the Beer index of a polygon \(P\), defined to be the probability that two randomly (uniformly) distributed points in \(P\) are visible to each other [1]; the same paper also studies the problem of computing the expected distance between two random points in a polygon, which is, like the Wiener index, based on computing the sum of distances (evaluated as an integral in the continuum) between all pairs of points.
Another area of research that is related to the Wiener index is that of _spanners_: Given a weighted graph \(G\) and a real number \(t>1\), a _\(t\)-spanner_ of \(G\) is a spanning sub-graph \(G^{*}\) of \(G\), such that \(\delta_{G^{*}}(u,v)\leq t\cdot\delta_{G}(u,v)\), for every two vertices \(u\) and \(v\) in \(G\). Thus, the shortest path distances in \(G^{*}\) approximate the shortest path distances in the underlying graph \(G\), and the parameter \(t\) represents the approximation ratio. The smallest \(t\) for which \(G^{*}\) is a \(t\)-spanner of \(G\) is known as the _stretch factor_. There is a vast literature on spanners, especially in geometry (see, e.g., [4, 5, 6, 7, 13, 22, 25]). In a geometric graph, \(G\), the _stretch factor_ between two vertices, \(u\) and \(v\), is the ratio between the Euclidean length of the shortest path from \(u\) to \(v\) in \(G\) and the Euclidean distance between \(u\) and \(v\). The _average stretch factor_ of \(G\) is the average stretch factor taken over all pairs of vertices in \(G\). For a given weighted connected graph \(G=(V,E)\) with positive edge weights and a positive value \(W\), the _average stretch factor spanning tree_ problem seeks a spanning tree \(T\) of \(G\) such that the average stretch factor (over \({n\choose 2}\) pairs of vertices) is bounded by \(W\). For points in the Euclidean plane, one can construct in polynomial time a spanning tree with constant average stretch factor [9].
## 2 Preliminaries
Let \(P\) be a set of \(n\) points in the plane and let \(G=(P,E)\) be the complete graph over \(P\). For each edge \((p,q)\in E\), let \(w(p,q)=|pq|\) denote the weight of \((p,q)\)
given by the Euclidean distance, \(|pq|\), between \(p\) and \(q\). Let \(T\) be a spanning tree of \(P\). For points \(p,q\in P\), let \(\delta_{T}(p,q)\) denote the weight of the (unique) path between \(p\) and \(q\) in \(T\). Let \(W(T)=\sum_{p,q\in P}\delta_{T}(p,q)\) denote the Wiener index of \(T\), given by the sum of the weights of the paths in \(T\) between every pair of points. Finally, for a point \(p\in P\), let \(\delta_{p}(T)=\sum_{q\in P}\delta_{T}(p,q)\) denote the total weight of the paths in \(T\) from \(p\) to every point of \(P\).
Theorem 3.1: _Let \(T^{*}\) be a spanning tree of \(P\) that minimizes the Wiener index. Then, \(T^{*}\) is planar._
Proof: Assume towards a contradiction that there are two edges \((a,c)\) and \((b,d)\) in \(T^{*}\) that cross each other. Let \(F\) be the forest obtained by removing the edges \((a,c)\) and \((b,d)\) from \(T^{*}\). Thus \(F\) contains three sub-trees. Assume, w.l.o.g., that \(a\) and \(b\) are in the same sub-tree \(T_{ab}\), and \(c\) and \(d\) are in separate sub-trees \(T_{c}\) and \(T_{d}\), respectively; see Figure 1. Let \(n_{ab}\), \(n_{c}\), and \(n_{d}\) be the number of points in \(T_{ab}\), \(T_{c}\), and \(T_{d}\), respectively. Thus,
\[W(T^{*}) =W(T_{ab})+n_{c}\cdot\delta_{a}(T_{ab})+n_{d}\cdot\delta_{b}(T_{ ab})\] \[+W(T_{c})+(n_{ab}+n_{d})\cdot\delta_{c}(T_{c})+n_{c}(n_{ab}+n_{d })\cdot|ac|\] \[+W(T_{d})+(n_{ab}+n_{c})\cdot\delta_{d}(T_{d})+n_{d}(n_{ab}+n_{c })\cdot|bd|\] \[+n_{c}\cdot n_{d}\cdot\delta_{T^{*}}(a,b)\,.\]
Let \(T^{\prime}\) be the spanning tree of \(P\) obtained from \(T^{*}\) by replacing the edge \((b,d)\) by the edge \((a,d)\). Similarly, let \(T^{\prime\prime}\) be the spanning tree of \(P\) obtained from \(T^{*}\) by replacing the edge \((a,c)\) by the edge \((b,c)\). Thus,
\[W(T^{\prime}) =W(T_{ab})+(n_{c}+n_{d})\cdot\delta_{a}(T_{ab})\] \[+W(T_{c})+(n_{ab}+n_{d})\cdot\delta_{c}(T_{c})+n_{c}(n_{ab}+n_{d })\cdot|ac|\] \[+W(T_{d})+(n_{ab}+n_{c})\cdot\delta_{d}(T_{d})+n_{d}(n_{ab}+n_{c })\cdot|ad|\,,\]
\[W(T^{\prime\prime}) =W(T_{ab})+(n_{c}+n_{d})\cdot\delta_{b}(T_{ab})\] \[+W(T_{c})+(n_{ab}+n_{d})\cdot\delta_{c}(T_{c})+n_{c}(n_{ab}+n_{d}) \cdot|bc|\] \[+W(T_{d})+(n_{ab}+n_{c})\cdot\delta_{d}(T_{d})+n_{d}(n_{ab}+n_{c}) \cdot|bd|\,.\]
Therefore,
\[W(T^{*})-W(T^{\prime}) =n_{d}\big{(}\delta_{b}(T_{ab})-\delta_{a}(T_{ab})\big{)}+n_{d}( n_{ab}+n_{c})\big{(}|bd|-|ad|\big{)}\] \[+n_{c}\cdot n_{d}\cdot\delta_{T^{*}}(a,b)\,,\]
and
\[W(T^{*})-W(T^{\prime\prime}) =n_{c}\big{(}\delta_{a}(T_{ab})-\delta_{b}(T_{ab})\big{)}+n_{c}( n_{ab}+n_{d})\big{(}|ac|-|bc|\big{)}\] \[+n_{c}\cdot n_{d}\cdot\delta_{T^{*}}(a,b)\,.\]
If \(W(T^{*})-W(T^{\prime})>0\) or \(W(T^{*})-W(T^{\prime\prime})>0\), then this contradicts the minimality of \(T^{*}\), and we are done.
Assume that \(W(T^{*})-W(T^{\prime})\leq 0\) and \(W(T^{*})-W(T^{\prime\prime})\leq 0\). Since \(n_{c}>0\) and \(n_{d}>0\), we have
\[\delta_{b}(T_{ab})-\delta_{a}(T_{ab})+(n_{ab}+n_{c})\big{(}|bd|-|ad|\big{)}+n_ {c}\cdot\delta_{T^{*}}(a,b)\leq 0\,,\]
and
\[\delta_{a}(T_{ab})-\delta_{b}(T_{ab})+(n_{ab}+n_{d})\big{(}|ac|-|bc|\big{)}+n_ {d}\cdot\delta_{T^{*}}(a,b)\leq 0\,.\]
Thus, by summing these inequalities, we have
\[(n_{ab}+n_{c})\big{(}|bd|-|ad|\big{)}+(n_{ab}+n_{d})\big{(}|ac|-|bc|\big{)}+(n _{c}+n_{d})\cdot\delta_{T^{*}}(a,b)\leq 0\,.\]
That is,
\[n_{ab}\big{(}|bd|+|ac|-|ad|-|bc|\big{)} +n_{c}\big{(}|bd|+\delta_{T^{*}}(a,b)-|ad|\big{)}\] \[+n_{d}\big{(}|ac|+\delta_{T^{*}}(a,b)-|bc|\big{)}\leq 0\,.\]
Since \(n_{ab},n_{c},n_{d}>0\), and, by the triangle inequality, \(|bd|+|ac|-|ad|-|bc|>0\), \(|bd|+\delta_{T^{*}}(a,b)-|ad|>0\), and \(|ac|+\delta_{T^{*}}(a,b)-|bc|>0\), this is a contradiction.
## 3 An Exact Algorithm for Points in Convex Position
Let \(\{p_{1},p_{2},\ldots,p_{n}\}\) denote the vertices of the convex polygon that is obtained by connecting the points in \(P\), ordered in clockwise-order with an arbitrary first point \(p_{1}\); see Figure 2. For simplicity of presentation, we assume that all indices are taken modulo \(n\). For each \(1\leq i\leq j\leq n\), let \(P[i,j]\subseteq P\) be the set \(\{p_{i},p_{i+1},\ldots,p_{j}\}\). Let \(T_{i,j}\) be a spanning tree of \(P[i,j]\), and let \(W(T_{i,j})\) denote
its Wiener index. For a point \(x\in\{i,j\}\), let \(\delta_{x}(T_{i,j})\) be the total weight of the shortest paths from \(p_{x}\) to every point of \(P[i,j]\) in \(T_{i,j}\). That is \(\delta_{x}(T_{i,j})=\sum_{p\in P[i,j]}\delta_{T_{i,j}}(p_{x},p)\).
Let \(T^{*}\) be a minimum Wiener index tree of \(P\) and let \(W^{*}\) be its Wiener index. Notice that, for any \(1\leq i<j\leq n\), the points in \(P[i,j]\) are in convex position, since the points in \(P\) are in convex position. Since \(T^{*}\) is a spanning tree, each point, particularly \(p_{1}\), is adjacent to at least one edge in \(T^{*}\). Let \(p_{j}\) be the point with maximum index \(j\) that is connected to \(p_{1}\) in \(T^{*}\). Moreover, there exists an index \(1\leq i\leq j\) such that all the points in \(P[1,i]\) are closer to \(p_{1}\) than to \(p_{j}\) in \(T^{*}\), and all the points in \(P[i+1,j]\) are closer to \(p_{j}\) than to \(p_{1}\) in \(T^{*}\). Hence,
\[W^{*} =W(T_{1,i})+(n-i)\cdot\delta_{1}(T_{1,i}) \tag{1}\] \[+W(T_{i+1,j})+(n-j+i)\cdot\delta_{j}(T_{i+1,j})\] (2) \[+W(T_{j,n})+(j-1)\cdot\delta_{j}(T_{j,n})\] (3) \[+i(n-i)\cdot|p_{1}p_{j}|. \tag{4}\]
Thus, in order to compute \(W^{*}\), we compute (1), (2), (3), and (4) for each \(i\) between \(2\) and \(n\) and for each \(j\) between \(1\) and \(i\), and take the minimum over the sum of these values. In general, for every \(1\leq i<j\leq n\), let \(W_{j}[i,j]=W(T_{i,j})+(n-j+i-1)\cdot\delta_{j}(T_{i,j})\) be the minimum value obtained by a spanning tree \(T_{i,j}\) of \(P[i,j]\) rooted at \(p_{j}\). Similarly, let \(W_{i}[i,j]=W(T_{i,j})+(n-j+i-1)\cdot\delta_{i}(T_{i,j})\) be the minimum value obtained by a spanning tree \(T_{i,j}\) of \(P[i,j]\) rooted at \(p_{i}\). Thus, we can compute \(W_{j}[i,j]\) and \(W_{i}[i,j]\) recursively using the following formulas; see also Figure 3.
\[W_{j}[i,j]=\min_{\begin{subarray}{c}i\leq k<j\\ k\leq l<j\end{subarray}}\left\{W_{k}[i,k]+W_{k}[k,l]+W_{j}[l+1,j]+(l-i+1)(n-l+i- 1)\cdot|p_{k}p_{j}|\right\},\]
Figure 2: The convex polygon that is obtained from \(P\). \(p_{1}\) is connected to \(p_{j}\) in \(T^{*}\).
and
\[W_{i}[i,j]=\min_{\begin{subarray}{c}i<k\leq j\\ i\leq l<k\end{subarray}}\left\{W_{i}[i,l]+W_{k}[l+1,k]+W_{j}[k,j]+(j-l)(n-j+l) \cdot|p_{i}p_{k}|\right\}.\]
We compute \(W_{j}[i,j]\) and \(W_{i}[i,j]\), for each \(1\leq i<j\leq n\), using dynamic programming as follows. We maintain two tables \(\stackrel{{\rightarrow}}{{M}}\) and \(\stackrel{{\leftarrow}}{{M}}\) each of size \(n\times n\), such that \(\stackrel{{\rightarrow}}{{M}}[i,j]=W_{j}[i,j]\) and \(\stackrel{{\leftarrow}}{{M}}[i,j]=W_{i}[i,j]\), for each \(1\leq i<j\leq n\). We fill in the tables using Algorithm 1.
```
1: \(n\leftarrow|P|\)
2: for each \(i\gets 1\) to \(n\) do
       \(\stackrel{{\rightarrow}}{{M}}[i,i]\gets 0\);  \(\stackrel{{\leftarrow}}{{M}}[i,i]\gets 0\)
3: for each \(j\gets n\) to \(1\) do
       for each \(i\gets j\) to \(n\) do
           \(\stackrel{{\rightarrow}}{{M}}[i,j]\leftarrow\min_{\begin{subarray}{c}i\leq k<j\\ k\leq l<j\end{subarray}}\left\{\stackrel{{\rightarrow}}{{M}}[i,k]+\stackrel{{\leftarrow}}{{M}}[k,l]+\stackrel{{\rightarrow}}{{M}}[l+1,j]+(l-i+1)(n-l+i-1)\cdot|p_{k}p_{j}|\right\}\)
           \(\stackrel{{\leftarrow}}{{M}}[i,j]\leftarrow\min_{\begin{subarray}{c}i<k<j\\ i\leq l<k\end{subarray}}\left\{\stackrel{{\leftarrow}}{{M}}[i,l]+\stackrel{{\rightarrow}}{{M}}[l+1,k]+\stackrel{{\rightarrow}}{{M}}[k,j]+(j-l)(n-j+l)\cdot|p_{i}p_{k}|\right\}\)
4: return \(\stackrel{{\leftarrow}}{{M}}[1,n]\)
```
**Algorithm 1**\(ComputeOptimal(P)\)
Notice that when we fill the cell \(\stackrel{{\rightarrow}}{{M}}[i,j]\), all the cells \(\stackrel{{\rightarrow}}{{M}}[i,k]\), \(\stackrel{{\leftarrow}}{{M}}[k,l]\), and \(\stackrel{{\rightarrow}}{{M}}[l+1,j]\), for each \(i\leq k<j\) and for each \(k\leq l<j\), are already computed,
Figure 3: A sub-problem defined by \(P[i,j]\). (a) Computing \(W_{j}[i,j]\). (b) Computing \(W_{i}[i,j]\).
and when we fill the cell \(\stackrel{{\leftarrow}}{{M}}[i,j]\), all the cells \(\stackrel{{\leftarrow}}{{M}}[i,l]\), \(\stackrel{{\rightarrow}}{{M}}[l+1,k]\), and \(\stackrel{{\rightarrow}}{{M}}[k,j]\), for each \(i<k\leq j\) and for each \(i\leq l<k\), are already computed. Therefore, each cell in the table is computed in \(O(n^{2})\) time, and the whole table is computed in \(O(n^{4})\) time.
The following theorem summarizes the result of this section.
Theorem 3.1: _Let \(P\) be a set of \(n\) points in convex position. Then, a spanning tree of \(P\) of minimum Wiener index can be computed in \(O(n^{4})\) time._
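For small instances, the two recurrences can be transcribed directly into memoized Python as a sanity-check sketch; it mirrors the tables \(\stackrel{{\rightarrow}}{{M}}\) and \(\stackrel{{\leftarrow}}{{M}}\) in 0-based indices, and the names and structure are illustrative rather than the authors' implementation.

```python
# Direct, memoized transcription of the recurrences for W_j[i,j] and W_i[i,j].
# `pts` is a list of (x, y) coordinates in convex position, in clockwise order.
from functools import lru_cache
from math import dist

def min_wiener_spanning_tree_convex(pts):
    n = len(pts)

    @lru_cache(maxsize=None)
    def F(i, j):  # paper's ->M[i, j]: best tree on P[i..j] rooted at p_j
        if i == j:
            return 0.0
        best = float("inf")
        for k in range(i, j):            # i <= k < j
            for l in range(k, j):        # k <= l < j
                cnt = l - i + 1          # points on p_k's side of edge (p_k, p_j)
                best = min(best, F(i, k) + B(k, l) + F(l + 1, j)
                           + cnt * (n - cnt) * dist(pts[k], pts[j]))
        return best

    @lru_cache(maxsize=None)
    def B(i, j):  # paper's <-M[i, j]: best tree on P[i..j] rooted at p_i
        if i == j:
            return 0.0
        best = float("inf")
        for k in range(i + 1, j + 1):    # i < k <= j
            for l in range(i, k):        # i <= l < k
                cnt = j - l              # points on p_k's side of edge (p_i, p_k)
                best = min(best, B(i, l) + F(l + 1, k) + F(k, j)
                           + cnt * (n - cnt) * dist(pts[i], pts[k]))
        return best

    return B(0, n - 1)                   # Wiener index of an optimal spanning tree
```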
## 4 Hardness Proof
Let \(P\) be a set of points in the plane and let \(T\) be a spanning tree of \(P\). We define the Wiener index of \(T\) as \(W(T)=\sum_{p,q\in P}\delta_{T}(p,q)\) and the weight of \(T\) as \(wt(T)=\sum_{(p,q)\in T}|pq|\), where \(\delta_{T}(p,q)\) is the length of the path between \(p\) and \(q\) in \(T\) and \(|pq|\) is the Euclidean distance between \(p\) and \(q\). For an edge \((p,q)\), let \(N_{T}(p)\) (resp., \(N_{T}(q)\)) be the number of points in \(T\) that are closer to \(p\) than to \(q\) (resp., closer to \(q\) than to \(p\)). It is well known [21] that \(W(T)\) can be formulated as:
\[W(T)=\sum_{(p,q)\in T}N_{T}(p)\cdot N_{T}(q)\cdot|pq|.\]
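This formulation gives a simple linear-time computation of \(W(T)\) once subtree sizes are known; a short sketch (with illustrative input conventions) follows.

```python
# Edge-contribution formula: root the tree anywhere, compute subtree sizes,
# and let every tree edge (u, parent[u]) contribute size * (n - size) * length.
from math import dist

def wiener_index(points, adj, root=0):
    """points: {vertex: (x, y)}; adj: {vertex: list of neighbors} of a tree."""
    n = len(points)
    total, size = 0.0, {}
    order, parent, stack = [], {root: None}, [root]
    while stack:                          # iterative DFS to get a traversal order
        u = stack.pop()
        order.append(u)
        for v in adj[u]:
            if v != parent[u]:
                parent[v] = u
                stack.append(v)
    for u in reversed(order):             # accumulate subtree sizes bottom-up
        size[u] = 1 + sum(size[v] for v in adj[u] if parent.get(v) == u)
        if parent[u] is not None:
            # edge (u, parent[u]) separates size[u] points from the other n - size[u]
            total += size[u] * (n - size[u]) * dist(points[u], points[parent[u]])
    return total
```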
In this section, we prove that the following problem is NP-hard.
Euclidean Wiener Index Tree Problem: Given a set \(P\) of points in the plane, a cost \(W\), and a budget \(B\), decide whether there exists a spanning tree \(T\) of \(P\), such that \(W(T)\leq W\) and \(wt(T)\leq B\).
Theorem 4.1: _The Euclidean Wiener Index Tree Problem is weakly NP-hard._
Proof: Inspired by Carmi and Chaitman-Yerushalmi [8], we reduce the Partition problem, which is known to be NP-hard [14], to the Euclidean Wiener Index Tree Problem. In the Partition problem, we are given a set \(X=\{x_{1},x_{2},\ldots,x_{n}\}\) of \(n\) positive integers with even \(R=\sum_{i=1}^{n}x_{i}\), and the goal is to decide whether there is a subset \(S\subseteq X\), such that \(\sum_{x_{i}\in S}x_{i}=\frac{1}{2}R\).
Given an instance \(X=\{x_{1},x_{2},\ldots,x_{n}\}\) of the Partition problem, where the \(x_{i}\)'s are integers, we construct a set \(P\) of \(m=n^{3}+3n\) points as follows. The set \(P\) consists of \(n\) points \(p_{1},p_{2},\ldots,p_{n}\) located equally spaced on a circle of radius \(nR\), and a cluster \(C\) of \(n^{3}\) points located at the center of the circle. Moreover, for each \(1\leq i\leq n\), we locate two points \(l_{i}\) and \(r_{i}\), both at distance \(x_{i}\) from \(p_{i}\), such that the distance between them is \(\frac{1}{2}x_{i}\); see Figure 4. Finally, we set
\[B = \Big{(}n^{2}+\frac{7}{4}\Big{)}R,\,\text{and}\] \[W = 3n^{2}\big{(}m-3\big{)}R+\Big{(}\frac{9}{4}m-\frac{13}{4}\Big{)}R\] \[= 3n^{5}R+\frac{45}{4}n^{3}R-9n^{2}R+\frac{27}{4}nR-\frac{13}{4}R\,.\]
Assume that there exists a set \(S\subseteq X\), such that \(\sum_{x_{i}\in S}x_{i}=\frac{1}{2}R\). We construct a spanning tree \(T\) for the points in \(P\) as follows:
* Select an arbitrary point \(s\in C\) and connect it to all the points in \(C\cup\{p_{1},p_{2},\ldots,p_{n}\}\) as a star centered at \(s\).
* For each \(1\leq i\leq n\), connect the points \(p_{i}\) and \(l_{i}\).
* For each \(x_{i}\in S\), connect the points \(p_{i}\) and \(r_{i}\).
* For each \(x_{i}\in X\setminus S\), connect the points \(r_{i}\) and \(l_{i}\); see Figure 4.
It is easy to see that \(wt(T)=n^{2}R+R+\frac{3}{4}R=\big{(}n^{2}+\frac{7}{4}\big{)}R=B\). Moreover, the Wiener index of \(T\) is:
\[W(T) = \sum_{(p,q)\in T}N_{T}(p)\cdot N_{T}(q)\cdot|pq|\] \[= 3(n^{3}+3n-3)n^{2}R+\sum_{x_{i}\in S^{\prime}}2(n^{3}+3n-1)x_{i}\] \[\quad+\sum_{x_{i}\notin S^{\prime}}\Big{(}(n^{3}+3n-1)\frac{1}{2} x_{i}\Big{)}+\sum_{x_{i}\notin S^{\prime}}\Big{(}2(n^{3}+3n-2)x_{i}\Big{)}\] \[= 3n^{5}R+9n^{3}R-9n^{2}R+(n^{3}+3n-1)R\] \[\quad+\frac{1}{4}(n^{3}+3n-1)R+(n^{3}+3n-2)R\] \[= 3n^{5}R+\frac{45}{4}n^{3}R-9n^{2}R+\frac{27}{4}nR-\frac{13}{4}R =W\,.\]
Conversely, let \(T^{\prime}\) be a spanning tree of \(P\) with \(wt(T^{\prime})\leq B\) and \(W(T^{\prime})\leq W\).
Figure 4: The set \(P\) produced by the reduction. Connecting the points \(l_{j}\), \(r_{j}\), and \(p_{j}\) for \(x_{j}\in S\) (blue) and connecting the points \(l_{i}\), \(r_{i}\), and \(p_{i}\) for \(x_{i}\in X\setminus S\) (red).
Claim: The number of edges \((p,q)\in T^{\prime}\), such that \(p\in C\) and \(q\in P\setminus C\) is \(n\).
Proof: Assume there are \(k\) such edges. The weight of each such edge is at least \(nR\); thus, \(wt(T^{\prime})\geq knR\). Since \(B=(n^{2}+\frac{7}{4})R\), we get that \(k\leq n\). We have
\[W(T^{\prime}) > (3knR+3(n-k)(nR+2\pi R))n^{3}\] \[= (3kn+3n^{2}+6n\pi-3kn-6k\pi)n^{3}R\] \[= (3n^{2}+6\pi(n-k))n^{3}R\] \[= 3n^{5}R+6\pi(n-k)n^{3}R\,.\]
Thus, if \(k<n\), then we get that \(W(T^{\prime})>3n^{5}R+6\pi n^{3}R>W\), for sufficiently large \(n\).
Let \(G_{i}=\{p_{i},l_{i},r_{i}\}\), for every \(1\leq i\leq n\). From the proof of Claim 4, it follows that for every \(1\leq i\leq n\), there is exactly one edge \((p,q)\) in \(T^{\prime}\), where \(q\in G_{i}\) and \(p\in C\). Moreover, it is easy to see that \(q=p_{i}\). Thus, in every \(G_{i}\), we have \((p_{i},l_{i})\in T^{\prime}\) or \((p_{i},r_{i})\in T^{\prime}\). Assume, w.l.o.g., that \((p_{i},l_{i})\in T^{\prime}\). Therefore, either \((p_{i},r_{i})\in T^{\prime}\) or \((l_{i},r_{i})\in T^{\prime}\). Let \(S^{\prime}\subseteq X\), such that \(x_{i}\in S^{\prime}\) if and only if \((p_{i},r_{i})\in T^{\prime}\), and let \(R^{\prime}=\sum_{x_{i}\in S^{\prime}}x_{i}\).
Thus, to finish the proof we show that if \(R^{\prime}\neq\frac{1}{2}R\), then either \(wt(T^{\prime})>B\) or \(W(T)>W\).
**Case 1:**\(R^{\prime}>\frac{1}{2}R\). In this case, we have
\[wt(T^{\prime}) \geq n^{2}R+\sum_{x_{i}\in S^{\prime}}2x_{i}+\sum_{x_{i}\notin S^{ \prime}}\frac{3}{2}x_{i}\ =\ n^{2}R+2R^{\prime}+\frac{3}{2}(R-R^{\prime})\] \[= n^{2}R+\frac{1}{2}R^{\prime}+\frac{3}{2}R\ >\ n^{2}R+\frac{1}{4}R+ \frac{3}{2}R\ =\ \big{(}n^{2}+\frac{7}{4}\big{)}R\ =B\,.\]
Therefore, \(wt(T^{\prime})>B\).
**Case 2:**\(R^{\prime}<\frac{1}{2}R\). In this case, we have
\[W(T) = \sum_{(p,q)\in T}N_{T}(p)\cdot N_{T}(q)\cdot|pq|\] \[= 3(n^{3}+3n-3)n^{2}R+\sum_{x_{i}\in S^{\prime}}2(n^{3}+3n-1)x_{i}\] \[\ \ \ +\sum_{x_{i}\notin S^{\prime}}\Big{(}(n^{3}+3n-1)\frac{1}{2} x_{i}\Big{)}+\sum_{x_{i}\notin S^{\prime}}\Big{(}2(n^{3}+3n-2)x_{i}\Big{)}\] \[= 3n^{5}R+9n^{3}R-9n^{2}R+2(n^{3}+3n-1)R^{\prime}\] \[\ \ \ +\frac{1}{2}\Big{(}n^{3}+3n-1\Big{)}(R-R^{\prime})+2(n^{3}+3 n-2)(R-R^{\prime})\]
\[= 3n^{5}R+9n^{3}R-9n^{2}R+2(n^{3}+3n-2)R\] \[\quad-\Big{(}\frac{1}{2}\Big{(}n^{3}+3n-1\Big{)}-2\Big{)}R^{\prime} +\frac{1}{2}\Big{(}n^{3}+3n-1\Big{)}R\] \[> 3n^{5}R+9n^{3}R-9n^{2}R+2(n^{3}+3n-2)R\] \[\quad-\frac{1}{2}\Big{(}\frac{1}{2}\Big{(}n^{3}+3n-1\Big{)}-2 \Big{)}R+\frac{1}{2}\Big{(}n^{3}+3n-1\Big{)}R\] \[= 3n^{5}R+\frac{45}{4}n^{3}R-9n^{2}R+\frac{27}{4}nR-\frac{13}{4}R= W\,.\]
## 5 Paths that Optimize Wiener Index
We consider now the case of spanning paths that optimize the Wiener index.
Theorem 5.1: _Let \(P\) be a set of \(n\) points. The path that minimizes the Wiener index among all Hamiltonian paths of \(P\) is not necessarily planar._
Proof: Consider the set \(P\) of \(n=2m+2\) points in convex position as shown in Figure 5. The set \(P\) consists of two clusters \(P_{l}\) and \(P_{r}\) and two points \(p\) and \(q\), where \(|P_{l}|=|P_{r}|=m\). The points in cluster \(P_{l}\) are arbitrarily close to the origin \((0,0)\), and the points in cluster \(P_{r}\) are arbitrarily close to coordinate \((6,0)\). The point \(p\) is located on coordinate \((5,1)\) and the point \(q\) is located on coordinate \((5,-1)\).
For simplicity of computation, we assume that a path connecting the points in \(P_{l}\) has a Wiener index zero, and also a path connecting the points in \(P_{r}\) has a Wiener index zero. Thus, any path \(\Pi\) of \(P\) that aims to minimize the Wiener index will connect the points in \(P_{l}\) by a path and the points in \(P_{r}\) by a path. We computed the Wiener index of all possible Hamiltonian paths defined on points \((0,0)\), \((6,0)\), \(p\), and \(q\); see Figure 6. This computation shows that the Hamiltonian path of the minimum Wiener index is not planar (for sufficiently large \(n\)).
Theorem 5.2: _For points in the Euclidean plane, it is NP-hard to compute a Hamiltonian path minimizing Wiener index._
Figure 5: A set \(P\) of \(n=2m+2\) points in a convex position.
Proof: We reduce from Hamiltonicity in a grid graph (whose vertices are integer grid points and whose edges join pairs of grid points at distance one). First, observe that the Wiener index of a Hamiltonian path of \(n\) points, where each edge is of length one, is \(\sum_{i=1}^{n-1}i(n-i)={n+1\choose 3}\); see Figure 7. Thus, it is easy to see that a grid graph \(G\) has a Hamiltonian path if and only if there exists a path of Wiener index \({n+1\choose 3}\).
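The closed form used here is easy to verify numerically; the following small snippet checks the identity for the first few values of \(n\).

```python
# Check that sum_{i=1}^{n-1} i*(n-i) equals binomial(n+1, 3), the Wiener index
# of a Hamiltonian path whose edges all have length one.
from math import comb

for n in range(2, 12):
    lhs = sum(i * (n - i) for i in range(1, n))
    assert lhs == comb(n + 1, 3), (n, lhs, comb(n + 1, 3))
```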
Theorem 4.1: _There exists a set \(P\) of \(n\) points in the plane, such that the Wiener index of any Hamiltonian path is at least \(\Theta(\sqrt{n})\) times the Wiener index of the complete Euclidean graph over \(P\)._
Proof: Let \(P\) be a set of \(n\) points located on a \(\sqrt{n}\times\sqrt{n}\) integer grid. The Wiener index of any Hamiltonian path of \(P\) is at least \({n+1\choose 3}\), which is the Wiener index of a Hamiltonian path all of whose edges are of length one. Thus, the Wiener index of any Hamiltonian path of \(P\) is at least \(\Theta(n^{3})\). On the other hand, the Wiener index of the complete graph over \(P\) is \(\Theta(n^{2.5})\).
Figure 6: The Wiener index of the 12 possible Hamiltonian paths that are defined on points \((0,0)\), \((6,0)\), \(p\), and \(q\) (assuming that the \(m\) points on \((0,0)\) are connected by a path, and the \(m\) points on \((6,0)\) are connected by a path, both of Wiener index zero).
Figure 7: A grid graph \(G\) and a Hamiltonian path with Wiener index \({n+1\choose 3}\) in \(G\). |
2304.05257 | Multi-granulariy Time-based Transformer for Knowledge Tracing | In this paper, we present a transformer architecture for predicting student
performance on standardized tests. Specifically, we leverage students
historical data, including their past test scores, study habits, and other
relevant information, to create a personalized model for each student. We then
use these models to predict their future performance on a given test. Applying
this model to the RIIID dataset, we demonstrate that using multiple
granularities for temporal features as the decoder input significantly improve
model performance. Our results also show the effectiveness of our approach,
with substantial improvements over the LightGBM method. Our work contributes to
the growing field of AI in education, providing a scalable and accurate tool
for predicting student outcomes. | Tong Zhou | 2023-04-11T14:46:38Z | http://arxiv.org/abs/2304.05257v3 | # Multi-granularity Time-based Transformer for Knowledge Tracing
###### Abstract
In this paper, we present a transformer architecture for predicting student performance on standardized tests. Specifically, we leverage students' historical data, including their past test scores, study habits, and other relevant information, to create a personalized model for each student. We then use these models to predict their future performance on a given test. Applying this model to the RIIID dataset, we demonstrate that using multiple granularities for temporal features as the decoder input significantly improve model performance. Our results also show the effectiveness of our approach, with substantial improvements over the LightGBM method. Our work contributes to the growing field of AI in education, providing a scalable and accurate tool for predicting student outcomes.
Transformer, Multi-granularity, Education, RIIID, Deep Learning
## I Introduction
Knowledge tracing is an important field of research in educational data mining, as it can help to improve the effectiveness and efficiency of learning. The first application is personalized learning. [1] surveys that by tracking the histories of individual students over time, instructors can tailor instructional materials to meet the specific needs of each student. The second application is early intervention. [2] recognizes that knowledge tracing enables instructors to intervene early and offer extra assistance or resources to students who need it in order to succeed by identifying those who are having difficulty with specific concepts or skills. Knowledge tracing can also facilitate adaptive learning, which can change the level of difficulty and pace of instruction based on the understanding and performance of individual students. [3] shows that learning and engagement can be enhanced since students are appropriately challenged and can avoid frustration or boredom.
Recent advances in artificial intelligence (AI) have shown great promise for improving educational outcomes. In particular, deep learning models have been developed to predict student performance on standardized tests, such as the SAT and TOEIC, based on a variety of factors, including their previous academic records, socio-economic background, and personal characteristics. One of the key challenges in building such models is capturing the dynamic and complex nature of student behavior, which can vary widely over time and across individuals.
In this paper, we propose a Transformer that leverages users' histories for educational performance prediction. The Transformer is a state-of-the-art neural network architecture that has achieved impressive results in various natural language processing tasks, such as machine translation, question answering, and text generation. Our approach extends the Transformer to model student performance by incorporating their past academic records, study habits, and other relevant information.
To evaluate the effectiveness of our approach, we conducted experiments on real-world educational datasets, including the Kaggle Riiid AIEd Challenge dataset. We demonstrate that converting temporal features into multiple categorical features with different granularities can greatly improve the model's performance. The results even show that the lecture information is irrelevant in the presence of the multi-granularity temporal features. Our results also demonstrate that our model outperforms the traditional LightGBM, achieving state-of-the-art performance in predicting student performance on standardized tests. Moreover, we show that our model can be used to provide personalized recommendations to students based on their historical data, enabling them to improve their academic performance.
## II Related Work
Works on knowledge tracing mainly follow two different approaches: BKT (Bayesian Knowledge Tracing) and DKT (Deep Knowledge Tracing). BKT is a probabilistic model where student knowledge is modeled as a latent variable, and observed context information and learning performance are used to identify the latent structure, represented by a Hidden Markov Model. A Hidden Markov Model (HMM) is a statistical model used to analyze sequential data. HMMs are made up of a number of observable states, which reflect the observed data, and a number of hidden states, which represent the unobserved variables underlying the data. The Baum-Welch algorithm is used to estimate the transition probabilities between hidden states and the emission probabilities from hidden states to observable states using the training data. HMMs have been widely used in speech recognition [4], network analysis [5], and even social sciences [6]. HMM applications in knowledge tracing can be found in [2, 7, 8, 9, 10, 11]. BKT, however, has some innate limitations, such as its implausible assumption of independence between skills and its resulting inability to address correlations between different skills or knowledge components.
Deep Knowledge Tracing (DKT) has received increasing attention. DKT relies on a recurrent neural network (RNN) architecture, typically a Long Short-Term Memory (LSTM) network, that takes a sequence of student responses and other context information as input. By its design, DKT is able to handle correlations between different knowledge components and capture complex interactions between these components over time [12, 13, 14].
The Transformer architecture has gained popularity in recent years, as its self-attention mechanism has demonstrated great effectiveness for sequential prediction tasks. Notable Transformer-based models include SAKT [15], AKT [16], SAINT [17], and SAINT+ [18]. SAINT+ is the most closely related to our paper. We present a slightly different neural network architecture and a notable improvement in feature engineering for temporal features. A comprehensive survey on knowledge tracing models can be found in [19].
## III Methods
We present our Transformer architecture in Figure 1. The encoder in our model consists of four components: question embeddings, part embeddings, position embeddings, and prior-question-had-explanation embeddings. The decoder in our model consists of six components: position embeddings, response embeddings, prior-elapsed-time embeddings, and lag-time-1, lag-time-2, and lag-time-3 categorical embeddings. One of the notable features of our Transformer architecture is that we include three separate embedding layers for representing lag time at three different temporal granularities: by seconds, by minutes, and by days.
This architecture captures multiple types of information. The encoder captures information about the questions and parts of the questions, as well as the presence or absence of explanations. This allows the model to capture multiple types of information that can affect student learning, and to learn meaningful representations of these factors.
It also incorporates temporal information. The encoder uses position embedding to encode the position of each token within the sequence of interactions between the student and the instruction materials. This allows the model to capture the temporal dynamics of the data and learn representations that can account for changes in student learning over time.
It can model complex interactions: The use of multiple embedding layers in both the encoder and decoder components of the model allows for the modeling of complex interactions between different factors that can affect student learning. For example, the model can learn to capture the relationship between a student's prior response and their subsequent performance on a related question, or the interaction between elapsed time and lag time in predicting student learning outcomes.
### _Data_
We use the dataset provided by Kaggle from the "Riiid! Answer Correctness Prediction" competition1, which challenges participants to build machine learning models that can predict students' responses to questions on an educational platform.
Footnote 1: [https://www.kaggle.com/c/riiid-test-answer-prediction/](https://www.kaggle.com/c/riiid-test-answer-prediction/)
The dataset contains over 100 million rows of data from student interactions with the platform, including information on the questions, answers, and explanations provided, as well as metadata such as the time elapsed since the previous interaction and the student's performance history. The goal is to predict whether or not a student will correctly answer the next question in a sequence, based on their past interactions with the platform.
### _Transformer_
Our knowledge tracing model is based on the Transformer [20] architecture, which includes an encoder and a decoder. A student's interactions with the online learning system are modeled as a sequence of events, where each event corresponds to a particular action taken by the student. These actions may include answering a question, skipping a question, or requesting help, among others. To convert this sequence of events into a format that can be used by the Transformer, the data is typically preprocessed and encoded in two separate steps: one for the encoder and one for the decoder.
Fig. 1: Model Architecture
Our model uses upper-triangular masks in the multi-head attention (and self-attention), so that both the encoder and the decoder can only attend to positions that have already been processed. In other words, each token can only attend to previous tokens in the sequence, not to tokens that come after it. This prevents the model from "cheating" by taking information from the future and ensures that only data available at each stage of the sequence is used.
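As an illustration (not the exact implementation used in our experiments, and not tied to any particular framework), the following PyTorch sketch builds such an upper-triangular mask and passes it to a multi-head attention layer with the dimensions used later in the paper:

```python
import torch
import torch.nn as nn

def causal_mask(seq_len: int) -> torch.Tensor:
    # True above the diagonal marks positions that must NOT be attended to.
    return torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)

seq_len, d_model, n_heads = 100, 128, 8
x = torch.randn(seq_len, 1, d_model)          # (sequence, batch, features)
attn = nn.MultiheadAttention(d_model, n_heads)

# Each position can only attend to itself and earlier positions.
out, _ = attn(x, x, x, attn_mask=causal_mask(seq_len))
print(out.shape)  # torch.Size([100, 1, 128])
```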
### _Encoder_
1. **Question embeddings**: this feature encodes question ID. We used an embedding layer of \(128\) dimensions. The question embedding layer is intended to help the neural network identify the connections between various abilities or concepts and their significance to the student's overall learning development. The network may learn to spot links between question difficulty and student performance by modeling the questions and tasks using an embedding layer.
2. **Part embeddings**: this feature encodes the different sections of the test in the dataset. This embedding layer is intended to help the neural network understand the connections between different parts of the questions and their significance to the student's overall learning progress. The network may learn to capture the connections between category difficulty and student performance by encoding the categories using an embedding layer.
3. **Position embeddings**: this feature serves to encode the relative position of each token in the input sequence, since the self-attention mechanism treats all tokens equally regardless of their position in the sequence.
4. **Prior-question-had-explanation embeddings**: this feature encodes whether or not a user saw the explanation of a question after answering it, with a vocabulary size of \(3\). This embedding layer is intended to help the neural network identify the connection between the user's attempt to see an explanation and their overall learning outcomes.
### _Decoder_
1. **Position embeddings**: this feature serves to encode the relative position of each token in the input sequence, since the self-attention mechanism treats all tokens equally regardless of their position in the sequence.
2. **response embeddings**: this feature encodes whether a user's answer is correct or not, with a vocabulary size of 3. The goal of the response embedding layer is to help the neural network understand the connection between the accuracy of the student's response and their overall learning progress. The network can learn to distinguish between right and wrong responses and can record patterns and links between response accuracy and student performance by modelling the response using an embedding layer. This can increase the network's predictive accuracy and make it easier to spot areas where a student may be struggling or excelling.
3. **prior-elapsed-time embeddings**: this feature encodes the amount of time that a user took to answer the question, with a vocabulary size of \(301\). The goal of the elapsed time embedding is to assist the neural network in learning meaningful representations of the temporal information and in capturing the amount of time that has passed between the current task or question and the previous one. The network may learn to distinguish between various time periods and capture patterns and correlations between elapsed time and student performance by expressing elapsed time using an embedding layer. This can increase the network's prediction accuracy and make it easier to spot long-term trends in student learning.
4. **lag-time-1 categorical embeddings**: this feature encodes lag time in seconds, with a vocabulary size of \(301\).
5. **lag-time-2 categorical embeddings**: this feature encodes lag time in minutes, with a vocabulary size of \(1441\).
6. **lag-time-3 categorical embeddings**: this feature encodes lag time in days, with a vocabulary size of \(366\). A minimal sketch of how these three granularities can be derived and embedded is given after this list.
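The following sketch illustrates, under our own assumptions, how a lag time (given in milliseconds here) could be bucketed at the three granularities and embedded with the vocabulary sizes listed above; the way the three embeddings are combined (summation here) is an illustrative choice, not necessarily the exact implementation.

```python
import torch
import torch.nn as nn

class LagTimeEmbeddings(nn.Module):
    """Illustrative multi-granularity lag-time embeddings (seconds/minutes/days)."""
    def __init__(self, d_model: int = 128):
        super().__init__()
        self.by_seconds = nn.Embedding(301, d_model)   # 0..300 s, clipped
        self.by_minutes = nn.Embedding(1441, d_model)  # 0..1440 min, clipped
        self.by_days = nn.Embedding(366, d_model)      # 0..365 days, clipped

    def forward(self, lag_ms: torch.Tensor) -> torch.Tensor:
        secs = torch.clamp(lag_ms // 1000, 0, 300).long()
        mins = torch.clamp(lag_ms // (60 * 1000), 0, 1440).long()
        days = torch.clamp(lag_ms // (24 * 60 * 60 * 1000), 0, 365).long()
        # The three granularities are summed into one decoder input vector.
        return self.by_seconds(secs) + self.by_minutes(mins) + self.by_days(days)

emb = LagTimeEmbeddings()
lag = torch.tensor([[5_000, 90_000, 200_000_000]])  # lag times in milliseconds
print(emb(lag).shape)  # torch.Size([1, 3, 128])
```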
### _Training_
The predicted probability of the user's response being correct at each position is obtained after the transformer's output passes through a dense layer with sigmoid activation. Binary cross-entropy is used as the loss for our model. Since the dataset contains almost 100 million rows of data, we simply use \(97.5\%\) of the data as training data and \(2.5\%\) as validation data. The final hyper-parameters are as follows (a minimal sketch of the prediction head and a training step is given after this list):
* Max sequence: 100
* Embedding size: 128
* Number of layers in encoder: \(2\)
* Number of layers in decoder: \(2\)
* Batch size: \(256\)
* Dropout: \(0.1\)
* Epoch: \(10\)
* Number of heads: \(8\)
* Optimizer: AdamW
* Learning rate: \(5\times 10^{-4}\)
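To make the prediction head and loss concrete, here is a minimal, hypothetical sketch of the dense sigmoid layer and a single binary cross-entropy training step using the batch size, sequence length, embedding size, optimizer, and learning rate listed above; the Transformer itself is assumed to exist already, and this is not the full training pipeline.

```python
import torch
import torch.nn as nn

d_model = 128

class CorrectnessHead(nn.Module):
    """Dense layer with sigmoid giving P(correct) at every position."""
    def __init__(self, d_model: int):
        super().__init__()
        self.out = nn.Linear(d_model, 1)

    def forward(self, decoder_states: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.out(decoder_states)).squeeze(-1)

head = CorrectnessHead(d_model)
optimizer = torch.optim.AdamW(head.parameters(), lr=5e-4)
criterion = nn.BCELoss()

decoder_states = torch.randn(256, 100, d_model)   # (batch, max sequence, features)
labels = torch.randint(0, 2, (256, 100)).float()  # 1 = answered correctly

optimizer.zero_grad()
loss = criterion(head(decoder_states), labels)
loss.backward()
optimizer.step()
```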
## IV Results
We conducted three experiments for comparison: a Transformer without lecture information, a Transformer with lecture information, and LightGBM.
Table I shows that the multiple temporal granularities are so significant that, even when lecture information is ignored, they can still capture the complex interactions between question difficulty and student performance. Moreover, LightGBM necessitated extensive feature engineering efforts. To avoid data leakage, which entails using future information when making predictions at earlier time points, each feature at a given time point had to be computed based only on the information available up to that time point. Suppose we intend to employ the mean of lag time; in that case, we cannot just utilize the average time across all periods for each student. Instead, we need to use the accumulated mean up to time \(t\) to circumvent data leakage. Another challenge when utilizing LightGBM is performing cross-validation for time series data; for the same reason, we opted for rolling-window cross-validation.
Therefore, our Transformer model has shown promising results in predicting student performance on the RIIID dataset, achieving state-of-the-art performance with fewer feature engineering efforts. This suggests that the Transformer's self-attention mechanism is able to effectively learn the relevant features from the data without requiring extensive manual feature engineering. Additionally, our model's ability to handle sequential data makes it well-suited for other time series prediction tasks in education and beyond. Overall, our proposed Transformer architecture offers a powerful and efficient approach for predicting student performance on educational assessments.
## V Conclusion
In this paper, we introduced a novel Transformer architecture for knowledge tracing, which is designed to capture multiple types of information that can affect student learning. Our architecture includes a sophisticated encoder component that captures information about the questions, parts of questions, and presence or absence of explanations, as well as a decoder component that incorporates temporal information about the sequence of interactions between the student and the learning materials. Notably, we included three separate embedding layers to represent lag time in different temporal granularities, allowing the model to capture the temporal dynamics of the data.
Our architecture also has the ability to model complex interactions between different factors that can affect student learning, including the relationship between a student's prior response and their subsequent performance on a related question, or the interaction between elapsed time and lag time in predicting student learning outcomes. Overall, our Transformer architecture provides a powerful tool for accurately predicting student performance and identifying areas where additional support or guidance may be needed.
## Acknowledgment
The author would like to thank Kaggle and Riiid AIEd for organizing the competition and providing such high quality data.
|
2310.17914 | 3D-Aware Visual Question Answering about Parts, Poses and Occlusions | Despite rapid progress in Visual question answering (VQA), existing datasets
and models mainly focus on testing reasoning in 2D. However, it is important
that VQA models also understand the 3D structure of visual scenes, for example
to support tasks like navigation or manipulation. This includes an
understanding of the 3D object pose, their parts and occlusions. In this work,
we introduce the task of 3D-aware VQA, which focuses on challenging questions
that require a compositional reasoning over the 3D structure of visual scenes.
We address 3D-aware VQA from both the dataset and the model perspective. First,
we introduce Super-CLEVR-3D, a compositional reasoning dataset that contains
questions about object parts, their 3D poses, and occlusions. Second, we
propose PO3D-VQA, a 3D-aware VQA model that marries two powerful ideas:
probabilistic neural symbolic program execution for reasoning and deep neural
networks with 3D generative representations of objects for robust visual
recognition. Our experimental results show our model PO3D-VQA outperforms
existing methods significantly, but we still observe a significant performance
gap compared to 2D VQA benchmarks, indicating that 3D-aware VQA remains an
important open research area. | Xingrui Wang, Wufei Ma, Zhuowan Li, Adam Kortylewski, Alan Yuille | 2023-10-27T06:15:30Z | http://arxiv.org/abs/2310.17914v1 | # 3D-Aware Visual Question Answering
###### Abstract
Despite rapid progress in Visual question answering (_VQA_), existing datasets and models mainly focus on testing reasoning in 2D. However, it is important that VQA models also understand the 3D structure of visual scenes, for example to support tasks like navigation or manipulation. This includes an understanding of the 3D object pose, their parts and occlusions. In this work, we introduce the task of 3D-aware VQA, which focuses on challenging questions that require a compositional reasoning over the 3D structure of visual scenes. We address 3D-aware VQA from both the dataset and the model perspective. First, we introduce Super-CLEVR-3D, a compositional reasoning dataset that contains questions about object parts, their 3D poses, and occlusions. Second, we propose PO3D-VQA, a 3D-aware VQA model that marries two powerful ideas: probabilistic neural symbolic program execution for reasoning and deep neural networks with 3D generative representations of objects for robust visual recognition. Our experimental results show our model PO3D-VQA outperforms existing methods significantly, but we still observe a significant performance gap compared to 2D VQA benchmarks, indicating that 3D-aware VQA remains an important open research area. The code is available at [https://github.com/XingruiWang/3D-Aware-VQA](https://github.com/XingruiWang/3D-Aware-VQA).
## 1 Introduction
Visual question answering (_VQA_) is a challenging task that requires an in-depth understanding of vision and language, as well as multi-modal reasoning. Various benchmarks and models have been proposed to tackle this challenging task, but they mainly focus on 2D questions about objects, attributes, or 2D spatial relationships. However, it is important that VQA models understand the 3D structure of scenes, in order to support tasks like autonomous navigation and manipulation.
An inherent property of human vision is that we can naturally answer questions that require a comprehensive understanding of the 3D structure in images. For example, humans can answer the questions shown in Fig. 1, which ask about the object parts, their 3D poses, and occlusions. However, current VQA models, which often rely on 2D bounding boxes to encode a visual scene [2; 59; 25], struggle to answer such questions reliably (as can be seen from our experiments). We hypothesize this is caused by the lack of understanding of the 3D structure of images.
In this work, we introduce the task of 3D-aware VQA, where answering the questions requires compositional reasoning over the 3D structure of the visual scenes. More specifically, we focus on challenging questions that require multi-step reasoning about the object-part hierarchy, the 3D poses of the objects, and the occlusion relationships between objects or parts.
We address the challenging 3D-aware VQA task from both the dataset and the model perspective. From the dataset perspective, we introduce Super-CLEVR-3D, which extends the Super-CLEVR dataset [32] with 3D-aware questions. Given the visual scenes from Super-CLEVR that contain randomly placed vehicles of various categories, we define a set of 3D-aware reasoning operations and automatically generate 3D questions based on these operations. Fig. 1 shows examples of the images, questions and the underlying 3D operations for the questions. From the model perspective, we introduce PO3D-VQA, a VQA model that marries two powerful ideas: probabilistic neural symbolic program execution for reasoning and a deep neural network with 3D generative representations of objects for robust visual scene parsing. Our model first recovers a 3D scene representation from the image and a program from the question, and subsequently executes the program on the 3D scene representation to obtain an answer using a probabilistic reasoning process that takes into account the confidence of predictions from the neural network. We refer to our system as PO3D-VQA, which stands for Parts, Poses, and Occlusions in **3D** Visual **Q**uestion **A**nswering.
On Super-CLEVR-3D, we experiment with existing representative models, their variants, and our model PO3D-VQA. The results show that our model outperforms existing methods significantly, leading to an improvement in accuracy of more than 11%, which shows the advantage of the generative 3D scene parser and the probabilistic neural symbolic reasoning process. Moreover, further analysis on questions with different difficulty levels reveals that the improvements of our model are even greater on harder questions with heavy occlusions and small part sizes. Our results indicate that a reliable 3D understanding, together with the modular reasoning procedure, produces a desirable 3D-aware VQA model.
In summary, our contributions are as follows. (1) We introduce the challenging task of 3D-aware VQA and propose the Super-CLEVR-3D dataset, where 3D visual understanding about parts, 3D poses, and occlusions are required. (2) We propose a 3D-aware neural modular model PO3D-VQA that conducts probabilistic reasoning in a step-wise modular procedure based on robust 3D scene parsing. (3) With experiments, we show that 3D-aware knowledge and modular reasoning are crucial for 3D-aware VQA, and suggest future VQA methods take 3D understanding into account.
## 2 Related Work
**Visual Question Answering (VQA).** Rapid progress has been made in VQA [4] in both the datasets and the models. To solve the challenging VQA datasets [15; 61; 17; 45] with real images, multiple models are developed including two-stream feature fusion [2; 14; 28; 55; 23; 44; 30] or transformer-based pretraining [48; 36; 31; 59; 25]. However, the real datasets are shown to suffer from spurious correlations and biases [42; 16; 41; 1; 15; 26; 27]. Alternatively, synthetic datasets like CLEVR [24] and Super-CLEVR [32], are developed to study the compositional reasoning ability of VQA systems, which are also extended to study other vision-and-language tasks [34; 29; 53; 58; 6; 47; 20]. The synthetic datasets promote the development of neural modular methods [3; 54; 40; 22], where the reasoning is done in a modular step-by-step manner. It is shown that the modular methods have nice properties including interpretability, data efficiency [54; 40], better robustness [32] and strong performance on synthetic images [54]. However, most existing methods rely on region features [2; 59] extracted using 2D object detectors [46] for image encoding, which is not 3D-aware. We follow the works on the synthetic dataset and enhance the modular methods with 3D understanding.
Figure 1: Examples from Super-CLEVR-3D. We introduce the task of 3D-aware VQA, which requires 3D understanding of the image, including the parts, 3D poses, and occlusions.
**VQA in 3D.** Multiple existing works study VQA under the 3D setting, such as SimVQA [8], SQA3D [39], 3DMV-VQA [19], CLEVR-3D [51], ScanQA [52], 3DQA [52], and EmbodiedQA [13], which focus on question answering on the 3D visual scenes like real 3D scans [39; 51; 5; 52], simulated 3D environments [9; 13], or multi-view images [19]. PTR [20] is a synthetic VQA dataset that requires part-based reasoning about physics, analogy and geometry. Our setting differs from these works because we focus on 3D in the _questions_ instead of 3D in the _visual scenes_, since our 3D-aware questions explicitly query the 3D information that can be inferred from the 2D input images.
**3D scene understanding.** One popular approach for scene understanding is to use the CLIP features pretrained on large-scale text-image pairs and segment the 2D scene into semantic regions [10; 43]. However, these methods lack a 3D understanding of the scene and cannot be used to answer 3D-related questions. Another approach is to adopt category-level 6D pose estimation methods that can locate objects in the image and estimate their 3D formulations. Previous approaches include classification-based methods that extend a Faster R-CNN model for 6D pose estimation [60; 38] and compositional models that predict 6D poses with analysis-by-synthesis [38]. We also notice the huge progress of 3D vision-language foundation models, which excel in multiple 3D vision-language understanding tasks [19; 37; 21]. Still, we focus on compositional reasoning, which brings more interpretability and robustness [32].
## 3 Super-CLEVR-3D Dataset
To study 3D-aware VQA, we propose the Super-CLEVR-3D dataset, which contains questions explicitly asking about the 3D object configurations of the image. The images are rendered using scenes from the Super-CLEVR dataset [32], which is a VQA dataset containing synthetic scenes of randomly placed vehicles from 5 categories (car, plane, bicycle, motorbike, bus) with various sub-types (_e.g_. different types of cars) and attributes (color, material, size). The questions are generated by instantiating the question templates based on the image scenes, using a pipeline similar to Super-CLEVR. In Super-CLEVR-3D, three types of 3D-aware questions are introduced: part questions, 3D pose questions, and occlusion questions. In the following, we will describe these three types of questions, and show the new operations we introduced for our 3D-aware questions about object parts, 3D poses, and occlusions. Examples of the dataset are shown in Fig. 1.
**Part questions.** While the original Super-CLEVR dataset refers to objects using their holistic names or attributes, objects are complex and have hierarchical parts, as studied in recent works [33; 11; 20]. Therefore, we introduce part-based questions, which use parts to identify objects (_e.g_. "which vehicle has red door") or query about object parts (_e.g_. "what color is the door of the car"). To enable the generation of part-based questions, we introduce two new operations into the reasoning programs: part_to_object(\(\cdot\)), which finds the objects containing the given part, and object_to_part(\(\cdot\)), which selects all the parts of the given object. We also modify some existing operations (_i.e_. filter, query and unique), enabling them to operate on both object-level and part-level. With those reasoning operations, we collect 9 part-based templates and instantiate them with the image scene graph to generate questions.
**3D pose questions.** Super-CLEVR-3D asks questions about the 3D poses of objects (_e.g_. "which direction is the car facing in"), or the pair-wise pose relationships between objects (_e.g_. "which object has vertical direction with the red car"). The pose for an individual object (_e.g_. "facing left") can be processed in a similar way as attributes like colors, so we extend the existing attribute-related operations like filter and query to have them include pose as well. For pair-wise pose relationship between objects, we add three operations, _i.e_. same_pose, opposite_pose and vertical_pose, to deal with the three types of pose relationships between objects. For example, opposite_pose(\(\cdot\)) returns the objects that are in the opposite pose direction with the given object. 17 templates are collected to generate 3D pose questions.
**Occlusion questions.** Occlusion questions ask about the occlusion between entities (_i.e_. objects or parts). Similar to 3D poses, occlusion can also be regarded as either an attributes for an entity (_e.g_. "which object is occluded"), or as a relationship between entities (_e.g_. "which object occludes the car door"). We extend the attribute-related operations, and introduce new operations to handle the pair-wise occlusion relationships: filter_occludee which filters the entities that are being occluded, relate_occluding which finds the entities that are occluded by the given entity, and relate_occluded which finds the entities that are occluding the given entity. Using these operations, 35 templates are collected to generate the occlusion questions.
## 4 Method
In this section, we introduce PO3D-VQA, which is a parse-then-execute modular model for 3D-aware VQA. The overview of our system is shown in Fig. 2. We first parse the image into a scene graph representation that is aware of 3D information like object parts, 3D poses and occlusion relations, then we parse the question into a reasoning program and execute the program on the derived scene representations in a probabilistic manner. In Sec. 4.1, we define the scene representation required; in Sec. 4.2, we describe how we parse the image into the scene representation based on a multi-class 6D pose estimation model with non-trivial extensions; in Sec. 4.3, we describe how the question is executed on the derived scene representation to predict the answer.
### 3D-aware scene representation
Given an input image \(I\), we parse it into a 3D-aware scene representation \(R\) that contains the **objects** (\(O\)) with attributes (\(A^{o}\)), the **parts** (\(P\)) with attributes (\(A^{p}\)), the **hierarchical relationships** between objects and parts (\(H\)), and the **occlusion relationships** between them (\(S\)). The attributes include the 3D poses and locations of objects or parts, as well as their colors, materials, and sizes. The scene representation \(R=\{O,P,A^{o},A^{p},H,S\}\) is comprehensive and therefore we can directly execute the symbolic reasoning module on this representation without taking into account the image any further.
In more detail, **objects** are represented as a matrix \(O\in\mathbb{R}^{n\times N_{obj}}\) containing the probability scores of each object being a certain instance, where \(n\) is the number of objects in the given image and \(N_{obj}\) is the number of all possible object categories in the dataset (_i.e._ vocabulary size of the objects). Similarly, **parts** are represented as \(P\in\mathbb{R}^{p\times N_{prt}}\), where \(p\) is the number of parts in the image and \(N_{prt}\) is the vocabulary size of the object parts. The **object-part hierarchy** is represented by a binary matrix \(H\in\mathbb{R}^{n\times p}\), where \(H_{ij}=1\) if the object \(i\) contains the part \(j\) or \(H_{ij}=0\) otherwise. The attributes \(A^{o}\in\mathbb{R}^{n\times N_{att}}\) and \(A^{p}\in\mathbb{R}^{p\times N_{att}}\) contain the probability scores of each object or part having a certain attribute, as well as the bounding box values. Here \(N_{att}\) is the number of attributes including the 3D poses, location coordinates, colors, materials and sizes. **Occlusion relationships** are represented by \(S\in\mathbb{R}^{(n+p)\times n}\), where each element \(S_{ij}\) represents the score of object (or part) \(i\) being occluded by object \(j\).
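For concreteness, here is a minimal sketch of how the representation \(R=\{O,P,A^{o},A^{p},H,S\}\) could be stored in code; the field names simply mirror the notation above and do not reflect the actual implementation.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SceneRepresentation:
    """Illustrative container mirroring R = {O, P, A_o, A_p, H, S} above."""
    O: np.ndarray    # (n, N_obj)   object category scores
    P: np.ndarray    # (p, N_prt)   part category scores
    A_o: np.ndarray  # (n, N_att)   object attributes (pose, location, color, ...)
    A_p: np.ndarray  # (p, N_att)   part attributes
    H: np.ndarray    # (n, p)       binary object-part hierarchy
    S: np.ndarray    # (n + p, n)   occlusion scores: entity i occluded by object j

n, p, N_obj, N_prt, N_att = 3, 10, 5, 20, 12
R = SceneRepresentation(
    O=np.zeros((n, N_obj)), P=np.zeros((p, N_prt)),
    A_o=np.zeros((n, N_att)), A_p=np.zeros((p, N_att)),
    H=np.zeros((n, p)), S=np.zeros((n + p, n)),
)
# Parts belonging to object 0 (H is binary, per the definition above).
parts_of_obj0 = np.nonzero(R.H[0])[0]
```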
### Multi-class 6D Scene Parsing
While most existing VQA methods [2; 59] encode the image using pretrained object detectors like Faster-RCNN [46], we build our 6D-aware scene parser in a different way, based on the idea of analysis-by-synthesis through inverse rendering [49], which has the following advantages: first, the model prediction is more robust [49], as the render-and-compare process can naturally integrate a robust reconstruction loss to avoid distortion through occlusion; second, while the object parts are usually very challenging for Faster-RCNN to detect due to their small size, they can be located much more easily by first finding the object and estimating its 3D pose, and subsequently locating the parts using the 3D object shape (as shown in our experimental evaluation).

Figure 2: An overview of our model PO3D-VQA. The image is parsed into 3D-aware scene representations (blue box) using our proposed scene parser based on the idea of render-and-compare (green box). The question is parsed into a program composed of reasoning operations (orange box). Then the operations are executed on the 3D-aware scene representations to predict the answer.
However, we observe two open challenges for applying existing 6D pose estimators that follow a render-and-compare approach [38, 49]: (a) these pose estimators assume that the object class is known, but in Super-CLEVR-3D the scene parser must learn to estimate the object class jointly with the pose; and (b) the scenes in Super-CLEVR-3D are very dense, containing multiple close-by objects that occlude each other. In order to address these two challenges, we introduce several improvements over [38] that enable it to be integrated into a 3D-aware VQA model.
In the following, we first describe neural meshes [49, 38], which were proposed in prior work for pose estimation of _single objects_ following an analysis-by-synthesis approach. Subsequently, we extend this method to complex scenes with densely located and possibly occluded objects to obtain a coherent scene representation, including object parts and attributes.
**Preliminaries.** Our work builds on and significantly extends Neural Meshes [38] that were introduced for 6D pose estimation through inverse rendering. The task is to jointly estimate the 6D pose (2D location, distance to the camera and 3D pose) of objects in an image. An object category is represented with a category-level mesh [49]\(M_{y}=\{v_{n}\in\mathbb{R}^{3}\}_{n=1}^{N}\) and a neural texture \(T_{y}\in\mathbb{R}^{N\times c}\) on the surface of the mesh \(M_{y}\), where \(c\) is the dimension of the feature and \(y\) is the object category. Given the object 3D pose in camera view \(\alpha\), we can render the neural mesh model \(O_{y}=\{M_{y},T_{y}\}\) into a feature map with soft rasterization [35]: \(F_{y}(\alpha)=\mathfrak{R}(O_{y},\alpha)\). Following prior work in pose estimation [49] we formulate the render-and-compare process as an optimization of the likelihood model:
\[p(F\mid O_{y},\alpha_{y},B)=\prod_{i\in\mathcal{FG}}p(f_{i}\mid O_{y},\alpha_{y})\prod_{i\in\mathcal{BG}}p(f_{i}^{\prime}\mid B) \tag{1}\]
where \(\mathcal{FG}\) and \(\mathcal{BG}\) are the set of foreground and background locations on the 2D feature map and \(f_{i}\) is the feature vector of \(F\) at location \(i\). Here the foreground and background likelihoods are modeled as Gaussian distributions.
To train the feature extractor \(\Phi\), the neural texture \(\{T_{y}\}\) and the background model \(B\) jointly, we utilize the EM-type learning strategy as originally introduced for keypoint detection in CoKe[7]. Specifically, the feature extractor is trained using stochastic gradient descent while the parameters of the generative model \(\{T_{y}\}\) and \(B\) are trained using momentum update after every gradient step in the feature extractor, which was found to stabilize training convergence.
At inference time, the object poses \(\alpha\) can be inferred by minimizing the negative log-likelihood w.r.t. the 3D pose \(\alpha\) using gradient descent [38].
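A highly simplified sketch of this render-and-compare inference is given below; the renderer, feature map, and Gaussian parameters are placeholders, and only the gradient-based update of the pose under a toy negative log-likelihood in the spirit of Eq. (1) is illustrated, not our actual implementation.

```python
import torch

def neg_log_likelihood(F, render_fn, alpha, fg_mask, bg_mean, sigma=1.0):
    """Toy version of Eq. (1): Gaussian likelihoods for foreground and background."""
    rendered = render_fn(alpha)                              # feature map F_y(alpha)
    fg = ((F - rendered) ** 2).sum(-1) * fg_mask             # foreground residual
    bg = ((F - bg_mean) ** 2).sum(-1) * (1.0 - fg_mask)      # background residual
    return (fg + bg).sum() / (2 * sigma ** 2)

# Gradient-based pose inference, as described in the text.
alpha = torch.zeros(3, requires_grad=True)                   # 3D pose parameters
optimizer = torch.optim.Adam([alpha], lr=0.05)
feature_map = torch.randn(32, 32, 64)
render_fn = lambda a: torch.zeros(32, 32, 64) + a.sum()      # placeholder renderer
fg_mask = torch.ones(32, 32)
bg_mean = torch.zeros(64)

for _ in range(50):
    optimizer.zero_grad()
    loss = neg_log_likelihood(feature_map, render_fn, alpha, fg_mask, bg_mean)
    loss.backward()
    optimizer.step()
```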
Figure 3: Visualization of intermediate steps in our scene parser. Given an image (a), per-category feature activation maps (shown in II) are computed through render-and-compare. Then the category-wise competition (3D-NMS) is performed (results shown in b) and a post-filtering step is taken to remove mis-detected objects (c). Based on the pose estimation results (d), we project the 3D object mesh back onto the image to locate parts and occlusions (e).

**Multi-object competition with 3D-NMS.** We extend Neural Meshes to predict the 6D object pose and class label in complex multi-object scenes. In particular, we introduce 3D-Non-Maximum-Suppression (3D-NMS) into the maximum likelihood inference process. This introduces a competition between Neural Meshes of different categories in explaining the feature map. In contrast to classical 2D-NMS, our 3D-NMS also takes into account the distance of each object to the camera and hence naturally enables reasoning about occlusions of objects in the scene.
We denote the 6D pose as \(\gamma=\{x,l\}\), where \(x=\{\alpha,\beta\}\) represents the 3D object pose \(\alpha\) and object distance to the camera \(\beta\), and \(l\) is the 2D object location in the feature map. We first detect the 6D poses of each object category independently and apply 2D-NMS such that for each 2D location \(l^{\prime}\) in a neighborhood defined by radius \(r\), the predicted 6D pose \(\{x,l\}\) yields the largest activation:
\[\max_{x}\ p(F\mid x,l)\ \ s.t.\ \ p(F\mid x,l)>p(F\mid x,l^{\prime}),\ \ \forall l^{\prime}\in\{l^{\prime}\mid 0 <|l^{\prime}-l|<r\} \tag{2}\]
We enable multi-category 6D pose estimation by extending this formulation to a 3D non-maximum suppression (3D-NMS). Using \(\mathcal{Y}\) to represent the set of all object categories, we model the category label \(y\) from a generative perspective:
\[\max_{x}\ p(F\mid x,l,y)\ \ s.t.\ \ p(F\mid x,l,y)>p(F\mid x,l^{\prime},y),\ \ \forall l^{\prime}\in\{l^{\prime}\mid 0<|l^{\prime}-l|<r\} \tag{3}\]
\[\text{and}\ \ p(F\mid x,l,y)>p(F\mid x,l,y^{\prime}),\ \ \forall y^{\prime}\neq y\in\mathcal{Y} \tag{4}\]
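The following toy sketch illustrates the category-and-location competition of Eqs. (2)-(4); the proposal format and radius are illustrative, and the camera-distance component of the 6D pose is omitted for brevity.

```python
import numpy as np

def nms_3d(proposals, radius=2.0):
    """Toy NMS over categories and feature-map locations (cf. Eqs. 3-4).
    A proposal survives only if no better-scoring proposal of ANY category
    lies within `radius` of its 2D location l."""
    proposals = sorted(proposals, key=lambda p: p["score"], reverse=True)
    kept = []
    for p in proposals:
        close = [k for k in kept
                 if np.hypot(k["l"][0] - p["l"][0], k["l"][1] - p["l"][1]) < radius]
        if not close:            # no stronger nearby proposal, any category
            kept.append(p)
    return kept

props = [
    {"score": 0.9, "l": (10, 12), "y": "car"},
    {"score": 0.7, "l": (10, 13), "y": "bus"},      # suppressed by the car
    {"score": 0.8, "l": (30, 5), "y": "bicycle"},
]
print([p["y"] for p in nms_3d(props)])  # ['car', 'bicycle']
```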
**Dense scene parsing with greedy proposal generation.** Typically, object detection in complex scenes requires well-chosen thresholds and detection hyperparameters. Our render-and-compare approach enables us to avoid tedious hyperparameter tuning by adopting a greedy approach to maximize the model likelihood (Eq. (1)) using a greedy proposal strategy. In particular, we optimize the likelihood greedily by starting from the object proposal that explains away the most parts of the image with the highest likelihood, and subsequently update the likelihood of the overlapping proposals taking into account that at every pixel in the feature map only one object can be visible [56]. Formally, given a list of object proposals \(\{o_{i}=(O_{y,i},\alpha_{y,i})\}_{i=1}^{k}\) (with predicted category label \(y\) and 6D pose \(\alpha\)), we first order the object proposals based on their likelihood score \(s=p(F|o_{i},B)\) such that \(s_{i}\leq s_{j}\) for \(i<j\). Based on the ordering, we greedily update the 6D pose \(\alpha_{j}\) and the corresponding proposal likelihood for object \(o_{j}\) by masking out the foreground regions of previous objects \(o_{i}\) with \(1\leq i\leq j-1\). In this way, we can largely avoid missing close-by objects or duplicated detection.
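A schematic of this greedy masking procedure is sketched below; the `likelihood` and `foreground_mask` callables are stand-ins for the render-and-compare quantities described above, not the actual implementation.

```python
def greedy_scene_parsing(proposals, likelihood, foreground_mask):
    """Process proposals from highest to lowest score; re-score each later proposal
    with the feature-map pixels already explained by earlier objects masked out."""
    proposals = sorted(proposals, key=lambda o: o["score"], reverse=True)
    occupied, accepted = set(), []
    for obj in proposals:
        visible = [px for px in foreground_mask(obj) if px not in occupied]
        if visible:                                   # something left to explain
            obj["score"] = likelihood(obj, visible)   # pose update would go here too
            accepted.append(obj)
            occupied.update(visible)
    return accepted

# Dummy stand-ins so the sketch runs end-to-end.
objs = [{"id": "car", "score": 0.9}, {"id": "bus", "score": 0.6}]
fg = lambda o: {"car": {(1, 1), (1, 2)}, "bus": {(1, 2), (1, 3)}}[o["id"]]
lik = lambda o, visible: len(visible) / 4.0
print([o["id"] for o in greedy_scene_parsing(objs, lik, fg)])  # ['car', 'bus']
```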
**Part and attribute prediction.** Given the predicted location and pose of each object, we project the object mesh back onto the image to get the locations for each part. To predict the attributes for the objects and parts, we crop the region containing the object or part from the RGB image, and train an additional CNN classifier using the cropped patches to predict the attributes (color, size, material) and the fine-grained classes (_i.e._ different sub-types of cars) of each patch using a cross-entropy loss. The reason why this additional CNN classifier is needed instead of re-using the features from the 6D pose estimator is that the pose estimation features are learned to be invariant to scale and texture changes, which makes it unsuitable for attribute prediction.
**Post-filtering.** Finally, we post-process the located objects using the fine-grained CNN classifier. We compare the category labels predicted by the 6D pose estimator with the ones predicted by the CNN classifier, and remove the objects for which these two predictions do not agree. This post-filtering step helps with the duplicated detections that cannot be fully resolved with the 3D-NMS.
**Summary.** Fig. 2 provides an overview of our scene parser and Fig. 3 visualize the intermediate results. With the idea of render-and-compare (shown in the green box of Fig. 2), the model first computes an activation map for each possible object category (Fig. 3II). Next, to infer the category for each object, the category-wise competition 3D-NMS is performed (Fig. 3b) and a post-filtering step is taken to remove mis-detected objects (Fig. 3c). Fig. 3d shows the 6D pose estimation results. To predict parts, we project the 3D object mesh back onto the image to locate parts based on projected objects (Fig. 3e). In this way, the input image can be parsed into a 3D-aware representation, which is ready for the question reasoning with program execution.
### Program execution
After the 3D-aware scene representations are predicted for the given image, the question is parsed into a reasoning program, which is then executed on the scene representation to predict the answer. The question parsing follows previous work [54], where a LSTM sequence-to-sequence model is trained to parse the question into its corresponding program. Like P-NSVQA [32], each operation in the program is executed on the scene representation in a probabilistic way. In the following, we describe the execution of the new operations we introduced.
The part-related operators are implemented by querying the object-part hierarchy matrix \(H\), so that the object containing a given part (part_to_object) and the parts belonging to the given object
(object_to_part) can be determined. The pose-related operators are based on the estimated 3D pose in the object attributes \(A^{o}\). For the filter and query operations regarding pose, the 3D poses are quantized into four directions (left, right, front, back). For the pair-wise pose relationships, the azimuth angle between two objects is used to determine the same/opposite/vertical directions. The occlusion-related operations are implemented by querying the occlusion matrix \(S\). Based on the occlusion scores \(S_{ij}\), representing whether entity \(i\) is occluded by entity \(j\), we can compute the score of an entity being occluded \(\sum_{j}S_{ij}\) (filter_occludee), find the entities that occlude a given entity (relate_occluded), or find the entities that are occluded by a given entity (relate_occluding).
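As an illustration of how these occlusion operators can act on the matrix \(S\) from Sec. 4.1, consider the toy sketch below; the names follow the text, but the scoring scheme is a simplification of our probabilistic execution.

```python
import numpy as np

# Toy occlusion matrix S: entity i (rows, objects then parts) occluded by object j.
S = np.array([[0.0, 0.9],    # entity 0 is likely occluded by object 1
              [0.1, 0.0],
              [0.0, 0.7]])   # entity 2 (e.g. a part) occluded by object 1

def filter_occludee(entity_scores):
    """Score of each entity being occluded by anything: sum_j S_ij (clipped)."""
    return np.clip(S.sum(axis=1), 0.0, 1.0) * entity_scores

def relate_occluded(entity_index):
    """Scores over objects that occlude the given entity."""
    return S[entity_index]

entity_scores = np.ones(S.shape[0])          # all entities attended with weight 1
print(filter_occludee(entity_scores))        # [0.9 0.1 0.7]
print(relate_occluded(0))                    # [0.  0.9]
```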
## 5 Experiments
### Evaluated methods
We compare our model with three representative VQA models: FiLM [44], mDETR [25], and PNSVQA [32]. Additionally, we introduce a variant of PNSVQA, PNSVQA+Projection, to analyze the benefit of our generative 6D pose estimation approach.
**FiLM [44]**_Feature-wise Linear Modulation_ is a representative two-stream feature fusion method. The FiLM model merges the question features extracted with GRU [12] and image features extracted with CNN and predicts answers based on the merged features.
**mDETR [25]**: mDETR is a pretrained text-guided object detector based on transformers. The model is pretrained with 1.3M image and text pairs and shows strong performance when finetuned on downstream tasks like referring expression understanding or VQA.
**PNSVQA [32]**: PNSVQA is a SoTA neural symbolic VQA model. It parses the scene using MaskRCNN [18] and an attribute extraction network, then executes the reasoning program on the parsed visual scenes, taking into account the uncertainty of the scene parser. To extend PNSVQA to the 3D questions in Super-CLEVR-3D, we add a regression head in the attribute extraction network to predict the 3D pose for each object; parts are detected in a similar way as objects by predicting 2D bounding boxes; the part-object associations and occlusions are computed using intersection-over-union: a part belongs to an intersected object if the part label matches the object label, otherwise it is occluded by this object.
**PNSVQA+Projection**: Similar with NSVQA, this model predicts the 6D poses, categories and attributes using MaskRCNN and the attribute extraction network. The difference is that the parts and occlusions are predicted by projecting the 3D object models onto the image using the predicted 6D pose and category (same with how we find parts and occlusions in our model). This model helps us ablate the influence of the two components in our model, 6D pose prediction by render-and-compare, and part/occlusion detection with mesh projection.
### Experiment setup
**Dataset.** Our Super-CLEVR-3D dataset shares the same visual scenes with Super-CLEVR dataset. We re-render the images with more annotations recorded (camera parameters, parts annotations, occlusion maps). The dataset splits follow the Super-CLEVR dataset, where we have 20k images for training, 5k for validation, and 5k for testing. For question generation, we create 9 templates for part questions, 17 templates for pose questions, 35 templates for occlusion questions (with and without parts). For each of the three types, 8 to 10 questions are generated for each image by randomly sampling the templates. We ensure that the questions are not ill-posed and cannot be answered by taking shortcuts, _i.e_. the questions contain no redundant reasoning steps, following the no-redundancy setting in [32]. More details including the list of question templates can be found in the Appendix.
**Implementation details.** We train the 6D pose estimator and CNN attribute classifier separately. We train the 6D pose estimator (including the contrastive feature backbone and the neural mesh models for each of the 5 classes) for 15k iterations with batch size 15, which takes around 2 hours on an NVIDIA RTX A5000 for each class. The attribute classifier, which is a ResNet50, is shared for objects and parts. It is trained for 100 epochs with batch size 64. During inference, it takes 22s for 6D pose estimation and 10s for object mesh projection for all the objects in one image. During inference of the 6D pose estimator, we assume that theta is 0. During 3D NMS filtering, we choose the radius \(r\) as 2, and we also filter the object proposals with a threshold of 15 on the score map.
### Quantitative Results
We trained our model and baselines on Super-CLEVR-3D's training split, reporting answer accuracies on the test split in Tab. 1. Accuracies for each question type are detailed separately.
**Comparison with baselines.** First, among all the baseline methods, the neural symbolic method PNSVQA performs the best (64.4% accuracy), outperforming the end-to-end methods mDETR and FiLM by a large margin (\(>8\%\)). This shows the advantage of the step-wise modular reasoning procedure, which agrees with the findings in prior works that the modular methods excel on the simulated benchmarks that require long-trace reasoning. Second, our model achieves 75.6% average accuracy, which significantly outperforms all the evaluated models. Especially, comparing our PO3D-VQA with its 2D counterpart NSVQA, we see that the injection of 3D knowledge brings a large performance boost of 11%, suggesting the importance of the 3D understanding.
**Comparison with PNSVQA variants.** By analyzing the results of PNSVQA variants (_PNSVQA_, _PNSVQA+Projection_, and our _PO3D-VQA_), we show (a) the benefit of estimating object 3D poses using our analysis-by-synthesis method over regression and (b) the benefit of object-part structure knowledge. First, by detecting part using 3D model projection, _PNSVQA+Projection_ improves the _PNSVQA_ results by 4%, which indicates that locating parts based on objects using the object-part structure knowledge is beneficial. Second, by estimating object 6D poses with our generative render-and-compare method, our _PO3D-VQA_ outperforms _PNSVQA+Projection_ by 7% (from 68.2% to 75.6%), showing the advantage of our render-and-compare model. Moreover, looking at the per-type results, we find that the improvement of our PO3D-VQA is most significant on the part-related questions (21% improvement over PNSVQA) and part-with-occlusion questions (14%), while the accuracy on pose-related questions does not improve. The reason is that part and occlusion predictions require precise pose predictions for accurate mesh projection, while the pose questions only require a rough pose to determine the facing direction.
### Analysis and discussions
To further analyze the advantage of PO3D-VQA over other PNSVQA variants, we compare the models on questions of different difficulty levels. It is shown that the benefit of our model is most significant on hard questions. In Fig. 4, we plot the relative accuracy drop 3 of each model on questions with different occlusion ratios and questions with different part sizes.
Footnote 3: Relative accuracy drop means the ratio of absolute accuracy drop and the original accuracy. For example, if a model’s accuracy drops from 50% to 45%, its relative accuracy drop is 10%.
**Questions with different occlusion ratios.** We sort pose-related questions into different sub-groups based on their _occlusion ratios_ and evaluate the models on each of the sub-groups. The _occlusion ratio \(r\)_ of a question is the _minimum_ of the occlusion ratios of all the objects in its reasoning trace. We choose \(r\) from \(0\%\) to \(30\%\), in increments of \(5\%\). The results are shown in Fig. 4 (a). Our PO3D-VQA is much more robust to occlusions compared to the other two methods: while the performances of all three models decrease as the occlusion ratio increases, the relative drop of ours is much smaller than that of the others. The results show that our render-and-compare scene parser is more robust to heavy occlusions compared with the discriminative methods.

| | Mean | Part | Pose | Occ. | Part+Occ. |
| --- | --- | --- | --- | --- | --- |
| FiLM [44] | 50.53 | 38.24 | 67.82 | 51.41 | 44.66 |
| mDETR [25] | 55.72 | 41.52 | 71.76 | 64.99 | 50.47 |
| PNSVQA [32] | 64.39 | 50.61 | **87.78** | 65.80 | 53.35 |
| PNSVQA+Projection | 68.15 | 56.30 | 86.70 | 70.70 | 58.90 |
| **PO3D-VQA (Ours)** | **75.64** | **71.85** | 86.40 | **76.90** | **67.40** |

Table 1: Model accuracies on the Super-CLEVR-3D testing split, reported for each question type, _i.e_. questions about parts, 3D poses, occlusions between objects, and occlusions between objects and parts.

Figure 4: Analysis on questions of different difficulty levels. The plots show the relative accuracy drop of models, on pose questions w.r.t. different occlusion ratios (a), on part questions w.r.t. different part sizes (b), and on part+occlusion questions w.r.t. different part sizes (c).
**Questions with different part sizes.** Questions about small parts are harder than the ones about larger parts. We sort the questions into different part size intervals \((s,t)\), where the _largest_ part that the question refers to has an area (number of pixels occupied) larger than \(s\) and smaller than \(t\). We compare the models on the part questions and the part+occlusion questions with different part sizes in Fig. 4 (b) and (c). In (b), the accuracy drop of PO3D-VQA is smaller than PNSVQA+Projection and PNSVQA when parts get smaller. In (c), PNSVQA+Projection is slightly better than our model and they are both better than the original PNSVQA.
In summary, by sorting questions into different difficulty levels based on occlusion ratios and part sizes, we show the advantage of our PO3D-VQA on harder questions, indicating that our model is robust to occlusions and small part sizes.
**Qualitative results.** Fig. 5 shows examples of predictions for our model and PNSVQA variants. In (a), the question asks about occlusion, but with a slight error in the pose prediction, PNSVQA+Projection misses the occluded bus and predicts the wrong answer, while our model is correct with **accurate pose**. In (b), the question refers to the heavily occluded minivan that is difficult to detect, but our model gets the correct prediction thanks to its **robustness to occlusions**.
**Limitations and failure cases.** Due to the difficulties of collecting real images with compositional scenes and 3D annotations, our work is currently limited by its synthetic nature. For PO3D-VQA, it sometimes fails to detect multiple objects if they are from the same category and heavily overlap (see Appendix D for more visualizations). 3D NMS can effectively improve the dense scene parsing results when objects are from different categories, but conceptually it is limited when objects are from the same category. However, 6D pose estimation in dense scenes is a challenging problem, whereas many current works on 6D pose estimation are still focusing on simple scenes with single objects [38; 50; 57].
## 6 Further Discussion
In this section, we discuss two meaningful extensions of our work: the incorporation of z-direction questions and the application of our model to real-world images.
**Z-direction questions**. While the proposed Super-CLEVR-3D dataset has been designed with 3D-aware questions, all objects within it are placed on the same surface. Introducing variability in the z direction can further enrich our dataset with more comprehensive 3D spatial relationships.
We consider a scenario where objects of the aeroplane category are placed at different elevations, introducing the z dimension into the spatial relationships (see Fig. 6). This allowed us to formulate questions that probe the model's understanding of height relationships and depth perception. We create a subset containing 100 images and 379 questions and test our PO3D-VQA model directly on it without retraining the 6D parser. On this dataset, our PO3D model achieves 90.33% accuracy on height relationship questions and 78.89% on depth-related questions, suggesting that our model can successfully handle questions about height. As the baseline models only use the bounding box to determine the spatial relationship between objects, they are not able to determine the height relationships.

Figure 5: Examples of models' predictions. Our model (a) predicts the object pose accurately and (b) is robust to heavy occlusions. Red boxes are for visualization only.
**Extension to real-world images** While our PO3D-VQA model has demonstrated impressive performance on the synthetic Super-CLEVR-3D dataset, an essential research direction is extending it to real images or other 3D VQA datasets (such as GQA and FE-3DGQA). However, it's not trivial to truly evaluate it on these real-world problems, and a primary challenge is the lack of 3D annotations and the highly articulated categories (like the human body) in these datasets.
However, we show that our PO3D-VQA model can, in principle, work on realistic images. We generate several realistic image samples manually using the vehicle objects (e.g. car, bus, bicycle) from ImageNet with 3D annotation (see Fig. 7) and real-image background. In this experiment, the pose estimator is trained on the PASCAL3D+ dataset, and is used to predict the poses of objects from the image before pasting, as shown in (b). The attribute (color) prediction module is trained on Super-CLEVR-3D and the object shapes are predicted by a ResNet trained on ImageNet. Our model can correctly predict answers to questions about the object pose, parts, and occlusions, e.g. "Which object is occluded by the mountain bike".
## 7 Conclusion
In this work, we study the task of 3D-aware VQA. We propose the Super-CLEVR-3D dataset containing questions explicitly querying 3D understanding including object parts, 3D poses, and occlusions. To address the task, a 3D-aware neural symbolic model PO3D-VQA is proposed, which enhances the probabilistic symbolic model with a robust 3D scene parser based on analysis-by-synthesis. With the merits of accurate 3D scene parsing and symbolic execution, our model outperforms existing methods by a large margin. Further analysis shows that the improvements are even larger on harder questions. With the dataset, the model, and the experiments, we highlight the benefit of symbolic execution and the importance of 3D understanding for 3D-aware VQA.
## Acknowledgements
We thank the anonymous reviewers for their valuable comments. We thank Qing Liu, Chenxi Liu, Elias Stengel-Eskin, Benjamin Van Durme for the helpful discussions on early version of the project. This work is supported by Office of Naval Research with grants N00014-23-1-2641, N00014-21-1-2812. A. Kortylewski acknowledges support via his Emmy Noether Research Group funded by the German Science Foundation (DFG) under Grant No.468670075.
Figure 6: Example images and questions of objects with different elevations.
Figure 7: Examples of results on realistic images. Given a realistic image (a1, a2), our model can successfully estimate the 6D poses of objects (b1, b2) and answer the 3D-aware questions (c1, c2). |
2305.02611 | Involutions on the product of Quaternionic Projective space and Sphere | Let G = Z2 act on a finite CW-complex X having mod 2 cohomology isomorphic to
the product of quaternionic projective space and sphere HPn x Sm, n, m > or =
1. This paper is concerned with the connected fixed point sets and the orbit
spaces of free involutions on X. | Dimpi, Hemant Kumar Singh | 2023-05-04T07:40:27Z | http://arxiv.org/abs/2305.02611v1 | # Involutions on the product of quaternionic projective space and sphere
###### Abstract.
Let \(G=\mathbb{Z}_{2}\) act on a finite CW-complex \(X\) having mod \(2\) cohomology isomorphic to the product of quaternionic projective space and sphere \(\mathbb{H}P^{n}\times\mathbb{S}^{m},n,m\geq 1.\) This paper is concerned with the connected fixed point sets and the orbit spaces of free involutions on \(X.\)
Key words and phrases: Fixed Point Sets; Orbit Spaces; Fibration; Totally nonhomologous to zero; Leray-Serre spectral sequence.
2020 Mathematics Subject Classification: Primary 57S17; Secondary 55M35.
The first author of the paper is supported by SRF of UGC, New Delhi, with reference no.: 201610039267.
free involutions on the product of projective spaces and sphere \(\mathbb{F}P^{n}\times\mathbb{S}^{m},\mathbb{F}=\mathbb{R}\) or \(\mathbb{C}\), have been determined in [6]. In continuation, in this paper, we have determined the possibilities of the connected fixed point sets of involutions on \(X\sim_{2}\mathbb{H}P^{n}\times\mathbb{S}^{m}\) and discussed the orbit spaces of free involutions on \(X\).
## 2. Preliminaries
In this section, we recall some known facts that will be used in this paper. Let \(G=\mathbb{Z}_{p}\) act on a finite CW-complex \(X\). Let \(G\hookrightarrow E_{G}\to B_{G}\) be the universal \(G\)-bundle, where \(E_{G}\) is a contractible space and \(B_{G}\) is a finite CW-complex. Then the projection map \(X\times E_{G}\to E_{G}\) is a \(G\)-equivariant map and gives a fibration \(X\overset{i}{\hookrightarrow}X_{G}\overset{\pi}{\rightarrow}B_{G}\) (called the Borel fibration), where \(X_{G}=(X\times E_{G})/G\) is the Borel space obtained from the diagonal action of \(G\) on the space \(X\times E_{G}\). Suppose \(F\neq\emptyset\), let \(x\in F\), and let \(\eta_{x}:B_{G}\hookrightarrow X_{G}\) be a cross section of the projection map \(\pi:X_{G}\to B_{G}\), where \(B_{G}\approx(\{x\}\times E_{G})/G\); then \(H^{*}(X_{G})\cong ker\ \eta_{x}^{*}\oplus im\ \pi^{*}\). The induced homomorphism \(\eta_{x}^{*}\) depends on the component \(F_{0}\) of the fixed point set \(F\) in which \(x\) lies. If \(\alpha\in H^{n}(X_{G})\) is such that \(\alpha\in Ker\ \eta_{x}^{*}\), then the image of \(\alpha\) under the restriction of \(j:(F_{G},x_{G})\hookrightarrow(X_{G},x_{G})\) to \((F_{0})_{G}\) does not involve the elements of \(H^{0}(F_{0},x_{G})\)[3].
Recall that a space \(X\) is said to be totally nonhomologous to zero (TNHZ) in \(X_{G}\) if the inclusion map \(i:X\hookrightarrow X_{G}\) induces a surjection in cohomology \(i^{*}:H^{*}(X_{G})\to H^{*}(X)\). We will use the following propositions:
**Proposition 2.1**.: ([1]) Let \(G=\mathbb{Z}_{2}\) act on a finite CW-complex \(X\) and \(\sum\text{rk}\ H^{i}(X,\mathbb{Z}_{2})<\infty\). Then, the following statements are equivalent:
(a) \(X\) is TNHZ (mod 2) in \(X_{G}\).
(b) \(\sum\) rk \(H^{i}(F,\mathbb{Z}_{2})=\sum\) rk \(H^{i}(X,\mathbb{Z}_{2})\).
(c) \(G\) acts trivially on \(H^{*}(X;\mathbb{Z}_{2})\) and spectral sequence \(E_{2}^{r,q}\) of \(X_{G}\to B_{G}\) degenerates.
**Proposition 2.2**.: ([2]) Let \(X\) be TNHZ in \(X_{G}\) and \(\{\gamma_{j}\}\) be a set of homogeneous elements in \(H^{*}(X_{G};\mathbb{Z}_{p})\) such that \(\{i^{*}(\gamma_{j})\}\) forms \(\mathbb{Z}_{p}\)-basis of \(H^{*}(X;\mathbb{Z}_{p})\). Then, \(H^{*}(X_{G};\mathbb{Z}_{p})\) is the free \(H^{*}(B_{G})\)-module generated by \(\{\gamma_{j}\}\).
**Proposition 2.3**.: ([1]) Let \(G=\mathbb{Z}_{2}\) act on a finite CW-complex \(X\), and let \(A\subset X\) be a closed and invariant subspace. Suppose that \(H^{i}(X,A;\mathbb{Z}_{2})=0\) for \(i>n\). Then, the homomorphism
\[j^{*}:H^{k}(X_{G},A_{G};\mathbb{Z}_{2})\to H^{k}(F_{G},F_{G}\cap A_{G}; \mathbb{Z}_{2})\]
is an isomorphism for \(k>n\). If \((X,A)\) is TNHZ (mod 2) in \((X_{G},A_{G})\), then \(j^{*}\) is a monomorphism for all \(k\).
**Proposition 2.4**.: ([1]) Let \(G=\mathbb{Z}_{2}\) act on a finite CW-complex \(X\) such that \(X\) is TNHZ in \(X_{G}\). Then \(a|F\) is a nontrivial element of the fixed point set \(F\) for any class \(a\in H^{n}(X;\mathbb{Z}_{2})\) such that \(a^{2}\neq 0\).
We know that \(H^{*}(\mathbb{H}P^{n}\times\mathbb{S}^{m};\mathbb{Z}_{2})=\mathbb{Z}_{2}[a,b]/<a^{n+1},b^{2}>\), where \(\deg\,a=4\) and \(\deg\,b=m\). Throughout the paper, \(H^{*}(X)\) will denote the Čech cohomology of a space \(X\), and \(X\sim_{2}Y\) means \(H^{*}(X;\mathbb{Z}_{2})\cong H^{*}(Y;\mathbb{Z}_{2})\).
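For later reference, note that an additive \(\mathbb{Z}_{2}\)-basis of this ring is
\[\{1,a,a^{2},\cdots,a^{n},\,b,ab,a^{2}b,\cdots,a^{n}b\},\qquad\text{so that}\quad\sum_{i}\text{rk }H^{i}(\mathbb{H}P^{n}\times\mathbb{S}^{m};\mathbb{Z}_{2})=2(n+1)=2n+2.\]
This total rank \(2n+2\) is the quantity compared with \(\sum\) rk \(H^{i}(F;\mathbb{Z}_{2})\) via Proposition 2.1 throughout the next section.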
## 3. Main Theorems
Let \(G=\mathbb{Z}_{2}\) act on a finite CW-complex \(X\sim_{2}\mathbb{H}P^{n}\times\mathbb{S}^{m}\), where \(n,m\geq 1\). In this section, we determine the possibilities for the connected fixed point sets of involutions on \(X\), and the orbit spaces of free involutions on \(X\).
First, we determine the fixed point sets of involutions on \(X\).
**Theorem 3.1**.: Let \(G=\mathbb{Z}_{2}\) act on a finite CW-complex \(X\sim_{2}\mathbb{H}P^{n}\times\mathbb{S}^{m}\), \(n,m\geq 1\). If \(X\) is TNHZ in \(X_{G}\) and the fixed point set \(F\) is nonempty and connected, then \(F\) must be one of the following:
1. \(F\sim_{2}\mathbb{S}^{3}\times\mathbb{S}^{q}\) or \(F\sim_{2}\mathbb{F}P^{n}\times\mathbb{S}^{q}\), where \(\mathbb{F}=\mathbb{R}\), \(\mathbb{C}\) or \(\mathbb{H}\), \(1\leq q\leq m\).
2. \(F\sim_{2}\mathbb{F}P^{n+1}\#\mathbb{F}P^{n+1}\), where \(\mathbb{F}=\mathbb{R},\mathbb{C}\) or \(\mathbb{H}\).
3. \(H^{*}(F)\) is generated by \(c\) and \(d\), with \(c^{n+1}=d^{2}+c^{s}=d^{2l+2}=0\), where \(\deg\,c=2\), \(\deg\,d=q\) and \(l=[\frac{n}{s}]\), \(s=\frac{q}{2}\) if \(q\) is even and \(s=q\) if \(q\) is odd. Moreover, for \(q=1\), \(F\sim_{2}\mathbb{R}P^{2n+1}\) and for \(q=2\), \(F\sim_{2}\mathbb{C}P^{2n+1}\).
4. \(H^{*}(F)\) is generated by \(c\) and \(d\), with \(c^{\frac{r}{s}+1}=d^{\frac{r}{q}+1}=c^{\frac{r}{s}}+d^{\frac{r}{q}}=cd=0\), where \(\deg\,c=s,s=1,2\), \(\deg\,d=q,q=1,2,4\) or \(8\), \(r=sq(2n+2)/(q+s)\) and \(n=\frac{(q+s)k}{2}-1\) for some \(k\in\mathbb{N}\).
5. \(H^{*}(F)\) is generated by \(c\) and \(d\), with \(c^{\frac{r}{s}+1}=c^{\frac{qj}{s}}+d^{j}=c^{\frac{r-q}{s}+1}d=0\), where \(\deg\,c=s,s=1,2\), \(\deg\,d=q<n\), \(r=\frac{s(2n+2)}{j}+qj-(q+1),n+1=jk\) for some \(k\in\mathbb{N}\), or \(j=2k,k=1\) or \(2\), and \(n>\frac{(q+1)j}{2s}-1\).
Proof.: Let \(x\in F\) and \(\{a,\cdots,a^{n},b,ab,\cdots a^{n}b\}\) be a generating set of \(H^{*}(X,x)\), where \(\deg\,a=4\) and \(\deg\,b=m\). Since \(X\) is TNHZ in \(X_{G}\), we get rk \(H^{*}(F)=2n+2\), \(\pi_{1}(B_{G})\) acts trivially on \(H^{*}(X_{G},x_{G})\) and the \(E_{2}\)-term \(E_{2}^{p,q}=H^{p}(B_{G})\otimes H^{q}(X)\) of Leray-Serre spectral sequence of the Borel fibration \(X\stackrel{{ i}}{{\hookrightarrow}}X_{G}\stackrel{{ \pi}}{{\rightarrow}}B_{G}\) is \(E_{\infty}^{p,q}\). So, the elements \(\{1\otimes a,1\otimes a^{2},\cdots,1\otimes a^{n},1\otimes b,1\otimes ab, \cdots 1\otimes a^{n}b\}\) are permanent cocycles. Assume that \(\alpha\in H^{4}(X_{G},x_{G})\) represents generator \(a\in H^{4}(X,x)\) and \(\beta\in H^{m}(X_{G},x_{G})\) represents generator \(b\in H^{m}(X,x)\) such that \(\eta_{x}^{*}(\alpha)=\eta_{x}^{*}(\beta)=0\), where \(\eta_{x}:\frac{\{x\}\times E_{G}}{G}\hookrightarrow\frac{X\times E_{G}}{G}\) is the inclusion map. By Proposition 2.2, \(\{\alpha,\alpha^{2},\alpha^{3},\cdots\alpha^{n},\beta,\alpha\beta,\alpha^{2} \beta,\cdots\alpha^{n}\beta\}\) is a generating set of \(H^{*}(X_{G},x_{G})\) over \(H^{*}(B_{G})\)-module. As \(H^{m}(F_{G},x_{G})=\bigoplus_{i=0}^{m}H^{m-i}(B_{G})\otimes H^{i}(F,x)\) and \(\eta_{x}(\beta)=0\). We may assume that
\[j^{*}(\beta)=1\otimes d_{m}+t\otimes d_{m-1}+\cdots+t^{k}\otimes d_{m-k}\cdots +t^{m-2}\otimes d_{2}+t^{m-1}\otimes d_{1},\]
where \(d_{i}\in H^{i}(F,x)\), and \(j^{*}(\alpha)=B_{1}t^{3}\otimes c_{1}+B_{2}t^{2}\otimes c_{2}+B_{3}t\otimes c_{3}+B_{4}1\otimes c_{4}\), where \(c_{i}\in H^{i}(F,x)\) and \(B_{i}\in\mathbb{Z}_{2},1\leq i\leq 4\). We know that \(i_{1}^{*}j^{*}=j_{1}^{*}i^{*}\), where \(i_{1}:F\hookrightarrow F_{G}\) and \(j_{1}:F\hookrightarrow X\) are the inclusion maps. So, we get \(c_{4}=a|F\). If \(c_{4}\neq 0\), then \(B_{4}=1\). Clearly, \(c_{4}^{n+1}=0\). Thus, \(j^{*}(\alpha)=1\otimes c_{4}+\sum_{i=1}^{3}B_{i}t^{4-i}\otimes c_{i}\), where \(B_{i}\in\mathbb{Z}_{2},1\leq i\leq 3\). So, we consider eight cases according as \(B_{1},B_{2}\), and \(B_{3}\) are zero or nonzero.
**Case (1):** If \(B_{1}=B_{2}=B_{3}=0\), then \(j^{*}(\alpha)=1\otimes c_{4}\).
In this case, \(c_{4}^{i}\neq 0\) for \(1\leq i\leq n\). As \(j^{*}\) is injective, \(d_{j}\neq c_{4}^{j}\), for some \(j\), \(1\leq j\leq n\), where \(\deg\,d_{j}=j\). Suppose \(d_{q}=d_{m-k}=d\) is the least degree element such that \(d_{q}\neq c_{4}^{j}\). As \(j^{*}\) is onto on high degrees, for sufficiently large value of \(r\), we can write
\[t^{k+r}\otimes d=j^{*}(A_{1}t^{r+m-4}\alpha+\cdots+A_{n}t^{r+m-4n}\alpha^{n}+A_ {m}t^{r}\beta+\cdots+A_{m+n}t^{r-4n}\alpha^{n}\beta),\]
where \(A_{i}^{\prime}s\) are in \(\mathbb{Z}_{2}\). After comparing the coefficient of \(t^{k+r}\otimes d\), we get \(A_{m}=1\). So, we have
\(t^{r}\otimes d_{m}+\cdots+t^{r+k-1}\otimes d_{m-(k-1)}+t^{r+k+1}\otimes d_{m- (k+1)}+\cdots+t^{r+m-1}\otimes d_{1}=-j^{*}(A_{1}t^{r+m-4}\alpha+\cdots+A_{n}t^ {r+m-4n}\alpha^{n}+A_{m+1}t^{r-4}\alpha\beta+\cdots+A_{m+n}t^{r-4n}\alpha^{n} \beta)\).
From the above equation, we get that if \(q\equiv 1,2,3\) (mod 4), then \(d_{4i}=c_{4}^{i},d_{4i+q}=c_{4}^{i}d,1\leq i\leq n\), and if \(q\equiv 0\) (mod 4), then \(d_{4k}=c_{4}^{k}+c_{4}^{k-1}d\), where \(1\leq i\leq n\), and zero otherwise. Thus, we get \(j^{*}(\alpha^{n}\beta)=t^{k}\otimes c_{4}^{n}d\). As \(\alpha^{n}\beta\neq 0\), we get \(c_{4}^{n}d\neq 0\), and hence \(c_{4}^{i}d\neq 0\) for \(1\leq i\leq n\). Clearly, if \(d^{2}=0\), then \(F\sim_{2}\mathbb{H}P^{n}\times\mathbb{S}^{q},1\leq q\leq m\). If \(d^{2}\neq 0\), then either \(q\equiv 0\)(mod 4) or \(q\equiv 2\)(mod 4). First, suppose that \(q\equiv 2\)(mod 4). So, we have \(d^{2}=c_{4}^{\frac{q}{2}}\). Consequently, \(d^{2l+1}=c_{4}^{\frac{lq}{2}}d\), where \(l=[\frac{2n}{q}]\) and \(d^{2l+2}=0\). Thus, we get \(c_{4}^{n+1}=d^{2l+2}=d^{2}+c_{4}^{\frac{q}{2}}=0\). In particular, for \(q=2\), we get \(F\sim_{2}\mathbb{C}P^{2n+1}\). This realizes possibility (3). Next, suppose that \(q\equiv 0\)(mod 4) then we have either \(d^{2}=c_{4}^{\frac{q}{4}}\) or \(d^{2}=c_{4}^{\frac{q}{4}}d\). If \(d^{2}=c_{4}^{\frac{q}{4}}\), then by suitable change of basis, we get \(F\sim_{2}\mathbb{H}P^{n}\times\mathbb{S}^{q},1\leq q\leq m\). If \(d^{2}=c_{4}^{\frac{q}{4}}d\), then \(q\) must be 4. By the change of basis \(d^{\prime}=d+c_{4}\), we get \(d^{\prime n+2}=d^{n+2}=d^{\prime n+1}+d^{n+1}=dd^{\prime}=0\). Thus, \(F\sim_{2}\mathbb{H}P^{n+1}\#\mathbb{H}P^{n+1}\). This realizes possibility (2) for \(\mathbb{F}=\mathbb{H}\).
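To see the identification for \(q=2\) explicitly: the relations above become \(d^{2}=c_{4}\) and \(c_{4}^{n+1}=0\), so
\[H^{*}(F;\mathbb{Z}_{2})\cong\mathbb{Z}_{2}[c_{4},d]/\langle c_{4}^{n+1},d^{2}+c_{4}\rangle\cong\mathbb{Z}_{2}[d]/\langle d^{2n+2}\rangle\cong H^{*}(\mathbb{C}P^{2n+1};\mathbb{Z}_{2}),\]
with \(\deg d=2\), which is the claim \(F\sim_{2}\mathbb{C}P^{2n+1}\).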
**Case (2):** If \(B_{1}=B_{2}=0\) and \(B_{3}=1\), then \(j^{*}(\alpha)=1\otimes c_{4}+t\otimes c_{3}\).
Assume that \(H^{1}(F)\neq 0\). Further, we consider cases according as \(c_{3}=c_{1}^{3}\) or \(c_{3}\neq c_{1}^{3}\).
First, consider \(c_{3}=c_{1}^{3}\). Suppose \(H^{*}(F)\) has one generator. Then, \(c_{4}=c_{1}^{4}\) and \(j^{*}(\alpha^{n})=\sum_{r=0}^{n}t^{r}\otimes c_{1}^{4n-r}\). By the injectivity of homomorphism \(j^{*}\), we get \(c_{1}^{3n}\neq 0\). Clearly, rk \(H^{*}(F)>2n+2\), a contradiction. Suppose \(H^{*}(F)\) has two generators. Then, either \(c_{4}=c_{1}^{4}\) or \(c_{4}\neq c_{1}^{4}\).
Let \(c_{4}=c_{1}^{4}\). Then, we also have rk \(H^{*}(F)>2n+2\), a contradiction.
Let \(c_{4}\neq c_{1}^{4}\). Further, if \(c_{1}^{4}=0\), then
\[j^{*}(\alpha^{n})=\begin{cases}1\otimes c_{4}^{n}&\text{if $n$ is even}\\ 1\otimes c_{4}^{n}+t\otimes c_{4}^{n-1}c_{1}^{3}&\text{if $n$ is odd}.\end{cases}\]
As \(j^{*}(\alpha^{n})\neq 0\), we get \(c_{4}^{n}\neq 0\) for \(n\) even. If \(n\) is odd and \(c_{4}^{n}=0\), then rk \(H^{*}(F)=4n>2n+2\) for \(n>1\), a contradiction. Clearly, this case is not possible for \(n=1\). Thus, we have
\(c_{4}^{n}\neq 0\). As \(F\) is a Poincaré duality space, we again get rk \(H^{*}(F)>2n+2\), a contradiction. Now, if \(c_{4}\neq c_{1}^{4}\) & \(c_{1}^{4}\neq 0\), then we must have
\[j^{*}(\alpha^{n}\beta)=\sum_{r=0}^{n}\sum_{i=0}^{m-1}{n\choose r}t^{r+i}\otimes( \oplus_{j+4l=m-i}c_{4}^{n-r+l}c_{1}^{j+3r}).\]
If the cup product \(c_{1}c_{4}=0\), then \(c_{1}^{3n+1}\neq 0\). This gives rk \(H^{*}(F)>2n+2\), a contradiction. If the cup product \(c_{1}c_{4}\neq 0\), then the rank of \(H^{*}(F)\) further increases, which is not possible.
Next, consider \(c_{3}\neq c_{1}^{3}\). As \(H^{*}(F)\) has at most two generators, we get \(c_{4}=c_{1}^{4}\). Thus
\[j^{*}(\alpha^{n}\beta)=\sum_{r=0}^{n}\sum_{i=0}^{m-1}{n\choose r}t^{r+i} \otimes(\oplus_{j+3l=m-i}c_{1}^{4n-4r+j}c_{3}^{r+l}).\]
We get that \(c_{1}^{4n+1},c_{1}^{4n-3}c_{3},\cdots,c_{1}c_{3}^{n}\) are the least degree elements when \(r=0,1,\cdots,n\), respectively. In any case, it is easy to observe that rk \(H^{*}(F)>2n+2\), a contradiction.
Now, Suppose that \(H^{1}(F)=0\). Further, assume that \(H^{2}(F)\neq 0\). Then, we must have \(c_{4}=c_{2}^{2}\). Then,
\[j^{*}(\alpha^{n}\beta)=\sum_{r=0}^{n}\sum_{i=0}^{m-1}{n\choose r}t^{r+i} \otimes(\oplus_{2l+3j=m-i}c_{2}^{2n-2r+l}c_{3}^{r+j}).\]
Note that \(c_{2}^{2n+1},c_{2}^{2n-1}c_{3},c_{2}^{2n-3}c_{3}^{2},\cdots,c_{2}^{3}c_{3}^{n-1},c_{2}c_{3}^{n}\) are the least degree elements when \(r=0,1,2,\cdots,n\), respectively. So, we always have rk \(H^{*}(F)>2n+2\), a contradiction.
Next, assume that \(H^{2}(F)=0\). Then,
\[j^{*}(\alpha^{n}\beta)=\sum_{r=0}^{n}\sum_{i=0}^{m-1}{n\choose r}t^{r+i} \otimes(\oplus_{3l+4j=m-i}c_{4}^{n-r+j}c_{3}^{r+l}).\]
The least degree elements in the above expression \(c_{4}^{n}c_{3},c_{4}^{n-1}c_{3}^{2},\cdots c_{4}c_{3}^{n}\) and \(c_{3}^{n+1}\). Let \(c_{3}c_{4}\neq 0\). If \(c_{3}^{2}=0\), then \(c_{4}^{n}c_{3}\neq 0\). Thus, \(F\sim_{2}\mathbb{H}P^{n}\times\mathbb{S}^{3}\). If \(c_{4}^{2}=0\) and \(c_{3}^{n+1}=0\), then \(F\sim_{2}X\times\mathbb{S}^{4}\), where \(X\) has truncated polynomial ring \(\frac{\mathbb{Z}_{2}[x]}{<x^{n+1}>}\), deg \(x=3\). By the Theorem 4.5 in [10], this is not possible. If \(c_{4}^{2}=0\) and \(c_{3}^{n+1}\neq 0\), then rk \(H^{*}(F)>2n+2\), a contradiction.
Now, let \(c_{3}c_{4}=0\). Then, \(c_{3}^{\frac{r}{3}}=c_{4}^{\frac{r}{4}}\) is generator of \(H^{r}(F)\) and \(c_{3}^{\frac{i}{3}}\neq c_{4}^{\frac{i}{4}}\) for \(i<r\). As, rk \(H^{*}(F)=2n+2\), we get \(r=\frac{24n+24}{7}\). So, \(n=\frac{7k-2}{2},k\) even. Thus, \(c_{3}^{4k+1}=c_{4}^{3k+1}=c_{3}c_{4}=c_{3}^{4k}+c_{4}^{3k}=0\). Thus, \(F\sim_{2}Y\#Z\), where \(Y\) and \(Z\) both are truncated polynomials with generator \(c_{3}\) and \(c_{4}\), respectively. But by the Theorem 4.5 in [10], this is not possible.
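Spelling out the rank count used in this step:
\[\text{rk }H^{*}(F)=\frac{r}{3}+\frac{r}{4}=\frac{7r}{12}=2n+2\implies r=\frac{24(n+1)}{7},\]
and writing \(r=12k\) (so that \(\frac{r}{3}=4k\) and \(\frac{r}{4}=3k\) are integers) gives \(n+1=\frac{7k}{2}\), i.e. \(n=\frac{7k-2}{2}\) with \(k\) even, as stated.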
**Case (3):** If \(B_{1}=B_{3}=0\) and \(B_{2}=1\), then \(j^{*}(\alpha)=1\otimes c_{4}+t^{2}\otimes c_{2}\).
If \(H^{*}(F)\) has one generator then \(F\sim_{2}\mathbb{R}P^{2n+1}\) or \(F\sim_{2}\mathbb{C}P^{2n+1}\) according as \(H^{1}(F)\neq 0\) or \(H^{1}(F)=0\). This realizes possibility (3) for \(q=1\) and \(q=2\), respectively. Now, assume that \(H^{*}(F)\) has two generators. We consider two subcases according as \(c_{4}\neq c_{2}^{2}\) or \(c_{4}=c_{2}^{2}\).
**Subcase(i):** Assume that \(c_{4}\neq c_{2}^{2}\).
First, assume that \(H^{1}(F)=0\). If \(c_{2}^{2}=0\) then
\[j^{*}(\alpha^{n})=\begin{cases}1\otimes c_{4}^{n}&\text{if $n$ is even}\\ 1\otimes c_{4}^{n}+t^{2}\otimes c_{4}^{n-1}c_{2}&\text{if $n$ is odd.}\end{cases}\]
By the injectivity of \(j^{*}\), we get \(c_{4}^{n}\neq 0\). If \(c_{4}^{n+1}=0\) then \(F\sim_{2}\mathbb{H}P^{n}\times\mathbb{S}^{2}\). If \(c_{4}^{n+1}\neq 0\) then rk \(H^{*}(F)>2n+2\), a contradiction.
If \(c_{2}^{2}\neq 0\), then we get \(j^{*}(\alpha^{n}\beta)=\sum_{r=0}^{n}\sum_{i=0}^{m-1}\binom{n}{r}t^{2r+i}\otimes(\oplus_{2l+4j=m-i}c_{4}^{n-r+j}c_{2}^{r+l})\). We get that the least degree elements are \(c_{4}^{n}c_{2},c_{4}^{n-1}c_{2}^{2},c_{4}^{n-2}c_{2}^{3},\cdots,c_{4}^{2}c_{2}^{n-1},c_{4}c_{2}^{n}\) and \(c_{2}^{n+1}\). Note that if \(c_{4}^{n-k}c_{2}^{k+1}\neq 0\), for any \(0\leq k\leq n-1\), then rk \(H^{*}(F)>2n+2\), a contradiction. So, at least one of \(c_{2}^{n+1}\) or \(c_{2}^{n}c_{4}\) must be nonzero.
Let \(c_{2}c_{4}=0\); then we must have \(c_{2}^{n+1}\neq 0\). Thus, \(c_{2}^{\frac{r}{2}}=c_{4}^{\frac{r}{4}}\) is the generator of \(H^{r}(F)\). This implies that rk \(H^{*}(F)=\frac{r}{2}+\frac{r}{4}=2n+2\). Thus, \(r=8k\), where \(n=3k-1\), \(k\in\mathbb{N}\). Hence, \(F\sim_{2}\mathbb{C}P^{4k}\#\mathbb{H}P^{2k},k\in\mathbb{N}\). This realizes possibility (4) for \(s=2\) & \(q=4\).
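For clarity, the arithmetic behind \(r=8k\) is
\[\text{rk }H^{*}(F)=\frac{r}{2}+\frac{r}{4}=\frac{3r}{4}=2n+2\implies r=\frac{8(n+1)}{3},\]
which is an integer divisible by \(4\) exactly when \(n+1=3k\), giving \(r=8k\) and \(n=3k-1\).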
If \(c_{2}c_{4}\neq 0\), then for \(c_{2}^{n+1}=0\) & \(c_{4}^{2}=0\), clearly, \(F\sim_{2}\mathbb{C}P^{n}\times\mathbb{S}^{4}\). For \(c_{4}^{2}\neq 0\), we must have \(c_{4}^{2}=c_{2}^{3}\). By the change of basis \(d^{\prime}=c_{2}^{2}+c_{4}\), we get the cohomology ring is given by \(c_{2}^{n+1}=d^{\prime 2}=0\). This realizes possibility (1) for \(\mathbb{F}=\mathbb{C}\) & \(q=4\).
If \(c_{2}^{n+1}\neq 0\) and \(c_{2}^{n}c_{4}\neq 0\), then rk \(H^{*}(F)>2n+2\), a contradiction.
If \(c_{2}^{n+1}\neq 0\). Then, \(c_{2}^{\frac{r}{2}}=c_{2}^{\frac{r-4j}{2}}c_{4}^{j}\), \(j>1\) forms generator of \(H^{r}(F)\). Which implies that \(c_{2}^{2j}=c_{4}^{j}\) is generator of \(H^{4j}(F)\). We get rk \(H^{*}(F)=j(\frac{r-4j}{2})+\frac{4j}{2}+j\) which must be \(2n+2\) so, \(r=\frac{4n+4}{j}+4j-2\). We must have either \(n+1=jk\), for some \(k\in\mathbb{N}\) or \(j=2k,k=1,2\). Note that \(c_{2}^{\frac{r-4j}{2}+1}c_{4}=0\), as if \(c_{2}^{\frac{r-4j}{2}+1}c_{4}\neq 0\). Then, poincare dual of \(c_{2}^{\frac{r-4j}{2}+1}c_{4}\) are namely, \(c_{2}^{q}\) and \(d^{2}\), which is not possible. Hence, the cohomology ring is given by \(c_{2}^{\frac{r}{2}}=c_{2}^{2j}+c_{4}^{j}=c_{2}^{\frac{r-4j}{2}+1}c_{4}=0\). This realizes possibility (5) for \(s=2\) & \(q=4\).
Now, suppose that \(H^{1}(F)\neq 0\), then we consider two possibility accordingly \(c_{2}=c_{1}^{2}\) or \(c_{2}\neq c_{1}^{2}\).
If \(c_{2}=c_{1}^{2}\), then we get \(c_{1}^{2n}\neq 0\), which leads to a contradiction.
If \(c_{2}\neq c_{1}^{2}\), then we must have \(c_{4}=c_{1}^{4}\). It is easy to observe that, whether \(c_{2}^{2}\) is zero or nonzero, we get rk \(H^{*}(F)>2n+2\), a contradiction.
**Subcase(ii):** Assume that \(c_{4}=c_{2}^{2}\).
If \(H^{1}(F)\neq 0\), then for \(c_{2}=c_{1}^{2}\), we get \(c_{1}^{2n+1}\) must be non zero. Thus, rk \(H^{*}(F)>2n+2\), a contradiction. Now, for \(c_{2}\neq c_{1}^{2}\), we have \(j^{*}(\alpha^{n})=\sum_{r=0}^{n}\binom{n}{r}t^{2r}\otimes c_{2}^{2n-r}\), which implies that \(c_{2}^{n}\) must be nonzero. If \(c_{1}^{2}=0\) and \(c_{2}^{n+1}=0\), then \(F\sim_{2}\mathbb{C}P^{n}\times\mathbb{S}^{1}\). Which realizes possibility (1) for \(\mathbb{F}=\mathbb{C}\) and \(q=1\). If \(c_{2}^{n+1}\neq 0\), then rank of \(H^{*}(F)\) exceed \(2n+2\), a contradiction. Now, suppose that \(c_{1}^{2}\neq 0\) then this case only possible when \(n=2\) and cup product \(c_{1}c_{2}=0\), otherwise rk \(H^{*}(F)>2n+2\). For \(n=2\), we get \(c_{1}^{4}=c_{2}^{2}\) is generator of \(H^{4}(F)\). Thus, \(F\sim_{2}\mathbb{R}P^{4}\#\mathbb{C}P^{2}\). This realizes possibility (4) for \(n=2\) and \(q=2\) & \(s=1\).
If \(H^{1}(F)=0\), then \(j^{*}(\alpha^{n})=\sum_{r=0}^{n}\binom{n}{r}t^{2r}\otimes c_{2}^{2n-r}\), which implies that \(c_{2}^{n}\neq 0\).
For \(c_{2}^{n+1}=0\), we get \(j^{*}(\alpha^{n}\beta)=c_{2}^{n}d\neq 0\), where deg \(d=q\), \(d\neq c_{2}^{i},1\leq i\leq n\). If \(d^{2}=0\), then \(F\sim_{2}\mathbb{C}P^{n}\times\mathbb{S}^{q},1\leq q\leq m\). If \(d^{2}\neq 0\), then either \(d^{2}=c_{2}^{q}\) or \(d^{2}=c_{2}^{\frac{q}{2}}d\).
Suppose that \(d^{2}=c_{2}^{q}\), then if \(q\equiv 0\) (mod 2), then by the change of basis \(d^{\prime}=d+c_{2}^{\frac{q}{2}}\), we get \(d^{\prime 2}=0\) and \(c_{2}^{i}d^{\prime}\neq 0\) for \(1\leq i\leq n\). Thus, \(F\sim_{2}\mathbb{C}P^{n}\times\mathbb{S}^{q},1\leq q\leq m\). This realizes possibility (1) for \(\mathbb{F}=\mathbb{C}\). If \(q\not\equiv 0\) (mod 2), then \(d^{2l+1}=c_{2}^{lq}d\), where \(l=[\frac{n}{q}]\) and \(d^{2n+2}=0\). This realizes possibility (3) for \(s=q\) & \(q\) odd. Moreover, if \(q=1\), then clearly \(F\sim_{2}\mathbb{R}P^{2n+1}\).
If \(d^{2}=c_{2}^{q/2}d\), then \(q\) must be 2. As if \(2<q\)(even) \(\leq n\), then \(F\) does not satisfy poincare duality. By the change of basis \(d^{\prime}=d+c_{2}\), we get \(d^{\prime n+2}=d^{n+2}=d^{\prime n+1}+d^{n+1}=dd^{\prime}=0\). Thus, \(F\sim_{2}\mathbb{C}P^{n+1}\#\mathbb{C}P^{n+1}\). This realizes possibility (2) for \(\mathbb{F}=\mathbb{C}\).
For \(c_{2}^{n+1}\neq 0\), we get \(j^{*}(\alpha^{n}\beta)=\sum_{r=0}^{n}\sum_{i=1}^{m-1}\binom{n}{r}t^{2r+i} \otimes(\oplus_{2l+qj=m-i}c_{2}^{2n-r+l}d^{j})_{j=\{0,1\}}\)
\[=\sum_{r=0}^{n}\sum_{i=1}^{m-1}\binom{n}{r}t^{2r+i}\otimes c_{2}^{2n-r+\frac{ m-i}{2}}+\sum_{r=0}^{n}\sum_{i=q}^{m-1}A_{l,j}\binom{n}{r}t^{2r+i}\otimes( \oplus_{2l+q=m-i}c_{2}^{2n-r+l}d).\]
From the above expression, we get that \(c_{2}^{n+1}\) and \(c_{2}^{n}d\) are the least degree elements. Clearly, if \(c_{2}^{n}d\neq 0\), then rk \(H^{*}(F)>2n+2\), a contradiction. Now, suppose that \(c_{2}^{n}d=0\) & \(c_{2}^{n+1}\neq 0\).
If \(c_{2}d=0\), then we must have \(d^{2}\neq 0\); otherwise, there is no Poincaré dual of \(d\). It is easy to observe that \(c_{2}^{\frac{r}{2}}=d^{\frac{r}{q}}\) is the generator of \(H^{r}(F)\), where \(r\) is the formal dimension of \(H^{*}(F)\). As rk \(H^{*}(F)=2n+2\), we get \(r=\frac{4q(n+1)}{q+2}\). So, \((q+2)|(4n+4)\), and hence, \(n=(q+2)k-1\) for \(q\equiv 0,1\), or \(3\) (mod 4) & \(n=(\frac{q+2}{2})k-1\) for \(q\equiv 2\) (mod 4). Thus, \(c_{2}^{\frac{r}{s}+1}=d^{\frac{r}{q}+1}=c_{2}^{\frac{r}{s}}+d^{\frac{r}{q}}=c_{2}d=0\). This realizes possibility (4) for \(s=2\).
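The value of \(r\) here comes from the rank count
\[\text{rk }H^{*}(F)=\frac{r}{2}+\frac{r}{q}=\frac{(q+2)\,r}{2q}=2n+2\implies r=\frac{4q(n+1)}{q+2}.\]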
If \(c_{2}d\neq 0\) and \(c_{2}^{n}d=0\). Let \(r\) be the formal dimension of \(F\). In this case, we show that the generators of \(H^{r}(F)\) would be \(c_{2}^{\frac{r}{2}}\) which is equal to \(c_{2}^{\frac{r-qj}{2}}d^{j}\), where \(j>1\). Assume that \(c_{2}^{\frac{r}{2}}=0\). Now, if \(c_{2}^{\frac{r-q}{2}}d\) is generator of \(H^{r}(F)\), then \(c_{2}^{\frac{r}{2}}\neq c_{2}^{\frac{i-q}{2}}d,q<i<r\), otherwise, \(c_{2}^{\frac{r}{2}}=c_{2}^{\frac{r-q}{2}}d\), a contradiction. Thus, rk \(H^{*}(F)\geq 2n+4>2n+2\), which contradicts our hypothesis. Similarly, \(c_{2}^{\frac{r-qj}{2}}d^{j}\), where \(j>1\), cannot be generator of \(H^{r}(F)\). Thus, \(c_{2}^{\frac{r}{2}}\neq 0\). Further, if \(c_{2}^{\frac{r}{2}}=c_{2}^{\frac{r-q}{2}}d\) is a generator of \(H^{r}(F)\), then \(c_{2}^{\frac{r-q}{2}}\) would have two poincare duals namely, \(c_{2}^{\frac{q}{2}}\) and \(d\), again a contradiction. Hence, \(c_{2}^{\frac{r}{2}}=c_{2}^{\frac{r-qi}{2}}d^{j}\) is generator of \(H^{r}(F)\), where \(j>1\).
As \(c_{2}^{n}d=0\), we must have \(q\leq n\) and \(c_{2}^{\frac{r}{2}}=c_{2}^{\frac{r-qi}{2}}d^{j},j>1\), generates \(H^{r}(F)\), where \(r\) is even. Thus, \(c_{2}^{\frac{qi}{2}}\neq d^{i}\) for \(1\leq i\leq j-1\) and \(c_{2}^{\frac{qi}{2}}=d^{j}\), where \(qj<r\). We get rk \(H^{*}(F)=j(\frac{r-qj}{2})+\frac{qj}{2}+j\) which must be \(2n+2\) so, \(r=\frac{4n+4}{j}+qj-(q+2)\). We must have either \(n+1=jk\), for some \(k\in\mathbb{N}\) or \(j=2k,k=1,2\). Hence, the cohomology ring \(H^{*}(F)\) is generated by \(c_{2}\) and \(d\), \(c_{2}^{\frac{r}{2}+1}=c_{2}^{\frac{qi}{2}}+d^{j}=c_{2}^{\frac{r-qi}{2}+1}d=0\). As \(qj<r\), we get \(\frac{(q+1)j}{4}-1<n\). This realizes possibility (5) for \(s=2\).
**Case (4):** If \(B_{2}=B_{3}=0\) and \(B_{1}=1\), then \(j^{*}(\alpha)=1\otimes c_{4}+t^{3}\otimes c_{1}\).
In this case, if \(H^{*}(F)\) has one generator then \(F\sim_{2}\mathbb{R}P^{2n+1}\). Suppose that \(H^{*}(F)\) has two generators. Now, we consider two subcases according as \(c_{4}=c_{1}^{4}\) or \(c_{4}\neq c_{1}^{4}\).
**Subcase(i):** Assume that \(c_{4}=c_{1}^{4}\).
We have \(j^{*}(\alpha^{n})=\sum_{r=0}^{n}\binom{n}{r}t^{3r}\otimes c_{1}^{4n-3r}\). Which implies that \(c_{1}^{n}\) must be non zero. Let \(d\neq c^{i},1\leq i\leq n\) be the generator of \(H^{*}(F)\) having degree \(q\). We get
\[j^{*}(\alpha^{n}\beta)=\sum_{r=0}^{n}\sum_{i=1}^{m-1}\binom{n}{r}t^{3r+i} \otimes c_{1}^{4n-3r+m-i}+\sum_{r=0}^{n}\sum_{i=q}^{m-1}A_{l,j}\binom{n}{r}t^{ 3r+i}\otimes c_{1}^{4n-3r+(m-i-q)}d.\]
After expanding the above expression we get \(c_{1}^{n+1}\) and \(c_{1}^{n}d\) are the least possible degree elements. If \(c_{1}^{n+1}=0\), then \(c_{1}^{n}d\) must be non zero. Thus, for \(d^{2}=0\), we get \(F\sim_{2}\mathbb{R}P^{n}\times\mathbb{S}^{q},1\leq q\leq m\). This realizes possibility (1) for \(\mathbb{F}=\mathbb{R}\). And for \(d^{2}\neq 0\), we have two possibility either \(d^{2}=c_{1}^{2q}\) or \(d^{2}=c_{1}^{q}d\).
If \(d^{2}=c_{1}^{2q}\), then by the change of basis \(d^{\prime}=d+c_{1}^{q}\), we realize possibility (1) for \(\mathbb{F}=\mathbb{R}\).
If \(d^{2}=c_{1}^{q}d\), then for \(1<q\leq n\), \(F\) does not satisfy poincare duality. So, we must have \(q=1\). Again, by the change of basis \(d^{\prime}=c+d\), we get \(d^{\prime n+2}=d^{n+2}=d^{\prime n+1}+d^{\prime n+1}=dd^{\prime}=0\). Thus, \(F\sim_{2}\mathbb{R}P^{n+1}\#\mathbb{R}P^{n+1}\). This realizes possibility (2) for \(\mathbb{F}=\mathbb{R}\).
Now, Assume that \(c_{1}^{n+1}\neq 0\) then \(c_{1}^{n}d\) either zero or non zero. Obviously, \(c_{1}^{n}d\neq 0\) is not possible. Suppose that \(c_{1}^{n}d=0\).
If \(c_{1}d=0\), then we get \(c_{1}^{r+1}=d^{\frac{r}{q}+1}=c_{1}d=c_{1}^{r}+d^{\frac{r}{q}}=0\), where \(r=\frac{q(2n+2)}{q+1}\). This realizes possibility (4) for \(s=1\).
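Here \(r\) is determined by counting the powers of \(c_{1}\) and of \(d\) (with the top classes identified):
\[\text{rk }H^{*}(F)=r+\frac{r}{q}=\frac{(q+1)\,r}{q}=2n+2\implies r=\frac{q(2n+2)}{q+1}.\]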
If \(c_{1}d\neq 0\), then we get \(c_{1}^{r}=c_{1}^{r-qj}d^{j},j>1\) which generates \(H^{r}(F)\). Thus, \(c_{1}^{qi}\neq d^{i}\) for \(1\leq i\leq j-1\) and \(c_{1}^{qj}=d^{j}\) where \(qj<r\). We get rk \(H^{*}(F)=j(r-qj)+qj+j\) which must be \(2n+2\) so, \(r=\frac{2n+2}{j}+qj-(q+1)\). We must have either \(n+1=jk\), for some \(k\in\mathbb{N}\) or \(j=2\). Hence, the cohomology ring \(H^{*}(F)\) is generated by \(c_{1},d\) with \(c_{1}^{r+1}=c_{1}^{qj}+d^{j}=c_{1}^{r-qj+1}d=0\), with \(\frac{(q+1)j}{2}-1<n\). This realizes possibility (5) for \(s=1\).
**Subcase(ii):** Assume that \(c_{4}\neq c_{1}^{4}\).
First, we consider when \(c_{1}^{4}=0\).
If \(c_{1}^{2}=0\), then
\[j^{*}(\alpha^{n})=\begin{cases}1\otimes c_{4}^{n}&\text{if $n$ is even}\\ 1\otimes c_{4}^{n}+t^{3}\otimes c_{4}^{n-1}c_{1}&\text{if $n$ is odd.}\end{cases}\]
Clearly, \(c_{4}^{n}\) must be nonzero. Thus, \(F\sim_{2}\mathbb{H}P^{n}\times\mathbb{S}^{1}\). This realizes possibility (1) for \(\mathbb{F}=\mathbb{H}\) and \(q=1\).
If \(c_{1}^{2}\neq 0\) and \(c_{1}^{3}=0\), then \(j^{*}(\alpha^{n})=\sum_{r=0}^{2}\binom{n}{r}(1\otimes c_{4})^{n-r}(t^{3} \otimes c_{1})^{r}\) and
\[j^{*}(\alpha^{n}\beta)=\sum_{i=0}^{m-1}\sum_{r=0}^{2}\binom{n}{r}t^{i+3r} \otimes(\oplus_{4l+j=m-i}c_{4}^{n-r+l}c_{1}^{r+j}).\]
So, we get that \(c_{4}^{n}c_{1}\) and \(c_{4}^{n-1}c_{1}^{2}\) are the least degree elements. Since \(F\) is a Poincaré duality space and rk \(H^{*}(F)=2n+2\), \(c_{4}^{n-1}c_{1}^{2}\) is the generator in the formal dimension. So, this is possible only when \(n=2\) and \(c_{4}^{2}=0\). Thus, \(F\sim_{2}\mathbb{R}P^{2}\times\mathbb{S}^{4}\).
If \(c_{1}^{3}\neq 0\) and \(c_{1}^{4}=0\), then \(j^{*}(\alpha^{n}\beta)=\sum_{i=0}^{m-1}\sum_{r=0}^{3}{n\choose r}t^{i+3r} \otimes(\oplus_{4l+j=m-i}c_{4}^{n-r+l}c_{1}^{r+j})\). Then, \(c_{4}^{n-1}c_{1}^{2}\) and \(c_{4}^{n-2}c_{1}^{3}\) are the possible least degree elements. Clearly, \(c_{4}^{n-2}c_{1}^{3}\) is possible generator of formal dimension only when \(n=3\) and \(c_{4}^{2}=0\). Thus we get \(F\sim_{2}\mathbb{R}P^{3}\times\mathbb{S}^{4}\). Now, suppose that \(c_{1}^{4}\neq 0\).
We have \(j^{*}(\alpha^{n}\beta)=\sum_{i=0}^{m-1}\sum_{r=0}^{n}{n\choose r}t^{i+3r} \otimes(\oplus_{4l+j=m-i}c_{4}^{n-r+l}c_{1}^{r+j})\). It is easy to observed that \(c_{1}^{n+1}\) and \(c_{1}^{n}c_{4}\) are the least possible degree elements.
If \(c_{1}^{n+1}=0\), then \(c_{1}^{n}c_{4}\) must be non zero. Clearly, when \(c_{4}^{2}=0\), \(F\sim_{2}\mathbb{R}P^{n}\times\mathbb{S}^{4}\) and when \(c_{4}^{2}\neq 0\), then we must have \(c_{4}^{2}=c_{1}^{8}\). After change of basis \(d^{\prime}=c_{1}^{4}+c_{4}\) we get \(F\sim_{2}\mathbb{R}P^{n}\times\mathbb{S}^{4}\). This realizes possibility (1) for \(\mathbb{F}=\mathbb{R}\) and \(q=4\).
Now suppose that \(c_{1}^{n+1}\neq 0\), then \(c_{1}^{n}c_{4}\) either zero or non zero.
Obviously, \(c_{1}^{n}c_{4}\neq 0\) is not possible. And if \(c_{1}^{n}c_{4}=0\), then, when the cup product is zero, we get \(c_{1}^{r+1}=c_{4}^{\frac{r}{4}+1}=c_{1}c_{4}=c_{1}^{r}+c_{4}^{\frac{r}{4}}=0\) where \(r=\frac{4(2n+2)}{5}\). This realizes possibility (4) for \(s=1\) & \(q=4\). When the cup product is nonzero, we get \(c_{1}^{r}=c_{1}^{r-4j}c_{4}^{j},j>1\), which generates \(H^{r}(F)\). Thus, \(c_{1}^{4i}\neq c_{4}^{i}\) for \(1\leq i\leq j-1\) and \(c_{1}^{4j}=c_{4}^{j}\), where \(4j<r\). We get rk \(H^{*}(F)=j(r-4j)+4j+j\), which must be \(2n+2\), so \(r=\frac{2n+2}{j}+4j-5\), where either \(n+1=jk\) for some \(k\in\mathbb{N}\) or \(j=2\). Hence, the cohomology ring \(H^{*}(F)\) is generated by \(c_{1}\) and \(c_{4}\) with \(c_{1}^{r+1}=c_{1}^{4j}+c_{4}^{j}=c_{1}^{r-4j+1}c_{4}=0\), with \(\frac{5j}{2}-1<n\). This realizes possibility (5) for \(s=1\) and \(q=4\).
**Case(5):** If \(B_{1}=0\) and \(B_{2}=B_{3}=1\), then \(j^{*}(\alpha)=1\otimes c_{4}+t\otimes c_{3}+t^{2}\otimes c_{2}\).
In this case, if \(H^{*}(F)\) has one generator then \(F\sim_{2}\mathbb{R}P^{2n+1}\). Suppose that \(H^{*}(F)\) has two generators. Now, we consider two subcases: (i) \(c_{4}=c_{2}^{2}\) (ii) \(c_{4}\neq c_{2}^{2}\).
**Subcase(i):**\(c_{4}=c_{2}^{2}\)
If \(H^{1}(F)=0\), then \(j^{*}(\alpha^{n})=\sum_{r=0}^{n}\sum_{k=0}^{n-r}{n\choose r}t^{2k+r}\otimes c _{2}^{2n-2r-k}c_{3}^{r}\) and
\[j^{*}(\alpha^{n}\beta)=\sum_{i=0}^{m-1}\sum_{r=0}^{n}\sum_{k=0}^{n-r}{n \choose r}{n-r\choose k}t^{2k+r+i}\otimes(\oplus_{2l+3j=m-i}c_{2}^{2n-2r-k+l} c_{3}^{r+j}).\]
If \(c_{2}c_{3}=0\), then we get \(c_{2}^{n+1}\) and \(c_{3}^{n+1}\) are the least degree element. We must have \(c_{2}^{\frac{r}{2}}=c_{3}^{\frac{r}{3}}\) is generator of \(H^{r}(F)\). Thus, rk \(H^{*}(F)=\frac{r}{2}+\frac{r}{3}=2n+2\implies r=\frac{12n+12}{5}>2n+2\) but \(r<3n+3\). So, \(c_{3}^{n+1}=0\). Thus, we have \(c_{2}^{\frac{r}{2}+1}=c_{3}^{\frac{r}{3}+1}=c_{2}c_{3}=c_{2}^{\frac{r}{2}}+c_{3 }^{\frac{r}{3}}=0\), where \(r\equiv 12k,n\equiv 5k-1;k\in\mathbb{N}\). This realizes possibility (4) for \(s=2\) & \(q=3\).
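Explicitly, the rank count in this branch reads
\[\text{rk }H^{*}(F)=\frac{r}{2}+\frac{r}{3}=\frac{5r}{6}=2n+2\implies r=\frac{12(n+1)}{5},\]
so with \(r=12k\) (making \(\frac{r}{2}\) and \(\frac{r}{3}\) integers) one gets \(n+1=5k\), i.e. \(n=5k-1\).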
If \(c_{2}c_{3}\neq 0\), then we have \(c_{2}^{n+1}\) and \(c_{2}^{n}c_{3}\) are the possible least degree elements. If \(c_{2}^{n+1}=0\) then we must have \(c_{2}^{n}c_{3}\neq 0\). Clearly, for \(c_{3}^{2}=0\), we get \(F\sim_{2}\mathbb{C}P^{n}\times\mathbb{S}^{3}\) and for \(c_{3}^{2}\neq 0\), we must have \(c_{3}^{2}=c_{2}^{3}\). Thus, \(c_{2}^{n+1}=c_{3}^{2}+c_{2}^{3}=c_{3}^{2l+2}=0\) where \(l=[\frac{n}{3}]\). This realizes possibility (2) for \(q=3\).
Clearly, \(c_{2}^{n+1}\neq 0\) and \(c_{2}^{n}c_{3}\neq 0\) is not possible.
If \(c_{2}^{n+1}\neq 0\) and \(c_{2}^{n}c_{3}=0\). Then, we get \(c_{2}^{\frac{r}{2}}=c_{2}^{\frac{r-3j}{2}}c_{3}^{j},j>1\), which generates \(H^{r}(F)\) and \(r\) and \(j\) must be even. Since, rk \(H^{*}(F)=2n+2\), so we must have \(j=2\) and \(r=2n+4\), Hence, the cohomology ring \(H^{*}(F)\) is generated by \(c_{2}\) and \(c_{3}\) with \(c_{2}^{n+3}=c_{2}^{3}+c_{3}^{2}=c_{2}^{n}c_{3}=0\). This realizes possibility (5) for \(s=2\) & \(q=3\).
If \(H^{1}(F)\neq 0\), then rk \(H^{*}(F)>2n+2\), for either \(c_{2}=c_{1}^{2}\) or \(c_{2}\neq c_{1}^{2}\).
**Subcase(ii):**\(c_{4}\neq c_{2}^{2}\)
In this subcase, we must have \(H^{1}(F)\neq 0\). It is easy to observe that rk \(H^{*}(F)>2n+2\) in both cases, whether \(c_{2}^{2}=0\) or \(c_{2}^{2}\neq 0\).
**Case(6):** If \(B_{2}=0\) and \(B_{1}=B_{3}=1\), then \(j^{*}(\alpha)=1\otimes c_{4}+t\otimes c_{3}+t^{3}\otimes c_{1}\).
In this case, if \(H^{*}(F)\) has one generator then \(F\sim_{2}\mathbb{R}P^{2n+1}\). Suppose that \(H^{*}(F)\) has two generators. Now we consider two subcases: (i) \(c_{3}=c_{1}^{3}\) (ii) \(c_{3}\neq c_{1}^{3}\).
**Subcase(i):**\(c_{3}=c_{1}^{3}\).
First, suppose that \(c_{4}=c_{1}^{4}\). Then, we have \(j^{*}(\alpha^{n})=\sum_{r=0}^{n}\sum_{k=0}^{n-r}{n\choose r}{n-r\choose k}t^{3 k+r}\otimes c_{1}^{4n-3r-k}\) and
\[j^{*}(\alpha^{n}\beta)=\sum_{r=0}^{n}\sum_{k=0}^{n-r}\sum_{i=1}^{m-1}{n\choose r}{n-r\choose k}t^{3r+k+i}\otimes c_{1}^{4n-3r-k+(m-i)}\]
\[+\sum_{r=0}^{n}\sum_{k=0}^{n-r}\sum_{i=q}^{m-1}A_{l,j}{n\choose r}{n-r\choose k}t^{3r+k+i}\otimes c_{1}^{4n-3r-k+(m-i-q)}d.\]
Clearly, \(c_{1}^{n+1}\) and \(c_{1}^{n}d\) are the possible least degree elements.
If \(c_{1}^{n+1}=0\), then \(c_{1}^{n}d\) must be non zero. Thus, for \(d^{2}=0\), we get \(F\sim_{2}\mathbb{R}P^{n}\times\mathbb{S}^{q},1\leq q\leq m\). This realizes possibility (1) for \(\mathbb{F}=\mathbb{R}\). And for \(d^{2}\neq 0\), we have two possibility either \(d^{2}=c_{1}^{2q}\) or \(d^{2}=c_{1}^{q}d\).
If \(d^{2}=c_{1}^{2q}\), then by the change of basis \(d^{\prime}=d+c_{1}^{q}\), we realize possibility (1) for \(\mathbb{F}=\mathbb{R}\).
If \(d^{2}=c_{1}^{q}d\), then for \(1<q\leq n\), \(F\) does not satisfy poincare duality. So, we must have \(q=1\). Again, by the change of basis \(d^{\prime}=c+d\), we get \(d^{\prime n+2}=d^{n+2}=d^{\prime n+1}+d^{\prime n+1}=dd^{\prime}=0\). Thus, \(F\sim_{2}\mathbb{R}P^{n+1}\#\mathbb{R}P^{n+1}\). This realizes possibility (2) for \(\mathbb{F}=\mathbb{R}\).
Now, If \(c_{1}^{n+1}\neq 0\), then either \(c_{1}^{n}d=0\) or \(c_{1}^{n}d\neq 0\)..
Obviously, \(c_{1}^{n}d\neq 0\) is not possible. Suppose that \(c_{1}^{n}d=0\). If \(c_{1}d=0\), then we get \(c_{1}^{r+1}=d^{\frac{r}{q}+1}=c_{1}d=c_{1}^{r}+d^{\frac{r}{q}}=0\), where \(r=\frac{q(2n+2)}{q+1}\). So, \((q+1)|(2n+2)\), and hence, \(n=(q+1)k-1\) for \(q\) even & \(n=(\frac{q+1}{2})k-1\) for \(q\) odd, \(k\in\mathbb{N}\). This realizes possibility (4) for \(s=1\). Further, If \(q=1\), then \(F\sim_{2}\mathbb{R}P^{n+1}\#\mathbb{R}P^{n+1}\), if \(q=2\), then \(F\sim_{2}\mathbb{R}P^{4k}\#\mathbb{R}P^{2k}\) and if \(q=4\), then \(F\sim_{2}\mathbb{R}P^{8k}\#\mathbb{H}P^{2k}\). If \(c_{1}d\neq 0\), then we get \(c_{1}^{r}=c_{1}^{r-qj}d^{j},j>1\) which generates \(H^{r}(F)\). Thus, \(c_{1}^{qi}\neq d^{i}\) for \(1\leq i\leq j-1\) and \(c_{1}^{qj}=d^{j}\) where \(qj<r\). We get rk \(H^{*}(F)=j(r-qj)+qj+j\) which must be \(2n+2\) so, \(r=\frac{2n+2}{j}+qj-(q+1)\). Hence, the cohomology ring \(H^{*}(F)\) is generated by \(c_{1}\) and \(d\) with \(c_{1}^{r+1}=c_{1}^{qj}+d^{j}=c_{1}^{r-qj+1}d=0\), with \(\frac{(q+1)j}{2}-1<n\). This realizes possibility (5) for \(s=1\).
Next, suppose that \(c_{4}\neq c_{1}^{4}\).
If \(c_{1}^{4}=0\), then this case possible only for \(n=3\), when \(c_{1}^{n}c_{4}\neq 0\ \&\ c_{4}^{2}=0\). Thus, \(F\sim_{2}\mathbb{R}P^{3}\times\mathbb{S}^{4}\). This realizes possibility (1) for \(n=3\ \&\ q=4\).
If \(c_{1}^{4}\neq 0\), then we have \(j^{*}(\alpha^{n})=\sum_{r=0}^{n}\sum_{k=0}^{n-r}{n\choose r}{n-r\choose k}t^{n -r+2k}\otimes c_{1}^{3n-3r-2k}c_{4}^{r}\) and
\[j^{*}(\alpha^{n}\beta)=\sum_{i=0}^{m-1}\sum_{r=0}^{n}\sum_{k=0}^{n-r}{n\choose r }{n-r\choose k}t^{n-r+2k+i}\otimes(\oplus_{l+4j=m-i}c_{1}^{3n-3r-2k+l}c_{4}^{r+ j}).\]
Note that from above expression we get \(c_{1}^{n+1}\) and \(c_{1}^{n}c_{4}\) are the possible least degree elements. If \(c_{1}c_{4}=0\), then we get \(c_{1}^{r}=c_{4}^{\frac{r}{4}}\), where \(r\) is formal dimension. So, \(r=8k\) where \(n\equiv 5k-1,k\in\mathbb{N}\). Thus \(F\sim_{2}\mathbb{R}P^{8k}\#\mathbb{H}P^{2k}\). This realizes possibility (4) for \(s=1\ \&\ q=4\). Now, Suppose that \(c_{1}c_{4}\neq 0\). If \(c_{1}^{n+1}=0\), then we must have \(c_{1}^{n}c_{4}\neq 0\). Clearly, for \(c_{4}^{2}=0\), we get \(F\sim_{2}\mathbb{R}P^{n}\times\mathbb{S}^{4}\) and for \(c_{4}^{2}\neq 0\), we must have \(c_{4}^{2}=c_{1}^{8}\). By the change of basis \(d^{\prime}=c_{1}^{4}+c_{4}\), we realizes possibility (1) for \(\mathbb{F}=\mathbb{R}\) and \(q=4\).
Clearly, \(c_{1}^{n+1}\neq 0\) and \(c_{1}^{n}c_{4}\neq 0\) is not possible.
If \(c_{1}^{n+1}\neq 0\) and \(c_{1}^{n}c_{4}=0\). Then, we get \(c_{1}^{r}=c_{1}^{r-4j}c_{4}^{j},j>1\), which generates \(H^{r}(F)\). Thus, \(c_{1}^{4i}\neq c_{4}^{i}\) for \(1\leq i\leq j-1\) and \(c_{1}^{4j}=c_{4}^{j}\) where \(4j<r\). We get rk \(H^{*}(F)=j(r-4j)+5j\) which must be \(2n+2\). Thus \(r=\frac{2n+2}{j}+4j-5\) and hence, the cohomology ring \(H^{*}(F)\) is generated by \(c_{1}\) and \(c_{4}\), with \(c_{1}^{r+1}=c_{4}^{4j}+c_{4}^{j}=c_{1}^{r-4j+1}c_{4}=0\). This realizes possibility (5) for \(s=1\ \&\ q=4\).
**Subcase (ii):**\(c_{3}\neq c_{1}^{3}\).
As \(H^{*}(F)\) has at most two generators so, we must have \(c_{4}=c_{1}^{4}\). Thus \(j^{*}(\alpha^{n})=\sum_{r=0}^{n}\sum_{k=0}^{n-r}{n\choose r}{n-r\choose k}t^{ 3k+r}\otimes c_{1}^{4n-4r-3k}c_{3}^{r}\) and
\[j^{*}(\alpha^{n}\beta)=\sum_{i=0}^{m-1}\sum_{r=0}^{n}\sum_{k=0}^{n-r}{n\choose r }{n-r\choose k}t^{3k+r+i}\otimes(\oplus_{l+3j=m-i}c_{1}^{4n-4r-3k+l}c_{3}^{r+ j}).\]
So, we get \(c_{1}^{n+1}\) and \(c_{1}^{n}c_{3}\) are the possible least degree elements. If \(c_{1}c_{3}=0\), then we get \(c_{1}^{r}=c_{3}^{\frac{r}{3}}\), where \(r\) is formal dimension. So, \(r=3k\) and \(n\equiv 2k-1,k\in\mathbb{N}\). Thus, we have \(c_{1}^{3k+1}=c_{3}^{k+1}=c_{1}c_{3}=c_{1}^{3k}+c_{3}^{k},k\in\mathbb{N}\). This realizes possibility (4) for \(s=1\ \&\ q=3\).
Now, suppose that \(c_{1}c_{3}\neq 0\). If \(c_{1}^{n+1}=0\), then we must have \(c_{1}^{n}c_{3}\neq 0\). Clearly, for \(c_{3}^{2}=0\), we have \(F\sim_{2}\mathbb{R}P^{n}\times\mathbb{S}^{3}\) and for \(c_{3}^{2}\neq 0\), we must have \(c_{3}^{2}=c_{1}^{6}\). By the change of basis \(d^{\prime}=c_{1}^{3}+c_{3}\), we realizes possibility (1) for \(\mathbb{F}=\mathbb{R}\) and \(q=3\).
Clearly, \(c_{1}^{n+1}\neq 0\) and \(c_{1}^{n}c_{3}\neq 0\) is not possible.
If \(c_{1}^{n+1}\neq 0\) and \(c_{1}^{n}c_{3}=0\). Then, we get \(c_{1}^{r}=c_{1}^{r-3j}c_{3}^{j},j>1\), which generates \(H^{r}(F)\). Thus, \(c_{1}^{3i}\neq c_{3}^{i}\) for \(1\leq i\leq j-1\) and \(c_{1}^{3j}=c_{3}^{j}\) where \(3j<r\). Clearly, \(r=\frac{2n+2}{j}+3j-4\), where either \(n+1=jk\), for some \(k\in\mathbb{N}\) or \(j=2\). Hence, the cohomology ring \(H^{*}(F)\) is generated by \(c_{1}\) and \(c_{3}\) with \(c_{1}^{r+1}=c_{3}^{3j}+c_{3}^{j}=c_{1}^{r-3j+1}c_{3}=0\). This realizes possibility (5) for \(s=1\ \&\ q=3\).
**Case(7):** If \(B_{3}=0\) and \(B_{1}=B_{2}=1\), then \(j^{*}(\alpha)=1\otimes c_{4}+t^{2}\otimes c_{2}+t^{3}\otimes c_{1}\).
If \(H^{*}(F)\) has one generator then \(F\sim_{2}\mathbb{R}P^{2n+1}\). Suppose that \(H^{*}(F)\) has two generators. We consider two subcases: (i) \(c_{2}=c_{1}^{2}\) (ii) \(c_{2}\neq c_{1}^{2}\).
**Subcase (i):**\(c_{2}=c_{1}^{2}\).
First, assume that \(c_{4}=c_{1}^{4}\). We have \(j^{*}(\alpha^{n})=\sum_{r=0}^{n}\sum_{k=0}^{n-r}{n\choose r}{n-r\choose k}t^{2 k+3r}\otimes c_{1}^{4n-3r-2k}\) and
\[j^{*}(\alpha^{n}\beta)=\sum_{r=0}^{n}\sum_{k=0}^{n-r}\sum_{i=1}^{m-1}{n\choose r}{n-r\choose k}t^{2k+3r+i}\otimes c_{1}^{4n-3r-2k+(m-i)}\]
\[+\sum_{r=0}^{n}\sum_{k=0}^{n-r}\sum_{i=q}^{m-1}A_{l,j}{n\choose r}{n-r\choose k}t^{2k+3r+i}\otimes c_{1}^{4n-3r-2k+(m-i-q)}d.\]
Clearly, \(c_{1}^{n+1}\) and \(c_{1}^{n}d\) are the possible least degree elements.
If \(c_{1}^{n+1}=0\), then \(c_{1}^{n}d\) must be non zero. Thus, for \(d^{2}=0\), we get \(F\sim_{2}\mathbb{R}P^{n}\times\mathbb{S}^{q},1\leq q\leq m\). This realizes possibility (1) for \(\mathbb{F}=\mathbb{R}\). And for \(d^{2}\neq 0\), we get \(F\sim_{2}\mathbb{R}P^{n}\times\mathbb{S}^{q},1\leq q\leq m\) and \(F\sim_{2}\mathbb{R}P^{n+1}\#\mathbb{R}P^{n+1}\), when \(d^{2}=c_{1}^{2q}\) and \(d^{2}=c_{1}^{q}d\), respectively.
Now, suppose that \(c_{1}^{n+1}\neq 0\), then either \(c_{1}^{n}d=0\) or \(c_{1}^{n}d\neq 0\).
Clearly, \(c_{1}^{n}d\neq 0\) is not possible. Now, suppose that \(c_{1}^{n}d=0\). Again, as same above case for \(c_{1}d=0\), we have realizes possibility (4) for \(s=1\). For \(c_{1}d\neq 0\), we have realizes possibility (5) for \(s=1\).
Next, suppose that \(c_{4}\neq c_{1}^{4}\).
If \(c_{1}^{4}=0\), then for \(c_{1}^{3}=0\), we get rk \(H^{*}(F)\neq 2n+2\), a contradiction. And for \(c_{1}^{3}\neq 0\), we must have \(n=3\) and \(c_{4}^{2}=0\). So, we have \(c_{2}c_{1}^{2}\neq 0\), and hence \(F\sim_{2}\mathbb{R}P^{3}\times\mathbb{S}^{4}\). Let \(c_{4}\neq c_{1}^{4}\neq 0\). Then, \(j^{*}(\alpha^{n})=\sum_{r=0}^{n}\sum_{k=0}^{n-r}{n\choose r}{n-r\choose k}t^{3 k+r}\otimes c_{1}^{4n-4r-3k}c_{3}^{r}\) and
\[j^{*}(\alpha^{n}\beta)=\sum_{i=0}^{m-1}\sum_{r=0}^{n}\sum_{k=0}^{n-r}{n\choose r }{n-r\choose k}t^{2n+r-2k+i}\otimes(\oplus_{l+4j=m-i}c_{1}^{2n-r-2k+l}c_{4}^{ r+j}).\]
We get \(c_{1}^{n+1}\) and \(c_{1}^{n}c_{4}\) are the possible least degree elements. If \(c_{1}c_{4}=0\), then we get \(c_{1}^{r}=c_{4}^{\frac{r}{4}}\), where \(r\) is formal dimension. So, \(r=8k\) and \(n\equiv 5k-1,k\in\mathbb{N}\). Thus, \(c_{1}^{8k+1}=c_{4}^{k+1}=c_{1}c_{4}=c_{1}^{4k}+c_{4}^{k},k\in\mathbb{N}\). This realizes possibility (4) for \(s=1\) & \(q=4\). Now, let \(c_{1}c_{4}\neq 0\). If \(c_{1}^{n+1}=0\), then we must have \(c_{1}^{n}c_{4}\neq 0\). Clearly, for \(c_{4}^{2}=0\), we have \(F\sim_{2}\mathbb{R}P^{n}\times\mathbb{S}^{3}\) and for \(c_{4}^{2}\neq 0\), we must have \(c_{4}^{2}=c_{1}^{6}\). By the change of basis \(d^{\prime}=c_{1}^{4}+c_{4}\), we realizes possibility (1) for \(\mathbb{F}=\mathbb{R}\) and \(q=4\).
Clearly, \(c_{1}^{n+1}\neq 0\) and \(c_{1}^{n}c_{4}\neq 0\) is not possible.
If \(c_{1}^{n+1}\neq 0\) and \(c_{1}^{n}c_{4}=0\). Then, we get \(c_{1}^{r}=c_{1}^{r-4j}c_{4}^{j},j>1\), generates \(H^{r}(F)\). Thus, \(c_{1}^{4i}\neq c_{4}^{i}\) for \(1\leq i\leq j-1\) and \(c_{1}^{4j}=c_{4}^{j}\), where \(4j<r\). Clearly, \(r=\frac{2n+2}{j}+4j-5\), where either \(n+1=jk\), for some \(k\in\mathbb{N}\) or \(j=2\). Hence, the cohomology ring \(H^{*}(F)\) is generated by \(c_{1}\) and \(c_{4}\) with \(c_{1}^{r+1}=c_{4}^{4j}+c_{4}^{j}=c_{1}^{r-4j+1}c_{4}=0\). This realizes possibility (5) for \(s=1\) & \(q=4\).
**Subcase(ii):**\(c_{2}\neq c_{1}^{2}\).
Suppose that \(c_{1}^{2}=0\). Then, we must have \(c_{4}=c_{2}^{2}\).
Clearly, \(c_{2}^{n}\neq 0\). For \(c_{2}^{n+1}=0\), we get \(F\sim_{2}\mathbb{C}P^{n}\times\mathbb{S}^{1}\). For \(c_{2}^{n+1}\neq 0\), we get rk \(H^{*}(F)>2n+2\), a contradiction.
Now suppose that \(c_{1}^{2}\neq 0\). Since \(H^{*}(F)\) has at most two generators, we must have either \(c_{4}=c_{2}^{2}\) or \(c_{4}=c_{1}^{4}\). In both cases, we get that \(c_{1}^{n+1}\) and \(c_{1}^{n}c_{2}\) are the least degree elements in the image of \(j^{*}(\alpha^{n}\beta)\).
If \(c_{1}c_{2}=0\), then we must have \(c_{1}^{n+1}\neq 0\). Thus, \(c_{1}^{r}=c_{2}^{\frac{r}{2}}\) is the generator of \(H^{r}(F)\). This implies that \(\text{rk }H^{*}(F)=r+\frac{r}{2}=2n+2\). Thus, \(r=4k\), where \(n=3k-1\), \(k\in\mathbb{N}\). Hence, \(F\sim_{2}\mathbb{R}P^{4k}\#\mathbb{C}P^{2k},k\in\mathbb{N}\). This realizes possibility (4) for \(s=1\) & \(q=2\).
Now, suppose that \(c_{1}c_{2}\neq 0\). If \(c_{1}^{n+1}=0\), then \(c_{1}^{n}c_{2}\) must be nonzero. By the change of basis \(d=c_{1}^{2}+c_{2}\), we get \(F\sim_{2}\mathbb{R}P^{n}\times\mathbb{S}^{2}\). This realizes possibility (1) for \(\mathbb{F}=\mathbb{R}\) and \(q=2\).
If \(c_{1}^{n+1}\neq 0\) and \(c_{1}^{n}c_{2}\neq 0\), then \(\text{rk }H^{*}(F)>2n+2\), a contradiction.
If \(c_{1}^{n+1}\neq 0\). Then, \(c_{1}^{r}=c_{1}^{r-2j}c_{2}^{j}\), \(j>1\) forms generator of \(H^{r}(F)\). Which implies that \(c_{1}^{2j}=c_{2}^{j}\) is generator of \(H^{2j}(F)\). Hence, the cohomology ring is given by \(c_{1}^{r+1}=c_{1}^{2j}+c_{2}^{j}=c_{1}^{r-2j+1}c_{2}=0\). This realizes possibility (5) for \(s=1\) & \(q=2\).
**Case(8):** If \(B_{1}=B_{2}=B_{3}=1\), then \(j^{*}(\alpha)=1\otimes c_{4}+t\otimes c_{3}+t^{2}\otimes c_{2}+t^{3}\otimes c_ {1}\).
If \(H^{*}(F)\) has one generator then \(F\sim_{2}\mathbb{R}P^{2n+1}\). Now, suppose that \(H^{*}(F)\) has two generators. We consider two subcases: (i) \(c_{2}=c_{1}^{2}\) (ii) \(c_{2}\neq c_{1}^{2}\).
**Subcase (i):** Assume that \(c_{2}=c_{1}^{2}\).
If \(c_{3}=c_{1}^{3}\) and \(c_{4}=c_{1}^{4}\), then \(j^{*}(\alpha^{n})=(1\otimes c_{1}^{4}+t\otimes c_{1}^{3}+t^{2}\otimes c_{1}^{2}+t^{3}\otimes c_{1})^{n}\). So, we must have \(c_{1}^{n}\neq 0\). This case is similar to Subcase (i) of Case (7), when \(c_{2}=c_{1}^{2}\) & \(c_{4}=c_{1}^{4}\).
Now, if \(c_{3}=c_{1}^{3}\) and \(c_{4}\neq c_{1}^{4}\), then \(j^{*}(\alpha^{n})=\sum_{r=0}^{n}{n\choose r}(1\otimes c_{4})^{n-r}(t\otimes c_{1}^{3}+t^{2}\otimes c_{1}^{2}+t^{3}\otimes c_{1})^{r}\). Further, if \(c_{1}^{4}=0\), then this is possible only when \(n=3\), \(c_{4}^{2}=0\), and \(c_{4}^{n-2}c_{1}^{3}\neq 0\). Thus, \(F\sim_{2}\mathbb{R}P^{3}\times\mathbb{S}^{4}\).
If \(c_{1}^{4}\neq 0\), then \(j^{*}(\alpha^{n}\beta)=\sum_{i=0}^{m-1}\sum_{r=0}^{n}\sum_{k=0}^{n-r}\sum_{k^{\prime}=0}^{n}{n\choose r}{n-r\choose k}{n-r-k\choose k^{\prime}}t^{n-r+k^{\prime}+2k+i}\otimes(\oplus_{l+4j=
or \(\mathbb{F}P^{3},\mathbb{F}=\mathbb{R}\) or \(\mathbb{C}\), or \(\mathbb{F}P^{2}\#\mathbb{F}P^{2},\mathbb{F}=\mathbb{R}\), \(\mathbb{C}\) or \(\mathbb{H}\). These possibilities have also been realized in [Theorem 3.11, [13]].
**Remark 3.3**.: It is easy to observe that the fixed point set of an involution on \(X\sim_{2}\mathbb{H}P^{n}\times\mathbb{S}^{m}\), when \(X\) is not TNHZ in \(X_{G}\), has the mod 2 cohomology of a \(q\)-sphere, where \(-1\leq q\leq 4n+m\), under the assumptions that the associated Leray-Serre spectral sequence of the Borel fibration \(X\hookrightarrow X_{G}\to B_{G}\) is nondegenerate and the differentials \(d_{r}\) of the spectral sequence satisfy \(d_{r}(1\otimes b)=0\) \(\forall\) \(r\leq m\) (see Theorem 3.5 in [6]).
Now, we give examples realizing the above theorem.
**Example 3.4**.: Let \(G=\mathbb{Z}_{2}\) act on \(\mathbb{S}^{m}\) defined by
\[(x_{0},x_{1},\cdots x_{m})\mapsto(x_{0},x_{1},\cdots,x_{q},-x_{q+1},\cdots-x_ {m}).\]
If we consider the trivial action of \(G\) on \(\mathbb{H}P^{n}\), then after taking the diagonal action of \(G\) on \(\mathbb{H}P^{n}\times\mathbb{S}^{m}\), the fixed point set is \(\mathbb{H}P^{n}\times\mathbb{S}^{q}\), where \(1\leq q\leq m\).
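Indeed, the fixed point set of the involution on \(\mathbb{S}^{m}\) defined above is
\[\operatorname{Fix}(\mathbb{S}^{m})=\{x\in\mathbb{S}^{m}\,:\,x_{q+1}=\cdots=x_{m}=0\}\cong\mathbb{S}^{q},\]
so the diagonal action with the trivial action on \(\mathbb{H}P^{n}\) has fixed point set \(\mathbb{H}P^{n}\times\mathbb{S}^{q}\).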
If we take the conjugation action of \(G\) on \(\mathbb{H}P^{n}\), i.e. \((z_{0},z_{1},\cdots,z_{n})\mapsto(\bar{z}_{0},\bar{z}_{1},\cdots,\bar{z}_{n})\), then after taking the diagonal action of \(G\) on \(\mathbb{H}P^{n}\times\mathbb{S}^{m}\), the fixed point set is \(\mathbb{R}P^{n}\times\mathbb{S}^{q},1\leq q\leq m\). If \(G\) acts on \(\mathbb{H}P^{n}\) by \((z_{0},z_{1},\cdots,z_{n})\mapsto(iz_{0},iz_{1},\cdots,iz_{n})\), then after taking the diagonal action of \(G\) on \(\mathbb{H}P^{n}\times\mathbb{S}^{m}\), the fixed point set is \(\mathbb{C}P^{n}\times\mathbb{S}^{q},1\leq q\leq m\). These examples realize possibility (1) of Theorem 3.1.
Now, consider an action of \(G\) on \(\mathbb{S}^{4}\) defined by \((x_{0},x_{1},x_{2},x_{3},x_{4})\mapsto(x_{0},x_{1},x_{2},x_{3},-x_{4})\). Then, the fixed point set of the diagonal action of \(G\) on \(\mathbb{S}^{4}\times\mathbb{S}^{m}\) is \(\mathbb{S}^{3}\times\mathbb{S}^{q},1\leq q\leq m\). This also realizes possibility (1) of Theorem 3.1 for \(n=1\).
**Example 3.5**.: Bredon ([2]) constructed an example in which \(\mathbb{P}^{2}(q)\#\mathbb{P}^{2}(q)\) (a connected sum of projective spaces) is the fixed point set of an involution on \(\mathbb{S}^{4}\times\mathbb{S}^{q+k}\), where \(k\geq 4\). This example realizes possibility (2) of Theorem 3.1 for \(n=1\). In the same paper, Bredon also gave examples of involutions on \(X\sim_{2}\mathbb{S}^{n}\times\mathbb{S}^{m},n\leq m\) and \(X\sim_{2}\mathbb{S}^{4}\times\mathbb{S}^{m},4<m\) with the fixed point sets \(F=\mathbb{R}P^{3}\) and \(F\sim_{2}\mathbb{S}^{7}\), respectively. These examples realize possibility (3) of Theorem 3.1 for \(n=1\), and the case when \(X\sim_{2}\mathbb{H}P^{1}\times\mathbb{S}^{m}\) is not TNHZ in \(X_{G}\), respectively.
Next, we discuss the cohomology ring of the orbit spaces of free involutions on a space \(X\) having mod 2 cohomology of the product of quaternionic projective space and sphere \(\mathbb{H}P^{n}\times\mathbb{S}^{m}\). For the existence of free involutions on \(\mathbb{H}P^{n}\times\mathbb{S}^{m}\), consider the diagonal action on \(\mathbb{H}P^{n}\times\mathbb{S}^{m}\) obtained by taking any involution on \(\mathbb{H}P^{n}\) and the antipodal action on \(\mathbb{S}^{m}\).
First, we consider the case when \(\pi_{1}(B_{G})\) acts trivially on \(H^{*}(X)\), under some assumptions on the associated Leray-Serre spectral sequence of the Borel fibration \(X\hookrightarrow X_{G}\to B_{G}\). Note that, by [6], if \(G=\mathbb{Z}_{2}\) acts freely on \(X\sim_{2}\mathbb{H}P^{n}\times\mathbb{S}^{m}\), then \(\pi_{1}(B_{G})\) acts
trivially on \(H^{*}(X)\) whenever one of the following holds: (1) \(4n\leq m\) (2) \(4=m<4n,n\) is even, (3) \(4<m<2m\leq 4n,m\equiv 0(\text{mod }4)\), and (4) \(m\not\equiv 0(\text{mod }4)\).
**Theorem 3.6**.: Let \(G=\mathbb{Z}_{2}\) act freely on finite CW-complex \(X\sim_{2}\mathbb{H}P^{n}\times\mathbb{S}^{m}\), where \(n,m\geq 1.\) Assume that \(\pi_{1}(B_{G})\) acts trivially on \(H^{*}(X)\) and the differentials \(d_{r}(1\otimes b)=0\ \forall\ r\leq m.\) Then, the cohomology ring of orbit space \(H^{*}(X/G)\) is isomorphic to one of the following graded commutative algebras:
1. \(\mathbb{Z}_{2}[x,y,z]/I\), where \(I\) is homogeneous ideal given by: \[<x^{5},y^{\frac{n+1}{2}}+a_{0}y^{\frac{4(n+1)-m}{8}}z+a_{1}x^{4}y^{\frac{4n-m }{8}}z+a_{2}z,z^{2}+a_{3}x^{2i}y^{\frac{m-i}{4}}+a_{4}x^{i^{\prime}}y^{\frac{ m-i^{\prime}}{8}}z>,\] where \(\deg x=1\), \(\deg y=8\) & \(\deg z=m\), \(a_{0}=0\) if \(m\not\equiv 0(\text{mod }8)\) or \(m>4n+4\); \(a_{1}=0\) if \(m\equiv 0(\text{mod }8)\) or \(m>4n\); \(a_{2}=0\) if \(m\not=4(n+1)\); \(a_{3}=0\) if \(m\not\equiv i(\text{mod }4)\) or \(\{i=0\text{ and }2m>4(n-1)\},0\leq 2i\leq 4\) and \(a_{4}=0\) if \(m\not\equiv i^{\prime}(\text{mod }8)\) or \(m>4n,0\leq i^{\prime}\leq 4\), \(a_{k}\in\mathbb{Z}_{2},0\leq k\leq 4\), \(n\) odd,
2. \(\mathbb{Z}_{2}[x,y,z]/<x^{5},y^{\frac{n}{2}+1},z^{2}+a_{0}y+a_{1}x^{4}z>\), where \(\deg x=1\), \(\deg y=8\) & \(\deg z=4\), \(a_{0},a_{1}\in\mathbb{Z}_{2}\), \(n\) even, and
3. \(\mathbb{Z}_{2}[x,y]/<x^{m+1},y^{n+1}+\sum_{0<i\equiv 0(mod\ 4)}^{min\{4(n+1),m \}}a_{i}x^{i}y^{\frac{4(n+1)-i}{4}}>\), where \(\deg x=1\), \(\deg y=4\) and \(a_{i}\in\mathbb{Z}_{2}\).
Finally, we consider the case when \(\pi_{1}(B_{G})\) acts nontrivially on \(H^{*}(X).\)
**Theorem 3.7**.: Let \(G=\mathbb{Z}_{2}\) act freely on a finite CW-complex \(X\sim_{2}\mathbb{H}P^{n}\times\mathbb{S}^{m}\), where \(n,m\geq 1.\) Assume that \(\pi_{1}(B_{G})\) acts nontrivially on \(H^{*}(X).\) Then, \(H^{*}(X/G)\) is isomorphic to one of the following graded commutative algebras:
1. \(\mathbb{Z}_{2}[x,y,z]/<x^{9},y^{2}+a_{0}z+a_{1}x^{8},z^{\frac{n+1}{2}}+a_{2}x ^{8}z^{\frac{n-1}{2}},xy>\), where \(\deg x=1\), \(\deg y=4,\&\ \deg z=8,a_{i}\in\mathbb{Z}_{2},0\leq i\leq 2,m=4<4n\), \(n\) odd, and
2. \(\mathbb{Z}_{2}[x,y,z,w_{k}]/<x^{5},y^{\frac{m}{8}}+a_{0}w_{1},z^{2},xw_{k},w_ {k}w_{k+i}+a_{k,i}x^{4d}y^{\frac{2m-4n+4}{8}}z>\), where \(\deg x=1\), \(\deg y=8\), \(\deg z=4n+4\) & \(\deg w_{k}=m+4(k-1)\), \(1\leq k\leq\frac{4n-m+4}{4}\), and \(0\leq i\leq\frac{4n-m}{4}\), \(-1\leq q(odd)\leq\frac{4n-m-8}{4},d=0,1\), & \(n\) odd, \(4<m<4n<2m,m\equiv 0\ (\text{mod }8)\) and \(a_{k,i}=0\) if \(\frac{4(n+2)-m}{4}<2k+i;a_{0}\) and \(a_{k,i}{}^{\prime}s\) are in \(\mathbb{Z}_{2}.\) If \(d=0\), then \(i\) is even and \(q=2k+i-3.\) If \(d=1\), then \(i\) is odd and \(q=2k+i-4\).
The proofs of the above Theorems are similar to proofs of Theorem 4.2 and Theorem 4.5 in [6], respectively.
**Remark 3.8**.: If \(a_{i}=0,0\leq i\leq 2\), in possibility (1) of Theorem 3.7, then \(X/G\sim_{2}(\mathbb{R}P^{8}\vee\mathbb{S}^{4})\times\mathbb{P}^{\frac{n-1}{2}}(8).\) If \(a_{i}=0\ \forall\ 0\leq i\leq 4\) in possibility (1) of Theorem 3.6, then \(X/G\sim_{2}\mathbb{R}P^{4}\times\mathbb{P}^{\frac{j}{2}}(8)\times\mathbb{S}^{m}\), where \(j=n-1\) for \(n\) odd and \(j=n\) for \(n\) even. If \(a_{i}=0\ \forall\ 0<i\equiv 0(mod\ 4)\leq min\{4(n+1),m\}\) in possibility (3), then \(X/G\sim_{2}\mathbb{R}P^{m}\times\mathbb{H}P^{n}\).
**Example 3.9**.: Let \(T:\mathbb{H}P^{n}\times\mathbb{S}^{m}\rightarrow\mathbb{H}P^{n}\times\mathbb{S}^{m}\) be a map defined by \(([z],x)\mapsto([z],-x)\). Then, this gives a free involution on \(\mathbb{H}P^{n}\times\mathbb{S}^{m}.\) The orbit space of this action is \((\mathbb{H}P^{n}\times\mathbb{S}^{m})/\mathbb{Z}_{2}\sim_{2}\mathbb{H}P^{n} \times\mathbb{R}P^{m}.\) This realizes possibility (3) of Theorem 3.6, for \(a_{i}=0\ \forall\ i.\)
|
2301.01837 | A Meta-Learning Algorithm for Interrogative Agendas | Explainability is a key challenge and a major research theme in AI research
for developing intelligent systems that are capable of working with humans more
effectively. An obvious choice in developing explainable intelligent systems
relies on employing knowledge representation formalisms which are inherently
tailored towards expressing human knowledge e.g., interrogative agendas. In the
scope of this work, we focus on formal concept analysis (FCA), a standard
knowledge representation formalism, to express interrogative agendas, and in
particular to categorize objects w.r.t. a given set of features. Several
FCA-based algorithms have already been in use for standard machine learning
tasks such as classification and outlier detection. These algorithms use a
single concept lattice for such a task, meaning that the set of features used
for the categorization is fixed. Different sets of features may have different
importance in that categorization, we call a set of features an agenda. In many
applications a correct or good agenda for categorization is not known
beforehand. In this paper, we propose a meta-learning algorithm to construct a
good interrogative agenda explaining the data. Such algorithm is meant to call
existing FCA-based classification and outlier detection algorithms iteratively,
to increase their accuracy and reduce their sample complexity. The proposed
method assigns a measure of importance to different set of features used in the
categorization, hence making the results more explainable. | Erman Acar, Andrea De Domenico, Krishna Manoorkar, Mattia Panettiere | 2023-01-04T22:09:36Z | http://arxiv.org/abs/2301.01837v1 | # A Meta-Learning Algorithm for Interrogative Agendas
###### Abstract
Explainability is a key challenge and a major research theme in AI research for developing intelligent systems that are capable of working with humans more effectively. An obvious choice in developing explainable intelligent systems relies on employing knowledge representation formalisms which are inherently tailored towards expressing human knowledge, e.g., _interrogative agendas_. In the scope of this work, we focus on formal concept analysis (FCA), a standard knowledge representation formalism, to express interrogative agendas, and in particular to categorize objects w.r.t. a given set of features. Several FCA-based algorithms have already been in use for standard machine learning tasks such as classification and outlier detection. These algorithms use a single concept lattice for such a task, meaning that the set of features used for the categorization is fixed. Different sets of features may have different importance in that categorization; we call such a set of features an agenda. In many applications a correct or good agenda for categorization is not known beforehand. In this paper, we propose a meta-learning algorithm to construct a good interrogative agenda explaining the data. Such an algorithm is meant to call existing FCA-based classification and outlier detection algorithms iteratively, to increase their accuracy and reduce their sample complexity. The proposed method assigns a measure of importance to different sets of features used in the categorization, hence making the results more explainable.
Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org)
## 1 Introduction
As artificial intelligence (AI) technologies are playing key roles in our daily lives, developing intelligent systems which can work with humans more effectively (instead of replacing them) is becoming a central research theme [1, 2, 3]. This theme is most often referred to as _hybrid intelligence_, aiming to benefit from the strengths of both human and machine intelligence in solving problems. Developing systems of such capability demands fundamentally novel approaches to major research problems in AI: state-of-the-art systems outperform humans in many cognitive tasks, from playing video games [4] to pattern recognition [5]; however, they fall short when it comes to other tasks such as common-sense reasoning, performing causal discovery, and behavioural human capabilities such as explaining their own decisions, adapting to different environments, collaborating with others, etc. A particular challenge in developing such systems lies in making them more interpretable [1, 6, 7], which is the main focus of this paper.
An obvious medium in making such systems interpretable relies on employing an existing knowledge representation formalism which is inherently tailored towards expressing human knowledge. One such type of human knowledge that is relevant in problem solving is captured by the notion of _interrogative agenda_ (also called research agenda [8]) of an epistemic agent (which will be explained further in detail in Section 2.2). Intuitively, given a context, an interrogative agenda abstracts a set of features that an epistemic agent is interested in. In order to express interrogative agendas we employ the knowledge representation formalism of _formal concept analysis_.
Formal concept analysis (FCA) is an influential foundational theory in knowledge representation and reasoning [9, 10, 11, 12, 13, 14, 15] which provides a framework for categorizing objects w.r.t. a given set of features. The set of features used in the categorization (formal context in FCA) can be identified as its agenda, and different agendas will correspond to different categorizations. The agenda used to categorize a set of objects may be chosen based on several factors like the availability and precision of the data, the categorization methodology, and the purpose of the categorization.1 In this paper, we focus on obtaining concept lattices (possibly fuzzy) corresponding to different agendas (possibly non-crisp). However, in many applications, it is unclear which interrogative agenda (Sec. 2.2) is best suited to obtain a categorization that can be useful in dealing with a given problem. Thus, in this work, we focus on the task of using a machine learning algorithm to learn such agendas, and hence a "good categorization" for the problem at hand. In particular, we will address the tasks of classification and outlier detection.
Footnote 1: A logical framework for studying these different categorizations obtained from different agendas and their interaction was developed in our earlier work [16] and applied to auditing domain.
In the realm of machine learning, formal concept analysis has been used in the past for classification, outlier detection, rare concept mining and identification of rare patterns (Sec. 3). However, to the best of our knowledge, all these methods use a single concept lattice (or its sublattice) to deal with the problems mentioned above. That is, the agenda of the categorization is fixed beforehand. The main difficulty in using such techniques relies on the fact that there are exponentially many subsets of features (and weights) one has to take into account. On the other hand, since some features may not be relevant for a given classification task, removing them can reduce the data collection cost, its complexity, and may even improve the accuracy for some tasks. However, determining the set of relevant features can be difficult, and it is an important part of the preprocessing phase for many such algorithms.
In this paper, we propose a meta-learning algorithm to identify the best-suited agenda (and hence categorization). That is, to estimate the significance of different sets of features for the given task. The incorporation of such outer-loop on top of an existing classification or outlier detection algorithm can potentially increase its generalising power and the performance. Another major advantage of such method is that the learned agendas provide us an estimation of the importance of different sets of features for the given task, making our results more explainable.
Structure of paper.: In Section 2, we provide the relevant preliminaries. In Section 3, we give an overview of FCA-based classification and outlier detection algorithms. In Section 4, we describe the framework for learning agendas and provide a generic learning algorithm. In
Section 5, we conclude and give some directions for future research.
## 2 Preliminaries
### Formal concept analysis
A _formal context_[14] is a structure \(\mathbb{P}=(A,X,I)\) such that \(A\) and \(X\) are sets of _objects_ and _features_, respectively, and \(I\subseteq A\times X\) is the so-called _incidence relation_ which records whether a given object has a given feature. That is, for any object \(a\) and feature \(x\), \(aIx\) iff \(a\) has feature \(x\). Formal contexts can be thought of as abstract representations of e.g., databases, tabular data and such. Every formal context as above induces maps \(I^{(1)}:\mathcal{P}(A)\rightarrow\mathcal{P}(X)\) and \(I^{(0)}:\mathcal{P}(X)\rightarrow\mathcal{P}(A)\), respectively defined by the assignments
\[I^{(1)}[B]:=\{x\in X\mid\forall a(a\in B\Rightarrow aIx)\},\quad I^{(0)}[Y]= \{a\in A\mid\forall x(x\in Y\Rightarrow aIx)\}. \tag{1}\]
A _formal concept_ of \(\mathbb{P}\) is a pair \(c=(\llbracket c\rrbracket,(\llbracket c\rrbracket))\) such that \(\llbracket c\rrbracket\subseteq A\), \((\llbracket c\rrbracket)\subseteq X\), and \(I^{(1)}[\llbracket c\rrbracket]=(\llbracket c\rrbracket)\) and \(I^{(0)}[(\llbracket c\rrbracket)]=\llbracket c\rrbracket\). A subset \(B\subseteq A\) (resp. \(Y\subseteq X\)) is said to be _closed_ or _Galois-stable_ if \(Cl(B)=I^{(0)}[I^{(1)}[B]]=B\) (resp. \(Cl(Y)=I^{(1)}[I^{(0)}[Y]]=Y\)). The set of objects \(\llbracket c\rrbracket\) is intuitively understood as the _extension_ of the concept \(c\), while the set of features \((\llbracket c\rrbracket)\) is understood as its _intension_. The set of all formal concepts of \(\mathbb{P}\) (denoted by \(L(\mathbb{P})\)) can be partially ordered as follows: for any \(c,d\in L(\mathbb{P})\),
\[c\leq d\quad\text{ iff }\quad\llbracket c\rrbracket\subseteq\llbracket d \rrbracket\quad\text{ iff }\quad(\llbracket d\rrbracket)\subseteq(\llbracket c \rrbracket). \tag{2}\]
With this order, \(L(\mathbb{P})\) is a complete lattice, the _concept lattice_ \(\mathbb{P}^{+}\) of \(\mathbb{P}\).
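To make these definitions concrete, the following Python sketch computes the derivation operators of (1) and enumerates the concept lattice of a toy context by closing every subset of objects; the objects, features, and incidence relation are invented placeholders, and the brute-force enumeration is meant only as an illustration, not as a scalable implementation.

```python
from itertools import combinations

# Toy formal context (A, X, I); the data below is purely illustrative.
A = ["a1", "a2", "a3"]
X = ["x1", "x2", "x3"]
I = {("a1", "x1"), ("a1", "x2"), ("a2", "x2"), ("a3", "x2"), ("a3", "x3")}

def up(objs):
    """I^(1): the features shared by all objects in objs."""
    return frozenset(x for x in X if all((a, x) in I for a in objs))

def down(feats):
    """I^(0): the objects that have all features in feats."""
    return frozenset(a for a in A if all((a, x) in I for x in feats))

def concepts():
    """Enumerate all formal concepts by closing every subset of objects."""
    found = set()
    for r in range(len(A) + 1):
        for B in combinations(A, r):
            intent = up(B)        # a closed set of features
            extent = down(intent) # its closed set of objects
            found.add((extent, intent))
    return found

for extent, intent in sorted(concepts(), key=lambda c: (len(c[0]), sorted(c[0]))):
    print(sorted(extent), "|", sorted(intent))
```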
### Interrogative agendas
In epistemology and formal philosophy, interrogative agenda (or research agenda [8]) of an epistemic agent (or group of agents e.g., users) indicates the set of questions they are interested in, or what they want to know relative to a certain circumstance. Intuitively, in any context, interrogative agendas act as cognitive filters that block content which is deemed irrelevant by the agent. Only the information the agent considers relevant is used e.g. in the formation of their beliefs, or actions, etc. Deliberation and negotiation processes can be described as whether or how agents succeed and interact in shaping their interrogative agendas, and the outcomes of these processes can be described in terms of the aggregated (or "common ground") agenda. Also, phenomena such as polarization [17], echo chambers [18] and self-fulfilling prophecies [19] can be described in terms of the formation and dynamics of interrogative agendas among networks of agents.
Dealing with a classification or outlier detection problem, we may have different agendas for different aims. For example, the agenda for the classification of consumers for a grocery store based on their buying preferences is very different from the agenda of a political analyst trying to classify the same set of people based on their political inclinations. Thus, interrogative agendas play an important role in determining natural or useful categorization for a specific purpose.
### Interrogative agendas and flexible categorization
Let \(\mathbb{P}=(A,X,I)\) be a formal context. For a set of features \(Y\subseteq X\), the formal context induced by \(Y\) from \(\mathbb{P}\) is \((A,X,I\cap A\times Y)\). Given the set of all the features \(X\), the (non-crisp) interrogative agenda of an agent can be described by a mass function on \(\mathcal{P}(X)\). For an agenda represented by \(m:\mathcal{P}(X)\rightarrow[0,1]\), and any \(Y\subseteq X\), \(m(Y)\) represents the importance (or intensity of the preference) of the set of features \(Y\) according to the agenda given by \(m\). We assume that mass functions are normalized, that is,
\[\sum_{Y\subseteq X}m(Y)=1. \tag{3}\]
Any such mass function induces a probability or preference function \(p_{m}:\mathcal{R}\rightarrow[0,1]\) such that \(p_{m}((A,X,I\cap A\times Y))=m(Y)\), where \(\mathcal{R}\) is the set of all the formal contexts corresponding to the crisp agendas induced by subsets of \(X\) (i.e. the formal contexts corresponding to each \(Y\subseteq X\)).
The agendas of different agents can be aggregated using different Dempster-Shafer rules [20, 21, 22] to obtain a categorization corresponding to aggregated agendas. A logical framework for deliberation between different agents having different agendas is developed in [16]. This framework can be applied to study categorizations when different agents with different interests interact with each other for communication or joint decision making, as it is the case in auditing, community analysis, linguistics, etc. We also describe a method to approximate the importance of individual features from mass functions describing agendas by plausibility transform [23] or pignistic transformation [24], methods used in Dempster-Shafer theory to transform Dempster-Shafer mass functions to probability functions. These importance values of individual features can be useful in several different applications like feature analysis, clustering, etc.
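To illustrate these notions, the Python sketch below encodes a (purely invented) non-crisp agenda as a normalized mass function on \(\mathcal{P}(X)\), checks the normalization condition (3), and approximates the importance of individual features via the pignistic transformation, which spreads the mass of each focal set uniformly over its members; the plausibility transform could be plugged in at the same point.

```python
# Illustrative non-crisp agenda on X = {color, price, sweetness} (invented values).
X = {"color", "price", "sweetness"}
mass = {
    frozenset({"sweetness"}): 0.5,
    frozenset({"price", "sweetness"}): 0.3,
    frozenset(X): 0.2,
}

assert abs(sum(mass.values()) - 1.0) < 1e-9  # agendas are normalized, cf. (3)

def pignistic(mass):
    """Spread the mass of every focal set uniformly over its members."""
    bet = {x: 0.0 for x in X}
    for focal, m in mass.items():
        for x in focal:
            bet[x] += m / len(focal)
    return bet

print(pignistic(mass))
# e.g. sweetness = 0.5 + 0.3/2 + 0.2/3, price = 0.3/2 + 0.2/3, color = 0.2/3
```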
## 3 Classification and outlier detection using concept lattices
In this section, we give an overview of different classification and outlier detection techniques using concept lattices.
### Classification using concept lattices
Different algorithms have been applied to classify objects using formal concept analysis, that is, using concept lattices. Fu et al. [25] provide a comparison between different FCA-based classification algorithms, such as LEGAL [26], GALOIS [27], RULEARNER [28], CLNN and CLNB [29]. Prokasheva et al. [30] describe different classification algorithms using FCA and challenges to such methods.
In [31], Kuznetsov describes a classification algorithm that uses the JSM-method [32, 33]. He proposes to use concept lattices and training examples to form hypotheses as follows. Let \((A,X,I)\) be a formal context for the set of objects \(A\) and the set of features \(X\). We add an additional target feature \(x\not\in X\) for denoting a class of an object. This partitions \(A\) into three sets of objects \(A_{+}\), \(A_{-}\), and \(A_{\tau}\) consisting of objects known to have feature \(x\), objects known to not have feature \(x\), and objects for which it is unknown whether or not they have it, respectively.
Positive hypotheses for the JSM-method based on this formal context are given by the sets of features that are shared by a set of positive examples but not by any negative example. That is, a set \(H\subseteq X\) is a positive hypothesis iff \(I^{(0)}[H]\cap A_{+}\neq\emptyset\) and \(H\not\subseteq I^{(1)}[\{a\}]\) for any \(a\in A_{-}\). Negative hypotheses are defined analogously. For any object \(b\), it will be classified positively (resp. negatively) if \(I^{(1)}[\{b\}]\) contains a positive (resp. negative) hypothesis but no negative (resp. positive) hypotheses. In case \(I^{(1)}[\{b\}]\) contains both or neither, the classification is undetermined or some other method like majority voting can be used to classify \(b\). The method sketched above has been used with different modifications in many FCA-based classification algorithms [34, 35, 36]. Some classification algorithms based on FCA use concept lattices to augment other classifiers like SVM [37], Naive Bayes classifier and Nearest neighbour classifier [29] in preprocessing or feature selection. Other FCA-based classification methods include biclustering [36], and cover-based classification [38].
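The hypothesis-based rule just described can be sketched in Python as follows. For simplicity, candidate hypotheses are generated as the intents of subsets of the positive (resp. negative) examples and then filtered against the counterexamples; the context and the training split are invented, and the sketch is only meant to make the rule concrete, not to reproduce any particular implementation from the literature.

```python
from itertools import combinations

def intent(objs, context):
    """I^(1) restricted to the given context (objects, features, incidence)."""
    _, features, I = context
    return frozenset(x for x in features if all((a, x) in I for a in objs))

def hypotheses(examples, counterexamples, context):
    """Feature sets shared by some examples and contained in no counterexample's description."""
    hyps = set()
    for r in range(1, len(examples) + 1):
        for S in combinations(examples, r):
            H = intent(S, context)
            if H and not any(H <= intent([a], context) for a in counterexamples):
                hyps.add(H)
    return hyps

def classify(b, pos, neg, context):
    """JSM-style rule: positive / negative / undetermined depending on the hypotheses b contains."""
    desc = intent([b], context)
    has_pos = any(H <= desc for H in hypotheses(pos, neg, context))
    has_neg = any(H <= desc for H in hypotheses(neg, pos, context))
    if has_pos != has_neg:
        return "+" if has_pos else "-"
    return "undetermined"

# Illustrative context and training split.
A = ["o1", "o2", "o3", "o4", "o5"]
X = ["f1", "f2", "f3"]
I = {("o1", "f1"), ("o1", "f2"), ("o2", "f1"),
     ("o3", "f2"), ("o3", "f3"), ("o4", "f3"), ("o5", "f1"), ("o5", "f3")}
print(classify("o5", pos=["o1", "o2"], neg=["o3", "o4"], context=(A, X, I)))
```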
### Outlier detection using concept lattices
Outlier detection can be considered as a special case of binary classification where the classes are outlier and non-outliers. Thus, any of the above-mentioned algorithms can be used for outlier detection using concept lattices. Some other methods or algorithms based on formal concept analysis have also been studied specifically for outlier detection or similar tasks like mining rare concepts or patterns [39, 40, 41]. The simplest method to define the outlier degree of an element from a concept lattice is by using the size of its closure (i.e. the smallest category containing the element). Smaller size of closure of an object indicates that there are a small number of elements which have the same features as the object and thus it is likely to be an outlier. Sugiyama [39] suggests that the outlierness of an object in a concept lattice should not depend on the size of its closure but one must consider the number of concepts it creates. He suggests to define the outlierness score of a set of objects \(B\subseteq A\) as
\[q(B):=|\{(G,Y)\in\mathbb{P}^{+}\mid B\subseteq G\,\text{or}\,I^{(1)}[B] \subseteq Y\}|. \tag{4}\]
This definition is more suited to detect outliers that belong to a densely agglomerated cluster which locates sparsely if we overview the whole set of objects. Zhang et al. [41] propose an outlier mining algorithm based on constrained concept lattices to detect local outliers using a sparsity-based method. One of the key advantages of using formal concept analysis in classification or outlier detection over other algorithms is that FCA can be used to deal with both continuous and discrete attributes simultaneously, through the discretization of continuous attributes by conceptual scaling (Sec. 3.3).
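Assuming the concept lattice is available as a collection of (extent, intent) pairs and a derivation operator like \(I^{(1)}\) from Section 2.1 is at hand, the score in (4) and the naive closure-size degree can be computed as in the Python sketch below; both functions are illustrative stubs rather than an implementation taken from [39] or [41].

```python
def outlier_score(B, lattice, up):
    """Sugiyama-style score q(B) from Eq. (4): count the concepts (G, Y) in the
    lattice whose extent contains B or whose intent contains I^(1)[B]."""
    B = frozenset(B)
    intent_B = up(B)
    return sum(1 for extent, intent in lattice if B <= extent or intent_B <= intent)

def closure_size(B, up, down):
    """The naive alternative: size of the smallest category containing B."""
    return len(down(up(frozenset(B))))
```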
One of the major issues in applications of formal concept analysis is the complexity of the algorithms involved. The fundamental reason behind the high complexity is that in the worst-case scenario the number of categories in a concept lattice grows exponentially with the number of objects and features involved. Several techniques have been devised in the past to overcome this complexity problem [42, 43, 44].
### Discretization of continuous attributes and conceptual scaling
In order to apply formal concept analysis on attributes with continuous values, we need to discretize them. The process of converting many-valued (possibly continuous-valued) attributes
into binary attributes or features for FCA is known as conceptual scaling [45]. Scaling is an important part of most FCA-based techniques and has been studied extensively [45, 46, 47]. Choosing the correct scaling method depends on the specific task the concept lattice is used for.
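A very simple instance of such scaling, splitting one numeric attribute into low/medium/high bands and turning each band into a binary feature, can be sketched as follows; the attribute name and thresholds are invented, and real applications would choose a scale suited to the task and data.

```python
def scale_numeric(name, value, low_cut, high_cut):
    """Ordinal scaling of one numeric attribute into three binary features."""
    band = "low" if value < low_cut else ("high" if value > high_cut else "medium")
    return {f"{name}={b}": (b == band) for b in ("low", "medium", "high")}

# Example: sweetness measured on an (illustrative) 0-10 scale.
print(scale_numeric("sweetness", 7.5, low_cut=3.0, high_cut=7.0))
# {'sweetness=low': False, 'sweetness=medium': False, 'sweetness=high': True}
```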
## 4 Learning interrogative agendas
Formal concept analysis categorizes a given set of objects w.r.t. a given set of features. Thus, the outlier detection (or classification) task at hand depends on the features (or attributes) under consideration. However, in many applications it is hard to estimate which features are of importance and how important they are, that is, the correct agenda for a given task. Here we describe a machine learning framework that tries to solve this problem by learning a "good" agenda for the given task. This provides a way to improve the performance of FCA-based classification or outlier detection algorithms by choosing the correct agenda. It also makes the results more explainable by providing the importance value of each set of features.
### Space of possible agendas
As discussed in Section 2.3, an (non-crisp) interrogative agenda on a given set of features \(X\) is given by a mass function \(m:\mathcal{P}(X)\to[0,1]\), where for any \(Y\subseteq X\), \(m(Y)\) denotes the importance of the set of features \(Y\) in the categorization. The mass function \(m\) induces a probability function \(p_{m}:\mathcal{R}\to[0,1]\), where \(\mathcal{R}\) is the set of all the (crisp) formal contexts induced from \(\mathbb{P}=(A,X,I)\) by different crisp agendas i.e. subsets of \(X\). For any categorization (formal context) \(\mathbb{P}\in\mathcal{R}\), \(p_{m}(\mathbb{P})\) denotes the likelihood assigned or preference given to the categorization \(\mathbb{P}\) by the agenda \(m\). Thus, the set of all possible non-crisp categorizations (resp. non-crisp agendas) induced from a context \(\mathbb{P}\) is given by the set of all the probability functions on \(\mathcal{R}\) (resp. the set of all the possible mass functions on \(\mathcal{P}(X)\)). As discussed in the introduction, we want to learn a "good" agenda that leads to a categorization that can be used to complete a given task effectively. This corresponds to learning a probability function \(p\) on \(\mathcal{R}\) which represents a suitable categorization for the given task. That is, we use machine learning to search for a "good" function in the space of probability functions on \(\mathcal{R}\). For the sake of computational and notational convenience, here we propose the following simplifications.
Let \(\mathbb{R}\) be the set of real numbers. Let \(f:\mathcal{R}\to\mathbb{R}\) be a map assigning weight \(w_{\mathbb{L}}\in\mathbb{R}\) for every \(\mathbb{P}\in\mathcal{R}\). For any \(\mathbb{P}\in\mathcal{R}\), \(f(\mathbb{P})\) denotes the importance (or preference) assigned to the context \(\mathbb{P}\) or to the corresponding set of features \(Y\), where \(\mathbb{P}=(A,X,I\cap A\times Y)\). We call any such function \(f\) a non-crisp agenda as it gives weights (representing importance) to different sets of features. Any such function can be seen as a real-valued vector of dimension \(|\mathcal{R}|\). Thus, the set of all such functions is isomorphic to the space \(\mathbb{R}^{|\mathcal{R}|}\). As this space is linear, the shift from probability functions on \(\mathcal{R}\) to real-valued functions simplifies the task of learning an agenda (weight function) that minimizes loss using a simple gradient descent method.
The weights assigned to lattices can be interpreted as probabilities on \(\mathcal{R}\), (and hence mass functions on \(\mathcal{P}(X)\)) via normalization when all the weights are non-negative. The negative weights suggest that the corresponding categorization is opposite to the preferred categorization for the task at hand. For example, suppose we are interested in detecting elements with a value
of feature \(f_{1}\) being abnormally high, while the outlier detection method used finds outliers with value of \(f_{1}\) low. Then the learning algorithm is likely to assign a negative weight to the agenda \(\{f_{1}\}\).
As discussed earlier, one of the major problems in applications of formal concept analysis is the complexity of the algorithms involved. Here, we are proposing to consider priority (or weight) functions on a set of different concept lattices corresponding to different agendas. As the number of different (crisp) agendas induced from a set \(X\) of features is exponential in \(|X|\), this may add another exponential factor to the complexity of the algorithm. In many applications where the number of features is large, this may make the problem computationally infeasible. Thus, in most applications we need to choose a smaller set of concept lattices or (crisp) agendas as a basis, that is set of (crisp) concept lattices on which the weight functions are defined. We propose the following strategies for this choice.
1. **Choosing agendas that consist of a small number of features** In this strategy, we choose the (crisp) agendas consisting of \(\alpha\) or a smaller number of features to construct basis concept lattices for some fixed \(\alpha\ll|X|\). This is based on the idea that tasks like classification or outlier detection can be performed with good accuracy by considering only a small number of features together. This is especially the case with tasks involving epistemic components as humans use a limited number of features in combination for basic tasks like comparison and classification. As these agendas consist of a small number of features, the number of concepts in these concept lattices is small. This makes the computational complexity low for most algorithms operating on concept lattices. Thus, this method can be applied for finding agendas when the algorithms may have high computational complexity for lattices with a large number of concepts. In some situations, it may also be useful to add the full concept lattice (lattice corresponding to the full feature set \(|X|\)) to the set of basis lattices. This allows us to consider the full concept lattice with all available information for the task at hand while having the possibility of giving higher or lower (compared to other features) importance to some small subsets of features. For example, if the weights attached to all the lattices except those given by agendas \(\{f_{1}\}\) and \(X\) are close to \(0\) and the weights assigned to these agendas are similar, it corresponds to the agenda in which the set of all features and \(\{f_{1}\}\) are the only important sets of features. Thus, the concept lattice based on \(f_{1}\) alone would be of high significance.
2. **Choosing important agendas based on prior or expert knowledge** For some tasks, we may have prior or expert knowledge assigning different importance or priority to some lattices or agendas. In such cases, these lattices are taken as the set of basis lattices. This provides us a way to incorporate prior or expert knowledge with other algorithms using formal concept analysis.
3. **Choosing agendas adaptively** In this strategy, we start with a set of agendas given by all the sets consisting of less than \(\alpha\) features for some small \(\alpha\) (usually taken as 1). We use machine learning to learn weights assigned to them, and then drop all the ones which get assigned a very low weight (after normalization). We then consider agendas consisting of any set of features that is a subset of the union of agendas that are not removed in the first step. Choosing these agendas can be interpreted as considering combinations of features that are deemed important in the first learning step. We then repeat the learning
process with this new set of lattices. We keep repeating this process until all the agendas (lattices) added in the last step get assigned low weights or we reach \(|X|\) (full concept lattice). In this way, we recursively check the possible combinations of agendas deemed to be important so far in the next recursive step. This method works on assumption that if a feature is not important on its own, then it is unlikely to be part of a set of features that is important. However, this assumption may fail in several situations. In such cases, this method should not be used to choose a basis.
There can be other effective strategies for choosing basis lattices for different tasks and algorithms.
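The first and third strategies above can be sketched in Python as below: enumerate every agenda with at most \(\alpha\) features (optionally adding the full feature set), and, for the adaptive variant, repeatedly drop low-weight agendas and re-expand within the union of the survivors. The weight-learning step is the procedure of Section 4.2 and appears here only as a stand-in callable; the threshold and the stopping rule are illustrative choices.

```python
from itertools import combinations

def small_agendas(X, alpha, include_full=False):
    """Strategy (1): all agendas with at most alpha features, optionally plus X itself."""
    sets = [frozenset(c) for r in range(1, alpha + 1) for c in combinations(sorted(X), r)]
    if include_full:
        sets.append(frozenset(X))
    return sets

def adaptive_agendas(X, learn_weights, alpha=1, threshold=0.05, max_rounds=10):
    """Strategy (3): grow the basis from combinations of agendas that kept a high weight.
    `learn_weights` is a stand-in for the training loop of Section 4.2; it maps a list
    of agendas to their (normalized) learned weights."""
    basis = small_agendas(X, alpha)
    kept = basis
    for _ in range(max_rounds):
        weights = learn_weights(basis)
        kept = [Y for Y in basis if weights[Y] >= threshold]
        union = frozenset().union(*kept) if kept else frozenset()
        expanded = [frozenset(c) for r in range(1, len(union) + 1)
                    for c in combinations(sorted(union), r)]
        # Stop when no new agendas are generated or the full feature set is reached.
        if set(expanded) <= set(basis) or union == frozenset(X):
            return kept
        basis = expanded
    return kept
```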
### Learning algorithm
Once the set of possible agendas (or concept lattices) is chosen, we apply some classification or outlier detection algorithm on each of these. For every lattice \(\mathbb{L}\in\mathcal{R}\), we start assigning it a random weight \(w\in\mathbb{R}\). Let \(Alg\) be any algorithm which performs classification or outlier detection for a fixed concept lattice.
Suppose \(Alg\) is a classification (resp. outlier detection) algorithm classifying a set \(A\) of objects into \(n\) classes using concept lattices. For any object \(a\) and a class \(k\), let \(Alg_{k}(a,\mathbb{L})\) (resp. \(Alg(a,\mathbb{L})\)) denote the membership of the object \(a\) in the class \(k\) (resp. its outlier degree) according to the classification algorithm \(Alg\) acting on the lattice \(\mathbb{L}\). Notice that we allow for our classifiers (resp. outlier detection algorithms) to be interpreted as fuzzy or probabilistic, such that the membership value (resp. outlier degree) of \(a\) belongs to \([0,1]\). For an algorithm \(Alg\) with crisp output, the value \(Alg_{k}(a,\mathbb{L})\) (resp. \(Alg(a,\mathbb{L})\)) will be either \(0\) or \(1\). For a given weight function \(w:\mathcal{R}\rightarrow\mathbb{R}\), we say that the membership of \(a\) in the class \(k\) (resp. outlier degree of \(a\)) assigned by the algorithm \(Alg\) acting on a non-crisp categorization described by \(w\) is
\[out_{k}(a,w)=\frac{\sum_{\mathbb{L}\in\mathcal{R}}w(\mathbb{L})Alg_{k}(a, \mathbb{L})}{\sum_{\mathbb{L}\in\mathcal{R}}w(\mathbb{L})}. \tag{5}\]
Intuitively, this corresponds to taking the weighted sum of the results given by \(Alg\) on the lattices, with weights provided by the agenda \(w\). Let \(loss\) be a loss function for a given classification task, and let \(loss(out)\) be the total loss for the classification (resp. outlier detection) when classes (resp. outlier degrees) are assigned by \(out_{k}(a,w)\) (resp. \(out(a,w)\)). We use a gradient descent method to learn the agenda \(f_{0}\) that minimizes the loss. We then use the learnt agenda \(f_{0}\) to assign a class to an object; that is, for any test object \(b\), its predicted membership in class \(k\) (resp. outlier degree) is \(out_{k}(b,f_{0})\) (resp. \(out(b,f_{0})\)).
A generic algorithm for outlier detection can be given in a similar manner.
### Example
Let us consider the following toy data table providing some information on different types of apples. It contains information on the color, volume, sweetness, price, and origin (local or not) of the apples. We assume that all apples under consideration are either green or red. For conceptual scaling, we divide sweetness, price, and volume into low, medium, and high. This converts these continuous-valued attributes into discrete-valued ones. The set of features is obtained by considering each value of each attribute as a separate feature.
```
1: Input: a set of objects \(A\), a set of features \(X\), a training set \(T\subseteq A\), and a map \(y:T\to C\) representing the labels on the training set; an algorithm \(Alg\) that takes as input an object and a concept lattice in \(\mathcal{R}\), and outputs an element of \(\mathbb{R}^{C}\) representing its prediction for each class; a loss function \(loss\) that compares two classifications and outputs a real number; and a number of training epochs \(M\).
2: Output: a model that classifies objects in \(A\).
3: procedure Train(\(A\), \(X\), \(T\), \(y\), \(Alg\), \(loss\), \(M\))
4:     \(\mathbb{L}_{1},\ldots,\mathbb{L}_{n}\leftarrow\) compute the concept lattices of \(\mathcal{R}\)
5:     let \(predictions\) be an empty map from \(A\) to \(\mathbb{R}^{C}\)
6:     let \(w\) be an array of weights of length \(n\) initialized with random values in \(\mathbb{R}\)
7:     for \(e=1,\ldots,M\) do
8:         for \(a\in T\), \(k\in C\) do
9:             \(predictions[a][k]\leftarrow\frac{\sum_{i=1}^{n}w(\mathbb{L}_{i})Alg_{k}(a,\mathbb{L}_{i})}{\sum_{i=1}^{n}w(\mathbb{L}_{i})}\)
10:        end for
11:        update \(w\) with an iteration of gradient descent using \(loss(predictions)\)
12:    end for
13: end procedure
```
**Algorithm 1** Meta-Learning Algorithm for Interrogative Agendas
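A minimal NumPy realisation of this training loop is sketched below. It assumes the per-lattice outputs \(Alg_{k}(a,\mathbb{L}_{i})\) have been pre-computed into an array, uses a mean-squared-error loss against one-hot labels, and takes the gradient of the weights numerically for brevity; the base algorithm, the loss, and these implementation choices are not fixed by the method and are stated here only as assumptions.

```python
import numpy as np

def train_weights(preds, labels, epochs=200, lr=0.1, eps=1e-6):
    """preds:  array of shape (n_lattices, n_objects, n_classes) with Alg_k(a, L_i);
    labels: one-hot array of shape (n_objects, n_classes) for the training objects.
    Returns the learned weight vector over the basis lattices."""
    n = preds.shape[0]
    w = np.random.rand(n) + 0.1  # random initial weights, kept positive to avoid a vanishing normaliser

    def aggregate(w):
        # Weighted average over the basis lattices, cf. Eq. (5).
        return np.tensordot(w, preds, axes=1) / (w.sum() + 1e-12)

    def loss(w):
        return np.mean((aggregate(w) - labels) ** 2)

    for _ in range(epochs):
        grad = np.zeros(n)
        for i in range(n):                      # numerical gradient, for brevity
            d = np.zeros(n); d[i] = eps
            grad[i] = (loss(w + d) - loss(w - d)) / (2 * eps)
        w -= lr * grad
    return w

def predict(w, preds_new):
    """Classify unseen objects with the learned agenda (argmax over classes)."""
    scores = np.tensordot(w, preds_new, axes=1) / (w.sum() + 1e-12)
    return scores.argmax(axis=1)
```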
Let \(A\) and \(X\) be the set of all types of apples and features respectively. The (non-crisp) agendas of interest to us are the ones assigning mass to an attribute and not to an individual feature. That is, we consider basis lattices corresponding to feature sets that contain all the values for a given many-valued attribute. As an example, if a \(Y\subseteq X\) in the agendas corresponding to a basis lattice contains the feature high volume, then it must also contain the features low and medium volume. We use volume to denote the set of features {high volume, low volume, medium volume}. A similar convention is used for the other attributes as well.
Let \(I\subseteq A\times X\) be the incidence relation and let \(P\) be a customer. Suppose we are interested in classifying apples into types customer \(P\) likes (class 1) and does not like (class 2). Given a formal context (concept lattice) \(\mathbb{P}=(A,X,I\cap A\times Y)\), describing a categorization of these types of apples for a given agenda of interest \(Y\), we use the following simple algorithm to predict the class for a new type of apple. Let \(A_{+}\) and \(A_{-}\) be the set of apples known to be in class 1 and class 2 respectively (from the training set). A set of features \(H\subseteq Y\) is said to be a
| **Type** | **Color** | **Volume** | **Sweetness** | **Local** | **Price** |
| --- | --- | --- | --- | --- | --- |
| 1 | red | High | High | Yes | Medium |
| 2 | green | High | High | Yes | Medium |
| 3 | red | Medium | Medium | Yes | Medium |
| 4 | green | Low | High | No | Medium |
| 5 | green | High | Medium | No | Low |
| 6 | red | Medium | Low | Yes | Low |
| 7 | green | High | Medium | Yes | Low |
| 8 | green | High | Medium | Yes | High |

Table 1: Exemplary data table containing the information on different types of apples w.r.t. attributes such as color, volume, sweetness, et cetera.
positive (resp. negative) hypothesis w.r.t. a lattice \((A,X,I\cap A\times Y)\) iff \(H\) is Galois-stable, \(I^{(0)}[H]\) is non-empty, and \(I^{(0)}[H]\cap A_{-}=\emptyset\) (resp. \(I^{(0)}[H]\cap A_{+}=\emptyset\)). For any new element \(t\), we put it in class \(1\) (resp. class \(2\)) if the category \(I^{(1)}[\{t\}]\) contains only positive (resp. negative) hypotheses. The algorithm is inconclusive when it contains neither type of hypothesis (no information) or contains both types of hypotheses (inconsistent information).
Suppose the classification of apples of types 1-8 into classes 1 and \(2\) for customer \(P\) is \(Class\,1=\{1,2,4,5,7\}\) and \(Class\,2=\{3,6,8\}\), and suppose also that we use the full concept lattice (that is, agenda \(Y=X\)). Let \(t_{0}\) be a new type of apple that is green, has high volume, high sweetness, is local, and has a high price. Consider hypotheses \(H_{1}=\{\)High sweetness\(\}\) and \(H_{2}=\{\)Green, local\(\}\) which are both contained in \(I^{(1)}[\{t_{0}\}]\). The hypothesis \(H_{1}\) is positive while \(H_{2}\) is negative. Thus, the above classification algorithm cannot classify this object as the available information is inconsistent. However, in many cases, some subsets of features are of much more importance to a customer than others. For example, from the above classification, it is hinted that the customer \(P\) considers Sweetness and Price as more important features than color or location. Our algorithm for learning agendas can help to recognize this difference in importance of different sets of features and allow us to classify such elements.
Suppose we use our method to find the best categorization (or agenda) for the completion of this task using the above classification algorithm with the basis lattice consisting of lattices given by agendas comprising of one attribute (as discussed earlier, one attribute can correspond to multiple features due to scaling). We start with random weights assigned to each of these lattices. We then use the classification algorithm described above to classify new types of apples into classes \(1\) and \(2\) using each of these lattices. We then sum over the weights of lattices in which elements are assigned to either class. The new object is assigned to the class which has a higher mass (the algorithm is indecisive if such a class does not exist). We use machine learning (gradient descent) to train the algorithm to find the best weights for this classification task.
During the training phase, our algorithm can (generally) learn that the attribute (or set of features) {sweetness} matters much more than other features to the customer \(P\). Thus, a high weight will be attached to the lattice with agenda {sweetness}. Hence, the above algorithm in combination with our method assigns \(t_{0}\) to class \(1\). Adding this method on top of a classification algorithm may give a better classification (that is, more elements classified correctly with a given amount of training samples) given that our learnt information 'sweetness is much more important for \(P\) in decision-making' is true.
Similarly, higher (resp. lower) masses attached to agendas consisting of different sets of a single attribute are helpful in better categorization when this attribute is more (resp. less) important for the customer. Thus, using machine learning techniques (for example, gradient descent when possible) to learn the best possible agenda to provide categorization to complement the classification algorithm can improve its accuracy with less training. Considering more basis lattices may further improve the accuracy and sample complexity. For example, it can be seen that the types 5 and 7, which have medium sweetness and low price, belong to class \(1\). This provides us with another likely hypothesis that the customer likes apples whose sweetness is medium (not necessarily high) but whose price is low. This hints to us that the agenda {sweetness, price} may be of significant importance to the customer. In case this agenda is indeed more important to the agent, the learning would assign it a high weight during training and thus allow us to make more accurate predictions with fewer samples. However, an increasing
number of basis lattices may increase computational complexity significantly, meaning that such a decision needs to be made judiciously.
This simple example shows that the classification algorithm described above can be improved in terms of accuracy, sample complexity, and explainability by adding a learning step for finding out the best agenda for categorization. Adding this step to the different algorithms discussed in Section 3, used for classification and outlier detection using concept lattices can improve these algorithms in a similar manner. This is especially the case for the tasks in which the importance of different features may be hard to estimate beforehand. The obtained agendas can be defined formally using the logical framework described in [16]. In that paper, a logical model was used to represent deliberation between agents with different agendas. This framework can also be used to model deliberation or interaction between different learning algorithms by aggregating learnt agendas using different techniques described in [16]. The agendas inferred by our learning algorithm can be used for further tasks like aggregation from different sources. For example, if for two different classifiers the agendas learned are given by the mass functions \(m_{1}\) and \(m_{2}\) on \(\mathcal{P}(X)\), then a combined classifier that takes both into account can be obtained by choosing the agenda \(F(m_{1},m_{2})\), where \(F\) is a suitable Dempster-Shafer combination rule [21, 48, 22], and then applying the classification algorithm to the resulting lattice.
## 5 Conclusion and future directions
In this paper, targeting the explainability line of hybrid intelligence research [1], we proposed a meta-learning algorithm to learn a "good" (interrogative) agenda for categorization (which is used by a potential FCA-based classification or outlier detection algorithm). Adding such a learning step to a given algorithm allows us to improve the accuracy and sample complexity of the procedure while also making it more explainable. On the empirical side, a performance evaluation and the ablation study on the results of employing different FCA-based classification and outlier detection algorithms is an avenue of future research. Another investigation line is the transferability analysis of "good" agendas e.g., how much knowledge do we transfer and how good the data efficiency is when such an agenda is used on previously unseen environments/categorizations. Noteworthy is extending this methodology towards other interesting application domains such as knowledge discovery, data visualization, information retrieval, etc.
On the theoretical side, this framework can be used to model deliberation between agendas learnt from different algorithms, providing us a way to study their interaction, comparison, or combination. Within the interpretation of taking the concept lattice as expert knowledge, the learnt agendas can also be aggregated or compared with agendas of different experts allowing us to incorporate learning and expert knowledge in categorization. From a multiagent systems perspective, it is especially useful to model subjective categorizations involving multiple agents (human experts and algorithms) with different agendas or goals interacting with each other. In future work, we are considering investigation in a variety of directions e.g., investigating desirable properties of various aggregation mechanisms, representational power such as proportionality and fairness of induced agendas for multiple parties, convergence and robustness guarantees for the "good" agendas, computational complexity analysis on hard and easy cases for (non-)crisp agendas, and extending our method on a more general framework in order to
tackle the problem of feature selection in a uniform way.
The meta-algorithm described in the present paper is currently employed in the development of an outlier detection algorithm with good results. Currently, it has been tested on the datasets from the ELKI toolkit [49] and it has been compared against the algorithms discussed in it. A detailed report of the results will be available in the future.
## Acknowledgments
Erman Acar is generously supported by the Hybrid Intelligence Project which is financed by the Dutch Ministry of Education, Culture and Science with project number 024.004.022. Krishna Manoorkar is supported by the NWO grant KIVI.2019.001 awarded to Alessandra Palmigiano.
|
2301.06180 | Secure Video Streaming Using Dedicated Hardware | Purpose: The purpose of this article is to present a system that enhances the
security, efficiency, and reconfigurability of an Internet-of-Things (IoT)
system used for surveillance and monitoring. Methods: A Multi-Processor
System-On-Chip (MPSoC) composed of Central Processor Unit (CPU) and
Field-Programmable Gate Array (FPGA) is proposed for increasing the security
and the frame rate of a smart IoT edge device. The private encryption key is
safely embedded in the FPGA unit to avoid being exposed in the Random Access
Memory (RAM). This allows the edge device to securely store and authenticate
the key, protecting the data transmitted from the same Integrated Circuit (IC).
Additionally, the edge device can simultaneously publish and route a camera
stream using a lightweight communication protocol, achieving a frame rate of 14
frames per Second (fps). The performance of the MPSoC is compared to a NVIDIA
Jetson Nano (NJN) and a Raspberry Pi 4 (RPI4) and it is found that the RPI4 is
the most cost-effective solution but with lower frame rate, the NJN is the
fastest because it can achieve higher frame-rate but it is not secure, and the
MPSoC is the optimal solution because it offers a balanced frame rate and it is
secure because it never exposes the secure key into the memory. Results: The
proposed system successfully addresses the challenges of security, scalability,
and efficiency in an IoT system used for surveillance and monitoring. The
encryption key is securely stored and authenticated, and the edge device is
able to simultaneously publish and route a camera stream, feeding high-definition
images at 14 fps. | Nicholas Murray-Hill, Laura Fontes, Pedro Machado, Isibor Kennedy Ihianle | 2023-01-15T20:24:46Z | http://arxiv.org/abs/2301.06180v2 | # Secure Video Streaming Using Dedicated Hardware
###### Abstract
The purpose of this article is to present a system that enhances the security, efficiency, and reconfigurability of an Internet-of-Things (IoT) system used for surveillance and monitoring. A Multi-Processor System-On-Chip (MPSoC) composed of Central Processor Unit (CPU) and Field-Programmable Gate Array (FPGA) is proposed for increasing the security and the frame rate of a smart IoT edge device. The private encryption key is safely embedded in the FPGA unit to avoid being exposed in the Random Access Memory (RAM). This allows the edge device to securely store and authenticate the key, protecting the data transmitted from the same Integrated Circuit (IC). Additionally, the edge device can simultaneously publish and route a camera stream using a lightweight communication protocol, achieving a frame rate of 14 frames per Second (fps). The performance of the MPSoC is compared to a NVIDIA Jetson Nano (NJN) and a Raspberry Pi 4 (RPI4) and it is found that the RPI4 is the most cost-effective solution but with lower frame rate, the NJN is the fastest because it can achieve higher frame-rate but it is not secure, and the MPSoC is the optimal solution because it offers a balanced frame rate and it is secure because it never exposes the secure key into the memory. The proposed system successfully addresses the challenges of security, scalability, and efficiency in an IoT system used for surveillance and monitoring. The Rivest-Shamir-Adleman (RSA) encryption key is securely stored and authenticated, and the edge
device is able to simultaneously publish and route a camera stream, feeding high-definition images at 14 fps. The proposed system enhances the security, efficiency, and reconfigurability of an IoT system used for surveillance and monitoring. The RSA encryption key is effectively protected by the implementation of an MPSoC with Programmable Logic (PL), which also allows for faster processing and data transmission.
IoT, FPGA, Graphics Processor Unit (GPU), PYNQ, MPSoC, security streaming, edge computing
## 1 Introduction
Internet-of-Things (IoT) has become a central focus in the technology industry, with the development of smart devices that collect and exchange data over the internet. These devices, including sensors, actuators, and other IoT-enabled technologies, are used for a variety of purposes such as analysis, processing, and automation. It is estimated that there will be approximately 75 billion IoT devices in use by 2025 [1]. The use of IoT technology has expanded into various fields including smart energy, industrial factories, transportation, and home automation.
However, the widespread integration of IoT systems in our daily lives also leads to an increase in the amount of data being collected and exchanged over the internet, raising concerns about scalability and the protection of sensitive and private data. Many IoT systems rely on centralised architectures, which process and perform security operations on the cloud [2]. These architectures have limitations such as scalability, transaction speeds, interoperability, and privacy/security [3], and may also be a single point of failure in the event of a breach. Centralised servers that manage keys and act as a single trust authority can pose a significant security risk to other devices on the network, as these keys often form the foundation of security systems, cryptography algorithms, and device authentication/verification [2].
To address these issues, distributed infrastructures have emerged, allowing modern IoT edge devices to perform their own processing and transmit data at the edge. These devices feature robust security mechanisms for secure authentication and secret key storage, which can be used for encryption in a secure, hardware-enforced manner to improve end-device security for IoT systems and data integrity [4]. Field-Programmable Gate Array (FPGA) offer such security features through the use of programmable logic and reconfigurability after manufacturing. This reconfigurability allows devices to be updated in order to keep up with the constantly evolving technology landscape and emerging security threats.
The main contribution of this article is the development and implementation of a secure, efficient, and reconfigurable surveillance and monitoring IoT
system using dedicated hardware. By utilising the Multi-Processor System-On-Chip (MPSoC)'s Programmable Logic (PL) to securely store and authenticate a symmetric key, the proposed system improves the security and integrity of data transmitted from an edge device. Additionally, the use of MPSoCs allow the edge device to simultaneously publish and route a camera stream using a lightweight communication protocol, achieving a high capture rate. The proposed system addresses many of the challenges faced by current IoT systems, including scalability, security, and efficiency.
The remainder of this article is organised as follows: relevant literature and related works to the proposed IoT system proposed in this article is reviewed in Section 2, the methodology used to develop and implement the proposed system is described in Section 3, the results and analysis of the proposed system are presented in Section 4, and finally, the conclusion and future work are discussed in Section 5.
## 2 Related Work
The IoT is a technology that aims to provide an infrastructure for applications that can coordinate the interaction of people, things, and systems for a specific purpose [5]. These applications do not necessarily have a universally adopted standard, but the architectural model of an IoT system typically consists of three main layers: the perception layer, which uses sensors and microcontrollers to perceive the physical environment; the communication or network layer, which processes and transports data; and the application layer, which uses the data to deliver application-specific services to the user [1]. The communication/network layer uses various technologies to package and transmit data due to the processing and bandwidth restrictions of many IoT devices. Hypertext Transfer Protocol (HTTP), which is commonly used for communication between devices on the internet, is not suitable for low-powered IoT devices due to its fully connection-oriented architecture, large header size, and latency [6; 7]. Established communication protocols that are more suitable for these power and bandwidth-restricted IoT requirements include Constrained Application Protocol (CoAP), Message Queueing Telemetry Transport (MQTT), and Extensive Messaging and Presence Protocol (XMPP). Corak et al. [8] evaluated and compared the performance of these protocols in a real-world IoT testbed. The metrics considered were packet creation time and packet delivery speed to determine the delay differences. The study found that XMPP had the worst performance due to its use of Extensible Markup Language (XML) format, which increased latency. MQTT and CoAP had similar overall performance in terms of packet creation and transmission time, but MQTT was found to be more optimised and standardised. In addition to these protocols, wireless technologies such as Low Range (LoRa), Low Range Wide Area Network (LoRaWAN), and Low-Power Wide Area Network (LPWAN) can be used to enable long-range and low-power communications for IoT devices [9].
These wireless protocols are designed to provide low-power, wide-area networks, making them ideal for use cases where devices need to transmit small amounts of data over long distances, such as in agriculture, smart cities, and industrial applications [9]. Van der Westhuizen and Hancke [10] conducted a more in-depth comparison between CoAP and MQTT to determine which was the most suitable for use with constrained devices, specifically sensors. The comparison considered communication delay and network traffic. Both protocols were found to be good choices for resource-constrained devices, with similar performance and response times. However, the most suitable protocol depended on the overall requirements of the system. CoAP was found to be the optimal choice for interfacing with business systems, due to its small average packet sizes and minimal battery/data usage. MQTT, on the other hand, was found to be the preferred solution for systems such as home automation and sensor networks, where device heterogeneity is more pronounced. MQTT was easier to configure for new devices and had the most effective data flow thanks to its publish/subscribe model and use of Quality of Service (QoS).
### Edge computing
Edge computing is a network architecture that involves processing sensory (e.g. visual data) data closer to the source, rather than on the cloud [11]. This allows for fast processing and efficient handling of data intensive operations in real-world scenarios such as the IoT. While edge computing can offer benefits for IoT systems, there are also limitations in terms of security. Khan et al. [11; 12] found that further development is needed in areas such as authentication and access control, and that tamper-proof architectures may be one solution to addressing these security issues. However, securing large scale and time-critical IoT systems can also be challenging due to the cost of methods such as encryption in terms of latency, energy consumption, and network bandwidth [13]. Additionally, the heterogeneity of devices that communicate across these networks without a well-established protocol can also pose challenges [1]. Fortunately, professionals in the field are working to overcome these limitations and improve the safety and efficiency of communication between IoT devices.
### FPGA technology
FPGAs are specialised hardware devices that have gained popularity in the edge computing space due to their ability to solve problems through reconfigurable hardware circuits. These circuits can be described using Hardware Description Languages (HDL) such as Verilog and Very High Speed Integrated Circuit Hardware Description Language (VHDL), and are made up of various logic units such as look-up tables, flip-flops, and multiplexers. FPGAs offer several benefits for security, parallel computing, and flexibility to update hardware designs after deployment [14; 15; 16]. They have also been advanced through the use of System-on-Chip (SoC), which integrate programmable logic with
real-time processors. An example of this is the AMD-Xilinx Zynq Ultrascale+ MPSoC1, which includes an Advanced Reduced Instruction Set Computing Machine (ARM) CPU, programmable logic, and units for graphics and video processing. While FPGAs and SoCs have similarities with microcontrollers, FPGAs offer advantages in physical and cybersecurity through encrypted bitstreams and key loading mechanisms, and can act as a Root of Trust (RoT) by holding security private keys and critical algorithms. FPGAs also show greater efficiency in processing algorithms for image processing and video transcoding due to their parallel computing capabilities.
Footnote 1: Available online, [https://www.xilinx.com/products/silicon-devices/soc/zynq-ultrascale-mpsoc.html](https://www.xilinx.com/products/silicon-devices/soc/zynq-ultrascale-mpsoc.html), last accessed 07/01/2023
While FPGAs offer significant advantages for IoT, they are considered complex due to the low-level hardware knowledge required, such as VHDL and Verilog. To address this, FPGA vendors have been promoting the use of high-level design flows and tools that allow for the creation of Register Transfer Level (RTL) designs using high-level languages like C, C++, System C and Open Computer Language (OpenCL). However, the question remains as to how well these high-level designs compare to manually written RTL designs in terms of optimisation. Guo et al. [17] discussed that while High Level Synthesis (HLS) may not be as optimised as manually written RTL designs for complex designs, the use of directives like loop unrolling and loop merging and pipelining can significantly improve resource utilisation, reduce latency, increase resource sharing, and optimize logic for video processing algorithms. These findings suggest that FPGA technology can be more accessible to designers without strong low-level hardware knowledge, while still maintaining good performance.
In summary, the reviewed articles have demonstrated the various considerations and challenges faced in the design and implementation of an IoT system. Communication protocols such as MQTT and CoAP have been shown to be effective in resource constrained environments, but the choice between them ultimately depends on the specific requirements of the system. Edge computing has the potential to improve the efficiency and security of IoT networks, but also comes with its own limitations that require further development. FPGA technology offers advanced security and parallel processing capabilities for IoT, but can be complex to implement. High level synthesis tools, such as the AMD-Xilinx Vivado HLS2, have been shown to improve the productivity and performance of FPGA designs for real time image processing applications, but may not always be as optimised as manually written designs. These findings highlight the importance of carefully evaluating the various technologies and approaches available for a particular IoT system in order to ensure optimal performance and security.
## 3 Methods
The proposed IoT system utilises the Ultra96-V2 Development Board (Ultra96) equipped with a powerful AMD-Xilinx Zynq UltraScale+ MPSoC ZU3EG device as the main processing system at the perception layer. The performance of the Ultra96 was compared to a NVIDIA Jetson Nano (NJN) and a Raspberry Pi 4 (RPI4) under the same testing conditions.
Footnote 3: Available online, [https://www.xilinx.com/content/dam/xilinx/imgs/products/zynq/zynq-eg-block.PNG](https://www.xilinx.com/content/dam/xilinx/imgs/products/zynq/zynq-eg-block.PNG), last accessed 07/01/2023
To establish a fair comparison, each processing device (i.e. Ultra96, RPI4 and NJN) runs an MQTT client to publish data from its connected Universal Serial Bus (USB) webcam to an MQTT broker, which acts as an intermediary to route the data to interested parties. The camera feed is then displayed on a Node-RED4 dashboard at the application layer for subscribers to view. The use of MQTT and the Node-RED dashboard allows for efficient and flexible communication and data management within the system. The system also implements security measures, such as bitstream authentication, to protect against potential attacks. Overall, the proposed IoT system utilises a variety of technologies to coordinate the interaction of people, things, and systems for a specific purpose (see Figure 1).
Footnote 4: Available online, [https://nodered.org/](https://nodered.org/), last accessed 07/01/2023
The Avnet Ultra96 is powered by an AMD-Xilinx Zynq UltraScale+ MPSoC device that combines an ARM processor and an FPGA. The Ultra96 is energy efficient and performs well because designated processors are responsible for specific tasks. The Processor System (PS) in the AMD-Xilinx Zynq UltraScale+ MPSoC runs an ARM64v8 Linux environment for running a web server while also interfacing with the programmable logic via the Advanced eXtensible Interface 4 (AXI4) for user authentication and on-field reconfigurability.
Figure 1: Architecture Diagram
ARM64v8 is a version of the ARM architecture that supports 64-bit instructions. It is used in some 64-bit ARM processors, such as those used in the Avnet Ultra96.
The NJN is a small, powerful computer designed for use in image and video processing applications. It is powered by a quad-core ARM Cortex-A57 CPU and a 128-core NVIDIA Maxwell GPU, running on an ARM64v8 Linux environment. Programming the NJN is typically done using the NVIDIA JetPack SDK (NJPSDK), which includes a Linux-based development environment and a variety of software libraries for working with the CPU and GPU. Open Computer Vision (OpenCv) compiled with the CUDA library was used to achieve maximum performance when processing visual data.
Footnote 5: Available online, [https://developer.nvidia.com/embedded/jetson-nano-developer-kit](https://developer.nvidia.com/embedded/jetson-nano-developer-kit), last accessed 15/01/2023
The RPI4 is a low-cost, single-board computer designed for educational and hobbyist use. It is powered by a Broadcom BCM2711, quad-core Cortex-A72 (ARMv8) 64-bit SoC @ 1.5GHz and runs on a Linux-based operating system, typically Raspbian. OpenCv was used to achieve maximum performance when processing visual data. The RPI4 is known to be a cost-effective solution, but with a lower frame rate compared to the other devices.
Footnote 6: Available online, [https://www.raspberrypi.com/products/raspberry-pi-4-model-b/](https://www.raspberrypi.com/products/raspberry-pi-4-model-b/), last accessed 15/01/2023
The IoT system was not initially configured with any software or input sensors, so a USB webcam was connected to the USB 3.0 Type A port to capture the camera feed. The Micro-B upstream port was used to connect to a host workstation on the same local network, although the board also supports WiFi connectivity. The device was booted using a Micro Secure Digital (uSD) card that loaded the Python productivity for Zynq (PYNQ) framework. This open-source framework allows developers to program AMD-Xilinx UltraScale+ Zynq FPGA devices, such as the one used in this work, using Python. Furthermore, PYNQ was designed to be used in embedded systems and provides a set of libraries, drivers and Jupyter notebooks to enable easy programmability of FPGAs through high-level programming languages like Python. This setup allows for the physical connection and control of the camera feed, which can be streamed in the proposed system.
### PYNQ Framework
Booting the device required software to be loaded onto an SD card. In order to leverage the security and parallel hardware execution benefits of the Ultra96-V2 programmable logic, the approach was to use the PYNQ framework version 2.6. This platform features a Linux operating system, along with the Python software package and a Jupyter web server, for developing solutions on the board with rapid on-field development and reconfigurability over a network. The image should be flashed onto a uSD card with a capacity of at least 16GB and inserted into the board. Once powered on, the board can be accessed by connecting a USB Micro-B cable to a host PC or by setting up a WiFi connection on the local network. The board is configured with the default IP address of 192.168.3.1, which allows access to the locally hosted Jupyter web server.
### Prerequisites
To develop and execute the various components which run on the board, there are prerequisites that should be installed during the setup phase. OpenCV version 4.5.1 for Python is used to retrieve frames from an input device, such as a webcam or IP camera. NumPy version 1.16.0 is also used within the Jupyter notebook to manipulate data structures, such as arrays; it was used to read and write the user credentials file. An MQTT broker should also be installed to route the data between publisher and subscribers, so Mosquitto MQTT version 1.4.15 was installed from Ubuntu's open-source universe repository. Finally, to create an MQTT client to publish the stream from the embedded processing system, the Paho-MQTT Python package was installed. At this point, all the prerequisites to develop and execute the system are in place on the PYNQ Linux environment. A detailed list of the tools and equipment used in the system can be found in Table 1.
Footnote 9: Available online, [https://opencv.org/](https://opencv.org/), last accessed: 05/05/2022
Footnote 10: Available online, [https://numpy.org/](https://numpy.org/), last accessed: 05/05/2022
Footnote 11: Available online, [https://mosquitto.org/](https://mosquitto.org/), last accessed: 05/05/2021
Footnote 12: Available online, [https://pypi.org/project/paho-mqtt/](https://pypi.org/project/paho-mqtt/), last accessed: 05/05/2022
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Resource & Type & Description \\ \hline Avnet Ultra96 & Equipment & System-on-Chip device for authenticating the user key and running the various components of the project \\ \hline NJN & Equipment & Embedded system capable of achieving higher frame rates and running the various components of the project \\ \hline RPI4 & Equipment & Low-cost embedded system, equipped with 2GB of DDR, ideal for reducing costs and running the various components of the project \\ \hline \end{tabular}
\end{table}
Table 1: List of equipment and tools
### Designing the Secure Bitstream of the Ultra96
To secure the system, a bitstream file was created to provide confidentiality through a 256-bit secret key and a method of authentication. The key is described in the FPGA logic and is embedded within the FPGA fabric, allowing the system to store the private key securely and prevent it from being exposed in RAM. The system employs a pair of keys, consisting of a public and a private key, to ensure secure message authentication. Because the private key is stored in the FPGA logic, the proposed method is safer than other authentication methods that store private keys externally. The use of a private key stored in the FPGA logic ensures that only authorised devices with the corresponding public key are granted access to the system and camera stream. This is achieved through a secure authentication process in which the authorised device sends an authentication message encrypted with the public key; the FPGA then decrypts the message using the private key stored in its logic to confirm that the device is authorised, providing an additional layer of security and making the system more resistant to unauthorised access. The high-level design flow was used to develop the hardware design at a higher level of abstraction using C/C++ code, which was then converted into optimised RTL code by a compiler. Custom Intellectual Property (IP) cores were also included in the design and interfaced with via the PYNQ framework to run on the Ultra96 programmable logic.
The process of building the authentication IP core begins with the AMD-Xilinx Vivado HLS software. Using the HLS software, the top-level function was written in C, containing a secret key, an authentication method to compare the valid key with the input key, and the required I/O ports for the PS to interface with the IP block. In this case there were two ports: the key, with a size of 256 bits, and the authentication result, a single-bit boolean value representing whether the input key was valid. Because these ports are relatively small, the AXI4-Lite protocol was utilised for the Ultra96 processing system to interface with the IP block, as this is generally a suitable design choice for smaller data transfers. To optimise the design, various loop and array optimisation directives were tested to reduce the estimated clock time and the maximum number of clock cycles. By using the pipeline _Pragma_ in the loop that compares the keys, the maximum number of clock cycles was reduced from 64 to 34. This directive works by reducing the initiation interval of the loop, allowing concurrent execution of its operations. To verify the output of the top-level function, a test bench was written; this test bench was used by the HLS tool during C simulation, synthesis and C/RTL co-simulation to validate that the produced RTL was functionally identical to the C code, confirming that the IP worked as intended before being packaged and exported. The timing and latency summary for the IP is presented in Figure 2.
Footnote 13: Available online, [https://www.xilinx.com/products/design-tools/vivado.html](https://www.xilinx.com/products/design-tools/vivado.html), last accessed: 05/05/2022
After verification, the custom IP block was exported and could be used as part of the wider system by importing it into the AMD-Xilinx Vivado Design Suite. This tool includes an IP Integrator, which was used to build the hardware design by integrating the custom IP block with IPs available in the AMD-Xilinx IP catalogue. A block diagram, shown in Figure 3, was generated using the Vivado tool, containing the Zynq UltraScale+ MPSoC block, which represents the processor of the Ultra96 and configures clocks, peripherals, and other settings. To transfer the authentication data between the PS and the custom IP, a single memory-mapped ARM eXtensible Interface (AXI) master and AXI slave interconnect was included. The reset signals were handled by the Processor System Reset IP block.
The completed design was then simulated, synthesised, and implemented to generate a bitstream. The bitstream, the .hwh file, and the driver file were transferred to the Ultra96 device and imported using the PYNQ Overlay class. This process allows the custom IP block to be used in the system to provide secure authentication.
The IP block is lightweight and only uses a small portion of the programmable logic resources on the device. Figure 4 shows the resource utilisation.
Figure 3: IP Integrator Block Design
Figure 2: Timing and latency summary. The estimated time is 2.88ns with an uncertainty of 1.25ns, which is below the target clock of 10ns. In terms of latency, both the maximum and minimum values are 34 clock cycles or 0.34us.
PYNQ is accessed through a local Jupyter web server at 192.168.3.1. It allows the execution of Python packages and libraries on the Ultra96 board. The PYNQ Overlay class can be used to view and interface with the PL of the Ultra96 using the previously created bitstream and default overlay driver to access the IP's ports configured in the drivers file. The authentication result is retrieved using three specific addresses: the start control signal (0x000), the offset of the input key port (0x080), and the offset of the data out port (0x100). The user's symmetric key can be loaded into the input key port a 4-byte integer at a time. The start control signal is set to high to start the IP and the authentication output is read from the data out port. If the key is valid, the camera is initialised and published to the MQTT broker.
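As a concrete illustration of this driver interaction, the following is a minimal PYNQ sketch. The overlay file name `auth.bit` and the IP instance name `auth_0` are placeholders (the real names come from the exported bitstream and .hwh file), and a production driver would also poll the IP's done bit before reading the result.

```python
# Minimal sketch of the register-level access described above (assumed names).
from pynq import Overlay

CTRL_OFFSET = 0x000   # start control signal (ap_start)
KEY_OFFSET  = 0x080   # input key port: 256 bits = eight 32-bit words
OUT_OFFSET  = 0x100   # authentication result (single-bit boolean)

overlay = Overlay("auth.bit")   # loads the bitstream and its .hwh metadata
auth_ip = overlay.auth_0        # default MMIO-based driver for the custom IP

def authenticate(key_bytes: bytes) -> bool:
    """Load a 256-bit key one 4-byte word at a time, start the IP and
    read back the single-bit result."""
    assert len(key_bytes) == 32, "the key must be exactly 256 bits"
    for i in range(0, 32, 4):
        word = int.from_bytes(key_bytes[i:i + 4], "little")
        auth_ip.write(KEY_OFFSET + i, word)
    auth_ip.write(CTRL_OFFSET, 0x1)   # set the start control signal high
    # A production driver would poll the done bit here before reading.
    return bool(auth_ip.read(OUT_OFFSET) & 0x1)
```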
The MQTT broker was configured to start automatically on boot and run on localhost with the port 1883. OpenCV and Paho-MQTT were also imported for use in capturing and publishing the camera stream. A configuration file was placed on the device to define system parameters such as camera settings, MQTT settings, and the path to the credentials file. This file allows the user to easily update system parameters without requiring knowledge of the system. The data visualisation at the application layer was the final component of the project. This layer is responsible for connecting the clients or subscribers to the MQTT broker and displaying the secured camera feed. Node-RED is a browser-based editor, where flows can be built using a catalogue of nodes to fit custom IoT requirements. Additional nodes can also be installed via the node package manager. This tool was chosen for the project because it is open source and highly productive: additional nodes can be quickly inserted into the flow and deployed instantly. The flow that was designed consisted of an MQTT input node, configured to connect to the MQTT broker running on the host workstation. This node receives the message payload from the broker as a base64 string and then passes this into an HTML image template, which is finally connected to a dashboard widget template where it is displayed automatically. Once this flow is deployed, the dashboard can be accessed on the local network at the URL 127.0.0.1:1880/ui. The designed flow is shown in Figure 5.
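For the publishing side, the following is a minimal sketch of the capture-and-publish loop described above, assuming the Mosquitto broker on localhost:1883. The topic name `camera/frame` is a placeholder for the value taken from the configuration file, and in the full system this loop would only run after a successful key authentication.

```python
# Minimal sketch of the camera publisher (broker and topic are assumptions).
import base64
import cv2
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("localhost", 1883)   # Mosquitto broker configured above

cap = cv2.VideoCapture(0)           # USB webcam
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        ok, jpeg = cv2.imencode(".jpg", frame)   # JPEG-encode the frame
        if ok:
            # Publish as a base64 string, the payload format the Node-RED
            # image template expects.
            client.publish("camera/frame", base64.b64encode(jpeg.tobytes()))
finally:
    cap.release()
    client.disconnect()
```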
Figure 4: IP Resource utilisation
## 4 Results
The aim of this project was to build a flexible and reconfigurable edge device that could protect the integrity of data within a surveillance and monitoring IoT system. To achieve this, a secure authentication mechanism was implemented to guarantee that the edge device could only publish the camera stream when data integrity and authenticity could be assured. This was accomplished by concealing a 256-bit secret key and method of authentication inside a bitstream file, which is the hardware description of an FPGA and is difficult to reverse engineer due to being a stream of bits that only describe the hardware logic itself. This provided the necessary confidentiality to protect the key.
The proposed system was tested under the same conditions on the Ultra96, NJN and RPI4 to ensure that each version of the IoT device delivered the expected functionality and behaviour. Several tests focused on the integration of the IP within the overall system were carried out to ensure that the system works as intended. This involved testing the end-to-end process of capturing the camera stream, publishing the data over MQTT, and displaying the stream on the dashboard. These tests were carried out by setting up the Ultra96 board, NJN and RPI4 with the boot image and prerequisites, importing the bitstream, and running the Python script to capture and publish the camera data. The Node-RED flow was then set up, and the dashboard accessed to verify that the stream was being displayed as expected on all devices. Additionally, the system was tested under various scenarios such as using the correct key, using an incorrect key, and attempting to access the stream without providing a key, to ensure that the system was functioning as intended and that the authorisation process was secure (see Figure 6). These tests were important to ensure that the Ultra96, NJN and RPI4 provide an optimal solution, a balanced frame rate and secure key storage. It is clear from Table 2 that the hardware implementation (i.e. Ultra96) had the same performance as the software implementations (i.e. NJN and RPI4) but without exposing the private key in RAM.
For the Ultra96, the top-level function was written in C, containing the secret key and the authentication method to compare the valid key with the input key, along with the required I/O ports for the PS to interface with the IP block. The AXI4-Lite protocol was used for the Ultra96 processing system to interface with the IP block, as it is suitable for smaller data transfers. To optimise the design, various loop and array optimisation directives were tested to reduce the
Figure 5: Node-RED Flow. The streaming node is enabled only after a successful authentication process.
estimated clock time and the maximum number of clock cycles. By using the pipeline _Pragma_ in the loop that compares the keys, the maximum number of clock cycles was reduced from 64 to 34.
To verify the output of the top-level function on the Ultra96, a test bench was used by the HLS tool during C simulation, synthesis, and C/RTL co-simulation to validate that the produced RTL was functionally identical to the C code and that the IP was working as intended. Once the custom IP block was exported, it was integrated into the wider system by importing it into the AMD-Xilinx Vivado Design Suite, using the IP Integrator and a single memory-mapped AXI master and AXI slave interconnect. Unit tests were then carried out to ensure that the system was able to handle the various scenarios that may occur during operation. These tests included different combinations of correct and
\begin{table}
\begin{tabular}{l c c c} \hline Tests & \(\mathrm{RPI4}\) & \(\mathrm{NJN}\) & Ultra96 \\ \hline \hline Correct Key & 10 & 10 & 10 \\ Invalid Key & 5 & 5 & 5 \\ Incomplete Key & 7 & 7 & 7 \\ Empty Key & 4 & 4 & 4 \\ Wrong key & 6 & 6 & 6 \\ \hline Successful authentications & 10 & 10 & 10 \\ Unsuccessful authentications & 22 & 22 & 22 \\ \hline \end{tabular}
\end{table}
Table 2: Authentication process (number of attempts) for each processing system
Figure 6: Authentication Unit Tests. The output of the seven unit tests, where the first test involved providing the correct key followed by three tests with incorrect keys, and the last three tests involved two incomplete keys.
incorrect keys, as well as edge cases such as missing or incorrect bytes in the key. It was essential that all of these tests passed in order to consider the IP secure and fit for purpose. In addition to these unit tests, the overall system was also tested to ensure that it functioned as intended. This included testing the MQTT communication protocol, the PYNQ framework, and the Node-RED dashboard. Overall, the testing of the system showed that the objective of building a flexible and reconfigurable edge device was met, as the system was able to securely store and use a secret key and authentication method, and was also easily configurable and adjustable through the use of a configuration file and various software tools. A demo video [18] is available on YouTube demonstrating the system working.
Footnote 14: Available online, [https://youtu.be/8AXlf6tRZyo](https://youtu.be/8AXlf6tRZyo), last accessed 07/01/2023
The final results obtained for all the devices are listed in Table 3.
The proposed system was tested under the same conditions on the Ultra96, NJN and RPI4 to evaluate the performance and security of the IoT device. The results showed that the NJN achieved the highest frame rate of 30 fps (real-time), making it the best in terms of frame rate. The RPI4 offered a more cost-effective solution but with a lower frame rate of 6 fps, making it the worst in terms of frame rate. The Ultra96 achieved a frame rate of 14 fps and offered a safer solution by securely storing the RSA encryption key in the FPGA fabric, making it the best compromise between security and performance.
Table 3: Per-process resource usage on each device, reported in the style of the Linux top command (PID, USER, PR, NI, VIRT, RES, S, %CPU, %MEM, TIME+, COMMAND); the table body is not recoverable.
## 5 Discussion and Future Work
In comparison to other authentication systems that rely solely on the use of a CPU, the proposed IoT system utilising an FPGA has several advantages. One main advantage is the improved security provided by storing the secret key and authentication method within the FPGA bitstream, as it is not accessible in a readable format outside the device. This is in contrast to a CPU-only system, where the secret key and authentication method may be stored in plaintext or encrypted in memory, which could potentially be accessed by an attacker with the appropriate tools and knowledge. On the CPU side, private keys are often stored in the file $HOME/.ssh/id_rsa as plain text, which poses a security risk. However, tools like valgrind and hex editors can be utilised to inspect processes loaded into memory, making it possible to detect any potential breaches. Additionally, one can enhance security by adding authorised public keys to $HOME/.ssh/authorized_keys. Nevertheless, in the case of FPGA logic, keeping authorised public and private keys described within the logic makes it exceptionally challenging to breach security. This is because FPGAs are configured with hardware-level security features that can help to prevent unauthorised access and ensure that the keys remain secure. Although there are risks associated with storing private keys in plain text on the CPU side, there are also measures that can be taken to mitigate these risks, and the use of FPGA logic can provide an additional layer of security.
Footnote 18: Available online, [https://valgrind.org/](https://valgrind.org/), last accessed: 25/03/2023
Another advantage of the proposed system is its reconfigurability and flexibility. With the use of a configuration file, the user is able to easily update the location of the bitstream, as well as change the camera input and MQTT settings without requiring in-depth knowledge of the system. This is not necessarily possible with a CPU-only system, as changes to the system may require modifications to the codebase and potentially require the expertise of a software developer. Therefore, the user can decide which processing system to use based on the budget, security and frame rate constraints. In terms of performance, the Ultra96 is able to stream the camera feed at a maximum of 14 fps, which is higher than the 6 fps offered by the RPI4. This is due to the efficient resource utilisation of the FPGA, as seen in Figure 7, where the device is only utilising 63.6% of its CPU resources. In comparison, a CPU-only system may struggle to handle the processing demands of the camera streaming and MQTT communication simultaneously, potentially resulting in lower frame rates or slower performance.
In summary, the proposed IoT system utilising an FPGA for authentication offers improved security, reconfigurability, and performance compared to systems that rely solely on a CPU. There are many directions in which future work on this project could go. One possible avenue of research is to improve the security of the system by implementing more advanced forms of authentication. For example, instead of using a simple symmetric key, a more secure method such as a public-private key pair could be used. This would require
the use of a cryptographic accelerator or hardware security module to ensure that the key operations can be performed quickly and efficiently on the edge device. Another possibility is to incorporate additional security measures to protect against physical tampering with the device. This could include the use of tamper-evident seals or hardware-based intrusion detection to alert the user if the device has been opened or tampered with.
Another area for improvement could be to optimize the system for better resource utilisation. This could involve using more advanced optimisation techniques during the design phase, or implementing more efficient protocols for communication between the different components of the system. Finally, it would be interesting to explore the possibility of implementing machine learning algorithms on the edge device to enable more advanced forms of data analysis and decision-making. This could involve training a model on the device to identify certain patterns or characteristics in the data, and then using this model to make decisions about how to handle the data. Overall, there are many exciting directions in which this project could be taken, and we believe that it has the potential to make a significant impact in the field of edge computing and IoT security.
## Acknowledgements
The authors would like to express their gratitude to Mr Flemming Christensen and Sundance Multiprocessor Technology for their invaluable support and assistance in the form of AMD-Xilinx training.
## Declarations
The manuscript has no associated data and the source code can be made available upon request. |
2307.02068 | Properties of secondary components in extensive air shower of cosmic
rays in knee energy region | The knee of cosmic ray spectra reflects the maximum energy accelerated by
galactic cosmic ray sources or the limit to the ability of galaxy to bind
cosmic rays. The measuring of individual energy spectra is a crucial tool to
ascertain the origin of the knee. The Extensive Air Shower of cosmic rays in
the knee energy region is simulated via CORSIKA software. The energy resolution
for different secondary components and primary nuclei identification capability
are studied. The energy reconstruction by using electromagnetic particles in
the energy around knee is better than by using other secondary particles. The
resolution is 10-19 percent for proton, and 4-8 percent for iron. For the case
of primary nuclei identification capability, the discriminability of density of
muons is best both at low (around 100 TeV) and high (around 10 PeV) energy, the
discriminability of the shape of lateral distribution of electron and
gamma-rays are good at low energy and the discriminability of density of
neutrons is good at high energy. The differences between the lateral
distributions of secondary particles simulated by EPOS-LHC and QGSJet-II-04
hadronic model are also studied. The results in this work can provide important
information for selecting the secondary components and detector type during
energy reconstruction and identifying the primary nuclei of cosmic rays in the
knee region. | Chen Yaling, Feng Zhang, Hu Liu, Fengrong Zhu | 2023-07-05T07:18:16Z | http://arxiv.org/abs/2307.02068v1 | # Properties of secondary components in extensive air shower of cosmic rays in knee energy region
Chen Yaling\({}^{1}\), Feng Zhang\({}^{1}\), Hu Liu\({}^{1,}\), Fengrong Zhu\({}^{1}\)
\({}^{1}\) School of Physical Science and Technology, Southwest Jiaotong University, Chengdu 610031, Sichuan, China
Email: [email protected]
**Abstract: The "knee" of cosmic ray spectra reflects the maximum energy accelerated by galactic cosmic ray sources or the limit to the ability of the galaxy to bind cosmic rays. The measurement of individual energy spectra is a crucial tool to ascertain the origin of the knee. However, the measurement of energy and the identification of primary nuclei are the foundation of measuring the energy spectra of individual components. The Extensive Air Shower of cosmic rays in the knee energy region is simulated via the CORSIKA software. The energy resolution for different secondary components (including electrons, gamma rays, muons, neutrons and Cherenkov light) and the primary nuclei identification capability are studied. Energy reconstruction using electromagnetic particles (electrons, gamma rays and Cherenkov light) at energies around the "knee" is better than using other secondary particles. The resolution is 10%-19% for proton, and 4%-8% for iron. For the primary nuclei identification capability, the discriminability of the density of muons is best at both low (around 100 TeV) and high (around 10 PeV) energy, the discriminability of the shape of the lateral distribution of electrons and gamma rays is good at low energy, and the discriminability of the density of neutrons is good at high energy. The differences between the lateral distributions of secondary particles simulated with the EPOS-LHC and QGSJet-II-04 hadronic models are also studied. For electrons, gamma rays and Cherenkov light, the differences in the number of particles are within 5%; for muons, when the perpendicular distance from the shower axis is greater than 100 m, the difference in the muon number is within 5%; for neutrons, the difference in neutron number between the two models is larger than 10%. The results in this work can provide important information for selecting the secondary components and detector type for energy reconstruction and for identifying the primary nuclei of cosmic rays in the knee region.**
**Keywords: extensive air shower, cosmic rays, composition identification, energy reconstruction**
## 1 Introduction
Cosmic rays are high-energy particles from cosmic space whose energy spectrum obeys a power law, with maximum energies reaching about \(10^{21}\) eV. The main feature of the energy spectrum is that at about \(10^{15}\) eV the spectral index of the power-law spectrum changes from about 2.7 to 3.1, a feature called the "knee". The origin of the cosmic ray knee region is an important subject in cosmic ray physics [1]. The "knee" of cosmic ray spectra reflects the maximum energy accelerated by galactic cosmic ray sources or the limit to the ability of the galaxy to bind cosmic rays. Different models predict different characteristics of the inflection energy (the energy where the spectral index changes) of the single-component cosmic ray energy spectra in the knee region. For example, some models predict that the inflection energy is proportional to the charge Z of the primary particle [2], and some models predict that it is proportional to the mass number A of the primary particle [3]. The measurement of single-component energy spectra of cosmic rays is therefore of great significance for distinguishing between these models.
At present, cosmic rays can be measured directly or indirectly. Direct measurements are made with high-altitude balloon and space experiments, such as CREAM [4], AMS [5] and DAMPE [6, 7, 8]. Their advantage is that the charge of the primary particle is measured directly, giving good discrimination between cosmic rays of different charges; in addition, the detector can be calibrated with accelerator beams, so the absolute energy scale is relatively easy to determine. However, due to payload limitations the effective detection area is small, and the upper limit of the measured energy spectrum only reaches around 100 TeV [9]. Therefore, the measurement of cosmic rays in the knee region mainly relies on indirect measurements from ground-based experiments, such as KASCADE [10], ARGO-YBJ [11], LHAASO [12], ICECUBE [13], TALE [14], TUNKA [1] and AS-gamma [15]. Ground experiments measure the primary cosmic ray through the secondary components produced in the extensive air shower (EAS). Compared with direct measurements, they have the advantage of a large effective detection area and can measure the energy spectrum of cosmic rays in the knee region. However, because the primary cosmic ray particles are not measured directly, the ability to identify the composition is limited, the energy reconstruction method often depends on the composition of the primary particles, and the absolute energy scale is not easy to determine. For ground experiments, therefore, the energy measurement and the ability to identify the composition of the primary cosmic rays are the main constraints on an accurate measurement of single-component energy spectra.
At present, most experiments only measure one or several of the secondary particles. For example, the detection energy band of the KASCADE/KASCADE-Grande experiment is about 100 TeV-100 PeV, and it detects the electron, muon and hadron components of the secondary particles [10]. The proton, helium, carbon, silicon and iron components of the cosmic rays in the "knee" region are identified and measured from the number of electromagnetic particles and the number of muons [16, 17]. ARGO-YBJ and the LHAASO-WFCTA prototype detected charged particles and Cherenkov photons among the secondary particles, and measured the all-particle energy spectrum and the light-component energy spectrum of cosmic rays in the energy range of 1 TeV-10 PeV [18]. The measurement energy band of ICETOP/ICECUBE is about 250 TeV-1 EeV [19, 20]; Aartsen et al. [20] used deep learning to reconstruct the energy and composition of cosmic rays from the Cherenkov photons generated by secondary particles in the ice, thus measuring the energy spectra of individual components. These experiments measure different types of secondary components, and their energy spectrum measurements do not agree [16, 17, 18, 19, 20]. In this paper, we study the energy reconstruction accuracy and particle identification ability of these secondary components, as well as their dependence on the hadronic interaction model, to provide a reference for understanding the differences between the measurements of different experiments and for obtaining better energy reconstruction accuracy and particle identification ability.
When the measurement is conducted at the altitude where the longitudinal development of the EAS reaches its maximum, the fluctuation of the secondary particles is smaller and better detection performance can be obtained; many experiments therefore measure cosmic rays at such altitudes. In this paper, the secondary particles and Cherenkov photons of vertically incident cosmic rays in the knee region are studied at 4400 m above sea level. Section **2** introduces the parameter settings of the simulation, including the choice of detection plane and the settings for secondary particles and Cherenkov light; Section **3** studies the lateral distributions of the secondary components in the EAS and the differences between hadronic interaction models; Section **4** studies the energy reconstruction accuracy obtained from each secondary component; Section **5** studies the primary-composition identification ability of the secondary components; Section **6** is a summary.
## 2 EAS simulation
In this paper, the CORSIKA Version 7.7410 software package [21] is used to simulate the EAS of cosmic rays in the atmosphere. EPOS-LHC and QGSJet-II-04 are used as the high-energy hadronic interaction models; EPOS-LHC is used by default, and the two models are compared (figure 7 and figure 8). The low-energy hadronic interaction model is FLUKA, and the electromagnetic interaction model is EGS4. The five primary components are proton, helium, CNO, MgAlSi, and iron, where the mass numbers adopted for CNO and MgAlSi are 14 and 27, respectively. The primary particle energy \(log_{10}(E/GeV)\) is fixed at 5.1, 5.3, 5.5, 5.7, 5.9, 6.1, 6.5, and 6.9. The zenith angle is fixed at 0\({}^{\circ}\), and the azimuth angle is distributed uniformly within 0\({}^{\circ}\)-360\({}^{\circ}\). In order to study the effect of non-vertical incidence, a case with the zenith angle fixed at 45\({}^{\circ}\) was also simulated and compared with vertical incidence (figure 17); all other results are for vertical incidence. The observation plane is at an altitude of 4400 m, where the horizontal and vertical components of the Earth's magnetic field are 34.618 \(\mu T\) and 36.13 \(\mu T\), respectively.
The kinetic-energy cuts of the secondary particles are set to 0.1 GeV for hadrons, 0.1 GeV for muons, 1 MeV for electrons, and 1 MeV for gamma rays. The chosen cuts are lower than the default values in the CORSIKA manual so that more secondary particles are stored, and the contribution of secondary particles below these cuts to the overall lateral distribution is very small. The wavelength of the Cherenkov light is set to 200-1000 nm. Cherenkov photons are collected in circular areas of radius 3 m whose perpendicular distances from the shower axis are r = 20, 50, 100, 150, 200, 300 and 400 m. In real experiments the atmosphere absorbs and scatters Cherenkov light, through Rayleigh scattering, aerosol scattering and ozone absorption, but these effects depend on specific models; since this paper mainly studies the detection performance under ideal conditions, they are not considered in the simulation.
## 3 Lateral distribution of secondaries
### Type of secondaries
The secondary components produced in an EAS include Cherenkov photons, electrons and positrons, gamma rays, muons, neutrons, and other particles. Figure 1 shows the types and numbers of secondary components produced in the EAS by cosmic rays with proton (black) and iron (red) primaries at an energy of \(log_{10}(E/GeV)\) = 5.1. Other primaries produce similar secondary components, which will not be described here. At the chosen observation plane, the most abundant secondary particles are, in descending order, Cherenkov photons, gamma rays, electrons and positrons, muons and neutrons. Most current experiments measure these secondary particles, and this paper studies only these components.
### Lateral distribution
In the EAS, the perpendicular distance from the shower axis is denoted r, and the dependence of the secondary particle number density on r is the lateral distribution of the secondary particles. Figures 2(a) and 2(b) show the lateral distributions of the secondary components generated in the EAS by a proton with energy \(log_{10}(E/GeV)\) = 5.1 and an iron nucleus with energy \(log_{10}(E/GeV)\) = 6.9, respectively. It can be seen that at the same position the number density of Cherenkov photons is thousands of times that of gamma rays, and the gamma-ray number density is 100-1000 times that of neutrons, which have the smallest density. Taking iron with energy \(log_{10}(E/GeV)\) = 6.9 at r = 100 m as an example, the Cherenkov photon number density is about \(4\times 10^{5}\,m^{-2}\), the gamma-ray density is about 100 \(m^{-2}\), the electron-positron number density is about 10 \(m^{-2}\), the muon number density is about 0.3 \(m^{-2}\) and the neutron number density is about 0.05 \(m^{-2}\).
To show the radial extent of the secondary particles in the detection plane more clearly, rings centred on the shower core are taken, and the number of secondary particles in each ring is counted, as shown in figure 3. Figures 3(a) and 3(b) correspond to a proton primary with energy \(log_{10}(E/GeV)\) = 5.1 and an iron primary with energy \(log_{10}(E/GeV)\) = 6.9, respectively; the abscissa is binned uniformly in \(log_{10}(r)\). It can be seen that for primaries of different energies and compositions, the number of secondary particles per ring first increases with r, reaches a maximum, and then decreases with r. Electrons and positrons are mainly distributed within 10-100 m; gamma rays and muons are mainly distributed within tens to hundreds of metres of the core; neutrons are mainly distributed around 1 km from the core; and Cherenkov photons are mainly distributed around 100 m from the core.
For an EAS generated by an electromagnetic particle, the Nishimura-Kamata-Greisen (NKG) function is usually used to describe
Figure 1: Type and counts of secondary components in the EAS, the primary particles are proton (black) and iron (red). Energy of the primary particle is \(log_{10}(E/GeV)\)=5.1.
Figure 3: Distribution of the number of secondary components produced by different primary particles during EAS in the detection plane: (a) Primary particle is proton with energy \(log_{10}(E/GeV)\!=\!5.1\); (b) primary particle is iron with energy \(log_{10}(E/GeV)\!=\!6.9\).
Figure 2: Lateral distribution of secondary components produced by different primary particles during EAS: (a) Primary particle is proton with energy \(log_{10}(E/GeV)\!=\!5.1\); (b) primary particle is iron with energy \(log_{10}(E/GeV)\!=\!6.9\).
the lateral distribution of its secondary particles, expressed as
\[\rho_{1}(r) = N_{\rm size}C(s)\left(\frac{r}{R_{\rm M}}\right)^{s-2}\left(1+\frac{r}{R_{\rm M}}\right)^{s-4.5}\] \[C(s) = \frac{1}{2\pi R_{\rm M}^{2}}\times\frac{\Gamma(4.5-s)}{\Gamma(s)\Gamma(4.5-2s)} \tag{1}\]
In the formula, C(s) is a function of s, \(\Gamma\) is the Gamma function, r is the perpendicular distance from the EAS shower axis, \(\rho_{1}(r)\) is the particle number density at that distance, \(N_{\rm size}\) is the total number of secondary particles, \(R_{\rm M}\) is the Molière radius at the altitude of the observation plane, and s is the age of the EAS development [22].
For cosmic rays whose primary particles are protons, helium, oxygen, silicon or iron, different ground experiments have modified the NKG function to describe the lateral distribution of the secondary particles. For example, in the KASCADE experiment, the expression describing the lateral distribution of secondary particles produced by hadrons in the EAS is [23]
\[\rho_{2}(r) = N_{\rm size}\;C(\lambda)\left(\frac{r}{r_{0}}\right)^{\lambda-\alpha}\left(1+\frac{r}{r_{0}}\right)^{\lambda-\beta}\] \[C(\lambda) = \frac{1}{2\pi r_{0}^{2}}\times\frac{\Gamma(\beta-\lambda)}{\Gamma(\lambda-\alpha+2)\Gamma(\alpha+\beta-2\lambda-2)} \tag{2}\]
In the formula, the parameter \(\lambda\) represents the age of the EAS development and is a free parameter, while \(r_{0}\), \(\alpha\) and \(\beta\) are constants. For the KASCADE experiment, \(\alpha=1.5\), \(\beta=3.6\) and \(r_{0}=40\) meters.
This article first attempts to use equation (2) to fit the lateral distribution of different secondary components in figure 2. It was found that there can be multiple sets of fitting parameters for the same horizontal distribution, that is, there is coupling between the parameters (for example, only two parameters are independent in \(\;\lambda\;\), \(\;\alpha\) and \(\;\beta\;\)). In order to reduce fitting parameters, this article adopts a more general equation (3) to fit the lateral distribution of secondary components:
\[\rho(r) = N_{\rm size}C(s)\left(\frac{r}{r_{0}}\right)^{s-2}\left(1+\frac{r}{r _{0}}\right)^{(s+\Delta)}\] \[C(s) = \frac{1}{2\pi r_{0}^{2}}\times\frac{\Gamma(-s-\Delta)}{\Gamma(s) \Gamma(-\Delta-2s)} \tag{3}\]
In the equation, \(\Delta\) is a shape parameter. When \(\Delta=-4.5\) and \(r_{0}=R_{M}\), equation (3) reduces to equation (1); when \(s=\lambda+0.5\), \(\Delta=-4.1\) and \(r_{0}=40\) meters, it reduces to equation (2). Formula (3) is a double power-law function: the parameter s is the power-law index (slope) of the rising part of the particle number versus r in figure 3, and is equivalent to the age parameter in formula (1); \(2s+\Delta\) is the power-law index (slope) of the falling part in figure 3; and \(r_{0}\) is the value of r at which the power-law index changes.
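As a quick check of this correspondence, matching the exponents of equation (3) to those of equation (2) with the KASCADE constants \(\alpha=1.5\) and \(\beta=3.6\) gives

\[s-2=\lambda-\alpha\;\Rightarrow\;s=\lambda+0.5,\qquad s+\Delta=\lambda-\beta\;\Rightarrow\;\Delta=(\lambda-3.6)-(\lambda+0.5)=-4.1,\]

in agreement with the values quoted above.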
There are four free parameters in formula (3): \(N_{size}\), s, \(\Delta\) and \(r_{0}\), and their fitted values are not unique: multiple parameter combinations can fit the same lateral distribution. In order to further reduce the number of free parameters, the correlations between these parameters were studied. For the energies and primary components simulated in this paper, when the secondary particle is the gamma ray and \(r_{0}=460\,m\) is fixed, all lateral distributions can be fitted; the correlation between \(s_{\gamma}\) and \(\Delta_{\gamma}\) is shown in figure 4(a) and is described by formula (4), so only \(N_{\rm size}^{\gamma}\) and \(s_{\gamma}\) are used as free parameters. Correspondingly, for the electrons, with \(r_{0}^{e}=50\,m\) fixed, the fitted \(s_{e}\) and \(\Delta_{e}\) satisfy equation (5), as shown in figure 4(b), so only \(N_{\rm size}^{e}\) and \(s_{e}\) are used as free parameters. For the muons, with \(r_{0}^{\mu}=800\,m\) fixed, \(s_{\mu}\) and \(\Delta_{\mu}\) satisfy equation (6), so only \(N_{\rm size}^{\mu}\) and \(s_{\mu}\) are used as free parameters.
\[\Delta_{\gamma} = -1.18\cdot s_{\gamma}^{2}+1.94\cdot s_{\gamma}-5.00 \tag{4}\] \[\Delta_{e} = -0.35\cdot s_{e}^{2}-0.27\cdot s_{e}-3.20\] (5) \[\Delta_{\mu} = -s_{\mu}-4.4 \tag{6}\]
For neutrons in the secondary particles, the number of particles is small, so the constraint on the lateral distribution function is weaker and the range of variation of each parameter is larger. Following equation (3) of reference [24], some parameters of equation (3) in this paper were fixed and then optimised. It was found that equation (7) fits the lateral distribution of neutrons well, where the free parameters are \(N_{\rm size}^{\rm n}\) and \(r_{0}^{\rm n}\).
\[\rho_{n}(r) = N_{\rm size}^{\rm n}C_{\rm n}\left(\frac{r}{r_{0}^{\rm n}} \right)^{-0.9}\left(1+\frac{r}{r_{0}^{\rm n}}\right)^{-4.0}\] \[C_{\rm n} = \frac{1}{2\pi\left(r_{0}^{\rm n}\right)^{2}}\times\frac{\Gamma(4.0)}{\Gamma(1.1)\Gamma(2.9)} \tag{7}\]
Equations (3) and (7) are used to fit the lateral distributions of the secondary particles produced in the EAS by cosmic rays of different compositions, as shown in figure 5. Because Cherenkov photons are only collected at a few sampled radii in the simulation, the lateral distribution of Cherenkov photons is not fitted.
Figure 4: Dependence between parameters s and \(\Delta\) in gamma (a) and electron (b) lateral distribution fitting (Shower is induced by proton and iron respectively, and the blue dotted line is the fitting curve).
Figure 5: Fitting of lateral distribution of secondary components: (a), (b) Primary particle is proton with \(log_{10}(E/GeV)=5.1\)(a)and \(log_{10}(E/GeV)=6.9\) (b); (c),(d) primary particle is iron with \(log_{10}(E/GeV)=5.1\) (c) and \(log_{10}(E/GeV)=6.9\) (d). The green, black, blue, and pink points represent gamma, electron, muon, and neutron respectively, the red stars at the top represent Cherenkov light. The solid lines with the same color are the fitted function.
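As an illustration of the fitting procedure behind Figure 5, the following is a minimal Python sketch (not the authors' code) of fitting the gamma-ray lateral distribution with the generalised NKG form of equation (3), fixing \(r_{0}=460\) m and using the \(s_{\gamma}\)-\(\Delta_{\gamma}\) relation of equation (4) so that only \(N_{size}\) and s remain free. The radii, densities and starting values below are synthetic placeholders standing in for the binned CORSIKA output.

```python
# Minimal sketch of the Eq. (3) fit for the gamma-ray component.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gamma as Gamma

R0_GAMMA = 460.0  # m, fixed for the gamma-ray component

def delta_gamma(s):
    # Eq. (4): correlation between the two shape parameters
    return -1.18 * s**2 + 1.94 * s - 5.00

def rho_gamma(r, n_size, s):
    # Eq. (3) with r0 = R0_GAMMA and Delta tied to s via Eq. (4)
    d = delta_gamma(s)
    c = (1.0 / (2.0 * np.pi * R0_GAMMA**2)
         * Gamma(-s - d) / (Gamma(s) * Gamma(-d - 2.0 * s)))
    x = r / R0_GAMMA
    return n_size * c * x**(s - 2.0) * (1.0 + x)**(s + d)

# Synthetic example standing in for binned CORSIKA densities (m^-2):
r_m = np.logspace(0.5, 3.0, 40)                          # about 3 m .. 1000 m
rho = rho_gamma(r_m, 1.0e7, 1.2) * np.random.normal(1.0, 0.05, r_m.size)
popt, _ = curve_fit(rho_gamma, r_m, rho, p0=[1.0e6, 1.0],
                    bounds=([0.0, 0.5], [np.inf, 2.0]))
```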
To test the quality of the lateral-distribution fits for the different secondary components, the deviation between the fitted value \(N_{size}\) and the counted value N of the number of secondary particles produced by cosmic rays of different compositions and energies (Diff = \(\frac{N-N_{size}}{N_{size}}\times 100\%\)) is shown in figure 6. When the energy \(log_{10}(E/GeV)>5.5\), the deviation is within 6% for all particles; in the following, the fluctuation of these quantities will be used to characterise the accuracy of the energy reconstruction.
### Differences between hadronic models
The flux of the proton spectrum in the knee region measured by the KASCADE experiment with different hadronic interaction models differs by nearly a factor of two [10]. In this paper, the difference between the lateral distributions obtained with the EPOS-LHC and QGSJet-II-04 hadronic interaction models is studied, and the results are shown in figure 7 and figure 8. The differences in the numbers of electrons and positrons, gamma rays and Cherenkov photons between the two models are similar and small: for r\(>\)20 m the model difference for these three components is within 5%, and over the full range of r it is within 10%. For muons, the model difference is less than 5% when r\(>\)100 m, but for r\(<\)100 m the maximum difference approaches 20% (for an iron primary with an energy of about 10 PeV, around r = 5 m). Neutrons differ the most, with a difference of 10%-20% at r\(>\)100 m and a maximum difference of about 40% at r\(<\)100 m. In general, the differences for muons and neutrons are significantly reduced when r\(>\)100 m. For experiments measuring muons and neutrons, it is therefore recommended that the detector array extend beyond 100 m and that particles at r\(>\)100 m be selected for reconstruction, in order to reduce the model dependence. Muons and neutrons are products of the hadronic interaction process. The EPOS-LHC model takes into account effects not considered in other hadronic interaction models: in the multiple-scattering treatment of EPOS-LHC, the energy scale of each single scattering is taken into account when calculating the respective cross sections, which is not the case in the QGSJET-II-04 model based on Gribov-Regge theory [25]. The differences between hadronic interaction models have been studied in detail in the literature [25, 26, 27] and will not be discussed further in this paper.
Figure 6: Deviation between counted value N and fitted value \(N_{size}\) for different secondary particles. Shower is induced by proton (a) and iron (b).
Figure 7: Difference in percentage of the lateral distribution of secondary particles between the EPOS-LHC and QGSJet-II-04 hadronic interaction models, in which the primary particles are protons with different energies: (a) \(log_{10}(E/GeV)\)=5.1; (b) \(log_{10}(E/GeV)\)=6.9.
## 4 Energy resolution
By fitting the lateral distributions of the secondary particles produced in the EAS, the fitting parameters of each secondary component can be obtained. The number of particles, or the particle number density at a certain radius, is often used for energy reconstruction, while the ratios of the numbers of different secondary particles and the shape parameters of the lateral distribution are often used to identify the composition of the primary particle. In this paper, the fitted particle numbers of the four secondary components obtained in Section **3**, together with the Cherenkov photon numbers counted at different r, are used to characterise the energy reconstruction accuracy and to compare the components with each other. Only the accuracy of energy reconstruction for a fixed composition is studied here; the composition dependence of the energy reconstruction, and the construction of composition-independent energy variables by combining observed data with composition-sensitive variables, are beyond the scope of this paper. The accuracy obtained here is better than what would be obtained after a composition correction, so it can be regarded as the upper limit of the energy reconstruction accuracy achievable with a single secondary component.
Since the energy of the primary particle is proportional to the number or density of secondary particles,
\[E{=}C{\times}N_{\rm{size}} \tag{8}\]
So the percentage spread of the number of secondary particles or the number density (defined as the spread of the number or number density distribution divided by the mean of the distribution) is equal to the resolution of the reconstructed energy
\[\frac{\Delta E}{E}{=}\,\frac{\Delta(C{\times}N_{\rm{size}}\ )}{C{\times}N_{\rm{size}}}{ =}\,\frac{\Delta N_{\rm{size}}}{N_{\rm{size}}} \tag{9}\]
Because the simulation process is carried out at several discrete fixed energies, the influence of the width of the energy range on the broadening of the particle population distribution is not involved. Therefore, this paper will directly use the percentage of the distribution broadening of particle number or particle number density to characterize the energy reconstruction accuracy, without carrying out specific energy reconstruction, and the calculation of the broadening will use the value of \(\sigma\) fitted by Gaussian function.
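For concreteness, a minimal sketch of this resolution estimate is given below: a Gaussian is fitted to the distribution of a size variable collected from many showers at one fixed energy, and \(\sigma\)/mean is quoted in percent. The array `sizes` is a placeholder for the per-shower values from the simulation, and the maximum-likelihood Gaussian fit stands in for the histogram fit used in the paper.

```python
# Minimal sketch of the resolution estimate of Eq. (9), assuming `sizes` holds
# one size value (e.g. rho_e at r = 200 m) per simulated shower at a single
# fixed energy and composition.
import numpy as np
from scipy.stats import norm

def resolution_percent(sizes):
    mu, sigma = norm.fit(sizes)   # maximum-likelihood Gaussian fit
    return 100.0 * sigma / mu     # Delta E / E = Delta N_size / N_size
```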
At present, the most commonly used energy reconstruction method is to reconstruct the energy from the secondary particle number density \(\rho\) (electron number density \(\rho_{e}\), gamma-ray number density \(\rho_{\gamma}\), muon number density \(\rho_{\mu}\), neutron number density \(\rho_{n}\)) at a fixed distance r [28]. Figure 9 shows the percentage broadening of the secondary electron density \(\rho_{e}\) as a function of r for an iron primary, with lines of different colours representing different primary energies. For the electron number density, the percentage broadening is smaller in the range 100-500 m and depends only weakly on the composition and energy of the primary particle. The other secondary particles behave similarly and are not shown in detail: the gamma-ray density is best in the 300-800 m range, the muon density in the 150-600 m range and the neutron density in the 800-2000 m range. The percentage spread of the electron number density at 200 m, the gamma-ray density at 500 m, the muon density at 250 m, and the neutron density at 1000 m will therefore be used to characterise the accuracy of energy reconstruction with each component (Figure 11).
Figure 8: Difference in percentage of the lateral distribution of secondary particles between the EPOS-LHC and QGSJet-II-04 hadronic interaction models, in which the primary particles are irons with different energies: (a) \(log_{10}(E/GeV)\)= 5.1; (b) \(log_{10}(E/GeV)\)=6.9.
Another way to reduce the broadening of the particle-number distribution is to correct the particle number with the age parameter s [29]. Since the number of secondary particles at the observation plane is affected by the development stage of the EAS, and the age parameter s represents that development stage, correcting with the age parameter reduces the influence of the shower stage. Figure 10(a) shows the distribution of the fitted \(\ln(N_{\rm size}^{\rm e})\) versus \(s_{e}\) for secondary electrons when the primary is iron with energy \(log_{10}(E/GeV)\)=5.1; \(\ln(N_{\rm size}^{\rm e})\) decreases as \(s_{e}\) increases, and the red solid line is a straight-line fit. The value corrected to the mean \(s_{e}\) along the red solid line is recorded as \(\ln(N_{\rm size}^{\rm e}2)\), and this correction effectively reduces the broadening. The red and blue curves in figure 10(b) show the distributions before and after the correction, respectively, each fitted with a Gaussian function. The broadening of the corrected \(\ln(N_{\rm size}^{\rm e})\) is significantly smaller and can be used to characterise the energy reconstruction accuracy of this method; the results are shown in figure 11. Figure 11 compares, for an iron primary, the energy reconstruction accuracy obtained from the particle number before and after the age correction (\(N_{\rm size}\) and \(N_{\rm size}2\)) and from the secondary particle number density, as a function of the primary energy. For electrons and gamma rays, using the age-corrected particle number or the particle number density clearly improves the energy reconstruction accuracy compared with the raw particle number, and the age-corrected particle number is slightly better than the particle number density, though the difference is small. For muons and neutrons, the particle number, the particle number density and the age-corrected particle number give similar accuracies. Since the age-correction curve depends on the energy and type of the primary particle, while the radius used for the particle number density is fixed, and the resulting accuracies are similar, the particle number density of each secondary component at a fixed distance will be used to characterise the optimal energy reconstruction accuracy. The energy reconstruction accuracies for the EPOS-LHC and QGSJet-II-04 models are similar. This paper studies the limiting detection performance under ideal conditions and does not include the detector response; the systematic error of the energy reconstruction caused by the difference in the mean particle numbers of the two hadronic interaction models is not considered here.
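A minimal sketch of this age correction is given below; `n_size` and `s_age` are placeholder arrays of per-shower fitted \(N_{size}\) and s values at one fixed energy and composition, not data from the paper.

```python
# Minimal sketch of the age correction: remove the linear trend of ln(N_size)
# versus s so that every shower is referred to the mean age, which narrows
# the ln(N_size) distribution.
import numpy as np

def age_corrected_log_size(n_size, s_age):
    log_n = np.log(n_size)
    slope, _ = np.polyfit(s_age, log_n, 1)              # straight-line fit
    return log_n - slope * (s_age - np.mean(s_age))     # corrected ln(N_size)
```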
Figure 12 shows the percentage broadening of the distribution of Cherenkov photon numbers (\(N^{C}\)) at different perpendicular distances from the shower axis when the primary particle is iron; the sampled distances are 20, 50, 100, 150, 200, 300 and 400 m. The number of Cherenkov photons at r = 50 m, 150 m and 200 m has a smaller spread, of about 4%-7%.
The energy reconstruction accuracy obtained from the different secondary components is shown in figure 13, where the primary particles in Figure 13(a)-(e) correspond to proton, helium, CNO (carbon, nitrogen, oxygen), MgAlSi (magnesium, aluminium, silicon) and iron, respectively. For protons, the energy reconstruction accuracy from electrons, gamma rays and Cherenkov photons at 50 m is about 10%-19%. For iron, the accuracy from gamma rays, Cherenkov photons and muons at 150 m is better, about 4%-8%. The higher the mass number of the primary particle, the better the energy reconstruction accuracy. In an experiment, multiple secondary particles can be combined, according to the individual accuracies of the different components, to obtain
Figure 9: Resolution in percentage (sigma/mean) of the particle number density of secondary electrons varies with perpendicular distance to the shower axis. The secondary electrons are induced by iron with different energies.
Figure 11: Energy resolution reconstructed by \(N_{size}\), \(N_{size}2\) and the particle number density \(\rho\), respectively. Shower is induced by iron. The secondary particles are electron (a), gamma (b), muon (c) and neutron (d), respectively; \(N_{size}2\) indicates the amended \(N_{size}\).
Figure 10: (a) Distribution of \(\ln(N_{size}^{e})\) vs. \(s_{e}\) from the fitted lateral distribution function for iron with energy \(log_{10}(E/GeV)=5.1\) (the red solid line is a linear fit); (b) comparison between the uncorrected \(\ln(N_{size}^{e})\) and \(\ln(N_{size}^{e})\) corrected with the red solid line of panel (a). In panel (b), \(\chi^{2}/ndf\) quantifies the goodness of fit: \(\chi^{2}\) measures the difference between the model and the data points, and ndf is the number of degrees of freedom of the fit, i.e. the number of data points minus the number of free parameters of the model.
energy reconstruction variables with less component dependence and higher precision. The above results can provide references for the selection of secondary particle types, energy reconstruction methods and distance from the core.
## 5 Composition discrimination
The identification of the primary cosmic-ray particles is the key to measuring single-component cosmic-ray energy spectra. Studying the sensitivity of the secondary components in EAS to the primary particle can guide the selection of component identification variables. In this paper, the ability of these parameters to identify primary particles is studied using the fitting parameters of the lateral distributions of secondary particles obtained in Section 3. Since Cherenkov light is mainly used to identify primary particles through imaging, it is beyond the scope of this study. According to Section 4, the particle number density fluctuates less than the total number of particles and is therefore expected to discriminate better between components. Here, the particle number density is used instead of the total number of particles to study the discrimination ability.
This paper studies the component identification ability of each variable at fixed energy, which shows the identification power of each variable on its own. For real experimental data, energy reconstruction variables can be combined with energy-independent component identification variables. For example, the number of electromagnetic particles and the number of muons among the secondary particles are both related to the energy and type of the
Figure 12: Resolution in percentage (sigma/mean) of the Cherenkov photon number \(N^{C}\) as a function of the energy of the primary particle at different vertical distances r from the core site. The shower is induced by iron.
Figure 13: Energy resolution from different secondary components vs. primary particle energy. The primary particles are proton (a), helium (b), CNO (c), MgAlSi (d) and iron (e) (colored lines indicate the different secondary types).
original particle. The fluctuation of the electromagnetic particles is smaller, while the number of muons is more sensitive to the composition. The electromagnetic particle number can therefore be corrected using the muon number to obtain an energy reconstruction variable that is independent of the composition; conversely, the muon number can be corrected using that energy reconstruction variable to obtain an energy-independent, composition-sensitive variable. For reasons of space, we do not go into details here.
Figure 14 shows the ability of the particle number density and the age parameters to distinguish protons and iron nuclei when the secondary particles are positrons, gamma rays, muons and neutrons, respectively. The dots of different colors in figure 14 represent protons and iron nuclei with energies \(log_{10}(E/GeV)=5.1\), \(log_{10}(E/GeV)=6.1\) and \(log_{10}(E/GeV)=6.9\), respectively. The age parameters "s" of positrons and gamma rays and the particle number densities of muons and neutrons have a good ability to identify the composition. To demonstrate their particle identification ability more vividly, the distributions of protons and iron nuclei at the same energy are projected onto the coordinate axes of the above variables, as shown in figures 15 and 16. The identification ability of the muon particle number density is the best in both the low-energy and high-energy segments. The age parameters "\(s_{e}\)" and "\(s_{\gamma}\)" of the positron and gamma-ray lateral distribution shapes perform better in the low-energy segment (around 100 TeV), and the identification ability of the neutron particle number density is better in the high-energy segment (around 10 PeV). In the experiment, several secondary particles can be combined, according to their individual identification abilities, to obtain a component identification variable with less energy dependence and better identification power. This can provide a reference for the selection of component identification variables and detector types at different energies.
This paper also studies the case of a zenith angle of 45\({}^{\circ}\), which increases the atmospheric depth compared to vertical incidence. For the energy range studied here and the selected altitude, the atmospheric depth at a zenith angle of 45\({}^{\circ}\) exceeds the depth at which the shower reaches its maximum, while at vertical incidence the observation level is close to the shower maximum. The distribution of the detected secondary components is therefore affected, and so are the energy reconstruction accuracy and the particle identification ability. The comparison of the distributions of the secondary components at the two zenith angles is shown in figure 17. As the zenith angle increases, the numbers of electrons and gamma rays decrease, because the shower has passed its maximum, and their fluctuations also become significantly larger. Muons interact little with the atmosphere during propagation, so the increase of atmospheric depth has little influence on the muon number and the fluctuation of the muon number remains small. Neutrons continue to undergo hadronic interactions with the atmosphere and are attenuated more than muons; the fluctuation of the neutron number increases with the zenith angle, with an amplitude between those of the electromagnetic particles and the muons. The telescope measures the Cherenkov light at a fixed position, and the change of the Cherenkov light density with the zenith angle depends on the vertical distance "r" of the detection area from the shower axis. As shown in figure 17, at r = 50 m the number of Cherenkov photons decreases with increasing
Figure 16: Comparison of the distribution of (a) \(s_{e}\), (b) \(s_{\gamma}\), (c) \(\rho_{\mu}\), (d) \(\rho_{n}\) between proton and iron with \(log_{10}(E/GeV)\)=6.9
zenith angle, while at r = 150 m the number of Cherenkov photons increases with increasing zenith angle. At both r = 50 m and r = 150 m, the fluctuation of the Cherenkov photon number increases significantly, with an amplitude similar to the fluctuation amplitude of the electromagnetic particles. In summary, when the atmospheric depth exceeds the depth at which the shower reaches its maximum, the fluctuations of all the secondary components increase; the fluctuation of the muons changes the least and the fluctuation of the electromagnetic particles changes the most.
## 6 Summary
The measurement of single-component energy spectra in the cosmic-ray knee region is an important means to understand the physical origin of the knee. Ground experiments lack a good absolute energy calibration method; they can only measure the secondary particles produced by the primary particles in EAS and cannot measure the primary particles directly. The energy measurement and particle identification abilities of a ground experiment are therefore the limiting factors for single-component energy spectrum measurements. Based on this, we simulate the characteristics of the secondary components of EAS induced by cosmic rays of different energies and primary compositions at 4400 m above sea level, including positrons, gamma rays, muons, neutrons and Cherenkov photons. The lateral distribution characteristics of the various secondary components and their dependence on the strong interaction model are studied in detail, and the lateral distributions are well fitted with specific functions. With these fitting parameters, the method and accuracy of energy reconstruction, the strong interaction model dependence, and the particle identification ability are studied in detail. This provides a reference for the selection of detector types, energy reconstruction methods and component identification variables in various ground experiments.
For energy reconstruction, using the number density of secondary particles at a fixed vertical distance "r" from the core position is a better choice than the total number of secondary particles: it depends less on the composition than the total particle number corrected with the age parameter. When the primary particle is a proton, the energy reconstruction accuracy of electrons, gamma rays and Cherenkov photons at 50 m is good, about 10%-19%. When the primary particle is iron, the energy reconstruction accuracy of gamma rays, Cherenkov photons and muons at 150 m is better, about 4%-8%. The higher the mass number of the primary particle, the higher the precision of the energy reconstruction. In the experiment, multiple secondary particles can be combined, according to their individual energy reconstruction accuracies, to obtain energy reconstruction variables with less composition dependence and higher precision.
For particle identification, the identification ability of the muon particle number density "\(\rho_{\mu}\)" is the best in both the low-energy and high-energy segments. The age parameters of the positron and gamma-ray lateral distribution shapes perform better in the low-energy segment (around 100 TeV), and the identification ability of the neutron particle number density is better in the high-energy segment (around 10 PeV). In the experiment, multiple secondary particles can be combined to obtain component identification variables with lower energy dependence and
better identification ability, according to the identification abilities of the different secondary particles, for example with multivariate analysis methods [30] or deep learning methods [31]. The parameters provided in this paper can be used directly as training variables.
Regarding the difference in the lateral distributions of the secondary particles between the two strong interaction models EPOS-LHC and QGSJET-II-04, the differences in the numbers of positrons, gamma rays and Cherenkov photons are very similar to each other and smaller than those of muons and neutrons: they are within 5% at r \(>\) 20 m and within 10% over the whole range of r. The difference in the muon number is within 5% at r \(>\) 100 m, but the maximum difference can be close to 20% (for an iron primary with energy of about 10 PeV, near r = 5 m). The difference for neutrons is the largest: it is 10%-20% at r \(>\) 100 m and reaches a maximum of about 40% (for \(r<10\) m). Overall, the model differences for muons and neutrons are significantly reduced at r \(>\) 100 m. In the experiment, selecting secondary particles at distances larger than 100 m for the reconstruction can effectively reduce the dependence on the strong interaction model.
For incidence at a 45\({}^{\circ}\) zenith angle, compared with vertical incidence, the number of electromagnetic particles is significantly reduced and their fluctuation is larger, so the energy reconstruction accuracy and particle identification ability obtained from them are worse. The number of muons is slightly reduced and the fluctuation of the muon number changes little, so the detection performance based on muons is little affected. The decrease in the neutron number and the increase in its fluctuation lie between those of the electromagnetic particles and the muons. The change of the Cherenkov photon number relative to vertical incidence depends on the vertical distance of the detection area from the shower axis, and the fluctuation of the photon number is also larger than at vertical incidence, with an amplitude similar to that of the electromagnetic particles. In summary, when the atmospheric depth exceeds the depth at which the shower reaches its maximum, the fluctuations of all the secondary components increase and the detection performance deteriorates; the fluctuation of the muons changes the least and that of the electromagnetic particles changes the most.
In summary, without considering detector effects, this paper studies the energy reconstruction accuracy achievable with the various secondary components and their ability to identify the primary particle composition, which provides a reference for the selection of detector types, energy reconstruction variables and methods, and component identification variables in ground experiments.
## Acknowledgements
This work is supported by the Science and Technology Department of Sichuan Province, China (Grant No. 2021YFSY0031, 2020YFSY0016), the National Key R&D Program of China (Grant No. 2018YFA0404201), and the National Natural Science Foundation of China (Grant Nos. 12205244, 12147208).
|
2307.15514 | Revisiting Fully Convolutional Geometric Features for Object 6D Pose
Estimation | Recent works on 6D object pose estimation focus on learning keypoint
correspondences between images and object models, and then determine the object
pose through RANSAC-based algorithms or by directly regressing the pose with
end-to-end optimisations. We argue that learning point-level discriminative
features is overlooked in the literature. To this end, we revisit Fully
Convolutional Geometric Features (FCGF) and tailor it for object 6D pose
estimation to achieve state-of-the-art performance. FCGF employs sparse
convolutions and learns point-level features using a fully-convolutional
network by optimising a hardest contrastive loss. We can outperform recent
competitors on popular benchmarks by adopting key modifications to the loss and
to the input data representations, by carefully tuning the training strategies,
and by employing data augmentations suitable for the underlying problem. We
carry out a thorough ablation to study the contribution of each modification.
The code is available at https://github.com/jcorsetti/FCGF6D. | Jaime Corsetti, Davide Boscaini, Fabio Poiesi | 2023-07-28T12:16:31Z | http://arxiv.org/abs/2307.15514v2 | # Revisiting Fully Convolutional Geometric Features for Object 6D Pose Estimation
###### Abstract
Recent works on 6D object pose estimation focus on learning keypoint correspondences between images and object models, and then determine the object pose through RANSAC-based algorithms or by directly regressing the pose with end-to-end optimisations. We argue that learning point-level discriminative features is overlooked in the literature. To this end, we revisit Fully Convolutional Geometric Features (FCGF) and tailor it for object 6D pose estimation to achieve state-of-the-art performance. FCGF employs sparse convolutions and learns point-level features using a fully-convolutional network by optimising a hardest contrastive loss. We can outperform recent competitors on popular benchmarks by adopting key modifications to the loss and to the input data representations, by carefully tuning the training strategies, and by employing data augmentations suitable for the underlying problem. We carry out a thorough ablation to study the contribution of each modification. The code is available at [https://github.com/jcorsetti/FCGF6D](https://github.com/jcorsetti/FCGF6D).
## 1 Introduction
Object 6D pose estimation is the problem of finding the Euclidean transformation (i.e. pose) of an object in a scene with respect to the camera frame [15]. This problem is important for autonomous driving [29], augmented reality [30], space docking [19], robot grasping [7], and active 3D classification [38]. The main challenges are handling occlusions, structural similarities between objects, and non-informative textures. Different benchmarks have been designed to study these challenges, such as LineMod-Occluded (LMO) [1], YCB-Video (YCBV) [40], and T-LESS [14]. LMO includes poorly-textured objects in scenarios with several occlusions. In YCBV, well-textured objects appear in scenarios with fewer occlusions but more pose variations. T-LESS includes poorly-textured and geometrically-similar objects in industrial scenarios with occasional occlusions.
Object 6D pose estimation approaches based on deep learning can be classified as _one-stage_[17, 26, 24] or _two-stage_[18, 13, 12, 39]. One-stage approaches can directly regress the object pose [17, 26, 24]. Two-stage approaches can predict 3D keypoints [13, 12] or point-level correspondences between the scene and the object [39]. Correspondences can be computed through point-level features [39]. One-stage approaches are typically more efficient than their two-stage counterpart, as they require only one inference pass. However, rotation regression is a difficult optimisation task because the rotation space is non-Euclidean and non-linear, and the definition of correct orientation is ambiguous in case of symmetric objects [34]. On the other
Figure 1: Top: Typically, two-stage 6D pose estimation methods process the input (RGBD image, 3D object) with different deep neural networks (2D, 3D) to learn keypoint correspondences [39], or directly predict the keypoint projections on the image [13, 12]. They also rely on detectors to crop the input image, and estimate the final pose with RANSAC-based PnP [9]. Bottom: Our method processes the whole scene and the object point clouds with 3D deep neural networks, optimises the output point-wise (dense) features by using ground-truth correspondences, and estimates the final pose with a point cloud registration algorithm.
hand, correspondence-based approaches have to be coupled with registration techniques, such as RANSAC, PnP, or least square estimation [39].
We argue that the problem of learning discriminative point-level features is overlooked in the related literature. Moreover, we believe that working at intermediate levels of representation learning, rather than regressing the pose directly, facilitates interpretability and enables us to effectively debug algorithms. Literature on representation learning for point cloud registration has made great advances [4, 32], and none of the object 6D pose estimation methods have deeply investigated the application of these techniques to the underlying problem (Fig. 1). In a landscape dominated by complex networks, our work stands as the first to comprehensively explore and quantify the benefits of this formulation with a simple yet effective solution. Our research addresses fundamental and previously unanswered questions:
_i) How to learn features of heterogeneous point clouds (objects and scenes) that align in the same representation space and exhibit cross-domain generalisation (synthetic to real)?_
_ii) What training strategies are optimal for this approach?_
_iii) What degree of improvement can these strategies bring?_
To answer these questions, we revisit Fully Convolutional Geometric Features (FCGF) [4] and show that its potential to achieve state-of-the-art results lies in an attentive design of data augmentations, loss negative mining, network architecture, and optimisation strategies. FCGF is designed to learn point-level features by using a fully-convolutional network optimised through a hardest contrastive loss. Compared to the original FCGF setting, our setting is asymmetric, i.e. the two input point clouds have different sizes and resolutions. Therefore, we modify the hardest contrastive loss to take into account the size of each point cloud for the mining of the hardest negatives. We use separate architectures to learn specific features for the two (heterogeneous) input data (object and scene), but unlike several state-of-the-art methods we train only a single model for all the objects of each dataset. We use specific augmentations to tackle occlusions, which are the main challenge in real-world scenarios and in the considered datasets. We name our approach FCGF6D. FCGF6D outperforms state-of-the-art methods (+3.5 ADD(S)-0.1d on LMO, +0.8 ADD-S AUC on YCBV), even when comparing with methods that train one model for each object. Our ablation study suggests that most of the performance gain is obtained thanks to our changes to the loss, the addition of the RGB information and our changes to the optimizer. In summary, our contributions are:
* We tailor FCGF for object 6D pose estimation in order to i) process entire scenes rather than cropped regions as competitors, ii) learn a single model for all objects instead of a model for each object, iii) process both photometric and geometric information with a single unified deep network model.
* A modified version of the hardest contrastive loss that is applied to heterogeneous point clouds and that considers a geometric constraint when mining the hardest negative.
* We study data augmentations that enable FCGF to improve generalisation between synthetic and real data.
## 2 Related work
6D pose estimation approaches can be designed to use different input data. RGB methods [18, 17, 6, 35] rely on photometric information only, while RGBD methods [13, 12, 11, 26, 39] also use range information in addition to RGB.
**RGB-based 6D pose estimation.** SO-Pose [6] proposes an end-to-end method that explicitly models self-occlusion maps (i.e., portions of the object that are hidden by camera orientation). It computes 2D-3D correspondences for each visible point of the object, and feeds them with self-occlusion maps to a pose regression module. ZebraPose [35] proposes a strategy to learn surface descriptors on the image, by training a neural network to predict pixel features which correspond to predefined descriptors on the object model. At inference time, it finds correspondences by similarity, and solves the PnP problem with RANSAC. The authors show that the vertex encoding process is crucial for performance improvement.
**RGBD-based 6D pose estimation.** PVN3D [13] extends PVNet [31] by incorporating 3D point cloud information. The core of this approach is a keypoint voting mechanism, in which for each pixel the offset to a reference keypoint is regressed. A semantic segmentation module is also used to identify the points belonging to each object in the scene. PVN3D is a two-stage method, as it passes the final correspondences to a RANSAC-based [9] algorithm for 6D pose estimation. FFB6D [12] adopts an analogous method to PVN3D [13], but introduces a novel convolutional architecture with Fusion Modules. These modules enable the model to combine photometric (RGB) and geometrical (D) features for learning a better point cloud representation. E2EK [26] proposes an end-to-end trainable method by extending FFB6D [12]. It clusters and filters the features computed by FFB6D based on confidence, and then processes them by an MLP-like network that regresses the pose. Wu et al. [39] addresses the problem of objects that are symmetric to rotation with a two-stage method. They extend FFB6D [12] by introducing a novel triplet loss based on geometric consistency. Symmetry is leveraged by considering symmetric points as positives, thus forcing them to have similar features. Feng et al. [8] proposes a method to solve a related problem. In this work, FCGF is applied to align different point clouds of objects belonging to the same category. However, the authors do not introduce task-specific modifications to FCGF, and unlike our case of application, the target object is assumed to be already segmented from the scene.
Unlike methods that employ sophisticated combinations
of deep network architectures to process RGB and depth modalities [12, 39], our approach uses deep networks based on sparse convolutions to process coloured point clouds with a single framework. Sparse convolutions are designed to process point clouds efficiently [3]. We also split the pose estimation problem into two subproblems, i.e. feature learning and point cloud registration. This allows us to evaluate the quality of the learned features by using metrics such as Feature Matching Recall [5], which fosters interpretability of our model. Unlike Wu et al. [39], we do not rely on a detector to crop the region with the candidate object before processing the point cloud with our network. Our experiments show that we can outperform the nearest competitors E2EK [26] and Wu et al. [39] by 5.7 and 1.9 ADD(S)-0.1d on the LMO dataset, respectively, without using a detector.
## 3 Preliminary: A review of FCGF
**Input data representation.** FCGF takes as input a quantised version of the original point cloud \(\mathcal{X}\in\mathbb{R}^{V\times 3}\). The quantisation procedure splits the volume occupied by \(\mathcal{X}\) into a grid of voxels of size \(Q\) and assigns a single representative vertex \(\mathbf{x}_{i}\in\mathbb{R}^{3}\) to each voxel \(i\). This reduction is typically computed with random sampling or by average pooling (barycenter) [3]. The resulting sparse representation is obtained by discarding voxels corresponding to a portion of the empty space and is significantly more efficient in terms of memory utilisation.
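To make the quantisation step concrete, the sketch below voxelises a point cloud with numpy, keeping the barycentre of each occupied voxel as its representative vertex. It is only an illustration of the idea: Minkowski Engine provides its own sparse, GPU-friendly routines for this, and the function name and toy sizes are ours.

```python
import numpy as np

def voxelize(points, voxel_size):
    """Quantise a point cloud: one representative vertex (barycentre) per occupied voxel."""
    grid = np.floor(points[:, :3] / voxel_size).astype(np.int64)   # voxel index per point
    _, inverse, counts = np.unique(grid, axis=0, return_inverse=True, return_counts=True)
    inverse = inverse.ravel()
    reps = np.zeros((counts.shape[0], points.shape[1]))
    np.add.at(reps, inverse, points)                               # accumulate per voxel
    return reps / counts[:, None]                                  # barycentre of coords (and colours)

cloud = np.random.rand(10000, 3)             # toy cloud with coordinates in metres
sparse = voxelize(cloud, voxel_size=0.002)   # Q = 2 mm
print(cloud.shape, '->', sparse.shape)
```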
**Feature extractor.** The fully-convolutional feature extractor \(\mathbf{\Phi}_{\Theta}\) is a parametric function with learnable parameters \(\Theta\) designed as a UNet [33]. Given \(\mathbf{x}_{i}\), \(\mathbf{\Phi}_{\Theta}\) produces a \(F\)-dimensional feature vector defined as \(\mathbf{\Phi}_{\Theta}(\mathbf{x}_{i})=\mathbf{f}_{i}\in\mathbb{R}^{F}\). FCGF processes pairs of point clouds using a Siamese approach, i.e. feature extractors with shared weights. FCGF is implemented in PyTorch using Minkowski engine [3].
**Hardest contrastive loss.** The hardest contrastive (HC) loss is defined as \(\mathcal{L}_{\text{HC}}=\lambda_{P}\mathcal{L}_{P}+\lambda_{N}\mathcal{L}_{N}\), where \(\mathcal{L}_{P}\) promotes similarity between features of positive samples, \(\mathcal{L}_{N}\) promotes dissimilarity between features of negative samples, and \(\lambda_{P},\lambda_{N}\) are hyperparameters. Given a pair of 3D scenes \((\mathcal{X}_{1},\mathcal{X}_{2})\) as input, the set of positive pairs is defined as \(\mathcal{P}=\{(i,j):\mathbf{x}_{i}\in\mathcal{X}_{1},\mathbf{x}_{j}\in \mathcal{X}_{2},\phi(\mathbf{x}_{i})=\mathbf{x}_{j}\}\), where \(\phi:\mathcal{X}_{1}\rightarrow\mathcal{X}_{2}\) is a correspondence mapping between \(\mathcal{X}_{1}\) and \(\mathcal{X}_{2}\) voxels. \(\mathcal{L}_{P}\) is defined as
\[\mathcal{L}_{P}=\sum_{(i,j)\in\mathcal{P}}\frac{1}{|\mathcal{P}|}\left(\| \mathbf{f}_{i}-\mathbf{f}_{j}\|-\mu_{P}\right)_{+}^{2}, \tag{1}\]
where \(|\mathcal{P}|\) is the cardinality of \(\mathcal{P}\), \(\mu_{P}\) is a positive margin to overcome overfitting [25], and \((\cdot)_{+}=\max(0,\cdot)\). For each pair \((i,j)\in\mathcal{P}\), two sets of candidate negatives are defined as \(\mathcal{N}_{i}^{\prime}=\{k\text{ s.t. }\mathbf{x}_{k}\in\mathcal{X}_{1},k\neq i\}\), \(\mathcal{N}_{j}^{\prime}=\{k\text{ s.t. }\mathbf{x}_{k}\in\mathcal{X}_{2},k\neq j\}\). Computing the loss over the full sets \(\mathcal{N}_{i}^{\prime}\), \(\mathcal{N}_{j}^{\prime}\) scales quadratically with the minibatch size, therefore random subsets \(\mathcal{N}_{i}\subset\mathcal{N}_{i}^{\prime}\) and \(\mathcal{N}_{j}\subset\mathcal{N}_{j}^{\prime}\) with fixed cardinalities are used instead in practice. \(\mathcal{L}_{N}\) is defined as
\[\mathcal{L}_{N}= \sum_{(i,j)\in\mathcal{P}}\frac{1}{2|\mathcal{P}_{i}|}\left(\mu_{ N}-\min_{k\in\mathcal{N}_{i}}\|\mathbf{f}_{i}-\mathbf{f}_{k}\|\right)_{+}^{2} \tag{2}\] \[+\frac{1}{2|\mathcal{P}_{j}|}\left(\mu_{N}-\min_{k\in\mathcal{N}_ {j}}\|\mathbf{f}_{j}-\mathbf{f}_{k}\|\right)_{+}^{2},\]
where \(|\mathcal{P}_{i}|,|\mathcal{P}_{j}|\) are the numbers of valid negatives mined from the first and second term, respectively. Unlike metric learning losses that randomly mine a certain number of negatives from \(\mathcal{N}_{i}\), \(\mathcal{N}_{j}\)[10, 37], the HC loss mines the most similar features within a batch, i.e. the hardest negatives.
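To make the structure of the loss concrete, the following is a minimal PyTorch sketch of \(\mathcal{L}_{P}\) and \(\mathcal{L}_{N}\) for one pair of feature sets. The brute-force distance computation, the random-subset size and the default margins are illustrative assumptions and not the exact FCGF implementation.

```python
import torch

def hardest_contrastive(f1, f2, pos_pairs, mu_p=0.1, mu_n=10.0, n_neg=512):
    """f1: (N1, F) features of X1, f2: (N2, F) features of X2.
    pos_pairs: (P, 2) long tensor of corresponding indices (i in f1, j in f2)."""
    i, j = pos_pairs[:, 0], pos_pairs[:, 1]
    fi, fj = f1[i], f2[j]

    # L_P: pull corresponding features within the positive margin (Eq. 1)
    loss_p = (torch.relu((fi - fj).norm(dim=1) - mu_p) ** 2).mean()

    def hardest(anchor, pool, self_idx):
        # random subset of candidate negatives from the same cloud as the anchor, as in Eq. 2
        idx = torch.randperm(pool.shape[0], device=pool.device)[:n_neg]
        d = torch.cdist(anchor, pool[idx])                           # (P, n_neg)
        d[self_idx.unsqueeze(1) == idx.unsqueeze(0)] = float('inf')  # the anchor is not a negative
        return d.min(dim=1).values

    # L_N: push the hardest negatives beyond the negative margin (Eq. 2)
    dn_i, dn_j = hardest(fi, f1, i), hardest(fj, f2, j)
    loss_n = 0.5 * ((torch.relu(mu_n - dn_i) ** 2).mean() +
                    (torch.relu(mu_n - dn_j) ** 2).mean())
    return loss_p, loss_n
```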
## 4 Tailoring FCGF for 6D pose estimation
In this section, we describe how we modified FCGF. We focus on manipulating heterogeneous representations of input data, improving the HC loss, and modernising the training strategy. Fig. 2 shows the block diagram of FCGF6D.
### Input data
**Heterogeneous representations.** FCGF was designed for scene registration, where the input data are pairs of 3D scans of the same scene captured from different viewpoints. Therefore, the two inputs belong to the same distribution, i.e. real-world data captured with the same LiDAR sensor. This is why the authors of [4] use a Siamese approach. Unlike FCGF, our input data is heterogeneous, therefore we process it with two independent deep networks. Formally, given an object \(O\) and a scene \(\mathcal{S}\), the input of our pipeline is the pair \((\mathcal{M}_{O},\mathcal{I}_{\mathcal{S}})\), where \(\mathcal{M}_{O}\) is a textured 3D model of \(O\) and \(\mathcal{I}_{\mathcal{S}}\) is an RGBD capture of \(\mathcal{S}\) from a viewpoint. We transform \((\mathcal{M}_{O},\mathcal{I}_{\mathcal{S}})\) into a pair of point clouds. For \(O\), we produce a point cloud \(\mathcal{X}_{O}\in\mathbb{R}^{V_{O}\times 6}\) by sampling \(V_{O}\) vertices on the triangular faces of \(\mathcal{M}_{O}\) and extracting the corresponding RGB colours from its texture. For \(\mathcal{S}\), we use the intrinsic parameters of the RGBD sensor to map \(\mathcal{I}_{\mathcal{S}}\) into a coloured point cloud and sample \(V_{\mathcal{S}}\) points from it. Let \(\mathcal{X}_{\mathcal{S}}\in\mathbb{R}^{V_{\mathcal{S}}\times 6}\) be the point cloud of \(\mathcal{S}\). We quantise \(\mathcal{X}_{O}\) and \(\mathcal{X}_{\mathcal{S}}\) by a factor \(Q\) and process the pair with two networks implemented with Minkowski engine [3]. \(V_{O}\), \(V_{\mathcal{S}}\), and \(Q\) are hyperparameters.
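The scene point cloud is obtained by back-projecting every valid pixel with the pinhole camera model. A minimal numpy sketch is given below, assuming depth in metres and intrinsics fx, fy, cx, cy; the function name and the toy call are ours.

```python
import numpy as np

def lift_rgbd(depth, rgb, fx, fy, cx, cy):
    """Back-project an RGBD image into a coloured point cloud of shape (N, 6)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.reshape(-1)
    valid = z > 0                               # keep only pixels with a depth reading
    x = (u.reshape(-1) - cx) * z / fx           # pinhole back-projection
    y = (v.reshape(-1) - cy) * z / fy
    xyz = np.stack([x, y, z], axis=1)[valid]
    colours = rgb.reshape(-1, 3)[valid] / 255.0
    return np.concatenate([xyz, colours], axis=1)

# toy call: a 480x640 frame with constant depth of 1 m
pts = lift_rgbd(np.ones((480, 640)), np.zeros((480, 640, 3)), 600, 600, 320, 240)
print(pts.shape)   # V_S points are then randomly sampled from this cloud
```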
**Processing geometric and photometric data.** Minkowski engine [3] is designed to process optional input features in addition to the 3D coordinate of each point. However, authors in [4] show that, in the context of scene registration, adding the photometric information associated to each point leads to overfitting. We found instead that this addition significantly improves the performance. Colour information helps in i) discriminating objects of different categories but with similar geometric shape (e.g. pudding box and gelatin box in YCBV [40]), and ii) selecting the correct pose of symmetric objects among the set of geometrically-equivalent ones (i.e.
the 6D pose of a box or a can cannot be uniquely defined unless we consider their texture patterns).
### Loss function
**Positive mining.** We define \(\mathcal{P}\) (Eq. 1) as the set of valid correspondences between \(\mathcal{X}_{O}\) and \(\mathcal{X}_{S}\). Let \((\mathbf{R}_{O},\mathbf{t}_{O})\) be the ground-truth 6D pose of \(O\) in \(S\) and \(\widetilde{\mathcal{X}}_{O}=\mathbf{R}_{O}\mathcal{X}_{O}+\mathbf{t}_{O}\) be the rigidly transformed version of \(\mathcal{X}_{O}\) into the reference frame of \(\mathcal{X}_{S}\). We compute all the correspondences by searching for each point of \(\widetilde{\mathcal{X}}_{O}\) its nearest neighbouring point in \(\mathcal{X}_{S}\). Due to occlusions with other objects and/or self-occlusions, some of the correspondences may be spurious, e.g. associating points of different surfaces. Therefore, we consider a correspondence valid if the distance between \(\mathbf{\tilde{x}}_{i}\in\widetilde{\mathcal{X}}_{O}\) and \(\mathbf{x}_{j}\in\mathcal{X}_{S}\) is less than a threshold \(\tau_{P}\) and if the other points of the scene are farther away, i.e. \((i,j)\in\mathcal{P}\Leftrightarrow\|\mathbf{\tilde{x}}_{i}-\mathbf{x}_{j}\|<\tau_{P}\) and \(\|\mathbf{\tilde{x}}_{i}-\mathbf{x}_{j}\|<\|\mathbf{\tilde{x}}_{i}-\mathbf{x}_{k}\|\) for every \(k\neq j\). **Negative mining.** We found that mining the hardest negatives from the negative sets \(\mathcal{N}_{i}\), \(\mathcal{N}_{j}\) (Eq. 2) can lead to loss instability and collapsing. This occurs because the hardest negative in \(\mathcal{N}_{i}=\{k\,:\,\mathbf{x}_{k}\in\mathcal{X}_{O},k\neq i\}\), i.e. the sample with the closest feature to \(\mathbf{f}_{i}\in\mathbb{R}^{F}\), is likely to be a point spatially close to \(\mathbf{x}_{i}\in\mathcal{X}_{O}\), because their local geometric structure is nearly the same. Hence, Eq. 2 tries to enforce features corresponding to the same local geometric structure to be distant from each other. This problem can be mitigated by replacing \(\mathcal{N}_{i}\), \(\mathcal{N}_{j}\) in Eq. 2 with \(\widetilde{\mathcal{N}}_{i}=\{k\,:\,\mathbf{x}_{k}\in\mathcal{X}_{O},\|\mathbf{x}_{k}-\mathbf{x}_{i}\|>\tau_{NO}\}\) and \(\widetilde{\mathcal{N}}_{j}=\{k\,:\,\mathbf{x}_{k}\in\mathcal{X}_{S},\|\mathbf{x}_{k}-\mathbf{x}_{j}\|>\tau_{NO}\}\), where \(\tau_{NO}\) is a safety threshold, i.e. the radius of the spheres on the object and on the scene within which mining is forbidden.
The choice of \(\tau_{NO}\) is key because it determines which points on the point clouds can be used for negative mining. We found beneficial to choose \(\tau_{NO}\) as a function of the dimension of the input object. Given \(\mathcal{X}_{O}\), we define its diameter as \(D_{O}\), and set \(\tau_{NO}=\tau_{\text{scale}}D_{O}\). In Fig. 3, we illustrate the safety thresholds. In this way, we can maintain a good quantity of negatives while avoiding the mining of spurious hardest negatives. Using different thresholds for the object and the scene points clouds underperformed our final choice. Therefore, our loss is defined as
\[\mathcal{L}_{\text{HC}}=\lambda_{P}\mathcal{L}_{P}+\lambda_{NO}\mathcal{L}_{ NO}+\lambda_{NS}\mathcal{L}_{NS},\]
where \(\lambda_{P}\), \(\lambda_{NO}\) and \(\lambda_{NS}\) are weight factors. \(\tau_{P}\), \(\tau_{\text{scale}}\), \(\lambda_{P}\), \(\lambda_{NO}\), and \(\lambda_{NS}\) are hyperparameters.
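In code, the safety threshold simply masks the feature-distance matrix wherever the Euclidean distance to the anchor is smaller than \(\tau_{NO}\). The PyTorch sketch below shows the idea for one anchor set (object or scene); names and shapes are our illustrative assumptions.

```python
import torch

def masked_hardest_negative(anchor_feat, anchor_xyz, pool_feat, pool_xyz, tau_no):
    """Hardest negative restricted to candidates outside the safety sphere of radius tau_no."""
    feat_d = torch.cdist(anchor_feat, pool_feat)   # feature-space distances (P, N)
    geo_d = torch.cdist(anchor_xyz, pool_xyz)      # Euclidean distances      (P, N)
    feat_d[geo_d < tau_no] = float('inf')          # forbid spatially close points
    return feat_d.min(dim=1).values

# tau_no is tied to the object size, e.g. tau_no = 0.1 * D_O for tau_scale = 0.1
```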
### Training strategy
**Data augmentation.** FCGF combines scaling and rotation augmentations to enhance feature robustness against variations in camera pose [4]. These are effective in the context of point cloud registration, but in our specific scenario, the point cloud of the objects always belongs to a known set. Avoiding these augmentations helps the deep network in learning specialised features for each object. Our data augmentations consist of the following:
(i) Point re-sampling of \(O\) and \(S\), i.e. unlike FCGF, we randomly downsample point clouds at each epoch to mitigate overfitting. This allows the model to be more robust to depth acquisition noise; (ii) Colour jittering on \(O\), i.e. we randomly perturb brightness, contrast, saturation, and hue of \(O\); (iii) Random erasing on \(S\), i.e. unlike FCGF, we simulate occlusions at training time. For each point of \(\widetilde{\mathcal{X}}_{O}\) we compute its nearest neighbour in \(\mathcal{X}_{S}\) and randomly select a point on \(\mathcal{X}_{S}\) within such correspondence set. We then erase all the points that fall within a distance threshold \(\rho\) from it. This allows the model to be more robust to occlusions in the input scene.
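A minimal numpy sketch of the random-erasing augmentation (iii) is given below, assuming that corr_scene_idx contains the scene indices with a valid object correspondence and that rho is the erasing radius; both names are ours.

```python
import numpy as np

def random_erase_mask(scene_xyz, corr_scene_idx, rho, rng=np.random.default_rng()):
    """Simulate an occlusion: drop every scene point within rho of a random correspondence."""
    centre = scene_xyz[rng.choice(corr_scene_idx)]             # random point on the visible object
    keep = np.linalg.norm(scene_xyz - centre, axis=1) > rho    # erase a ball of radius rho around it
    return keep                                                # boolean mask for points and colours

# example: erase a 3 cm ball from a toy scene
scene = np.random.rand(20000, 3)
mask = random_erase_mask(scene, np.arange(100), rho=0.03)
print(mask.sum(), 'points kept out of', len(scene))
```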
**Optimisation techniques.** FCGF uses an SGD optimiser with an initial learning rate \(\text{lr}_{\text{init}}=10^{-1}\) decreased during
Figure 2: FCGF6D training pipeline consists of four logical parts. Given a scene \(S\) and an object \(O\) we take as input the pair \((\mathbf{I}_{S},\mathcal{M}_{O})\). In the first part, we compute 3D point cloud representations \((\mathcal{X}_{S},\mathcal{X}_{O})\) of \((\mathbf{I}_{S},\mathcal{M}_{O})\), where \(\mathcal{X}_{S}\) is obtained by lifting \(\mathbf{I}_{S}\) using the intrinsic parameters of the camera that acquired it, and then quantise them. In the second part, we mine positives by computing the correspondences between \(\mathcal{X}_{S}\) and \(\widetilde{\mathcal{X}}_{O}=\mathbf{R}_{O}\mathcal{X}_{O}+\mathbf{t}_{O}\), where \(\mathbf{R}_{O},\mathbf{t}_{O}\) is the ground-truth 6D pose of \(O\). In the third part, we perform point-wise feature extraction with two independent UNets \(\Phi_{S},\Phi_{O}\). In the fourth part, the hardest contrastive loss with safety thresholds is applied to guide the feature learning process.
training with an exponential scheduler with \(\gamma=0.99\). In our setting, these hyperparameters do not lead to convergence. Instead, we set \(\text{lr}_{\text{init}}=10^{-3}\). We experiment with Adam [21] and AdamW [28], and notice improvements in both cases. We also switch to a Cosine Annealing scheduler [27] that lowers the learning rate from \(10^{-3}\) to \(10^{-4}\) across the epochs.
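In PyTorch, the optimisation recipe above takes only a few lines. In the sketch below the linear layer is just a stand-in for the two feature extractors, and the epoch count corresponds to the LMO setting given later in the implementation details.

```python
import torch

model = torch.nn.Linear(3, 32)       # stand-in for the object and scene feature extractors
num_epochs = 12                      # LMO training length (see Sec. 5)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=num_epochs, eta_min=1e-4)   # lr decays from 1e-3 to 1e-4

for epoch in range(num_epochs):
    # ... one pass over the training data, with optimizer.step() per batch ...
    scheduler.step()
```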
## 5 Experiments
### Datasets
We evaluate FCGF6D on the LineMod-Occluded (LMO) [1] and the YCB-Video (YCBV) [40] datasets.
**LMO**[1] contains RGBD images of real scenes with different configurations of objects placed on a table. It provides the ground-truth 6D pose of eight of these objects, which are always present in the scene. Objects are poorly textured, of varying dimensions and placed in a cluttered scene, featuring a variety of lighting conditions. We use the original test set of 1,213 real images, while for the training set the works we compare against use different combinations of synthetic and real images: the methods they use to generate the synthetic images and the number of samples for each type are not always clearly defined [13, 12, 26, 39]. In contrast, we only use the Photo Realistic Rendering (PBR) set of 50,000 synthetic images provided by the BOP challenge [16] as it contains a large variety of pose configurations. Following [13], we adopt a hole filling algorithm [22] to improve the depth quality on both training and test images.
**YCBV**[40] contains RGBD images of real scenes with different configurations of 21 objects taken from the YCB dataset [2]. Objects have similar geometry (e.g. boxes and cans) and are placed in various poses (e.g. some objects are placed on top of others). Unlike LMO, the objects are placed in different contexts. We use the original test set of 20,738 real images. As for LMO, state-of-the-art methods use different combinations of synthetic and real data [13, 12, 26, 39]. For training, we choose 4,000 synthetic, 4,000 real, and 4,000 PBR images provided by the BOP challenge [16] because we found that using only the PBR images leads to unsatisfactory results. Also for YCBV we adopt a hole filling algorithm [22] on both train and test depth images as done in [13].
### Implementation details
**LMO setting.** Experiments on LMO share the following hyperparameters. The input pair \((O,S)\) is first sampled to \(V_{O}=4,000\) and \(V_{S}=50,000\) points, respectively, and then quantised with a step of \(Q=2\text{mm}\). As feature extractor we use a MinkUNet34 [3] with output dimension \(F=32\). The correspondence estimation threshold used for the positive mining is \(\tau_{P}=4\text{mm}\), and the maximum number of correspondences extracted is set to 1,000. The safety threshold \(\tau_{NO}\) is defined proportionally to the diameter of the object \(O\) by setting \(\tau_{\text{scale}}=0.1\) (see Fig. 3). The hardest negative mining on \(\mathcal{X}_{O}\) is performed in \(\widetilde{\mathcal{N}}_{i}\). When mining the hardest negatives on \(\mathcal{X}_{S}\), instead of considering the full candidate set \(\widetilde{\mathcal{N}}_{j}\) we randomly sample 10,000 points from it to reduce the spatial complexity. HC loss margins are set as \(\mu_{P}=0.1\), \(\mu_{N}=10\), and coefficients are set to \(\lambda_{P}=1\), \(\lambda_{NO}=0.6\), and \(\lambda_{NS}=0.4\). The feature extractor is trained on 50,000 PBR images for 12 epochs. The pose is obtained by using the TEASER++ [41] algorithm.
**YCBV setting.** Experiments on YCBV share the same LMO hyperparameters except in the following cases. We set \(V_{S}=20,000\), as we found that it works on par with the original \(V_{S}\) of LMO. We believe this happens because YCBV objects are less occluded and their geometries are less complex than LMO objects. As feature extractor we use a MinkUNet50 model [3], trained on 12,000 mixed images for 110 epochs.
Figure 3: Examples of different mining strategies. (a) Hardest contrastive loss as proposed in FCGF: no constraints are enforced on the location of the hardest negative (red point) with respect to the corresponding point (green point). (b) A vanilla choice of the safety thresholds: the radii \(\tau_{NS}\), \(\tau_{NO}\) are proportional to the diameters \(D_{S}\), \(D_{O}\) of the respective point clouds. (c) Our choice: the value of the thresholds is proportional to the diameter of the object, i.e. \(\tau_{NS}=\tau_{NO}\).
The pose is obtained with a RANSAC-based algorithm from Open3D [43]. Experimentally, on YCBV we found that RANSAC yields better results than TEASER++. We believe that this happens because TEASER++ is heavily based on correspondences [41] and for YCBV we use a lower resolution for the scene compared to LMO, which in turn reduces the number of correspondences.
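For reference, the registration stage can be sketched as a minimal RANSAC loop over putative feature correspondences with a closed-form (Kabsch) rigid fit. In practice we rely on Open3D's RANSAC (YCBV) and TEASER++ (LMO), so the code below is only an illustration of the principle and all names are ours.

```python
import numpy as np

def kabsch(src, dst):
    """Closed-form rigid transform (R, t) aligning src onto dst, both of shape (N, 3)."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T       # reflection-safe rotation
    return R, cd - R @ cs

def ransac_pose(obj_pts, scn_pts, iters=1000, inlier_th=0.01, seed=0):
    """obj_pts[k] <-> scn_pts[k] are putative correspondences from feature matching."""
    rng = np.random.default_rng(seed)
    best = (np.eye(3), np.zeros(3), -1)
    for _ in range(iters):
        idx = rng.choice(len(obj_pts), size=3, replace=False)   # minimal sample
        R, t = kabsch(obj_pts[idx], scn_pts[idx])
        inliers = (np.linalg.norm(obj_pts @ R.T + t - scn_pts, axis=1) < inlier_th).sum()
        if inliers > best[2]:
            best = (R, t, inliers)
    return best[:2]
```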
### Evaluation metrics
We use the ADD and ADD-S metrics that are defined as
\[\text{ADD} =\frac{1}{V_{O}}\sum_{\mathbf{x}\in\mathcal{X}_{O}}\left\|(\mathbf{R}\mathbf{x}+\mathbf{t})-(\hat{\mathbf{R}}\mathbf{x}+\hat{\mathbf{t}})\right\|,\] \[\text{ADD-S} =\frac{1}{V_{O}}\sum_{\mathbf{x}_{1}\in\mathcal{X}_{O}}\min_{\mathbf{x}_{2}\in\mathcal{X}_{O}}\left\|(\mathbf{R}\mathbf{x}_{1}+\mathbf{t})-(\hat{\mathbf{R}}\mathbf{x}_{2}+\hat{\mathbf{t}})\right\|,\]
where \(\mathbf{R},\mathbf{t}\) and \(\hat{\mathbf{R}},\hat{\mathbf{t}}\) are the rotation and translation components of the predicted and the ground-truth poses of \(\mathcal{X}_{O}\in\mathbb{R}^{V_{O}\times 3}\), respectively. ADD(S) computes the ADD for non-symmetric objects and the ADD-S for symmetric ones. Performance on LMO is assessed in terms of the ADD(S)-0.1d metric [13; 12; 26; 39], which computes the percentage of ADD(S) errors lower than 10% of the object diameter [15]. Performance on YCBV is assessed in terms of the ADD-S AUC metric [40; 13; 12]. The area-under-the-curve (AUC) of ADD-S is obtained by computing the cumulative percentage of ADD-S errors lower than a threshold varying from 1mm to 100mm. Note that in ADD(S)-0.1d the success thresholds are relative to the object diameters, while in ADD-S AUC they are absolute.
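In code, the two metrics reduce to a few numpy lines. The brute-force closest-point search makes this a didactic sketch rather than an optimised evaluator, and the function names are ours.

```python
import numpy as np

def add(model_pts, R, t, R_gt, t_gt):
    """ADD: mean distance between corresponding transformed model points."""
    pred = model_pts @ R.T + t
    gt = model_pts @ R_gt.T + t_gt
    return np.linalg.norm(pred - gt, axis=1).mean()

def add_s(model_pts, R, t, R_gt, t_gt):
    """ADD-S: closest-point distance, used for symmetric objects."""
    pred = model_pts @ R.T + t
    gt = model_pts @ R_gt.T + t_gt
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=2)   # (V_O, V_O) distances
    return d.min(axis=1).mean()

def add_01d_success(err, diameter):
    """ADD(S)-0.1d success: error below 10% of the object diameter."""
    return err < 0.1 * diameter
```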
### Quantitative results
Tab. 1 reports the results on LMO [1] in terms of ADD(S)-0.1d: for completeness we added the two best performing RGB methods (top), while the other ones are RGBD methods (bottom). As reported in the Prior column, most methods rely on additional priors, either in the form of object detections (Det) or of object segmentation masks (Seg). FCGF6D outperforms all the other methods by a large margin without using any prior (penultimate row): it outperforms Wu et al. by 1.9%, E2EK by 5.7%, DCL-Net by 8.4%, FFB6D by 12.8%, PR-GCN by 14.0%, and PVN3D by 15.8%. Note that Wu et al. [39] and E2EK [26] train a different deep neural network for each object (DNNs column), whereas we train only a single deep neural network, saving learning parameters and training time. Moreover, when we use the object detections obtained with YOLOv8 [20] (last row), the performance of FCGF6D further improves, outperforming Wu et al. by 3.5%, E2EK by 7.3%, and all the other methods by more than 10.0%. Note that detectors are prone to errors: when detections are wrong, the object pose will be wrong too. We can observe that the detector is more effective with Duck and Eggbox. The first is a particularly small object, therefore more likely to be occluded. The second undergoes frequent occlusions (other objects are on top of it in several images), thus making localisation difficult without a detector. To further understand the negative impact of the detector, we compute the percentage of poses which are wrong when we use detections and correct when we do not use detections. For Ape, Can and Glue, this percentage is 3.3%, 1.7%, and 5.1%, respectively. Please refer to the Supplementary Material for a comprehensive analysis of the detector impact.
Tab. 2 reports the results on YCBV [40] in ADD-S AUC compared with other RGBD-based methods. The row Prior indicates any additional priors used by each method. The default configuration of FCGF6D does not require any input prior and uses a single deep neural network for all the objects. FCGF6D outperforms recent competitors that do not use input priors: it outperforms FFB6D by 0.8% and PVN3D by 1.7%. E2EK [26] and Wu et al. [39] instead consider input priors in the form of object segmentation masks and object detections, respectively, and train a model for each object (DNNs row). When we use input priors in the form of detections, FCGF6D outperforms E2EK by 2.4% and slightly underperforms Wu et al. by \(-0.6\%\). We also observe that, thanks to the multi-scale representation provided by the UNet, we obtain good performance also on symmetric objects without the need of specific techniques to handle symmetry. Note that we employed detections in both Tabs. 1&2
\begin{table}
\begin{tabular}{c l l c c c c c c c c c c} \hline \hline Input & Method & DNNs & Prior & Ape & Can & Cat & Drill & Duck & Eggbox\({}^{\star}\) & Glue\({}^{\star}\) & Holepuncher & Avg \\ \hline
\multirow{2}{*}{RGB} & SO-Pose [6] & 1 & Det & 48.4 & 85.8 & 32.7 & 77.4 & 48.9 & 52.4 & 78.3 & 75.3 & 62.3 \\
 & ZebraPose [35] & 8 & Det & 57.9 & 95.0 & 60.6 & 94.8 & 64.5 & 70.9 & 88.7 & 83.0 & 76.9 \\ \hline
\multirow{8}{*}{RGBD} & PVN3D [13] & 1 & – & 33.9 & 88.6 & 39.1 & 78.4 & 41.9 & 80.9 & 68.1 & 74.7 & 63.2 \\
 & PR-GCN [42] & n.a. & Det & 40.2 & 76.2 & 57.0 & 82.3 & 30.0 & 68.2 & 67.0 & 97.2 & 65.0 \\
 & FFB6D [12] & 1 & – & 47.2 & 85.2 & 45.7 & 81.4 & 53.9 & 70.2 & 60.1 & 85.9 & 66.2 \\
 & DCL-Net [23] & n.a. & Det & 56.7 & 80.2 & 48.1 & 81.4 & 44.6 & 83.6 & 79.1 & 91.3 & 70.6 \\
 & E2EK [26] & 8 & Seg & 61.0 & 95.4 & 50.8 & 94.5 & 59.6 & 55.7 & 78.3 & 91.4 & 73.3 \\
 & Wu et al. [39] & 8 & Det & 66.1 & 97.4 & 70.7 & 95.4 & 70.1 & 61.2 & 59.8 & 95.7 & 77.1 \\
 & FCGF6D (ours) & 1 & – & 65.4 & 96.7 & 64.8 & 97.8 & 71.7 & 54.1 & 83.2 & 97.9 & 79.0 \\
 & FCGF6D (ours) & 1 & Det & 63.6 & 94.8 & 63.4 & 97.4 & 73.4 & 74.6 & 80.4 & 97.3 & **80.6** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of RGB and RGBD methods performance on LMO [1] evaluated in terms of ADD(S)-0.1d. Key: \({}^{\star}\): symmetric object, DNNs: number of Deep Neural Networks used, n.a.: information not available, Det: object detections are used as prior, Seg: object segmentation masks are used as prior, **bold**: best result, underline: second best result.
to illustrate their potential use in improving registration efficacy, though not obligatory. Specifically in Tab. 2, when we compare with methods based on the same assumptions as ours, FCGF6D achieves state-of-the-art performance, see comparison with PVN3D [13] and FFB6D [12]. When we compare with methods that use 21 models instead of 1 (as ours), we fall slightly behind the best (see comparison with E2EK [26] and Wu et al. [39]).
### Qualitative results
Fig. 4 shows some examples of successes and failures on the test set of the LMO dataset. The upper row shows the ground-truth poses, and the bottom one shows the poses predicted by our model. Note how FCGF6D is capable of estimating the correct pose even in the case of partial objects (e.g. the glue in the first image). However, our model fails in the case of partial objects with ambiguities (the duck in the second image), or of atypical occlusions (the eggbox in the second image: the training set does not contain this degree of occlusion).
Fig. 5 shows some examples of successes and failures on the test set of YCBV. FCGF6D appears prone to rotation errors (the large clamp in the first image), especially in case of partially occluded objects (the bleach cleanser in the second image). However, the poses are generally accurate.
### Ablation study
We conduct an ablation study on the Drill object of the LMO dataset by training FCGF6D for five epochs. We choose the closest setting to FCGF as baseline: no safety threshold in the loss, shared network weights, no RGB information, SGD optimiser with \(\text{lr}_{\text{init}}=10^{-3}\), exponential scheduler with \(\gamma=0.99\). We perform a single experiment for each added component to assess their individual contribution. As metrics, we use ADD(S), Relative Rotation Error (RRE), Relative Translational Error (RTE), and Feature Matching Recall (FMR) [4, 32]. RRE and RTE show how the two pose components (rotation and translation) are affected. FMR indirectly measures the number of iterations required by a registration algorithm, e.g. RANSAC, to estimate the transformation between two point clouds. We set
\begin{table}
\begin{tabular}{l c c c|c c c} \hline \hline Method & PVN3D & FFB6D & FCGF6D & E2EK & Wu et al. & FCGF6D \\ & [13] & [12] & (ours) & [26] & [39] & (ours) \\ \hline
DNNs & 1 & 1 & 1 & 1 & 21 & 1 \\
Prior & – & – & – & Seg & Det & Det \\ \hline
master chef can & 80.5 & 80.6 & 96.1 & 79.6 & 100.0 & 96.3 \\
cracker box & 94.8 & 94.6 & 96.4 & 95.1 & 98.8 & 96.7 \\
sugar box & 96.3 & 96.6 & 98.1 & 96.7 & 100.0 & 98.1 \\
tomato soup can & 88.5 & 89.6 & 93.1 & 89.8 & 97.5 & 95.8 \\
mustard bottle & 96.2 & 97.0 & 98.3 & 96.5 & 100.0 & 98.3 \\
tuna fish can & 89.3 & 88.9 & 82.8 & 90.7 & 99.9 & 97.6 \\
pudding box & 95.7 & 94.6 & 95.2 & 96.9 & 100.0 & 97.3 \\
gelatin box & 96.1 & 96.9 & 98.7 & 97.5 & 100.0 & 98.7 \\
potted meat can & 88.6 & 88.1 & 79.9 & 90.8 & 84.1 & 89.8 \\
banana & 93.7 & 94.9 & 98.3 & 94.4 & 100.0 & 98.3 \\
pitcher base & 96.5 & 96.9 & 97.9 & 95.6 & 100.0 & 97.9 \\
bleach cleanser & 93.2 & 94.8 & 95.9 & 94.0 & 99.9 & 96.7 \\
bowl\({}^{\star}\) & 90.2 & 96.3 & 97.3 & 96.0 & 94.5 & 98.2 \\
mug & 95.4 & 94.2 & 97.4 & 95.3 & 100.0 & 97.7 \\
power drill & 95.1 & 95.9 & 98.2 & 96.6 & 100.0 & 98.2 \\
wood block\({}^{\star}\) & 90.4 & 92.6 & 95.2 & 93.8 & 98.0 & 96.4 \\
scissors & 92.7 & 95.7 & 93.9 & 97.0 & 100.0 & 95.9 \\
large marker & 91.8 & 89.1 & 97.5 & 95.0 & 99.9 & 98.3 \\
large clamp\({}^{\star}\) & 93.6 & 96.8 & 80.6 & 97.2 & 91.1 & 93.8 \\
extra large clamp\({}^{\star}\) & 88.4 & 96.0 & 77.4 & 96.7 & 81.0 & 94.7 \\
foam brick\({}^{\star}\) & 96.8 & 97.3 & 94.6 & 97.2 & 99.8 & 97.6 \\ \hline
Avg & 91.8 & 92.7 & **93.5** & 94.4 & **97.4** & 96.8 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance of RGBD methods on YCBV [40] evaluated in ADD-S AUC. Key: \({}^{\star}\): symmetric object, DNNs: number of Deep Neural Networks used, Det: object detections are used as prior, Seg: object segmentation masks are used as prior, **bold**: best result, underline: second best result.
the inlier distance threshold as \(\tau_{1}=5\) voxels, and the inlier recall ratio as \(\tau_{2}=5\%\).
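For reference, one common way to compute FMR is sketched below: each object feature is matched to its nearest scene feature, a match counts as an inlier if the corresponding points (after the ground-truth alignment) are closer than \(\tau_{1}\), and a sample is recalled when its inlier ratio exceeds \(\tau_{2}\). The brute-force matching and the averaging over test samples are our assumptions.

```python
import numpy as np

def inlier_ratio(obj_feat, scn_feat, obj_xyz_gt, scn_xyz, tau1):
    """Fraction of nearest-neighbour feature matches that are geometric inliers."""
    d = np.linalg.norm(obj_feat[:, None, :] - scn_feat[None, :, :], axis=2)
    nn = d.argmin(axis=1)                                         # feature-space NN in the scene
    residual = np.linalg.norm(obj_xyz_gt - scn_xyz[nn], axis=1)   # distance after GT alignment
    return (residual < tau1).mean()

def fmr(samples, tau1, tau2):
    """samples: iterable of (obj_feat, scn_feat, obj_xyz_gt, scn_xyz) tuples, one per test sample."""
    return np.mean([inlier_ratio(*s, tau1) > tau2 for s in samples])
```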
Tab. 3 shows that the largest contributions in ADD(S)-0.1d are: introducing the safety threshold in the loss (+17.8), adding RGB information (+34.2), and adopting the Adam optimiser (+20.2). We also note that the gain in ADD(S)-0.1d is not always consistent with the FMR: when the colour augmentation is added, there is a gain in ADD(S)-0.1d of 2.1, but the FMR drops by 6.5. A more detailed analysis of FMR with different values of \(\tau_{1}\) and \(\tau_{2}\) is shown in Fig. 6.
### Training and inference time
The training time is about one week for each dataset using two NVIDIA A40 GPUs. Tab. 4 reports the comparison of the number of parameters, inference GPU memory footprint, and inference time (using a GeForce RTX 3050 GPU) on YCBV. We were unable to test E2EK [26] as the code is unavailable, whereas we used the authors' original code for the other papers. FCGF6D has a significantly smaller memory footprint than the main competitors, and the inference time is comparable. In a scenario where multiple objects are expected, our closest competitor [39] uses a different model for each object, thereby requiring more memory. Our method requires less memory because we train only a single model. Note that using the whole scene as input is advantageous in a practical scenario where \(N\) instances of the same object are present. Here, we need a single forward pass, followed by \(N\) registrations. Instead, methods that rely on image crops [42, 23, 39] require a forward pass for each instance.
## 6 Conclusions
We revisited the Fully Convolutional Geometric Feature (FCGF) approach to tackle the problem of object 6D pose estimation. FCGF uses sparse convolutions to learn point-wise features while optimising a hardest contrastive loss. Key modifications to the loss, input data representations, training strategies, and data augmentations to FCGF enabled us to outperform competitors on popular benchmarks. A thorough analysis is conducted to study the contribution of each modification to achieve state-of-the-art performance. Future research directions include the application of our approach to generalisable 6D pose estimation [36].
**Limitations.** Minkowski engine is computationally efficient but has a large memory footprint at training time. We mitigated this by downsampling the scene point cloud and by adopting quantisation. It would be interesting to understand how not to lose the input point cloud resolution while maintaining a modest memory footprint.
## Acknowledgements
We are grateful to Andrea Caraffa for his support with the computation of the detection priors and to Nicola Saljoughi for his contributions during the early stage of the project.
This work was supported by the European Union's Horizon Europe research and innovation programme under grant agreement No 101058589 (AI-PRISM), and by the PNRR project FAIR - Future AI Research (PE00000013), under the NRRP MUR program funded by the NextGenerationEU.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Method & DNNs & Params [M] & Memory [GB] & Time [ms] (inf.+reg.) \\ \hline PVN3D [13] & 1 & 38.6 & 3.17 & 417 (154 + 263) \\ FFB6D [12] & 1 & 33.8 & 2.46 & 285 (146 + 139) \\ Wu et al. [39] & \(N\) & 23.8\(\times\)\(N\) & 2.04\(\times\)\(N\) & 144 (143 + 1) \\ \hline Ours & 1 & 63.5 & 1.3 & 156 (118 + 38) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Inference time and memory footprint. Time is for a single image, and includes network inference (inf.) and registration (reg.) times. \(N\) is the number of trained models.
\begin{table}
\begin{tabular}{l r r r r r} \hline \hline Improvements & RRE \(\downarrow\) & RTE \(\downarrow\) & FMR \(\uparrow\) & ADD \(\uparrow\) & \(\Delta\) \\ \hline
Baseline & 2.2 & 9.6 & 0 & 0.2 & – \\
\(+\tau_{NS}=0.1D_{S}\) & 1.8 & 12.2 & 0 & 0.4 & +0.2 \\
\(+\tau_{NS}=0.1D_{O}\) & 1.1 & 5.3 & 0.2 & 18.2 & +17.8 \\ \hline
+ Independent weights & 1.2 & 3.7 & 0 & 29.1 & +10.9 \\
+ Add RGB information & 0.6 & 2.2 & 38.5 & 63.3 & +34.2 \\
+ Colour augmentation & 0.6 & 2.2 & 32.0 & 65.4 & +2.1 \\
+ Random erasing & 0.3 & 1.8 & 78.4 & 75.6 & +10.2 \\ \hline
+ SGD \(\rightarrow\) Adam & 0.1 & 1.1 & 93.4 & 95.8 & +20.2 \\
+ Adam \(\rightarrow\) AdamW & 0.1 & 0.9 & 93.9 & 96.4 & +0.6 \\
+ Exp \(\rightarrow\) Cosine & 0.1 & 0.9 & 93.6 & 96.6 & +0.2 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation study on the Drill object of LMO. Performance is compared in RRE [radians] and RTE [cm] errors, FMR and ADD(S)-0.1d (shortened to ADD) scores. \(\Delta\) shows the improvement of each contribution in terms of ADD(S)-0.1d with respect to the previous row.
Figure 6: Feature Matching Recall (FMR) as a function of \(\tau_{1}\) and \(\tau_{2}\). When varying \(\tau_{1}\) (top) we set \(\tau_{2}\)=5%, and when varying \(\tau_{2}\) (bottom) we set \(\tau_{1}\)=10 voxels. |
2308.05182 | Effect of long-range hopping on dynamic quantum phase transitions of an
exactly solvable free-fermion model: Nonanalyticities at almost all times | In this work, we investigate quenches in a free-fermion chain with long-range
hopping which decay with the distance with an exponent $\nu$ and has range $D$.
By exploring the exact solution of the model, we found that the dynamic free
energy is non-analytical, in the thermodynamic limit, whenever the sudden
quench crosses the equilibrium quantum critical point. We were able to
determine the non-analyticities of dynamic free energy $f(t)$ at some critical
times $t^{c}$ by solving nonlinear equations. We also show that the
Yang-Lee-Fisher (YLF) zeros cross the real-time axis at those critical times.
We found that the number of nontrivial critical times, $N_{s},$ depends on
$\nu$ and $D$. In particular, we show that for small $\nu$ and large $D$ the
dynamic free energy presents non-analyticities in any time interval $\Delta
t\sim1/D\ll1$, i.e., there are \emph{non-analyticities at almost all times}.
For the special case $\nu=0$, we obtain the critical times in terms of a simple
expression of the model parameters and also show that $f(t)$ is non-analytical
even for finite systems under anti-periodic boundary conditions, when we consider
some special values of quench parameters. We also show that, generically, the
first derivative of the dynamic free energy is discontinuous at the critical
time instant when the YLF zeros are non-degenerate. On the other hand, when
they become degenerate, all derivatives of $f(t)$ exist at the associated
critical instant. | J. C. Xavier, José A. Hoyos | 2023-08-09T18:37:04Z | http://arxiv.org/abs/2308.05182v3 | Effect of long-range interaction on dynamic quantum phase transitions of an exactly solvable free-fermion model: non-analyticities at almost all times
###### Abstract
In this work, we investigate quenches in a free-fermion chain with long-range hopping which decays with the distance with an exponent \(\nu\) and has range \(D\). By exploring the exact solution of the model, we found that the dynamic free energy is non-analytical, in the thermodynamic limit, whenever the sudden quench crosses the equilibrium quantum critical point. We were able to determine the non-analyticities of the dynamic free energy \(f(t)\) at some critical times \(t^{c}\) by solving nonlinear equations. We also show that the Yang-Lee-Fisher (YLF) zeros cross the real-time axis at those critical times. We found that the number of nontrivial critical times, \(N_{s}\), depends on \(\nu\) and \(D\). In particular, we show that for small \(\nu\) and large \(D\) the dynamic free energy presents non-analyticities in any time interval \(\Delta t\sim 1/D\ll 1\), i.e., there are _non-analyticities at almost all times_. For the special case \(\nu=0\), we obtain the critical times in terms of a simple expression of the model parameters and also show that \(f(t)\) is non-analytical even for finite systems under anti-periodic boundary conditions, when we consider some special values of quench parameters. We also show that, generically, the first derivative of the dynamic free energy is discontinuous at the critical time instant when the YLF zeros are non-degenerate. On the other hand, when they become degenerate, all derivatives of \(f(t)\) exist at the associated critical instant.
## I Introduction
Equilibrium phase transitions (PTs) have been studied in detail and observed in several compounds over the last two centuries [1; 2; 3]. Along the lines (or planes, or points) that separate the distinct phases, the thermodynamic functions are non-analytic. Due to this fact, systems present unusual physical properties close to these lines. In general, we cannot understand the phenomena close to the transition lines by a simple picture such as, for instance, the Fermi liquid. For this reason, this subject has been of great interest to several communities of physicists. The non-analyticity of the thermodynamic functions is encoded in the zeros of the partition function \(\mathcal{Z}\), the so-called Yang-Lee-Fisher (YLF) zeros [4; 5; 6]. In general, the zeros of \(\mathcal{Z}(q)=\mathrm{Tr}\left(e^{-qH}\right)\) occur at \(q=\beta+i\alpha\), where \(\beta=\frac{1}{k_{B}T}\) and \(\alpha\neq 0\). In the thermodynamic limit, however, these zeros can touch the real temperature axis, yielding non-analyticities of the Helmholtz free energy \(F=-k_{B}T\ln\left(\mathcal{Z}(\beta)\right)\). For a recent experimental verification of this phenomenon, see, for instance, Refs. [7; 8]. A similar equilibrium partition function that is also studied is the boundary partition function \(\mathcal{Z}^{b}(\beta)=\left\langle\psi^{b}\right|e^{-\beta H}\left|\psi^{b}\right\rangle\), i.e., the partition function ruled by the Hamiltonian \(H\) with boundaries described by the boundary state \(\left|\psi^{b}\right\rangle\) separated by \(\beta\) [9; 10; 11].
In recent years [12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27], the concept of YLF zeros has been applied to sudden quenches: a parameter \(\delta\) of a system Hamiltonian \(H\left(\delta\right)\) changes from \(\delta_{0}\rightarrow\delta\) at the time instant \(t=0\). Specifically, the dynamical analogue of the boundary partition function is the return probability amplitude \(Z(t)=\left\langle\psi_{0}\left|e^{-iH\left(\delta\right)t}\right|\psi_{0}\right\rangle\), where \(\left|\psi_{0}\right\rangle\) is the ground state of the Hamiltonian \(H\left(\delta_{0}\right)\). The dynamical analogue of the free energy is \(f(t)=-\frac{1}{N}\ln\left(\left|Z(t)\right|^{2}\right)\), where \(N\) is the number of degrees of freedom; it can also be a non-analytic function at some critical time \(t^{c}\). For a review and generalizations to other out-of-equilibrium scenarios see, e.g., Ref. [28].
The quantum quench protocol we consider here is the following: the system is prepared in the ground state of \(H(\delta_{0})\) and then is time-evolved according to \(H(\delta)\), where \(\delta\) is some tuning parameter of \(H\). The non-analytical behavior of \(f\) in time was called a dynamical quantum phase transition (DQPT) [12] and was recently observed in experiments [20; 23; 25]. It is important to mention that, by now, it is well established that there is no one-to-one correspondence between DQPTs and equilibrium phase transitions [13; 14; 16; 17; 18; 19; 22].
In optical lattice systems [18; 19; 25; 26; 29], which can be exploited to study DQPTs, long-range interactions are always present. In this context, the effect of the long-range interactions was investigated in the transverse-field Ising chain [18; 19; 24; 25; 26]. All those studies were done numerically since the long-range interaction breaks integrability. Although numerical results can give strong evidence of the DQPTs, those methods are limited. In particular, the studies based on exact diagonalization and/or matrix product states (MPS) are limited by the size of the system and/or by the bond dimension, as well as limited to short times. In principle and strictly according to the YLF zeros theory, the DQPTs manifest only in the thermodynamic limit. In this sense, a rigorous proof of the existence of a DQPT in the transverse-field Ising chain with long-range interaction is still missing. In this vein, it is highly desirable to have a deep understanding of the long-range interaction effects in the context of DQPTs through analytical results. Moreover, to our knowledge, there is no study of the effects of the long-range interactions in other models. Motivated by the aforementioned facts, we investigate DQPTs in an exactly solvable free-fermion model with long-range hoppings.
The paper is organized as follows: In Sec. II, we present the model and its exact diagonalization. Analytical expressions
for the dynamic free energy and the YLF zeros are determined in Sec. III together with numerical results. Our concluding remarks are given in Sec. IV.
## II The model
We consider a free fermion chain with long-range hoppings under twisted boundary condition given by the Hamiltonian
\[H(\delta)=\sum_{j=1}^{L}\frac{1+\left(-1\right)^{j}\delta}{2}\sum_{\ell=1}^{D}J_ {j,2\ell-1}\left(c_{j}^{\dagger}c_{j+2\ell-1}+\text{h.c.}\right). \tag{1}\]
We consider systems of \(L\) sites in which \(L\) is even. The hopping amplitude decays as \(J_{j,\ell}=J2^{\nu}(\ell+1)^{-\nu}f_{j+\ell}(\phi)\), where \(f_{\ell}(\phi)=1\) for \(1\leq\ell\leq L\), and \(f_{\ell}(\phi)=\exp\left(\phi\pi i\right)\) otherwise. Here, the constant \(J\) sets the energy (or inverse time) unit of the system (and, from now on, is set to \(J=1\)) and \(\phi\) defines the type of boundary condition: \(\phi=0\) means periodic boundary condition (PBC) and \(\phi=1\) means anti-periodic boundary condition (APBC). In all cases, \(c_{j+L}=c_{j}\). The exponent \(\nu\geq 0\) controls the decay of the hopping amplitude with the distance, \(D\) is the hopping range, and \(\delta\) is the dimerization parameter which tunes the system across an equilibrium quantum phase transition (QPT) at \(\delta=0\). For \(D=1\), this model recovers the dimerized chain with nearest-neighbor hopping, also known as Su-Schrieffer-Heeger (SSH) chain [30]. This model, for some particular choice of the parameters, was used to study symmetry-resolved entanglement entropy [31; 32]. This is an interesting model because it allows one to investigate the effects of long-range interactions and is amenable to be solved by free-fermion techniques.
The diagonalization of the above model can be done by the Fourier series. For the sake of completeness, we present the main steps below. First, we introduce the new fermionic operators \(\gamma_{q}\) and \(\eta_{q}\) by
\[c_{2j}=\sqrt{\frac{2}{L}}\sum_{q}e^{2iqj}\eta_{q},\text{ and }c_{2j-1}=\sqrt{\frac{2}{L}}\sum_{q}e^{iq(2j-1)}\gamma_{q}, \tag{2}\]
where the momenta are \(q=q_{n}=\frac{2\pi}{aL}\left(n-\frac{\phi}{2}\right)\), \(n=1,2,\ldots,L/2\), and, from now on, we set the lattice spacing to \(a=1\). In terms of \(\gamma_{q}\) and \(\eta_{q}\) the Hamiltonian is
\[H = \sum_{q}\left(\begin{array}{cc}\gamma_{q}^{\dagger}&\eta_{q}^{ \dagger}\end{array}\right)\left(\begin{array}{cc}0&C_{q}-i\delta S_{q}\\ C_{q}+i\delta S_{q}&0\end{array}\right)\left(\begin{array}{c}\gamma_{q}\\ \eta_{q}\end{array}\right), \tag{3}\] \[= \sum_{q}\omega_{q,\delta}\left(\alpha_{+,q,\delta}^{\dagger} \alpha_{+,q,\delta}-\alpha_{-,q,\delta}^{\dagger}\alpha_{-,q,\delta}\right),\]
where
\[C_{q}=C_{q}(\nu,D) = \sum_{\ell=1}^{D}\ell^{-\nu}\cos\left((2\ell-1)q\right), \tag{4}\] \[S_{q}=S_{q}(\nu,D) = \sum_{\ell=1}^{D}\ell^{-\nu}\sin\left((2\ell-1)q\right),\] (5) \[\omega_{q,\delta}=\omega_{q,\delta}(\nu,D) = \sqrt{C_{q}^{2}+\delta^{2}S_{q}^{2}}, \tag{6}\]
and
\[\alpha_{\pm,q,\delta}=\frac{1}{\sqrt{2}}\left(e^{i\theta_{q,\delta}}\gamma_{q }\pm e^{-i\theta_{q,\delta}}\eta_{q}\right) \tag{7}\]
are the eigen-operators associated to positive and negative branches of the dispersion relation \(\pm\omega_{q,\delta}(\nu,D)\). Here, \(\cos 2\theta_{q,\delta}=C_{q}/\omega_{q,\delta}\) and \(\sin 2\theta_{q,\delta}=\delta S_{q}/\omega_{q,\delta}\).1
Footnote 1: The quantities defined in Eqs. (4)–(7) depend on \(q\), \(\delta\), \(\nu\) and \(D\). To lighten the notation, only the dependence of \(q\) and \(\delta\) is kept in the subscript.
Finally, notice that
\[C_{\frac{\pi}{2}-q}=-C_{\frac{\pi}{2}+q},\text{ and }S_{\frac{\pi}{2}-q}=S_{ \frac{\pi}{2}+q}, \tag{8}\]
and that the ground state of \(H(\delta)\) is
\[\left|\psi_{0}(\delta)\right\rangle=\prod_{q}\alpha_{-,q,\delta}^{\dagger} \left|0\right\rangle, \tag{9}\]
where the product is over all \(q\)'s in Eq. (2).
It is worth mentioning that for some special values of \(\nu\) and \(D\), the functions \(C_{q}\) and \(S_{q}\) can also be written in terms of some well known functions, as depicted in Table 1. For \(\nu=\infty\) or \(D=1\), Eq. (6) recovers that of the nearest-neighbor hopping problem \(\omega_{q,\delta}=\sqrt{\cos^{2}(q)+\delta^{2}\sin^{2}(q)}\). The case \(\nu=0\) and \(D=L/4\) is very peculiar and presents some anomalous characteristics (see Appendix A): (i) The ground-state energy \(E_{0}\sim aL\text{log}\,L+bL\) is not extensive. (ii) Different boundary conditions lead to distinct behaviors. For PBC (APBC), the system is gapless (gapped) at half filling. In addition, the difference \(E_{0}^{\text{APBC}}-E_{0}^{\text{PBC}}\sim-L\text{log}\,L\).
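The quantities above are straightforward to evaluate numerically. The following minimal Python sketch (not part of the original work; the function names are ours and only numpy is assumed) implements Eqs. (4)–(6) and checks them against the \(\nu=0\) closed forms quoted in Appendix A.

```python
import numpy as np

def C_q(q, nu, D):
    """Eq. (4): C_q(nu, D) = sum_{l=1}^{D} l^{-nu} * cos((2l-1) q)."""
    l = np.arange(1, D + 1)
    return np.sum(l ** (-float(nu)) * np.cos((2 * l - 1) * q))

def S_q(q, nu, D):
    """Eq. (5): S_q(nu, D) = sum_{l=1}^{D} l^{-nu} * sin((2l-1) q)."""
    l = np.arange(1, D + 1)
    return np.sum(l ** (-float(nu)) * np.sin((2 * l - 1) * q))

def omega(q, delta, nu, D):
    """Eq. (6): dispersion omega_{q,delta}(nu, D)."""
    return np.sqrt(C_q(q, nu, D) ** 2 + delta ** 2 * S_q(q, nu, D) ** 2)

# Consistency check with the nu = 0 closed forms of Appendix A:
q, D = 0.3, 5
assert np.isclose(C_q(q, 0, D), np.sin(D * q) * np.cos(D * q) / np.sin(q))
assert np.isclose(S_q(q, 0, D), np.sin(D * q) ** 2 / np.sin(q))
```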
## III Results
### The dynamic free energy and the YLF zeros
As we already mentioned, in our quench protocol the system is initialized in \(\left|\psi_{0}(\delta_{0})\right\rangle\), the ground state of \(H(\delta_{0})\), and time-evolved according to \(H(\delta)\). Only \(\delta\) is changed in the sudden quench, \(\nu\) and \(D\) remain constants. The return probability amplitude \(Z(t)=\left\langle\psi_{0}(\delta_{0})\left|e^{-iH(\delta)t}\right|\psi_{0}( \delta_{0})\right\rangle\) can be evaluated following the same procedure of Ref. [27; 33]. For completeness, we present below the main steps.
To time-evolve \(\left|\psi_{0}(\delta_{0})\right\rangle\), we need the relation between the pre- and post-quench eigen-operators \(\alpha_{\pm,q,\delta_{0}}\) and \(\alpha_{\pm,q,\delta}\) [see Eq. (3)]. This task is simple, since the wavenumbers \(q\) in (2), and, therefore, \(\gamma_{q}\) and \(\eta_{q}\), do not depend on \(\delta\). Then, from Eq. (7), we find that
\[\alpha_{-,q,\delta_{0}}^{\dagger}=\cos\left(\Delta\theta_{q,\delta,\delta_{0}} \right)\alpha_{-,q,\delta}^{\dagger}+i\sin\left(\Delta\theta_{q,\delta,\delta_{0} }\right)\alpha_{+,q,\delta}^{\dagger}, \tag{10}\]
where \(\Delta\theta_{q,\delta,\delta_{0}}=\theta_{q,\delta}-\theta_{q,\delta_{0}}\). Therefore,
\[Z(t) = \left\langle 0\left|\prod_{q}\alpha_{-,q,\delta_{0}}e^{-iHt}\prod_{k} \alpha_{-,k,\delta_{0}}^{\dagger}\right|0\right\rangle \tag{11}\] \[= \prod_{q}\left[\cos\left(\omega_{q,\delta}t\right)+ig_{q,\delta, \delta_{0}}\sin\left(\omega_{q,\delta}t\right)\right],\]
where \(g_{q,\delta,\delta_{0}}=\frac{C_{q}^{2}+\delta\delta_{0}S_{q}^{2}}{\omega_{q,\delta}\omega_{q,\delta_{0}}}\), with \(\left|g_{q,\delta,\delta_{0}}\right|\leq 1\).
Finally, the dynamic free energy \(f(t)\equiv-L^{-1}\ln\left|Z(t)\right|^{2}\) is
\[f(t)=-\frac{1}{L}\sum_{q}\ln\left[\cos^{2}\left(\omega_{q,\delta}t\right)+g_{q, \delta,\delta_{0}}^{2}\sin^{2}\left(\omega_{q,\delta}t\right)\right], \tag{12}\]
and, in the thermodynamic limit, we can replace the sum by the integral
\[f(t)=-\int\limits_{0}^{\pi/2}\frac{dq}{\pi}\ln\left[\cos^{2}\left(\omega_{q, \delta}t\right)+g_{q,\delta,\delta_{0}}^{2}\sin^{2}\left(\omega_{q,\delta}t \right)\right], \tag{13}\]
where the properties (8) were used to shorten the integration limit.
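As an illustration only (not the authors' code), the finite-size sum of Eq. (12) can be evaluated with a few lines of Python, reusing the helpers C_q, S_q and omega from the sketch in Sec. II; the choice of quench parameters below mirrors Fig. 3(b).

```python
import numpy as np

def g_factor(q, delta, delta0, nu, D):
    """g_{q,delta,delta0} = (C_q^2 + delta*delta0*S_q^2) / (omega_{q,delta} * omega_{q,delta0})."""
    c, s = C_q(q, nu, D), S_q(q, nu, D)
    return (c ** 2 + delta * delta0 * s ** 2) / (omega(q, delta, nu, D) * omega(q, delta0, nu, D))

def dynamic_free_energy(t, L, delta, delta0, nu, D, phi=1):
    """Eq. (12); phi = 0 (PBC) or 1 (APBC) selects q_n = (2*pi/L)*(n - phi/2), n = 1,...,L/2."""
    qs = 2 * np.pi / L * (np.arange(1, L // 2 + 1) - phi / 2)
    total = 0.0
    for q in qs:
        w, g = omega(q, delta, nu, D), g_factor(q, delta, delta0, nu, D)
        total += np.log(np.cos(w * t) ** 2 + g ** 2 * np.sin(w * t) ** 2)
    return -total / L

# Quench delta0 = 0.5 -> delta = -0.5 with D = 4, nu = 0.5 (cf. Fig. 3):
ts = np.linspace(0.0, 2 * np.pi, 200)
f_vals = [dynamic_free_energy(t, L=2000, delta=-0.5, delta0=0.5, nu=0.5, D=4) for t in ts]
```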
Let \(\zeta_{n,m}=t_{n,m}+i\tau_{n}\) (or \(\zeta_{q_{n},m}=t_{q_{n},m}+i\tau_{q_{n}}\)) be the YLF zeros of \(Z\). From Eq. (11), it is simple to show that
\[\zeta_{q_{n},m}=\frac{\left(m-\frac{1}{2}\right)\pi+\frac{i}{2}\ln\left(\frac{ 1+g_{q_{n},\delta,\delta_{0}}}{1-g_{q_{n},\delta,\delta_{0}}}\right)}{\omega_ {q_{n},\delta}}, \tag{14}\]
where \(m\in\mathbb{N}_{+}\) labels the \(m\)th accumulation line of YLF zeros, and \(n=1,\ldots,\frac{L}{2}\) labels the \(n\)th wavenumber \(q_{n}\) in (2). Although there are \(\frac{L}{2}\) YLF zeros per accumulation line, not all of them are distinct because of (8). For \(\frac{L}{2}\) odd, there are \(\frac{1}{2}\left(\frac{L}{2}-1\right)\) distinct zeros (which are doubly degenerate), and one (for \(q=\pi\) for PBC and \(q=\frac{\pi}{2}\) for APBC) has \(\left|\tau_{q_{n}}\right|=\infty\). Thus, effectively there are \(\frac{1}{2}\left(\frac{L}{2}-1\right)\) zeros. For PBC and \(\frac{L}{2}\) even, there are \(\frac{1}{2}\left(\frac{L}{2}-2\right)\) distinct zeros (which are doubly degenerate), and \(2\) zeros (for \(q=\frac{\pi}{2}\) and \(\pi\)) with \(\left|\tau_{q_{n}}\right|=\infty\). For APBC and \(\frac{L}{2}\) even, there are \(\frac{L}{4}\) doubly degenerate distinct zeros.
The DQPTs occur whenever \(\tau_{q_{n}}=0\) and, thus, from Eq. (14), they can only happen if \(C_{q^{\prime}}^{2}+\delta\delta_{0}S_{q^{\prime}}^{2}=0\), i.e.,
\[T_{q^{\prime}}^{2}(\nu,D)\equiv\frac{S_{q^{\prime}}^{2}(\nu,D)}{C_{q^{\prime} }^{2}(\nu,D)}=-\frac{1}{\delta\delta_{0}}. \tag{15}\]
Notice the necessary condition \(\delta\delta_{0}<0\) which corresponds to the quench crossing the equilibrium QPT of the model at \(\delta_{\rm eq}^{c}=0\).2 Once the set \(\{q^{c}\}\) is determined from Eq. (15), the time instants of the DQPTs are simply \(t_{q^{c},m}^{c}=\frac{(2m-1)\pi}{2\omega_{q^{c},\delta}}\). Due to the properties (8), if \(q^{c}\) is a solution of (15), so is \(\pi-q^{c}\). In addition, they provide the same YLF zero since \(\omega_{q,\delta}=\omega_{\pi-q,\delta}\). Thus, it is sufficient to consider only the values of \(q\) in the domain \(\left[0,\frac{\pi}{2}\right]\) when solving for \(\{q^{c}\}\) in (15).
Footnote 2: \(\delta_{\rm eq}^{c}\) is determined by requiring \(\omega_{q^{c}}=0\) in Eq. (6). For \(\nu\neq 0\), \(S_{q}(\nu,D)\neq 0\)\(\forall q\), and thus, \(\delta_{\rm eq}^{c}=0\). Notice it does not depend on \(\nu\) which is quite different from the transverse-field Ising model with long-range interaction [18].
In general, Eq. (15) admits no solution for finite systems since \(\{q_{n}\}\) in Eq. (2) is a discrete set. Nonetheless, as reported in Appendix A, for some special values of \(\nu\), \(D\), \(\delta\) and \(\delta_{0}\), Eq. (15) admits solutions for finite systems and, thus, for a real-time instant \(t^{c}\), \(f(t^{c})\) is non-analytic even for finite \(L\). Non-analyticities in finite-size systems were also reported in Refs. [13; 15].
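In practice, the set \(\{q^{c}\}\) can be located numerically by scanning \(C_{q}^{2}+\delta\delta_{0}S_{q}^{2}\) for sign changes on \((0,\frac{\pi}{2})\). The sketch below (ours, for illustration; it reuses C_q, S_q and omega defined above and assumes scipy) does exactly this and returns the critical times; note that degenerate (double) roots are missed by a plain sign-change scan.

```python
import numpy as np
from scipy.optimize import brentq

def critical_wavenumbers(delta, delta0, nu, D, n_grid=20000):
    """Roots of Eq. (15), i.e. C_q^2 + delta*delta0*S_q^2 = 0, for q in (0, pi/2)."""
    h = lambda q: C_q(q, nu, D) ** 2 + delta * delta0 * S_q(q, nu, D) ** 2
    qs = np.linspace(1e-6, np.pi / 2 - 1e-6, n_grid)
    vals = np.array([h(q) for q in qs])
    return np.array([brentq(h, qs[i], qs[i + 1])
                     for i in range(n_grid - 1) if vals[i] * vals[i + 1] < 0])

def critical_times(delta, delta0, nu, D, m_max=3):
    """t^c_{k,m} = (2m - 1) * pi / (2 * omega_{q^c_k, delta}) for every root q^c_k."""
    qc = critical_wavenumbers(delta, delta0, nu, D)
    w = np.array([omega(q, delta, nu, D) for q in qc])
    return {m: (2 * m - 1) * np.pi / (2 * w) for m in range(1, m_max + 1)}

# D = 1 check: the single root must equal arctan(1/sqrt(-delta*delta0)).
qc = critical_wavenumbers(delta=-0.5, delta0=0.5, nu=1.0, D=1)
assert np.isclose(qc[0], np.arctan(1 / np.sqrt(0.25)))
```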
### The case of nearest-neighbor (\(D=1\)) and third-nearest-neighbor (\(D=2\)) hoppings
For completeness, we now briefly review the results for \(D=1\) and compare them with the case \(D=2\). It turns out that this comparison is very instructive to understand the case of generic \(D\).
For \(D=1\) and \(\delta\delta_{0}<0\), Eq. (15) gives a single solution \(q^{c}=\arctan\left(\frac{1}{\sqrt{-\delta\delta_{0}}}\right)\in\left[0,\frac{\pi}{2}\right]\) [see Fig. 1(a)]. This means that each accumulation line in (14) provides only one real-time instant \(t_{q^{c},m}=\left(m-\frac{1}{2}\right)\pi\sqrt{\frac{1-\delta\delta_{0}}{\delta(\delta-\delta_{0})}}\) in which the dynamic free energy is non-analytic in the thermodynamic limit [see Fig. 1(b)]. This non-analyticity is manifest as a cusp in \(f(t)\) (or a discontinuity in the dynamic internal energy \(u\equiv\frac{\partial f}{\partial t}\)) at \(t=\zeta_{q^{c},m}=t_{q^{c},m}\) [see Fig. 1(c)]. Note that for the case \(D=1\), the results do not depend on the value of \(\nu\).
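A quick numerical cross-check of these \(D=1\) expressions (using the omega helper defined in Sec. II; the parameter values below are arbitrary) is:

```python
import numpy as np

delta, delta0, m = -0.5, 0.5, 1
q_c = np.arctan(1 / np.sqrt(-delta * delta0))
t_c = (m - 0.5) * np.pi * np.sqrt((1 - delta * delta0) / (delta * (delta - delta0)))
# t_c must coincide with (2m - 1)*pi / (2*omega_{q^c, delta}); for D = 1 this holds for any nu.
assert np.isclose(t_c, (2 * m - 1) * np.pi / (2 * omega(q_c, delta, nu=1.0, D=1)))
```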
Precisely, the non-analyticity of the dynamic free energy can be quantified by analyzing the behavior of the YLF zeros near the real-time axis. From the Weierstrass factorization theorem [4; 5; 6], the singular part of the free energy due to the zero in the \(m\)th accumulation line is \(f_{\text{n-a}}=-L^{-1}\sum_{n}\ln\left(\zeta-\zeta_{q_{n},m}\right)+\text{c.c.}\). Here, c.c. stands for complex conjugate and accounts for the zeros of \(\bar{Z}\) (the complex conjugate of \(Z\)). In the thermodynamic limit, \(\zeta_{q_{n},m}\) in Eq. (14) can be expanded near \(q^{c}\) [see Fig. 1(a)]. Then, the real-time non-analyticity of the dynamic internal energy is quantified by
\[u_{\text{n-a}}=-\frac{1}{\pi}\int_{-\delta q}^{\delta q}\frac{d\tilde{q}}{ \Delta t-\left(A_{\mathcal{G},m}-iB_{q^{c}}\right)\tilde{q}}+\text{c.c.}, \tag{16}\]
where \(\Delta t=t-\zeta_{q^{c},m}=t-t_{q^{c},m}\), \(\tilde{q}=q-q^{c}\), \(\delta q\) is an unimportant positive constant, and
\[A_{q^{c},m}=\left.\frac{\partial t_{q,m}}{\partial q}\right|_{q=q^{c}}=-\frac{\delta_{0}(1-\delta^{2})}{(\delta-\delta_{0})\sqrt{-\delta\delta_{0}}}\,t_{q^{c},m},\qquad B_{q^{c}}=-\left.\frac{\partial\tau_{q}}{\partial q}\right|_{q=q^{c}}=2\left|\delta\right|\omega_{q^{c},\delta}^{-3}. \tag{17}\]
The non-analytical behavior of \(u_{\text{n-a}}\) appears when the pole of the integrand in Eq. (16) crosses the real-\(\tilde{q}\) axis, which happens precisely at \(\Delta t=0\); the result is a finite jump of the dynamic internal energy \(u\), i.e., a cusp of \(f(t)\), at \(t=t_{q^{c},m}\) [see Fig. 1(c)].
For \(D=2\) and sufficiently small \(\nu\), in contrast, Eq. (15) admits three solutions in the interval \(\left[0,\frac{\pi}{2}\right]\) when \(-\frac{1}{\delta\delta_{0}}\) is sufficiently large, and thus there are three critical time instants per accumulation line instead of one.
This is because \(T_{q}^{2}(\nu,D)\) has a local minimum at \(q_{\text{min}}=\frac{1}{2}\arccos\left(-\frac{4^{\nu}+3}{2^{\nu+2}}\right)\). Thus, each accumulation line of YLF zeros crosses the real-time axis at three different instants [see Fig. 2(b)]. The corresponding density of YLF zeros crossing the real-time axis is a constant. Hence, as in the case \(D=1\), the corresponding non-analyticities are cusps in \(f(t)\) at those time instants [see Figs. 2(c) and (d)].
However, it is not straightforward to anticipate the resulting singularity when the two additional YLF zeros become degenerate, i.e., when \(-\frac{1}{\delta\delta_{0}}=T_{q_{\text{min}}}^{2}(\nu,D)\). Following the same steps as in Eq. (16), the singular part of the dynamical internal energy around the time instant \(\zeta_{q_{\text{min}},m}=t_{q_{\text{min}},m}^{*}\) is
\[u_{\text{n-a}}=-\frac{1}{\pi}\int_{-\delta q}^{\delta q}\frac{d\tilde{q}}{\Delta t-\left(A_{q_{\text{min}},m}\tilde{q}-\frac{i}{2}C_{q_{\text{min}}}\tilde{q}^{2}\right)}+\text{c.c.} \tag{18}\]
where \(\Delta t=t-\zeta_{q_{\text{min}},m}=t-t_{q_{\text{min}},m}^{*}\), \(\tilde{q}=q-q_{\text{min}}\), \(A_{q_{\text{min}},m}=\left.\frac{\partial t_{q,m}}{\partial q}\right|_{q=q_{\text{min}}}\), \(C_{q_{\text{min}}}=-\left.\frac{\partial^{2}\tau_{q}}{\partial q^{2}}\right|_{q=q_{\text{min}}}\), and \(\delta q\), as before, is an unimportant positive constant. As for the case \(D=1\), the non-analytical behavior of \(u_{\text{n-a}}\) comes when a pole crosses the real-\(\tilde{q}\) axis. However, we now face the situation where the integrand of \(u_{\text{n-a}}\) has two poles. It is easy to see that one of the poles always remains far from the real-\(\tilde{q}\) axis and, thus, does not contribute to the non-analyticity. The other one does not cross the real-\(\tilde{q}\) axis either. It only touches it when \(\Delta t=0\). As a result, the limit of \(u_{\text{n-a}}(t)\) as \(t\to t_{q_{\text{min}},m}^{*}\) exists, i.e., \(\Delta u=\lim_{\Delta t\to 0^{+}}u_{\text{n-a}}-\lim_{\Delta t\to 0^{-}}u_{\text{n-a}}=0\). The same reasoning applies to all derivatives of \(u\). Finally, we conclude that although \(f\) is non-analytic at \(t_{q_{\text{min}},m}^{*}\), it is a smooth function (all derivatives exist) at that time instant [see Fig. 2(d)]. Nonetheless, we recall that this non-analyticity poses a numerical challenge in computing \(f\) and its derivatives at that time instant.
In analogy with Ehrenfest's classification of the order of the equilibrium phase transitions [34], we could classify the order of the DQPTs by the lowest derivative of the dynamic free energy that is discontinuous at the transition. With this classification in mind, we observe that when the YLF zeros are not degenerate, the DQPT is of first order. On the other hand, when the YLF zeros become degenerate, the DQPT is of infinite order. It is then tempting to state that this is the dynamic analogue of the Berezinskii-Kosterlitz-Thouless (BKT) transition of equilibrium systems. However, the BKT transition has a continuum of YLF zeros in one of the phases. Here, there is no continuous distribution of YLF zeros after or before the instant of non-analyticity \(t_{q_{\text{min}},m}^{*}\).
### Numerical results
As we show below, this phenomenon of two dynamical QPTs becoming degenerate (either by fine-tuning \(\delta\delta_{0}\) or \(\nu\)) and the associated cusps annihilating each other is a general feature for all other values of the hopping range \(D\).
We plot in Fig. 3(a) \(T_{q}^{2}(\nu,D)=\frac{S_{q}^{2}(\nu,D)}{C_{q}^{2}(\nu,D)}\) [see Eqs. (4) and (5)] for \(\nu=0.5\) and \(D=4\). Notice that \(T_{\nu,D}^{2}\) diverges for \(q\)'s such that \(C_{q}(\nu,D)=0\). When \(\nu\) is sufficiently small, \(T_{\nu,D}^{2}\) has \(D-1\) local minima in the domain \(q\in\left[0,\frac{\pi}{2}\right]\). This means that, for sufficiently large \(-\frac{1}{\delta\delta_{0}}\), there are \(2D-1\) solutions of Eq. (15). Let \(\left\{q_{k}^{c}\right\}\) be the set of solutions of Eq. (15) for generic values \(-\frac{1}{\delta\delta_{0}}>0\). Then, \(k\) runs from \(1\) to \(N_{s}\), where \(1\leq N_{s}\leq 2D-1\). The corresponding critical times are \(t_{k,m}^{c}=\frac{(2m-1)\pi}{2\omega_{q_{k}^{c},\delta}}\). As a representative example, we plot in Fig. 3(b) \(f(t)\) for \(\delta_{0}=-\delta=0.5\) and \(L=40\,000\). For these parameters, we have that \(N_{s}=3\) with \(t_{1,1}^{c}\approx 0.380\pi\), \(t_{1,2}^{c}\approx 0.954\pi\), \(t_{1,3}^{c}\approx 1.591\pi\), and \(t_{2,1}^{c}\approx 1.141\pi\). The corresponding non-analyticities are cusps. Evidently, these cusps become rounded for finite systems (see, for instance, the inset of Fig. 3(b)). However, for the case \(\nu=0\), non-analyticities occur even for finite systems (see Appendix A).
As previously argued, the number of minima in \(T_{q}^{2}\) is \(D-1\) for sufficiently small \(\nu\), yielding up to \(N_{s}=2D-1\) solutions of Eq. (15) (critical time instants per accumulation line). This number has to diminish when \(\nu\) increases, since \(N_{s}=1\) for \(\nu\to\infty\). This is clearly demonstrated in Fig. 3(a) for \(\nu=0.75\). Notice that, instead of only local minima, \(T_{q}^{2}\) develops local maxima for larger values of \(\nu\). This means that the number of critical time instants \(N_{s}\) per accumulation line [solutions of Eq. (15)] is a non-monotonic function of the quench parameters \(\delta\) and \(\delta_{0}\). This non-trivial behavior is demonstrated in Figs. 4(a) and (b)
Figure 3: (a) \(T_{\nu,D}^{2}\) vs. \(q\) for \(D=4\) and \(\nu=0.5\) and \(0.75\). (b) The dynamic free energy \(f(t)\) vs. \(t\) for a system of size \(L=40\,000\), \(D=4\), \(\nu=0.5\), and \(\delta_{0}=-\delta=0.5\) [meaning Eq. (15) has three solutions for \(0<q<\frac{\pi}{2}\)]. The arrows indicate the cusp positions, which are located at \(t_{k,m}^{c}\) (see text). Inset: \(f(t)\) for \(L=40\) and \(L=4000\).
where we plot \(N_{s}\) as a function of \(\nu\) and \(\delta\) for fixed \(\delta_{0}=1\) and \(D=4\) and \(40\). Notice that \(N_{s}\) always changes by \(\pm 2\), as these solutions always appear or disappear in pairs. At the transition lines, two solutions become degenerate. The resulting non-analyticity is a smooth one, as demonstrated for the case \(D=2\).
Having discussed the cases of large and small \(\nu\), and small \(D\), we now discuss the interesting case of small \(\nu\) and \(D\gg 1\). As we have argued, there can be \(2D-1\) solutions of Eq. (15). This means the existence of many critical time instants per accumulation line. More interestingly, it can be demonstrated that the largest critical time instant is of order unity and the smallest one is of order \(D^{-1}\) [see Fig. 4(d)]. As shown in Fig. 4(c), these time instants are somewhat evenly distributed in the interval \(\left[\sim D^{-1},\sim 1\right]\) (see more details in Appendix A). Intriguingly, this means that for large values of \(D\) the dynamic free energy \(f(t)\) will present a large number of non-analyticities in time. This is not only because the number of critical time instants is of order \(D\) per accumulation line: since many of those instants happen at \(t^{\mathrm{c}}\sim D^{-1}\), they "reappear" at short time scales in the other accumulation lines as well. As a result, \(f(t)\) has _non-analyticities at almost all times_ if the quantum quench crosses the transition, \(D\) is sufficiently large, and \(\nu\) is sufficiently small (see Fig. 5).
## IV Further discussions and conclusions
We studied the dynamic free energy \(f(t)\) of a free fermion chain with long-range hopping couplings, which is described by Eq. (1), focusing on its non-analyticities and the associated Yang-Lee-Fisher zeros.
For effectively short-range hoppings (small \(D\) or large \(\nu\)), the YLF zeros cross the real-time axis only at a few instants per accumulation line. In contrast, when the hoppings are sufficiently long ranged (large \(D\) and small \(\nu\)), the number of times the YLF zeros cross the real-time axis increases with \(D\), and the crossings are more or less evenly spread in the short time interval \(0<Jt\lesssim 1\), where \(J\) is the microscopic energy scale.
We point out that these many non-analyticities are different from other cases studied in the literature, where the YLF zeros accumulate in an area on the complex-time plane. This is the case for the Kitaev honeycomb model [35] and for disordered systems exhibiting dynamical Griffiths singularities [27]. In the thermodynamic limit, the infinitely many zeros crossing the real-time axis yield non-analyticities only at the edges of those distributions of zeros. Here, for the model Hamiltonian (1), the zeros do not become continuously distributed over an area on the complex-time plane. They remain distributed in lines that cross the real-time axis at many different time instants. Evidently, when the distance between these singularities decreases below the numerical or experimental resolution, they will appear as a smooth function of time, resembling the case of continuously distributed zeros over a time window.
We emphasize that the singularities are prominent only in sufficiently large systems (rigorously, only in the thermodynamic limit), especially when \(D\) is large and \(\nu\) small. Therefore, the observation of these many singularities in the current cold-atom platforms, where the system size is not too large,
may be a challenging task. Perhaps the best way to circumvent this obstacle is to consider the model with anti-periodic boundary conditions, \(D=L/4\), and \(\nu=0\) (see Appendix A). For this situation, the YLF zeros lie on the real-time axis even for finite systems. We note that anti-periodic boundary conditions can be realized by considering the one-dimensional chain with periodic boundary conditions threaded by a magnetic flux through the ring. For a particular choice of the magnetic flux, it is possible to map this model to one with anti-periodic boundary conditions (see, for instance, Refs. [36, 37, 38]).
To the best of our knowledge, long-range interaction effects in the context of DQPTs have only been studied for the transverse-field Ising chain [18, 19, 24, 25, 26]. Although the model studied here is different, the present work may shed light on what happens in other models. For instance, in the transverse-field Ising model anomalous cusps (associated with the emergence of new cusps) in the dynamic free energy were reported when \(\nu\lesssim 2.2\), at least for some quench parameters [18]. These cusps were denominated as anomalous simply because they are not equally spaced in time. As we have explicitly shown, new cusps not evenly separated in time appear for sufficiently long-range interactions (small \(\nu\)) in a non-trivial fashion (see Fig. 4), as predicted by Eq. (15). It is then desirable to understand Eq. (15) in a more fundamental way and/or generalize it to other systems, in particular, to non-integrable ones. To this end, we recast Eq. (15) in terms of general quantities and find that it is equivalent to \(\delta_{0}\omega_{q,\delta}^{2}+\delta\omega_{q,\delta_{0}}^{2}=0\). Thus, for lack of a better analogy, the number of YLF zeros (or cusps) equals the number of Fermi point pairs of this "weighted dispersion" with zero "chemical potential". While this is a simple fact for the model we studied, it would be desirable to verify it for other models. For the conventional nearest-neighbor transverse-field Ising chain, the analogous relation can be obtained by recasting the results of Ref. [12]: it is simply \(\omega_{q,\delta}^{2}+\omega_{q,\delta_{0}}^{2}=(g-g_{0})^{2}\), where the dispersion relation is \(\omega_{q,\delta}=\sqrt{\left(g-\cos q\right)^{2}+\sin^{2}q}\) and \(g=h/J\) is the ratio between the transverse field and the ferromagnetic coupling. Again, one needs to find the Fermi points of a weighted dispersion with chemical potential \(\left(g-g_{0}\right)^{2}\). We emphasize that, in both models, the YLF zeros are determined uniquely by the knowledge of the dispersion relation and of the pre- and post-quench parameters. It is certainly desirable to verify whether this remains true for other models.
Finally, we mention that the smaller the value of \(\nu\), the harder the numerical detection of the non-analyticities. In particular, the cusps become rounded if the system size is not sufficiently large [see Fig. 3], precluding their detection with exact diagonalization. On the other hand, powerful numerical techniques like the tDMRG or the MPS typically use a time step \(\Delta t\sim 0.01/J\) to evolve the initial state. Our results indicate that such a time step is not sufficiently small to detect the non-analyticities that appear already at short time scales when \(1/(DJ)<0.01/J\) (or \(D\gtrsim 100\)).
###### Acknowledgements.
This research was supported by the Brazilian agencies FAPEMIG, CNPq, and FAPESP. J.A.H. thanks IIT Madras for a visiting position under the IoE program which facilitated the completion of this research work.
## Appendix A The case \(\nu=0\)
In this appendix, we consider the special case that \(\nu=0\), where Eqs. (4) and (5) become
\[C_{q}=\frac{\sin\left(Dq\right)\cos\left(Dq\right)}{\sin q}\text{ and }S_{q}=\frac{\sin^{2}\left(Dq\right)}{\sin q}. \tag{18}\]
### Critical time instants
We need to solve Eq. (15) with the care of having \(\omega_{q}\neq 0\). Thus, we need to solve
\[\cos^{2}\left(Dq^{c}\right)+\delta_{0}\delta\sin^{2}\left(Dq^{c}\right)=0. \tag{19}\]
As we are interested in solutions in the interval \(q\in[0,\frac{\pi}{2}]\), then,
\[q_{k}^{c}=\frac{1}{D}\left((k-1)\pi+\arcsin\left(\frac{1}{\sqrt{1-\delta \delta_{0}}}\right)\right),\;k=1,...,\frac{D}{2}. \tag{20}\]
As we already mentioned in Sec. III, to solve Eq. (19) we need \(D\ll L\); otherwise, there are not enough \(q\)'s to satisfy this equation. Once we determine the critical values of \(q\) that satisfy Eq. (19), we obtain the critical times \(t_{k,m}^{c}=\frac{\left(2m-1\right)\pi}{2\omega_{q_{k}^{c},\delta}}\), \(m=1,2,\ldots\), which are given by
\[t_{k,m}^{c}=\left(m-\frac{1}{2}\right)\pi\frac{1-\delta\delta_{0}}{\sqrt{ \delta\left(\delta-\delta_{0}\right)}}\sin\left(q_{k}^{c}\right). \tag{21}\]
Figure 5: The dynamical free energy \(f\) as a function of time \(t\) in the case \(D=50\) and \(\nu=0\) for the quantum quench from \(\delta_{0}=-0.9\) to \(\delta=0.8\). Many non-analyticities appear already at short time scales.
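A short Python sketch (ours; only numpy is assumed) of Eqs. (20) and (21) makes the spreading of the critical times explicit for the quench of Fig. 5:

```python
import numpy as np

def qc_nu0(delta, delta0, D):
    """Eq. (20): q^c_k = ((k-1)*pi + arcsin(1/sqrt(1 - delta*delta0))) / D, k = 1,...,D/2."""
    k = np.arange(1, D // 2 + 1)
    return ((k - 1) * np.pi + np.arcsin(1 / np.sqrt(1 - delta * delta0))) / D

def tc_nu0(delta, delta0, D, m=1):
    """Eq. (21): t^c_{k,m} = (m - 1/2)*pi*(1 - delta*delta0)/sqrt(delta*(delta - delta0)) * sin(q^c_k)."""
    pref = (m - 0.5) * np.pi * (1 - delta * delta0) / np.sqrt(delta * (delta - delta0))
    return pref * np.sin(qc_nu0(delta, delta0, D))

# Quench of Fig. 5: delta0 = -0.9 -> delta = 0.8 and D = 50.
tc = tc_nu0(delta=0.8, delta0=-0.9, D=50, m=1)
print(tc.min(), tc.max())  # the smallest critical time is ~1/D, the largest of order unity
```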
#### a.2.1 The limit \(D\gg 1\)
In this limit, the first critical instants (\(k\ll D\)) of each accumulation line \(m\) become
\[t_{k,m}^{c}\approx\left(m-\frac{1}{2}\right)\pi\frac{1-\delta\delta_{0}}{\sqrt{ \delta(\delta-\delta_{0})}}\left(\frac{(k-1)\pi+\arcsin\left(\frac{1}{\sqrt{1- \delta\delta_{0}}}\right)}{D}\right). \tag{30}\]
Thus, they vanish \(\sim D^{-1}\).
#### a.2.2 The case \(D=L/4\) and \(\delta\delta_{0}=-1\)
When \(\delta\delta_{0}=-1\), Eq. (20) becomes \(q_{k}^{c}=\frac{\pi}{2D}\left(2k-\frac{3}{2}\right)\). The Fourier wavevectors in (2) are \(q_{n}=\frac{2\pi}{L}\left(n-\frac{\phi}{2}\right)\). Thus, interestingly, when \(D=L/4\) and the anti-periodic boundary condition is considered (\(\phi=1\)), all critical wavevectors \(q_{k}^{c}\) belong to the finite-size set of wavevectors (evidently, \(L\) is a multiple of \(4\)). The associated critical instants are
\[t_{k,m}^{c}=\left(2m-1\right)\pi\frac{1}{\sqrt{1+\delta^{2}}}\sin\left(q_{k}^ {c}\right). \tag{31}\]
Notice also that, because the zeros of \(Z\) are on the real-time axis even for finite systems, the dynamic free energy diverges at \(t_{k,m}^{c}\). Similar non-analyticities in finite systems were observed in other models [13; 15; 33]. We illustrate this peculiar behavior of \(f(t)\) in Fig. 6 for a quench where \(\delta=-\frac{1}{\delta_{0}}=2\), for \(L=16\) and \(L=1\,600\). The peaks are finite due to the finite time step we used (\(\sim 10^{-4}\)). Evidently, \(f\left(t\right)\) becomes analytic in the thermodynamic limit as there will be a continuous distribution of YLF zeros over the real-time axis.
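The lattice-matching property behind these finite-size divergences is easy to verify numerically; the snippet below (ours, illustrative only) checks that for \(\delta\delta_{0}=-1\), \(D=L/4\) and APBC every \(q_{k}^{c}\) is one of the allowed wavevectors.

```python
import numpy as np

L = 16
D = L // 4
# APBC lattice wavevectors: q_n = (2*pi/L) * (n - 1/2), n = 1,...,L/2.
q_lattice = 2 * np.pi / L * (np.arange(1, L // 2 + 1) - 0.5)
# Critical wavevectors for delta*delta0 = -1: q^c_k = (pi/(2D)) * (2k - 3/2), k = 1,...,D/2.
k = np.arange(1, D // 2 + 1)
q_crit = np.pi / (2 * D) * (2 * k - 1.5)
# Every q^c_k belongs to the APBC lattice, so the YLF zeros sit on the real-time axis at finite L.
assert np.isclose(q_crit[:, None], q_lattice[None, :]).any(axis=1).all()
```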
### Ground state energy for \(D=L/4\)
We now compute the ground state energy for systems with PBC (\(\phi=0\)) and APBC (\(\phi=1\)), \(\nu=0\), and \(D=L/4\). The dispersion (6) becomes
\[\omega_{q_{n},\delta}=\frac{\sqrt{\phi+\delta^{2}\left(1-\left(1-\phi\right)\left(-1\right)^{n}\right)^{2}}}{2\sin q_{n}}, \tag{32}\]
for \(n=1,...,\frac{L}{2}\), except for \(n=\frac{L}{2}\) and \(\phi=0\). Instead, in that case, \(\omega_{\pi,\delta}=\frac{L}{4}\). Notice that the system is gapless (gapped) for PBC (APBC), i.e., \(\phi=0\) (\(\phi=1\)), regardless of the value of the dimerization parameter \(\delta\). A similar situation appears in topological insulators (TIs). However, in the TIs the bulk is gapped under PBC and there are gapless boundary states for OBC. In the present model, we have gapless states in the bulk for the PBC case, and a gapped state for \(\phi\neq 0\). In Fig. 7(a), we illustrate the dispersion relation Eq. (32) for \(L=100\) and \(\delta=0.5\) for the model with PBC and APBC. It is interesting to note that, in the thermodynamic limit, the system with PBC has two degenerate flat bands.
The ground state energy \(E_{0}^{\text{APBC}}(\delta,\nu=0)\) for the system with APBC is
\[E_{0}^{\text{APBC}}(\delta,0)=-\frac{\sqrt{1+\delta^{2}}}{2}\sum_{n=1}^{\frac {L}{2}}\frac{1}{\sin q_{n}}. \tag{33}\]
We can replace the sum by an integral using the Euler-Maclaurin formula
\[\sum_{k=0}^{n}F(a+kh)=\frac{1}{h}\int_{a}^{b}F(q)dq+\frac{1}{2}\left(F(b)+F(a)\right)+R, \tag{34}\]
where \(R\) is the residual term. So,
Figure 6: The dynamic free energy \(f(t)\) vs. \(t/\pi\) for the case \(\nu=0\), \(D=L/4\), \(\delta=-\frac{1}{\delta_{0}}=-2\). (a) For a system size \(L=16\). The arrows indicate the critical time positions given by Eq. (31). (b) The same as (a) but \(L=1\,600\). Inset: zoom of the region close to \(t=0.06\pi\). |
2306.02341 | Epidemic models with varying infectivity on a refining spatial grid. I.
The SI model | We consider a space-time SI epidemic model with infection age-dependent
infectivity and non-local infections constructed on a grid of the torus
$\mathbb{T}^1 =(0, 1]^d$, where the individuals may migrate from node to
another. The migration processes in either of the two states are assumed to be
Markovian. We establish a functional law of large numbers by letting jointly
$N$ the initial approximate number of individuals on each node go to infinity
and $\varepsilon$ the mesh size of the grid go to zero. The limit is a system
of parabolic PDE/integral equations. The constraint on the speed of convergence
of the parameters $N$ and $\varepsilon$ is that $N\varepsilon^d \to \infty$ as
$(N, \varepsilon)\to (+\infty, 0)$. | Anicet Mougabe-Peurkor, Étienne Pardoux, Ténan Yeo | 2023-06-04T12:19:47Z | http://arxiv.org/abs/2306.02341v1 | # Epidemic models with varying infectivity on a refining spatial grid. I. The SI model
###### Abstract.
We consider a space-time SI epidemic model with infection age-dependent infectivity and non-local infections constructed on a grid of the torus \(\mathbb{T}^{d}=(0,1]^{d}\), where the individuals may migrate from one node to another. The migration processes in either of the two states are assumed to be Markovian. We establish a functional law of large numbers by letting jointly \(N\), the approximate initial number of individuals on each node, go to infinity and \(\varepsilon\), the mesh size of the grid, go to zero. The limit is a system of parabolic PDE/integral equations. The constraint on the speed of convergence of the parameters \(N\) and \(\varepsilon\) is that \(N\varepsilon^{d}\to\infty\) as \((N,\varepsilon)\to(+\infty,0)\).
Key words and phrases: epidemic model, varying infectivity, non-local infections, law of large numbers, integral equations, space-time.
## 1. Introduction
We consider an epidemic model on a refining grid of the \(d\) dimensional torus \(\mathbb{T}^{d}\). Like in the earlier work [8], the individuals move from one patch to its neighbors according to a random walk. The first novelty of this paper is that the infectivity of each individual is a random function, which evolves with the time elapsed since infection, as first considered in [6], and recently studied in [3] and [4]. The second novelty is that we allow infection of a susceptible individual by infectious individuals located in distinct patches, and we use a very general rate of infections.
There are two parameters in our model, \(N\), which is the order of the number of individuals in each patch, and \(\varepsilon\), which is the distance between two neighboring sites. The total number of patches is \(\varepsilon^{-d}\), and the total number of individuals in the model is \(N\varepsilon^{-d}\). Our goal is to study the limit of the renormalized stochastic finite population model as both \(N\to\infty\) and \(\varepsilon\to 0\). In this paper we obtain a convergence result in \(L^{\infty}\) under the restriction that \(N\varepsilon^{d}\to\infty\). In [8], the restriction was much weaker, thanks to clever martingale estimates due to Blount [2]. However, in contrast with the model in [8], our model is non-Markovian, and several of the fluctuating processes are not martingales. As a result, it does not seem possible to extend the techniques of [2] to the situation studied in the present paper.
There are three models in the present paper: the stochastic SDE model parametrized by the pair \((N,\varepsilon)\); the deterministic model, which is an ODE parametrized by \(\varepsilon\) on the patches (and is the LLN limit of the first model when \(N\to\infty\) with \(\varepsilon\) fixed); and the PDE model on the torus \(\mathbb{T}^{d}\), which is the limit of the ODE model as \(\varepsilon\to 0\). The convergence of the ODE model to the PDE model exploits standard arguments on semigroups and their approximation, based on results in [5]. The main new argument in the present paper consists in showing that the difference in \(L^{\infty}\) between the stochastic and the ODE models, which tends to zero as \(N\to\infty\) while \(\varepsilon\) is fixed according to [4], tends also to zero when \((N,\varepsilon)\to(+\infty,0)\), provided \(N\varepsilon^{d}\to\infty\).
In this paper, we consider the SI model, S for susceptible, I for infected. An infected individual has an age-of-infection dependent infectivity, which we suppose to vanish after some random time. It would be natural to decide that at that time the individual leaves the I compartment and enters the R compartment, R for recovered. For the sake of simplifying our model, we decide that after being infected, an individual remains in the I compartment forever. This does not affect the evolution of the epidemic, since when its infectivity remains zero, an individual no longer contributes to the propagation of the illness, exactly like an individual in the R compartment of an SIR model. However, there are two drawbacks of the present model. First, we do not follow the evolution of the number of infectious individuals, since we have, so to speak, merged the I and the R compartments. Second, while we distinguish the rates of movement of the S type and the I type individuals, we do not distinguish that rate between the infectious and the recovered individuals. The reason for studying the SI model separately is that, in our "varying infectivity" model, the techniques which we use in the SI case for proving the convergence as \(\varepsilon\to 0\) of the ODE model to the PDE model will not be available in the SIR case. One is forced to use different techniques. We will study the extension of the present results to the SIR model in a future publication. But our conviction is that it is worthwhile to present the results in the SI case, due to the possibility in this case of using classical semigroup techniques.
Let us finally comment on the assumptions on the age of infection dependent infectivity. We assume that to each individual who gets infected is attached a random infectivity function, the functions attached to the various individuals being i.i.d., all having the law of a random function \(\lambda\) (the law is different for the initially infected individuals). In this paper, as in [4], we only assume that \(\lambda\) belongs a.s. to the Skorohod space of càdlàg functions \(\mathbf{D}\), and satisfies \(0\leq\lambda(t)\leq\lambda^{*}\), for some \(\lambda^{*}>0\). This is weaker than the assumptions made in [3]. The proof in [4] is quite different from the proof in [3]. Here we use a proof similar to that in [3]. The limitation is that we obtain only the pointwise convergence of the renormalised total infectivity function, while we obtain uniform-in-\(t\) convergence of the proportions of susceptible and infected individuals. We believe that this proof is interesting, due to its simplicity.
Note that there is some literature on similar models, but mainly without movements of the various individuals, see in particular [1] for a SIS Markov model, and [9] for a SIR varying infectivity model. Our previous publication [8] treats a Markov SIR model with movements and only local infections. The paper is organized as follows. We describe our model in detail in section 2, in particular the complex form of the rate of infection. In section 3, we state the law of large numbers limit as \(N\to\infty\), with \(\varepsilon\) fixed, referring to [4] for the proof. In section 4, we take the limit as \(\varepsilon\to 0\) in the ODE model. Finally, in section 5, we study the difference between the stochastic and the ODE model, as \((N,\varepsilon)\to(+\infty,0)\), and conclude our main result.
## 2. Model description
We consider a total population size \(N\varepsilon^{-d}\) initially distributed on the \(\varepsilon^{-d}\) nodes of a refining spatial grid \(\mathrm{D}_{\varepsilon}:=[0,1)^{d}\cap\varepsilon\mathbb{Z}^{d}\), in which an infection is introduced. Here \(\varepsilon\) is the mesh size of the grid (we assume that \(\varepsilon^{-1}\in\mathbb{N}\backslash\{0\}\)). We focus our attention on periodic boundary conditions on the hypercube \([0,1]^{d}\), that is, our domain is the torus \(\mathbb{T}^{d}:=[0,1]^{d}\). Our results can be extended to a bounded domain of \(\mathbb{R}^{d}\) with smooth boundary, and Neumann boundary conditions.
### Set-up and notations
We split the population into two subsets \(S^{N,\varepsilon}\) and \(I^{N,\varepsilon}\). \(S^{N,\varepsilon}\) stands for the susceptible individuals, who do not have the disease and who can get infected, while \(I^{N,\varepsilon}\) refers to the subset of those individuals who are suffering from the illness or have recovered from it.
We shall denote by \(x_{\varepsilon}\) the nodes of the grid \(\mathrm{D}_{\varepsilon}\). \(S^{N,\varepsilon}(t,x_{\varepsilon})\) denotes the number of susceptible
individuals at site \(x_{\varepsilon}\) at time \(t\). Let \(B^{N,\varepsilon}(t,x_{\varepsilon})\) be the total number of individuals at site \(x_{\varepsilon}\) at time \(t\), i.e. \(B^{N,\varepsilon}(t,x_{\varepsilon}):=S^{N,\varepsilon}(t,x_{\varepsilon})+I^{ N,\varepsilon}(t,x_{\varepsilon})\). We define \(S^{N,\varepsilon}(t)\) (resp. \(I^{N,\varepsilon}(t)\)) as the total number of susceptible individuals (resp. infected individuals) at time \(t\) in the whole population, that is:
\[S^{N,\varepsilon}(t):=\sum_{x_{\varepsilon}}S^{N,\varepsilon}(t,x_{\varepsilon }),\quad\text{and}\;\;I^{N,\varepsilon}(t):=\sum_{x_{\varepsilon}}I^{N, \varepsilon}(t,x_{\varepsilon})\,,\;\;\forall t\geq 0.\]
We have \(B^{N,\varepsilon}(t):=\sum_{x_{\varepsilon}}B^{N,\varepsilon}(t)=N \varepsilon^{-d}\,,\;\;\forall t\geq 0\).
To each individual \(j\) is attached a random infection-age dependent infectivity process \(\{\lambda_{-j}(t)\,:t\geq 0\}\) or \(\{\lambda_{j}(t)\,:t\geq 0\}\). \(\lambda_{-j}(t)\) is the infectivity at time \(t\) of the \(j\)-th initially infected individual. The initially susceptible individual \(j\) who is infected at a random time \(\tau_{j}^{N,\varepsilon}\), has at time \(t\) the infectivity \(\lambda_{j}(t-\tau_{j}^{N,\varepsilon})\), i.e. \(\lambda_{j}(t)\) is the infectivity at time \(t\) after its time of infection of the \(j\)-th initially susceptible individual. We assume that \(\lambda_{j}=0\) on \(\mathbb{R}_{-}\) and that \(\{\lambda_{-j}\,:j\geq 1\}\) and \(\{\lambda_{j}\,:j\geq 1\}\) are two mutually independent sequences of i.i.d \(\mathbb{R}_{+}\)-valued random functions.
We define the infected periods of the newly and initially infected individuals (\(j>0\) and \(j<0\), respectively) by the random variables
\[\eta_{j}:=\sup\{t>0:\;\lambda_{j}(t)>0\},\;j\in\mathbb{Z}\backslash\{0\}.\]
We define \(F(t):=\mathbb{P}\left(\eta_{1}\leq t\right)\) and \(F_{0}(t):=\mathbb{P}\left(\eta_{-1}\leq t\right)\), the distribution functions of \(\eta_{j}\) for \(j\geq 1\) and for \(j\leq-1\), respectively. Let \(F^{c}(t):=1-F(t)\) and \(F^{c}_{0}(t):=1-F_{0}(t).\) We moreover define
\[\overline{\lambda}(t):=\mathbb{E}\left[\lambda_{1}(t)\right]\;\text{ and }\overline{\lambda}_{0}(t):=\mathbb{E}\left[\lambda_{-1}(t)\right].\]
Note that, under the i.i.d. assumption of the random variables \(\{\lambda_{j}(.)\}_{j\geq 1}\), the sequence of random variables \(\{\eta_{j}\}_{j\geq 1}\) is i.i.d. Also, the sequence of random variables \(\{\eta_{j}\}_{j\leq-1}\) is i.i.d.
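To fix ideas, here is a toy Monte Carlo illustration (ours; the exponential duration and uniform level are purely illustrative assumptions, since the paper only requires \(\lambda\) to be càdlàg and bounded by \(\lambda^{*}\)) of how \(\bar{\lambda}(t)\) and \(F(t)\) can be estimated from i.i.d. samples of the infectivity function.

```python
import numpy as np

rng = np.random.default_rng(0)
lam_star = 1.0  # upper bound lambda^* on the infectivity

def sample_lambda():
    """Toy infectivity: a constant level on [0, eta), zero afterwards (illustrative choice)."""
    eta = rng.exponential(1.0)           # infected period eta_j
    level = rng.uniform(0.0, lam_star)   # infectivity while infectious
    return (lambda t: np.where((t >= 0) & (t < eta), level, 0.0)), eta

# Monte Carlo estimates of bar-lambda(t) = E[lambda_1(t)] and F(t) = P(eta_1 <= t).
ts = np.linspace(0.0, 5.0, 101)
n_samples = 10_000
lam_bar, F = np.zeros_like(ts), np.zeros_like(ts)
for _ in range(n_samples):
    lam, eta = sample_lambda()
    lam_bar += lam(ts) / n_samples
    F += (eta <= ts) / n_samples
```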
We assume that susceptible individuals move from patch to patch according to a time-homogeneous Markov process \(X(t)\) with jump rates \(\nu_{S}/\varepsilon^{2}\) and transition function
\[p_{\varepsilon}^{x_{\varepsilon},y_{\varepsilon}}(s,t)=\mathbb{P}\left(X(t)=y _{\varepsilon}|X(s)=x_{\varepsilon}\right),\]
and while infectious individuals move from patch to patch according to a time-homogeneous Markov process \(Y(t)\) with jump rates \(\nu_{I}/\varepsilon^{2}\) and transition function
\[q_{\varepsilon}^{x_{\varepsilon},y_{\varepsilon}}(s,t)=\mathbb{P}\left(Y(t)=y _{\varepsilon}|Y(s)=x_{\varepsilon}\right).\]
\(\nu_{S}\) and \(\nu_{I}\) are positive diffusion coefficients for the susceptible and infected subpopulations, respectively. We assume that those movements of the various individuals are mutually independent.
In addition, we use \(X_{j}^{s,x_{\varepsilon}}(t)\) (resp. \(Y_{j}^{s,x_{\varepsilon}}(t)\)) to denote the position at time \(t\) of the individual \(j\) if it is susceptible (resp. infected) during the time interval \((s,t)\), and was in location/node \(x_{\varepsilon}\) at time \(s\).
For all \(x_{\varepsilon}\in\mathrm{D}_{\varepsilon}\), let \(V_{\varepsilon}(x_{\varepsilon})\) be the cube centered at the site \(x_{\varepsilon}\) with volume \(\varepsilon^{d}\). Let \(\mathrm{H}^{\varepsilon}\subset L^{2}(\mathbb{T}^{d})\) denote the space of real valued step functions that are constant on each cell \(V_{\varepsilon}(x_{\varepsilon})\).
\(\Delta_{\varepsilon}\) is the discrete Laplace operator defined as follows
\[\Delta_{\varepsilon}f(x_{\varepsilon})=\sum_{i=1}^{d}\varepsilon^{-2}\big{[}f(x_{\varepsilon}+\varepsilon e_{i})-2f(x_{\varepsilon})+f(x_{\varepsilon}-\varepsilon e_{i})\big{]},\;\;f\in\mathrm{H}^{\varepsilon}\]
and we define the operators \(\Delta_{\varepsilon}^{S}f:=\nu_{S}\Delta_{\varepsilon}f\) and \(\Delta_{\varepsilon}^{I}f:=\nu_{I}\Delta_{\varepsilon}f\), \(f\in\mathrm{H}^{\varepsilon}\).
\(\Delta\) denotes the \(d\)-dimensional Laplace operator. Let \(T_{S,\varepsilon}\) (resp. \(T_{I,\varepsilon}\)) be the semigroup acting on \(\mathrm{H}^{\varepsilon}\) generated by \(\nu_{S}\Delta_{\varepsilon}\) (resp. \(\nu_{I}\Delta_{\varepsilon}\)). Similarly, we denote by \(T_{S}\) (resp. \(T_{I}\)) the semigroup acting on \(L^{2}(\mathbb{T}^{d})\) generated by \(\nu_{S}\Delta\) (resp. \(\nu_{I}\Delta\)).
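For concreteness, a minimal sketch (ours; \(d=1\) only, scipy assumed) of the discrete Laplacian on the periodic grid and of the semigroup \(T_{S,\varepsilon}(t)=e^{t\nu_{S}\Delta_{\varepsilon}}\) is given below; the conservation check reflects the fact that \(\Delta_{\varepsilon}\) generates a random walk.

```python
import numpy as np
from scipy.linalg import expm

def discrete_laplacian_1d(eps):
    """(Delta_eps f)(x) = eps^{-2} * [f(x + eps) - 2 f(x) + f(x - eps)] on the periodic grid."""
    n = int(round(1 / eps))
    lap = np.zeros((n, n))
    for i in range(n):
        lap[i, i] = -2.0
        lap[i, (i + 1) % n] = 1.0
        lap[i, (i - 1) % n] = 1.0
    return lap / eps ** 2

eps, nu_S, t = 1 / 32, 0.5, 0.1
T_S = expm(t * nu_S * discrete_laplacian_1d(eps))   # matrix of the semigroup T_{S,eps}(t)
# The rows of the generator sum to zero, so the semigroup preserves constant functions.
assert np.allclose(T_S @ np.ones(32), np.ones(32))
```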
### Model formulation
All random variables and processes are defined on a common complete probability space \((\Omega,\mathcal{F},\mathbb{P})\). We consider a SI epidemic model where each infectious individual has an infectivity that is randomly varying with the time elapsed since infection. We assume that a susceptible individual in patch \(x_{\varepsilon}\) has contacts with infectious individuals of patch \(y_{\varepsilon}\) at rate \(\beta_{\varepsilon}^{x_{\varepsilon},y_{\varepsilon}}(t)\) at time \(t\).
Given a site \(x_{\varepsilon}\), the total force of infection at each time \(t\) at site \(x_{\varepsilon}\) is the aggregate infectivity of all the individuals that are currently infectious in this site:
\[\mathfrak{F}^{N,\varepsilon}(t,x_{\varepsilon}) =\sum_{j=1}^{I^{N,\varepsilon}(0)}\lambda_{-j}(t)\mathds{1}_{Y_{j}(t)=x_{\varepsilon}}\] \[+\sum_{y_{\varepsilon}}\int_{0}^{t}\int_{0}^{\infty}\int_{\mathbf{D}}\!\!\int_{\mathbf{D}}\lambda(t-s)\mathds{1}_{u\leq S^{N,\varepsilon}(s^{-},y_{\varepsilon})\,\overline{\mathds{1}}^{N,\varepsilon}(s^{-},y_{\varepsilon})}\,\mathds{1}_{Y^{s,y_{\varepsilon}}(t)=x_{\varepsilon}}\,Q^{y_{\varepsilon}}(ds,du,d\lambda,dY),\]
where
\[\overline{\mathds{1}}^{N,\varepsilon}(t,y_{\varepsilon}):=\frac{1}{N^{1- \gamma}[B^{N,\varepsilon}(t,y_{\varepsilon})]^{\gamma}}\sum_{x_{\varepsilon }}\beta_{\varepsilon}^{y_{\varepsilon},x_{\varepsilon}}(t)\mathfrak{F}^{N, \varepsilon}(t,x_{\varepsilon})\]
is the force of infection exerted on each susceptible individual in patch \(y_{\varepsilon}\), and \(\{Q^{y_{\varepsilon}},y_{\varepsilon}\in D_{\varepsilon}\}\) are i.i.d. standard Poisson random measures (PRM) on \(\mathbb{R}_{+}^{2}\times\mathbf{D}^{2}\) with intensity \(ds\otimes du\otimes d\mathbb{P}_{\lambda}\otimes d\mathbb{P}_{Y}\). \(\mathbf{D}\) denotes the space of càdlàg paths from \(\mathbb{R}_{+}\) into \(\mathbb{R}_{+}\), which we equip with the Skorohod topology. We assume that \(\gamma\in[0,1]\). By an abuse of notation, we denote by \(Q^{x_{\varepsilon}}(ds,du)\) the projection of \(Q^{x_{\varepsilon}}(ds,du,d\lambda,dY)\) on the first two coordinates. With \(\Upsilon^{N,\varepsilon}(t,x_{\varepsilon}):=S^{N,\varepsilon}(t,x_{\varepsilon})\,\overline{\mathds{1}}^{N,\varepsilon}(t,x_{\varepsilon})\), let
\[A^{N,\varepsilon}(t,x_{\varepsilon}):=\int_{0}^{t}\int_{0}^{\infty}\mathds{1} _{u\leq\Upsilon^{N,\varepsilon}(s^{-},x_{\varepsilon})}Q^{x_{\varepsilon}}(ds,du).\]
In what follows, \(x_{\varepsilon}\sim y_{\varepsilon}\) means that the nodes \(x_{\varepsilon}\) and \(y_{\varepsilon}\) are neighbors (each point of \(\mathrm{D}_{\varepsilon}\) has \(2d\) neighbors).
The epidemic dynamic of the model can be described by the following equations
\[S^{N,\varepsilon}(t,x_{\varepsilon}) =S^{N,\varepsilon}(0,x_{\varepsilon})-A^{N,\varepsilon}(t,x_{ \varepsilon})-\sum_{y_{\varepsilon}\sim x_{\varepsilon}}P_{S}^{x_{\varepsilon },y_{\varepsilon}}\left(\int_{0}^{t}\frac{\nu_{S}}{\varepsilon^{2}}S^{N, \varepsilon}(s,x_{\varepsilon})ds\right)+\sum_{y_{\varepsilon}\sim x_{ \varepsilon}}P_{S}^{y_{\varepsilon},x_{\varepsilon}}\left(\int_{0}^{t}\frac{ \nu_{S}}{\varepsilon^{2}}S^{N,\varepsilon}(s,y_{\varepsilon})ds\right) \tag{2.1}\] \[I^{N,\varepsilon}(t,x_{\varepsilon}) =I^{N,\varepsilon}(0,x_{\varepsilon})+A^{N,\varepsilon}(t,x_{ \varepsilon})-\sum_{y_{\varepsilon}\sim x_{\varepsilon}}P_{I}^{x_{\varepsilon },y_{\varepsilon}}\left(\int_{0}^{t}\frac{\nu_{I}}{\varepsilon^{2}}I^{N, \varepsilon}(s,x_{\varepsilon})ds\right)+\sum_{y_{\varepsilon}\sim x_{ \varepsilon}}P_{I}^{y_{\varepsilon},x_{\varepsilon}}\left(\int_{0}^{t}\frac{ \nu_{I}}{\varepsilon^{2}}I^{N,\varepsilon}(s,y_{\varepsilon})ds\right),\]
where \(P_{S}^{x_{\varepsilon},y_{\varepsilon}}\), \(P_{I}^{x_{\varepsilon},y_{\varepsilon}}\), \(x_{\varepsilon}\,,y_{\varepsilon}\in\mathrm{D}_{\varepsilon}\) are mutually independent standard Poisson processes.
In the above equations \(P_{S}^{x_{\varepsilon},y_{\varepsilon}}\) (resp. \(P_{I}^{x_{\varepsilon},y_{\varepsilon}}\)) is the counting process of susceptible (resp. infected) individuals that migrate from the patch \(x_{\varepsilon}\) to \(y_{\varepsilon}\).
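To make the structure of the infection mechanism concrete, here is a small illustrative computation (ours; not the authors' construction, and the toy infectivity functions are arbitrary) of the aggregate force of infection \(\mathfrak{F}^{N,\varepsilon}(t,\cdot)\) and of the patchwise infection rate \(\Upsilon^{N,\varepsilon}(t,\cdot)=S^{N,\varepsilon}(t,\cdot)\,\overline{\mathds{1}}^{N,\varepsilon}(t,\cdot)\) that drives \(A^{N,\varepsilon}\).

```python
import numpy as np

def total_force(t, infected, n_patches):
    """Aggregate infectivity per patch at time t; 'infected' lists (patch, tau, lam),
    with tau the infection time and lam the individual infectivity function."""
    F = np.zeros(n_patches)
    for patch, tau, lam in infected:
        F[patch] += lam(t - tau)
    return F

def infection_rates(S, B, F, beta, N, gamma):
    """Upsilon(t, x) = S(t, x) * (1/(N^{1-gamma} B(t, x)^gamma)) * sum_y beta[x, y] * F(t, y)."""
    per_susceptible = (beta @ F) / (N ** (1 - gamma) * B ** gamma)
    return S * per_susceptible

# Toy example on 4 patches.
n_patches, N, gamma = 4, 100, 1.0
S = np.array([80.0, 90.0, 100.0, 70.0])
I = np.array([20.0, 10.0, 0.0, 30.0])
beta = np.full((n_patches, n_patches), 0.2)          # contact kernel beta_eps^{x,y}
infected = [(0, 0.0, lambda u: np.exp(-u) * (u >= 0)),
            (3, 0.5, lambda u: 0.5 * ((u >= 0) & (u < 2.0)))]
F = total_force(1.0, infected, n_patches)
rates = infection_rates(S, S + I, F, beta, N, gamma)  # Upsilon at time t = 1
```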
In the sequel of this paper we may use the same notation for different constants (we use the generic notations \(c\), \(C\) for positive constants). These constants can depend upon some parameters of the model, as long as these are independent of \(\varepsilon\) and \(N\), and we will not necessarily mention this dependence explicitly. The exact value may change from line to line.
## 3. Law of large numbers as \(N\to\infty\), \(\varepsilon\) being fixed
We consider the renormalized model by dividing the number of individuals in each compartment and at each patch by \(N\). Hence, we define
\[\overline{S}^{N,\varepsilon}(t,x_{\varepsilon}):=\frac{1}{N}S^{N,\varepsilon}( t,x_{\varepsilon}),\quad\overline{I}^{N,\varepsilon}(t,x_{\varepsilon}):=\frac{1}{N}I^{N, \varepsilon}(t,x_{\varepsilon}),\text{ and }\overline{\mathfrak{F}}^{N,\varepsilon}(t,x_{ \varepsilon}):=\frac{1}{N}\mathfrak{F}^{N,\varepsilon}(t,x_{\varepsilon}).\]
**Assumption 3.1**: _We make the following assumptions on the initial conditions. We assume that:_
**(i)**: _there exists a collection of positive numbers_ \(\{\,\overline{S}^{\varepsilon}(0,x_{\varepsilon}),\,\overline{I}^{\varepsilon}(0,x _{\varepsilon}),\;x_{\varepsilon}\in\mathrm{D}_{\varepsilon}\;\}\) _such that_
\[\sum_{x_{\varepsilon}}\left[\overline{S}^{\varepsilon}(0,x_{\varepsilon})+ \overline{I}^{\varepsilon}(0,x_{\varepsilon})\right]=\varepsilon^{-d}\;,\]
_and_ \(\left|S^{N,\varepsilon}(0)-N\overline{S}^{\varepsilon}(0)\right|\leq 1\;, \qquad\left|I^{N,\varepsilon}(0)-N\overline{I}^{\varepsilon}(0)\right|\leq 1\)_;_
**(ii)**: _there exists two continuous functions_ \(\overline{\mathbf{S}}\)_,_ \(\overline{\mathbf{I}}:\mathbb{T}^{d}\longrightarrow\mathbb{R}_{+}\) _such that_ \(c\leq\overline{\mathbf{S}}(x)\leq C\)_,_ \(\overline{\mathbf{I}}(x)\leq C\) _for all_ \(x\in\mathbb{T}^{d}\)_,_ \(\int_{\mathbb{T}^{d}}\left[\overline{\mathbf{S}}(x)+\overline{\mathbf{I}}(x) \right]dx=1\) _and_
\[\overline{S}^{\varepsilon}(0,x_{\varepsilon})=\varepsilon^{-d}\int_{V_{ \varepsilon}(x_{\varepsilon})}\overline{\mathbf{S}}(x)dx,\quad\overline{I}^{ \varepsilon}(0,x_{\varepsilon})=\varepsilon^{-d}\int_{V_{\varepsilon}(x_{ \varepsilon})}\overline{\mathbf{I}}(x)dx\,.\]
**(iii)**: \(\{X_{j}(0)\,,1\leq j\leq S^{N,\varepsilon}(0)\}\) _and_ \(\{Y_{j}(0)\,,1\leq j\leq I^{N,\varepsilon}(0)\}\) _are two mutually independent collections of i.i.d. random variables satisfying_ \(\mathbb{P}\left(X_{j}(0)=x_{\varepsilon}\right)=\dfrac{\overline{S}^{ \varepsilon}(0,x_{\varepsilon})}{\overline{S}^{\varepsilon}(0)},\) _and_ \(\mathbb{P}\left(Y_{j}(0)=x_{\varepsilon}\right)=\dfrac{\overline{I}^{ \varepsilon}(0,x_{\varepsilon})}{\overline{I}^{\varepsilon}(0)}\) _for all_ \(x_{\varepsilon}\in\mathrm{D}_{\varepsilon}\)_, where_ \(\overline{S}^{\varepsilon}(0):=\sum_{x_{\varepsilon}}\overline{S}^{ \varepsilon}(0,x_{\varepsilon})\) _and_ \(\overline{I}^{\varepsilon}(0):=\sum_{x_{\varepsilon}}\overline{I}^{ \varepsilon}(0,x_{\varepsilon})\)_. Moreover_ \(S^{N,\varepsilon}(0,x_{\varepsilon})=\sum_{j=1}^{S^{N,\varepsilon}(0)} \mathds{1}_{X_{j}(0)=x_{\varepsilon}}\) _and_ \(I^{N,\varepsilon}(0,x_{\varepsilon})=\sum_{j=1}^{I^{N,\varepsilon}(0)} \mathds{1}_{Y_{j}(0)=x_{\varepsilon}}\)_._
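As an aside, the following sketch (purely illustrative, with densities, \(K\) and \(N\) chosen by us in dimension \(d=1\)) shows one way to produce initial data compatible with Assumption 3.1: cell averages of smooth densities as in item (ii), total counts within one unit of \(N\overline{S}^{\varepsilon}(0)\) as in item (i), and i.i.d. initial positions as in item (iii).

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative construction of initial data satisfying Assumption 3.1 (d = 1).
K, N = 50, 1_000                       # K = 1/eps patches, N = scaling parameter
eps  = 1.0 / K
x    = (np.arange(K) + 0.5) * eps      # cell centres on the torus [0,1)

# continuous densities Sbar, Ibar with c <= Sbar <= C and total mass 1
S_dens = 0.9 + 0.05 * np.cos(2 * np.pi * x)
I_dens = 0.1 - 0.05 * np.cos(2 * np.pi * x)

# cell averages as in item (ii) (midpoint-rule approximation of the cell integral)
S_bar0, I_bar0 = S_dens.copy(), I_dens.copy()
assert abs((S_bar0 + I_bar0).sum() - K) < 1e-8        # sums to eps^{-d}

# total counts within one unit of N * Sbar^eps(0), as in item (i)
S_tot = int(round(N * S_bar0.sum()))
I_tot = int(round(N * I_bar0.sum()))

# i.i.d. initial positions as in item (iii)
X0 = rng.choice(K, size=S_tot, p=S_bar0 / S_bar0.sum())
Y0 = rng.choice(K, size=I_tot, p=I_bar0 / I_bar0.sum())
S0 = np.bincount(X0, minlength=K)       # S^{N,eps}(0, x_eps)
I0 = np.bincount(Y0, minlength=K)       # I^{N,eps}(0, x_eps)
print(S0[:5], I0[:5])
```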
**Assumption 3.2**:
* _We assume that_ \(\beta_{\varepsilon}^{x_{\varepsilon},y_{\varepsilon}}(t)=\beta_{t}(x_{\varepsilon},V_{\varepsilon}(y_{\varepsilon}))\), _where_ \(\beta_{t}(x,A)\) _is a transition kernel and there exists a constant_ \(\beta^{*}\) _such that_ \(\beta_{t}(x,\mathbb{T}^{d})\leq\beta^{*}\)_, for all_ \(x\in\mathbb{T}^{d}\) _and_ \(t\geq 0\)_._
* _there exists a positive constant_ \(\lambda^{*}>0\) _such that_ \(0\leq\lambda_{j}(t)\leq\lambda^{*}\)_, for all_ \(j\in\mathbb{Z}\backslash\{0\}\) _and_ \(t\geq 0\)_._
Under Assumptions 3.1 and 3.2, we have the following result.
**Theorem 3.1** (**Law of Large Numbers: \(\mathbf{N}\rightarrow\infty\), \(\boldsymbol{\varepsilon}\) being fixed**):
_As \(N\rightarrow\infty\), \(\left(\overline{S}^{N,\varepsilon}(t,x_{\varepsilon}),\,\overline{\mathfrak{F}}^{N,\varepsilon}(t,x_{\varepsilon}),\,\overline{I}^{N,\varepsilon}(t,x_{\varepsilon}),\,\,\,t\geq 0,\,x_{\varepsilon}\in\mathrm{D}_{\varepsilon}\right)\) converges in \(\mathbf{D}^{3\varepsilon^{-d}}\), in probability, to the unique solution \(\left(\overline{S}^{\varepsilon}(t,x_{\varepsilon}),\,\overline{\mathfrak{F}}^{\varepsilon}(t,x_{\varepsilon}),\,\overline{I}^{\varepsilon}(t,x_{\varepsilon}),\,\,\,t\geq 0,\,x_{\varepsilon}\in\mathrm{D}_{\varepsilon}\right)\) of the following system of integral equations_
\[\begin{cases}\overline{S}^{\varepsilon}(t,x_{\varepsilon})=\overline{S}^{\varepsilon}(0,x_{\varepsilon})-\int_{0}^{t}\overline{S}^{\varepsilon}(s,x_{\varepsilon})\overline{\Gamma}^{\varepsilon}(s,x_{\varepsilon})ds+\int_{0}^{t}\left[\Delta_{\varepsilon}^{S}\overline{S}^{\varepsilon}\right](s,x_{\varepsilon})ds\\ \overline{\mathfrak{F}}^{\varepsilon}(t,x_{\varepsilon})=\overline{\lambda}_{0}(t)\sum_{y_{\varepsilon}}\overline{I}^{\varepsilon}(0,y_{\varepsilon})q_{\varepsilon}^{y_{\varepsilon},x_{\varepsilon}}(0,t)+\sum_{y_{\varepsilon}}\int_{0}^{t}\overline{\lambda}(t-s)\overline{S}^{\varepsilon}(s,y_{\varepsilon})\overline{\Gamma}^{\varepsilon}(s,y_{\varepsilon})q_{\varepsilon}^{y_{\varepsilon},x_{\varepsilon}}(s,t)ds\\ \overline{I}^{\varepsilon}(t,x_{\varepsilon})=\overline{I}^{\varepsilon}(0,x_{\varepsilon})+\int_{0}^{t}\overline{S}^{\varepsilon}(s,x_{\varepsilon})\overline{\Gamma}^{\varepsilon}(s,x_{\varepsilon})ds+\int_{0}^{t}\left[\Delta_{\varepsilon}^{I}\overline{I}^{\varepsilon}\right](s,x_{\varepsilon})ds,\\ t\geq 0,\,\,x_{\varepsilon}\in\mathrm{D}_{\varepsilon},\end{cases} \tag{3.1}\]
_where_
\[\overline{\Gamma}^{\varepsilon}(t,x_{\varepsilon})=\dfrac{1}{\left[\,\overline{B}^{\varepsilon}(t,x_{\varepsilon})\right]^{\gamma}}\sum_{y_{\varepsilon}}\beta_{\varepsilon}^{x_{\varepsilon},y_{\varepsilon}}(t)\overline{\mathfrak{F}}^{\varepsilon}(t,y_{\varepsilon})\ \ \ \text{and}\ \ \ \ \overline{B}^{\varepsilon}(t,x_{\varepsilon})=\overline{S}^{\varepsilon}(t,x_{\varepsilon})+\overline{I}^{\varepsilon}(t,x_{\varepsilon}).\]
This Theorem is a special case of Theorem 3.1 in [4], whose proof written for a multi-patch multi-group SIR model is easily adapted to our case.
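As a rough numerical illustration of the limiting system (3.1), one can discretise time explicitly and represent the kernels \(q_{\varepsilon}^{y_{\varepsilon},x_{\varepsilon}}(s,t)\) by powers of a one-step migration matrix. The sketch below is only a possible scheme, not the authors' numerics, and assumes \(d=1\), \(\gamma=0\), a purely local contact kernel, and an exponential mean infectivity \(\overline{\lambda}(t)=\overline{\lambda}_{0}(t)=e^{-t}\).

```python
import numpy as np
from scipy.linalg import expm

# Illustrative explicit scheme for system (3.1); gamma = 0 and a local contact
# kernel beta_eps^{x,y} = beta * 1_{x=y} are simplifying assumptions.
K, T, dt = 20, 8.0, 0.05
eps      = 1.0 / K
nsteps   = int(T / dt)
beta     = 1.5
nu_S = nu_I = 0.02
lam_bar  = lambda t: np.exp(-t)        # assumed mean infectivity
lam0_bar = lambda t: np.exp(-t)        # assumed mean infectivity of initial infectives

# discrete Laplacian on the 1-d torus (rows sum to zero)
L = -2 * np.eye(K) + np.eye(K, k=1) + np.eye(K, k=-1)
L[0, -1] = L[-1, 0] = 1.0

P_S = expm(dt * (nu_S / eps**2) * L)   # one-step migration kernels
P_I = expm(dt * (nu_I / eps**2) * L)
P_I_pow = [np.linalg.matrix_power(P_I, j) for j in range(nsteps + 1)]

x = (np.arange(K) + 0.5) * eps
S = 0.9 + 0.05 * np.cos(2 * np.pi * x)     # Sbar^eps(0, .)
I = 0.1 - 0.05 * np.cos(2 * np.pi * x)     # Ibar^eps(0, .)
I0 = I.copy()
flux_hist = []                              # stores Sbar*Gammabar at past times

for k in range(nsteps):
    t = k * dt
    # force of infection: initial infectives + past infections, transported by migration
    F = lam0_bar(t) * (P_I_pow[k] @ I0)
    for m, past_flux in enumerate(flux_hist):
        F += dt * lam_bar(t - m * dt) * (P_I_pow[k - m] @ past_flux)
    Gamma = beta * F                        # gamma = 0, local kernel
    flux = S * Gamma
    flux_hist.append(flux)
    # explicit Euler step with migration applied after the reaction step
    S = P_S @ (S - dt * flux)
    I = P_I @ (I + dt * flux)

print("final infected density:", I.round(3))
```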
## 4. Limit as \(\varepsilon\to 0\) in the deterministic model
Before letting \(\varepsilon\) go to zero in the limit system (3.1) extended on the whole space \(\mathbb{T}^{d}\), we prove some technical lemmas.
**Lemma 4.1**: _Let \(T>0\). There exists a positive constant \(C\) such that \(\left\|\overline{S}^{\varepsilon}(t)\right\|_{\infty}\leq C\) and \(\left\|\overline{I}^{\varepsilon}(t)\right\|_{\infty}\leq C\), for all \(\varepsilon>0\) and \(t\in[0\,,\,T]\)._
**Proof.** Using the Duhamel formula, we have \(\|\overline{S}^{\varepsilon}(t)\|_{\infty}\leq\sup\limits_{x_{\varepsilon}} \overline{S}^{\varepsilon}(0,x_{\varepsilon})\leq C\).
We now consider the term \(\overline{I}^{\varepsilon}\). First using the previous estimate, we obtain
\[\frac{\overline{S}^{\varepsilon}(s,x_{\varepsilon})}{\left(\overline{B}^{ \varepsilon}(s,x_{\varepsilon})\right)^{\gamma}}=\left(\frac{\overline{S}^{ \varepsilon}(s,x_{\varepsilon})}{\overline{B}^{\varepsilon}(s,x_{ \varepsilon})}\right)^{\gamma}\left[\,\overline{S}^{\varepsilon}(s,x_{ \varepsilon})\right]^{1-\gamma}\leq C(T,\gamma).\]
Next, since \(\overline{\mathfrak{F}}^{\varepsilon}(s,y_{\varepsilon})\leq\lambda^{*}\overline{I}^{\varepsilon}(s,y_{\varepsilon})\) (compare the Duhamel representations of \(\overline{\mathfrak{F}}^{\varepsilon}\) and \(\overline{I}^{\varepsilon}\)), we have \(\sum\limits_{y_{\varepsilon}}\beta_{\varepsilon}^{x_{\varepsilon},y_{\varepsilon}}(s)\overline{\mathfrak{F}}^{\varepsilon}(s,y_{\varepsilon})\leq\lambda^{*}\left\|\overline{I}^{\varepsilon}(s)\right\|_{\infty}\sum\limits_{y_{\varepsilon}}\beta_{\varepsilon}^{x_{\varepsilon},y_{\varepsilon}}(s)\leq\lambda^{*}\beta^{*}\big{\|}\overline{I}^{\varepsilon}(s)\big{\|}_{\infty}.\) Thus
\[\left\|\overline{I}^{\varepsilon}(t)\right\|_{\infty} \leq\,\left\|\left(T_{I,\varepsilon}(t)\overline{I}^{\varepsilon} (0)\right)\right\|_{\infty}+\int_{0}^{t}T_{I,\varepsilon}(t-s)C\left\| \overline{I}^{\varepsilon}(s)\right\|_{\infty}ds\] \[\leq C+C\int_{0}^{t}\left\|\overline{I}^{\varepsilon}(s)\right\|_ {\infty}ds.\]
The second statement then follows from Gronwall's Lemma. \(\square\)
**Lemma 4.2**: _For any \(T>0\), there exists \(\varepsilon_{0}\) and \(c>0\) such that \(\overline{B}^{\varepsilon}(t,x_{\varepsilon})\geq c\), for all \(0<\varepsilon\leq\varepsilon_{0}\), \(x_{\varepsilon}\in\mathrm{D}_{\varepsilon}\) and \(0\leq t\leq T\)._
**Proof.** Let \(c\) and \(C\) be two positive constants such that \(0<c\leq\dfrac{\inf_{x_{\varepsilon}}\overline{S}^{\varepsilon}(0,x_{ \varepsilon})}{2}\leq\dfrac{C}{2}\), and let \(T_{c}^{\varepsilon}:=\inf\{t>0\,,\ \inf\limits_{x_{\varepsilon}}\overline{S}^{ \varepsilon}(t,x_{\varepsilon})<c\}\). On the interval \([0\,,\,T_{c}^{\varepsilon}]\), \(\overline{S}^{\varepsilon}(t,x_{\varepsilon})\geq c\), \(\forall x_{\varepsilon}\in\mathrm{D}_{\varepsilon}\). For \(t\leq T_{c}^{\varepsilon}\), we have
\[\overline{\Gamma}^{\varepsilon}(t,x_{\varepsilon}) =\frac{1}{\big[\,\overline{B}^{\varepsilon}(t,x_{\varepsilon})\big]^{\gamma}}\sum\limits_{y_{\varepsilon}}\beta_{\varepsilon}^{x_{\varepsilon},y_{\varepsilon}}(t)\overline{\mathfrak{F}}^{\varepsilon}(t,y_{\varepsilon})\leq\frac{\lambda^{*}\beta^{*}}{c^{\gamma}}\sup_{0\leq s\leq T}\big\|\overline{I}^{\varepsilon}(s)\big\|_{\infty}=:\overline{c},\] \[\overline{S}^{\varepsilon}(t,x_{\varepsilon}) \geq\overline{S}^{\varepsilon}(0,x_{\varepsilon})-\overline{c}\int_{0}^{t}\overline{S}^{\varepsilon}(s,x_{\varepsilon})ds+\int_{0}^{t}\big[\Delta_{\varepsilon}^{S}\overline{S}^{\varepsilon}\big](s,x_{\varepsilon})ds.\]
Hence, for all \(0\leq t\leq T_{c}^{\varepsilon}\), \(e^{\overline{c}t}\overline{S}^{\varepsilon}(t,x_{\varepsilon})\geq\inf_{y_{\varepsilon}}\overline{S}^{\varepsilon}(0,y_{\varepsilon})\geq 2c\), so that \(\overline{S}^{\varepsilon}(t,x_{\varepsilon})\geq 2e^{-\overline{c}t}c\geq c\) as long as \(e^{-\overline{c}t}\geq\frac{1}{2}\). Consequently \(T_{c}^{\varepsilon}\geq\log 2/\overline{c}\), and \(\overline{S}^{\varepsilon}(t,x_{\varepsilon})\geq c\) for all \(0\leq t\leq\log 2/\overline{c}\) and \(x_{\varepsilon}\in\mathrm{D}_{\varepsilon}\).
From Assumption 3.1 **(ii)** and the fact that \(\overline{\mathbf{I}}\) is not identically zero, there exists a ball \(B(x_{0},\rho)\) and \(a>0\) such that \(\overline{\mathbf{I}}(y)\geq a\), for all \(y\in B(x_{0},\rho)\). Let us consider the following ODE
\[\frac{d\,u_{\varepsilon}}{dt}=\nu_{I}\Delta_{\varepsilon}u_{\varepsilon},\quad u _{\varepsilon}(0)=a\mathds{1}_{B(x_{0},\rho)}.\]
We have that \(u_{\varepsilon}\longrightarrow u\) in \(L^{\infty}\left([0,T]\times\mathbb{T}^{d}\right)\) as \(\varepsilon\to 0\), where \(u\) is the solution of
\[\frac{d\,u}{dt}=\nu_{I}\Delta u,\quad u(0)=a\mathds{1}_{B(x_{0},\rho)}.\]
There exists a positive constant \(\underline{c}\) such that \(u(t,x)\geq 2\underline{c}\) for all \(\dfrac{\log 2}{\overline{c}}<t\leq T\) and \(x\in\mathbb{T}^{d}\). Then, there exists \(\varepsilon_{0}>0\) such that for all \(\varepsilon\leq\varepsilon_{0}\), \(\overline{I}^{\varepsilon}(t,x_{\varepsilon})\geq u_{\varepsilon}(t,x_{\varepsilon})\geq\underline{c}\), for all \(\dfrac{\log 2}{\overline{c}}<t\leq T\).
We have shown that \(\overline{B}^{\varepsilon}(t,x_{\varepsilon})\geq c\wedge\underline{c}\), for all \(0\leq t\leq T\), \(x_{\varepsilon}\in\mathrm{D}_{\varepsilon}\) and \(\varepsilon\leq\varepsilon_{0}\). \(\square\)
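The comparison argument above rests on the convergence of the discrete heat semigroup to the continuous one and on the strict positivity of \(u(t,\cdot)\) for \(t>0\). The short check below (illustrative parameters, \(d=1\)) computes \(u_{\varepsilon}(t,\cdot)\) for several values of \(\varepsilon\) and reports its minimum over the torus, which indeed stabilises at a positive value as \(\varepsilon\to 0\).

```python
import numpy as np
from scipy.linalg import expm

# u_eps solves du/dt = nu_I * Delta_eps u with u(0) = a * 1_{B(x0, rho)}; we check
# numerically that, for t bounded away from 0, min_x u_eps(t, x) stays above a
# positive constant uniformly in eps (illustrative parameters).
nu_I, a, rho, t_obs = 0.02, 0.5, 0.1, 1.0

for K in (20, 40, 80, 160):              # eps = 1/K
    eps = 1.0 / K
    L = -2 * np.eye(K) + np.eye(K, k=1) + np.eye(K, k=-1)
    L[0, -1] = L[-1, 0] = 1.0
    x = (np.arange(K) + 0.5) * eps
    u0 = a * (np.minimum(x, 1 - x) < rho)            # indicator of a ball around x0 = 0
    u_t = expm(t_obs * (nu_I / eps**2) * L) @ u0
    print(f"eps = {eps:.4f}   min_x u_eps(t={t_obs}) = {u_t.min():.5f}")
```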
We now extend the solution of the system (3.1) to the whole space \(\mathbb{T}^{d}\). So, we define
\[\overline{\mathbf{S}}^{\varepsilon}(t,x) :=\sum_{x_{\varepsilon}}\overline{S}^{\varepsilon}(t,x_{\varepsilon})\mathds{1}_{V_{\varepsilon}(x_{\varepsilon})}(x),\ \overline{\mathbf{I}}^{\varepsilon}(t,x):=\sum_{x_{\varepsilon}}\overline{I}^{\varepsilon}(t,x_{\varepsilon})\mathds{1}_{V_{\varepsilon}(x_{\varepsilon})}(x),\ \overline{\mathbf{\Gamma}}^{\varepsilon}(t,x):=\sum_{x_{\varepsilon}}\overline{\Gamma}^{\varepsilon}(t,x_{\varepsilon})\mathds{1}_{V_{\varepsilon}(x_{\varepsilon})}(x),\ \overline{\mathbf{F}}^{\varepsilon}(t,x):=\sum_{x_{\varepsilon}}\overline{\mathfrak{F}}^{\varepsilon}(t,x_{\varepsilon})\mathds{1}_{V_{\varepsilon}(x_{\varepsilon})}(x),\] \[\overline{\mathbf{X}}^{\varepsilon} :=(\overline{\mathbf{S}}^{\varepsilon}\,,\,\overline{\mathbf{F}}^{\varepsilon}\,,\,\overline{\mathbf{I}}^{\varepsilon}).\]
**Theorem 4.1**: _For all \(T\geq 0\), \(\sup_{0\leq t\leq T}\left\|\overline{\mathbf{X}}^{\varepsilon}(t)-\overline{ \mathbf{X}}(t)\right\|_{\infty}\longrightarrow 0\) as \(\varepsilon\to 0\), where \(\overline{\mathbf{X}}:=(\overline{\mathbf{S}}\,,\,\overline{\mathbf{F}}\,,\, \overline{\mathbf{I}})\) is the unique solution of the following system of parabolic PDE/integral equations._
\[\left\{\begin{aligned} \overline{\mathbf{S}}(t,x)&=\overline{\mathbf{S}}(0,x)-\int_{0}^{t}\overline{\mathbf{S}}(s,x)\overline{\Gamma}(s,x)ds+\int_{0}^{t}\big{[}\Delta^{S}\overline{\mathbf{S}}\,\big{]}(s,x)ds,\\ \overline{\mathbf{F}}(t,x)&=\overline{\lambda}_{0}(t)\left(T_{I}(t)\overline{\mathbf{I}}(0)\right)(x)+\int_{0}^{t}\overline{\lambda}(t-s)T_{I}(t-s)\left(\overline{\mathbf{S}}(s)\overline{\Gamma}(s)\right)(x)ds,\\ \overline{\mathbf{I}}(t,x)&=\overline{\mathbf{I}}(0,x)+\int_{0}^{t}\overline{\mathbf{S}}(s,x)\overline{\Gamma}(s,x)ds+\int_{0}^{t}\big{[}\Delta^{I}\overline{\mathbf{I}}\big{]}(s,x)ds,\\ \text{with}\ \ \overline{\mathbf{S}}(t,x)\overline{\Gamma}(t,x)=\frac{\overline{\mathbf{S}}(t,x)}{\big{[}\overline{\mathbf{B}}(t,x)\big{]}^{\gamma}}\int_{\mathbb{T}^{d}}\overline{\mathbf{F}}(t,y)\beta_{t}(x,dy),\ t\geq 0,\ \ x\in\mathbb{T}^{d}.\end{aligned}\right. \tag{4.1}\]
_where \(T_{I}\) denotes the semigroup generated by \(\nu_{I}\Delta\)._
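Concretely, the semigroup \(T_{I}(t)\) can be realised spectrally on the torus. The following one-dimensional sketch (our own illustration, with hypothetical parameter values) applies it to a sample profile through its Fourier multiplier.

```python
import numpy as np

# The heat semigroup T_I(t) = exp(t * nu_I * Laplacian) on the one-dimensional torus,
# acting on a periodic function via the Fourier multiplier exp(-nu_I * (2*pi*k)^2 * t).
def heat_semigroup(f_vals: np.ndarray, t: float, nu_I: float) -> np.ndarray:
    """Apply T_I(t) to samples of f on a uniform grid of [0, 1)."""
    K = f_vals.size
    k = np.fft.fftfreq(K, d=1.0 / K)                    # integer frequencies
    multiplier = np.exp(-nu_I * (2 * np.pi * k) ** 2 * t)
    return np.fft.ifft(multiplier * np.fft.fft(f_vals)).real

# usage: smoothing of a step profile; the result is strictly positive for t > 0
x = np.linspace(0.0, 1.0, 200, endpoint=False)
f = (np.abs(x - 0.5) < 0.1).astype(float)
print(heat_semigroup(f, t=0.5, nu_I=0.02).min())
```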
Before proving this theorem, we first establish two Propositions.
**Proposition 4.1**: _Let \(T>0\). If \((\overline{\mathbf{S}}\,,\,\overline{\mathbf{F}}\,,\,\overline{\mathbf{I}})\) is a solution of (4.1), then there exist \(C\), \(c>0\) such that for all \(0\leq t\leq T\), \(\left\|\overline{\mathbf{S}}(t)\right\|_{\infty}\leq C\), \(\left\|\overline{\mathbf{I}}(t)\right\|_{\infty}\leq C\) and \(\overline{\mathbf{B}}(t,x)\geq c\), for all \(x\in\mathbb{T}^{d}\)._
**Proof.** The arguments used in the proofs of Lemmas 4.1 and 4.2 are easily transposed to the present situation. \(\square\)
**Remark 4.1**: _Let \(\mathscr{H}\left(\overline{\mathbf{S}},\overline{\mathbf{I}},\overline{\mathbf{F}}\right)(t,x):=\frac{\big{[}\big{[}\,\overline{\mathbf{S}}(t,x)\lor 0\big{]}\wedge C\big{]}}{\big{[}\,\overline{\mathbf{B}}(t,x)\lor c\big{]}^{\gamma}}\int_{\mathbb{T}^{d}}\beta_{t}(x,dy)\left[\,\overline{\mathbf{F}}(t,y)\wedge\lambda^{*}C\right]\) where \(C\) is the upper bound in Lemma 4.1, and \(c\) the lower bound in Lemma 4.2. Note that \(\big{(}\overline{\mathbf{S}}\,,\,\overline{\mathbf{I}}\,,\,\overline{\mathbf{F}}\big{)}\) is a solution of (4.1) iff it is a solution of the following system_
\[\left\{\begin{aligned} \overline{\mathbf{S}}(t,x)&=\Big{(}T_{S}(t) \overline{\mathbf{S}}(0)\Big{)}(x)-\int_{0}^{t}\Big{(}T_{S}(t-s)\mathscr{H} \left(\overline{\mathbf{S}}(s),\overline{\mathbf{I}}(s),\overline{\mathbf{F}}(s )\right)\Big{)}(x)ds,\\ \overline{\mathbf{F}}(t,x)&=\overline{\lambda}_{0}(t )\Big{(}T_{I}(t)\overline{\mathbf{I}}(0)\Big{)}(x)+\int_{0}^{t}\overline{ \lambda}(t-s)\Big{(}T_{I}(t-s)\mathscr{H}\left(\overline{\mathbf{S}}(s), \overline{\mathbf{I}}(s),\overline{\mathbf{F}}(s)\right)\Big{)}(x)ds,\\ \overline{\mathbf{I}}(t,x)&=\Big{(}T_{I}(t)\overline{ \mathbf{I}}(0)\Big{)}(x)+\int_{0}^{t}\Big{(}T_{I}(t-s)\mathscr{H}\left( \overline{\mathbf{S}}(s),\overline{\mathbf{I}}(s),\overline{\mathbf{F}}(s) \right)\Big{)}(x)ds,\ 0\leq t\leq T,\ \ x\in\mathbb{T}^{d}.\end{aligned}\right. \tag{4.2}\]
_Note also that the map \(\mathscr{H}:\Big{(}L^{\infty}(\mathbb{T}^{d})\Big{)}^{3}\longrightarrow L^{\infty}(\mathbb{T}^{d})\) is bounded and globally Lipschitz._
**Proposition 4.2**: _The system of equations (4.2) has a unique solution._
**Proof.** The uniqueness of the solution uses the contraction character of the semigroups \(T_{S}\) and \(T_{I}\) on \(L^{\infty}(\mathbb{T}^{d})\), and the fact that the map \(\mathscr{H}\) is bounded and globally Lipschitz. The existence of the solution can be proved using the Picard iteration procedure.
\(\square\)
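To illustrate the Picard iteration mentioned in the proof, the following toy computation (a scalar caricature of (4.2) with kernel and nonlinearity chosen by us for illustration) iterates the map \(F\mapsto g+\int_{0}^{\cdot}k(\cdot-s)H(F(s))ds\) on a time grid and observes the convergence of the iterates.

```python
import numpy as np

# Picard iteration for a scalar caricature of the fixed-point system (4.2):
#   F(t) = g(t) + int_0^t k(t-s) H(F(s)) ds,
# with H bounded and globally Lipschitz as in Remark 4.1 (illustrative choices).
T, n = 2.0, 400
dt = T / n
t = np.linspace(0.0, T, n + 1)

g = 0.1 * np.exp(-t)                       # assumed source term
k = lambda u: np.exp(-u)                   # assumed kernel
H = lambda f: np.clip(f, 0.0, 1.0)         # bounded, Lipschitz nonlinearity

F = np.zeros_like(t)                       # Picard iterate F^{(0)} = 0
for it in range(30):
    F_new = g.copy()
    for i in range(1, n + 1):
        F_new[i] += dt * np.sum(k(t[i] - t[:i]) * H(F[:i]))   # left Riemann sum
    delta = np.max(np.abs(F_new - F))
    F = F_new
    if delta < 1e-12:
        print(f"converged after {it + 1} iterations, sup-norm increment {delta:.2e}")
        break
```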
We introduce the canonical projection \(\mathrm{P}_{\varepsilon}:L^{2}(\mathbb{T}^{d})\longrightarrow\mathrm{H}^{\varepsilon}\) given by
\[\varphi\longmapsto\mathrm{P}_{\varepsilon}\varphi(x)=\varepsilon^{-d}\int_{V_{ \varepsilon}(x_{\varepsilon})}\varphi(y)dy\ \ \ \ \text{if}\ x\in V_{\varepsilon}(x_{\varepsilon}).\]
**Proof of Theorem 4.1**.
Using the fact that the map \(\mathscr{H}\) is bounded and globally Lipschitz, we have, provided that \(\varepsilon\leq\varepsilon_{0}\),
\[\left\|\overline{\mathbf{X}}^{\,\varepsilon}(t)-\overline{\mathbf{X}}(t) \right\|_{\infty}\leq C(\lambda^{*},\beta^{*})\int_{0}^{t}\left\|\overline{ \mathbf{X}}^{\,\varepsilon}(s)-\overline{\mathbf{X}}(s)\right\|_{\infty}\!ds +\pi_{\varepsilon}(t),\]
where \(\pi_{\varepsilon}(t)=\pi_{\varepsilon}^{S}(t)+\pi_{\varepsilon}^{I}(t)+\pi_{ \varepsilon}^{\mathfrak{F}}(t)\), with
\[\pi_{\varepsilon}^{S}(t)=\left\|T_{S,\varepsilon}(t)\overline{\mathbf{S}}^{\,\varepsilon}(0)-T_{S}(t)\overline{\mathbf{S}}(0)\right\|_{\infty}\] \[+\int_{0}^{t}\left\|T_{S,\varepsilon}(t-s)\mathrm{P}_{\varepsilon}\left(\frac{\overline{\mathbf{S}}(s)}{\big{[}\,\overline{\mathbf{B}}(s)\big{]}^{\gamma}}\int_{\mathbb{T}^{d}}\overline{\mathbf{F}}(s,y)\beta_{s}(.,dy)\right)-T_{S}(t-s)\left(\frac{\overline{\mathbf{S}}(s)}{\big{[}\,\overline{\mathbf{B}}(s)\big{]}^{\gamma}}\int_{\mathbb{T}^{d}}\overline{\mathbf{F}}(s,y)\beta_{s}(.,dy)\right)\right\|_{\infty}ds,\]
\(\pi_{\varepsilon}^{I}(t)\) is a quantity similar to \(\pi_{\varepsilon}^{S}(t)\), with \(T_{I,\varepsilon}\) (resp. \(T_{I}\), \(\overline{\mathbf{I}}^{\,\varepsilon}\) and \(\overline{\mathbf{I}}\)) in place of \(T_{S,\varepsilon}\) (resp. \(T_{S}\), \(\overline{\mathbf{S}}^{\,\varepsilon}\) and \(\overline{\mathbf{S}}\)), and
\[\pi_{\varepsilon}^{\mathfrak{F}}(t)=\lambda^{*}\Big{\|}T_{I, \varepsilon}(t)\overline{\mathbf{I}}^{\,\varepsilon}(0)-T_{I}(t)\overline{ \mathbf{I}}(0)\Big{\|}_{\infty}\] \[+\int_{0}^{t}\left\|\mathrm{P}_{\varepsilon}\left(\frac{ \overline{\mathbf{S}}(s)}{\big{[}\,\overline{\mathbf{B}}(s)\big{]}^{\gamma}} \int_{\mathbb{T}^{d}}\overline{\mathbf{F}}(s,y)\beta_{s}(.,dy)\right)-\frac{ \overline{\mathbf{S}}(s)}{\big{[}\,\overline{\mathbf{B}}(s)\big{]}^{\gamma}} \int_{\mathbb{T}^{d}}\overline{\mathbf{F}}(s,y)\beta_{s}(.,dy)\right\|_{\infty}\!ds\] \[+\int_{0}^{t}\left\|T_{I,\varepsilon}(t-s)\mathrm{P}_{ \varepsilon}\left(\frac{\overline{\mathbf{S}}(s)}{\big{[}\,\overline{\mathbf{ B}}(s)\big{]}^{\gamma}}\int_{\mathbb{T}^{d}}\overline{\mathbf{F}}(s,y)\beta_{s}(.,dy) \right)-T_{I}(t-s)\left(\frac{\overline{\mathbf{S}}(s)}{\big{[}\,\overline{ \mathbf{B}}(s)\big{]}^{\gamma}}\int_{\mathbb{T}^{d}}\overline{\mathbf{F}}(s,y )\beta_{s}(.,dy)\right)\Big{\|}_{\infty}\!ds.\]
Then from Gronwall's lemma, \(\sup_{0\leq t\leq T}\left\|\overline{\mathbf{X}}^{\,\varepsilon}(t)-\overline {\mathbf{X}}(t)\right\|_{\infty}\to 0\) follows from \(\sup_{0\leq t\leq T}\pi_{\varepsilon}(t)\to 0\).
Since the maps \(x\longmapsto\overline{\mathbf{S}}(0,x)\), \(x\longmapsto\overline{\mathbf{I}}(0,x)\) and \(x\longmapsto\frac{\overline{\mathbf{S}}(t,x)}{\left[\overline{\mathbf{B}}(t,x)\right]^{\gamma}}\int_{\mathbb{T}^{d}}\overline{\mathbf{F}}(t,y)\beta_{t}(x,dy)\) are continuous on \(\mathbb{T}^{d}\), and since \(T_{S,\varepsilon}\longrightarrow T_{S}\) and \(T_{I,\varepsilon}\longrightarrow T_{I}\) in \(L^{\infty}\) as \(\varepsilon\to 0\), we deduce that \(\sup_{0\leq t\leq T}\pi_{\varepsilon}(t)\longrightarrow 0\), as \(\varepsilon\to 0\) (see Kato [5], Chapter 9, Section 3, Example 3.10).
\(\square\)
## 5. Limit as \(N\to\infty\) and \(\varepsilon\to 0\)
In this section, we extend our stochastic model on the whole space \(\mathbb{T}^{d}\) and let both \(N\to\infty\) and \(\varepsilon\to 0\) in such a way that \(N\varepsilon^{d}\to\infty\). Before stating the main theorem of this section, we first prove some lemmas and propositions.
**Lemma 5.1**: _There exist two constants \(0<c<C\) such that for all \(t\geq 0\), \(\varepsilon>0\) and \(x_{\varepsilon}\in\mathrm{D}_{\varepsilon}\),_
\[c\varepsilon^{d}\leq\mathbb{P}(X(t)=x_{\varepsilon})\leq C\varepsilon^{d}.\]
**Proof.** Define \(u^{\varepsilon}(t,x_{\varepsilon}):=\mathbb{P}(X(t)=x_{\varepsilon})\). We have \(u^{\varepsilon}(t,x_{\varepsilon})=\left(e^{t\big{[}\Delta_{\varepsilon}^{S}\big{]}^{*}}u^{\varepsilon}(0,\cdot)\right)(x_{\varepsilon})\). By the assumption on the initial condition \(\mathbb{P}(X(0)=x_{\varepsilon})\), we have \(0<c\varepsilon^{d}\leq u^{\varepsilon}(0,x_{\varepsilon})\leq C\varepsilon^{d}\), from which we deduce that \(0<c\varepsilon^{d}\leq\left(e^{t\big{[}\Delta_{\varepsilon}^{S}\big{]}^{*}}u^{\varepsilon}(0,\cdot)\right)(x_{\varepsilon})\leq C\varepsilon^{d}\), hence the result. \(\square\)
**Lemma 5.2**: _There exists a positive constant \(C\) such that for all \(0\leq s\leq t\), \(\varepsilon>0\) and \(x_{\varepsilon}\in\mathrm{D}_{\varepsilon}\)_
\[\sum_{y_{\varepsilon}}q_{\varepsilon}^{y_{\varepsilon},x_{\varepsilon}}(s,t)=1 \quad\text{and}\quad\mathbb{P}\left(Y_{j}(t)=x_{\varepsilon}\right)\leq C \varepsilon^{d}.\]
**Proof.** The uniform distribution on \({\rm D}_{\varepsilon}\) is invariant for the process \(Y(t)\). So if we start \(Y\) at time \(s\) with the uniform distribution i.e. \(\mathbb{P}\left(Y(s)=x_{\varepsilon}\right)=\varepsilon^{d}\), the law of \(Y\) at time \(t\) is also the uniform law. But
\[\mathbb{P}\left(Y(t)=x_{\varepsilon}\right)=\sum_{y_{\varepsilon}}\mathbb{P} \left(Y(s)=y_{\varepsilon}\right)q_{\varepsilon}^{y_{\varepsilon},x_{ \varepsilon}}(s,t)\ {\rm i.e}\ \varepsilon^{d}=\varepsilon^{d}\sum_{y_{\varepsilon}}q_{ \varepsilon}^{y_{\varepsilon},x_{\varepsilon}}(s,t),\]
thus \(\sum_{y_{\varepsilon}}q_{\varepsilon}^{y_{\varepsilon},x_{\varepsilon}}(s,t)=1.\) Finally
\[\mathbb{P}\left(Y_{j}(t)=x_{\varepsilon}\right) =\sum_{y_{\varepsilon}}\mathbb{P}\left(Y_{j}(0)=y_{\varepsilon} \right)q_{\varepsilon}^{y_{\varepsilon},x_{\varepsilon}}(0,t)\] \[\leq\sup_{y_{\varepsilon}}\mathbb{P}\left(Y_{j}(0)=y_{ \varepsilon}\right)\sum_{y_{\varepsilon}}q_{\varepsilon}^{y_{\varepsilon},x_ {\varepsilon}}(0,t).\]
Hence the second result follows from the first one and Assumption 3.1 (**ii**) and (**iii**). \(\square\)
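The invariance argument in this proof can be checked numerically: with the generator of the nearest-neighbour walk on the discrete torus, the transition matrix \(q_{\varepsilon}(0,t)\) is doubly stochastic, so the uniform law is preserved and \(\sum_{y_{\varepsilon}}q_{\varepsilon}^{y_{\varepsilon},x_{\varepsilon}}(0,t)=1\). The snippet below (illustrative parameters, \(d=1\)) verifies both facts.

```python
import numpy as np
from scipy.linalg import expm

# Check used in Lemma 5.2: the uniform distribution on D_eps is invariant for the
# nearest-neighbour migration walk, and the columns of q_eps(0,t) sum to one.
K, nu_I, t = 30, 0.05, 0.7
eps = 1.0 / K

L = -2 * np.eye(K) + np.eye(K, k=1) + np.eye(K, k=-1)
L[0, -1] = L[-1, 0] = 1.0
Q = expm(t * (nu_I / eps**2) * L)          # q_eps^{y,x}(0,t) as a K x K matrix

uniform = np.full(K, eps)                  # P(Y(0) = x_eps) = eps^d, with d = 1
print(np.allclose(uniform @ Q, uniform))   # uniform law is preserved
print(np.allclose(Q.sum(axis=0), 1.0))     # sum over y of q^{y,x}(0,t) equals 1
```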
Let us define \(\overline{\mathfrak{F}}_{0}^{N,\varepsilon}(t,x_{\varepsilon}):=\dfrac{1}{N}\sum_{j=1}^{I^{N,\varepsilon}(0)}\lambda_{-j}(t)\mathds{1}_{Y_{j}(t)=x_{\varepsilon}}\) and \(\ \overline{\mathfrak{F}}_{0}^{\varepsilon}(t,x_{\varepsilon}):=\overline{\lambda}_{0}(t)\sum_{y_{\varepsilon}}\overline{I}^{\varepsilon}(0,y_{\varepsilon})q_{\varepsilon}^{y_{\varepsilon},x_{\varepsilon}}(0,t).\)
We have the following result.
**Lemma 5.3**: _Let us assume that \((N,\varepsilon)\to(\infty,0)\), in such a way that \(N\varepsilon^{d}\to\infty\). Then for all \(T>0\),_
\[\sup_{0\leq t\leq T}\mathbb{E}\left(\left\|\overline{\mathfrak{F}}_{0}^{N, \varepsilon}(t)-\overline{\mathfrak{F}}_{0}^{\varepsilon}(t)\right\|_{\infty }^{2}\right)\longrightarrow 0,\quad\text{as }\,(N\,,\,\varepsilon)\to(\infty\,,\,0).\]
**Proof.**\(\ \ \overline{\mathfrak{F}}_{0}^{N,\varepsilon}(t,x_{\varepsilon})\) can be decomposed as follows
\[\overline{\mathfrak{F}}_{0}^{N,\varepsilon}(t,x_{\varepsilon})=\dfrac{1}{N} \sum_{j=1}^{I^{N,\varepsilon}(0)}\left(\lambda_{-j}(t)-\overline{\lambda}_{0} (t)\right)\mathds{1}_{Y_{j}(t)=x_{\varepsilon}}+\overline{\lambda}_{0}(t) \dfrac{1}{N}\sum_{j=1}^{I^{N,\varepsilon}(0)}\mathds{1}_{Y_{j}(t)=x_{ \varepsilon}}.\]
Let us consider the first term. Since the \(\lambda_{-j}(t)\) are independent and identically distributed, and independent of the \(Y_{j}(t)\), we have
\[\mathbb{E}\left[\left(\dfrac{1}{N}\sum_{j=1}^{I^{N,\varepsilon}( 0)}\left(\lambda_{-j}(t)-\overline{\lambda}_{0}(t)\right)\mathds{1}_{Y_{j}(t)= x_{\varepsilon}}\right)^{2}\right] =\dfrac{1}{N^{2}}\sum_{j=1}^{I^{N,\varepsilon}(0)}\mathbb{E} \left[\left|\lambda_{-j}(t)-\overline{\lambda}_{0}(t)\right|^{2}\mathds{1}_{Y _{j}(t)=x_{\varepsilon}}\right]\] \[\leq\dfrac{1}{N^{2}}C(\lambda^{*})I^{N,\varepsilon}(0)\mathbb{P} \left(Y_{1}(t)=x_{\varepsilon}\right)\leq\dfrac{C(\lambda^{*})}{N}.\]
Now, since
\[\mathbb{E}\left[\sup_{x_{\varepsilon}\in{\rm D}_{\varepsilon}} \left(\dfrac{1}{N}\sum_{j=1}^{I^{N,\varepsilon}(0)}\left(\lambda_{-j}(t)- \overline{\lambda}_{0}(t)\right)\mathds{1}_{Y_{j}(t)=x_{\varepsilon}}\right)^ {2}\right] \leq\sum_{x_{\varepsilon}}\mathbb{E}\left[\left(\dfrac{1}{N}\sum_ {j=1}^{I^{N,\varepsilon}(0)}\left(\lambda_{-j}(t)-\overline{\lambda}_{0}(t) \right)\mathds{1}_{Y_{j}(t)=x_{\varepsilon}}\right)^{2}\right]\] \[\leq\dfrac{C(\lambda^{*})}{N}\varepsilon^{-d}\quad\to 0, \tag{5.1}\]
provided \(N\varepsilon^{d}\to\infty\). It remains to show that
\[\sup_{x_{\varepsilon}\in{\rm D}_{\varepsilon}}\left|\overline{\lambda}_{0}(t) \dfrac{1}{N}\sum_{j=1}^{I^{N,\varepsilon}(0)}\mathds{1}_{Y_{j}(t)=x_{ \varepsilon}}-\overline{\lambda}_{0}(t)\sum_{y_{\varepsilon}}\overline{I}^{ \varepsilon}(0,y_{\varepsilon})q_{\varepsilon}^{y_{\varepsilon},x_{\varepsilon }}(0,t)\right|\longrightarrow 0,\ \text{as }(N,\varepsilon)\longrightarrow(\infty,0).\]
We have
\[\frac{1}{N}\sum_{j=1}^{I^{N,\varepsilon}(0)}\mathds{1}_{Y_{j}(t)=x_{ \varepsilon}}=\frac{1}{N}\sum_{j=1}^{I^{N,\varepsilon}(0)}\left[\mathds{1}_{Y_{j }(t)=x_{\varepsilon}}-\mathbb{P}\left(Y_{j}(t)=x_{\varepsilon}\right)\right]+ \frac{1}{N}\sum_{j=1}^{I^{N,\varepsilon}(0)}\mathbb{P}\left(Y_{j}(t)=x_{ \varepsilon}\right).\] \[\mathbb{E}\left\{\left(\frac{1}{N}\sum_{j=1}^{I^{N,\varepsilon}( 0)}\left[\mathds{1}_{Y_{j}(t)=x_{\varepsilon}}-\mathbb{P}\left(Y_{j}(t)=x_{ \varepsilon}\right)\right]\right)^{2}\right\} =\frac{1}{N^{2}}\sum_{j=1}^{I^{N,\varepsilon}(0)}\mathbb{E}\left( \left|\mathds{1}_{Y_{j}(t)=x_{\varepsilon}}-\mathbb{P}\left(Y_{j}(t)=x_{ \varepsilon}\right)\right|^{2}\right)\] \[\leq\frac{C}{N}\,.\]
It follows that
\[\mathbb{E}\left\{\sup_{x_{\varepsilon}\in\mathrm{D}_{\varepsilon}}\left(\frac {1}{N}\sum_{j=1}^{I^{N,\varepsilon}(0)}\left[\mathds{1}_{Y_{j}(t)=x_{ \varepsilon}}-\mathbb{P}\left(Y_{j}(t)=x_{\varepsilon}\right)\right]\right)^ {2}\right\}\leq\frac{C}{N\varepsilon^{d}}\quad\to 0, \tag{5.2}\]
provided \(N\varepsilon^{d}\to\infty\).
Since \(\overline{\lambda}_{0}(t)\) is bounded, it remains to evaluate the quantity \(\frac{1}{N}\sum_{j=1}^{I^{N,\varepsilon}(0)}\mathbb{P}\left(Y_{j}(t)=x_{\varepsilon}\right)-\sum_{y_{\varepsilon}}\overline{I}^{\varepsilon}(0,y_{\varepsilon})q_{\varepsilon}^{y_{\varepsilon},x_{\varepsilon}}(0,t)\).
We have
\[\frac{1}{N}\sum_{j=1}^{I^{N,\varepsilon}(0)}\mathbb{P}\left(Y_{j}(t)=x_{ \varepsilon}\right)=\frac{1}{N}\sum_{y_{\varepsilon}}\sum_{j=1}^{I^{N, \varepsilon}(0)}\mathbb{P}\left(Y_{j}(0)=y_{\varepsilon}\right)q_{\varepsilon }^{y_{\varepsilon},x_{\varepsilon}}(0,t),\ \text{thus}\]
\[\sup_{x_{\varepsilon}}\left|\frac{1}{N}\sum_{j=1}^{I^{N,\varepsilon}(0)}\mathbb{P}\left(Y_{j}(t)=x_{\varepsilon}\right)-\sum_{y_{\varepsilon}}\overline{I}^{\varepsilon}(0,y_{\varepsilon})q_{\varepsilon}^{y_{\varepsilon},x_{\varepsilon}}(0,t)\right| \leq\frac{1}{N}\sup_{x_{\varepsilon}}\sum_{y_{\varepsilon}}q_{\varepsilon}^{y_{\varepsilon},x_{\varepsilon}}(0,t)\bigg{|}\sum_{j=1}^{I^{N,\varepsilon}(0)}\mathbb{P}\left(Y_{j}(0)=y_{\varepsilon}\right)-N\overline{I}^{\varepsilon}(0,y_{\varepsilon})\bigg{|}\] \[\leq\frac{1}{N}\sup_{x_{\varepsilon}}\sum_{y_{\varepsilon}}q_{\varepsilon}^{y_{\varepsilon},x_{\varepsilon}}(0,t)\frac{\overline{I}^{\varepsilon}(0,y_{\varepsilon})}{\overline{I}^{\varepsilon}(0)}\bigg{|}I^{N,\varepsilon}(0)-N\overline{I}^{\varepsilon}(0)\bigg{|}\] \[\leq\frac{C}{N}\quad\longrightarrow 0\,. \tag{5.3}\]
Combining (5.1), (5.2) and (5.3), we finally have
\[\sup_{0\leq t\leq T}\mathbb{E}\left(\sup_{x_{\varepsilon}\in\mathrm{D}_{\varepsilon}}\left|\frac{1}{N}\sum_{j=1}^{I^{N,\varepsilon}(0)}\lambda_{-j}(t)\mathds{1}_{Y_{j}(t)=x_{\varepsilon}}-\overline{\lambda}_{0}(t)\sum_{y_{\varepsilon}}\overline{I}^{\varepsilon}(0,y_{\varepsilon})q_{\varepsilon}^{y_{\varepsilon},x_{\varepsilon}}(0,t)\right|^{2}\right)\longrightarrow 0\,, \tag{5.4}\]
provided \(N\varepsilon^{d}\to+\infty\).
Let \(\sigma^{N,\varepsilon}\) be the stopping time defined by
\[\sigma^{N,\varepsilon}(\omega):=\inf\left\{t>0\,,\omega\notin A_{t,\delta} \cap B_{t,\delta}\right\}, \tag{5.5}\]
where for all \(t\leq T\), \(\delta>0\),
\[A_{t,\delta}=\left\{\left\|\int_{0}^{t}T_{S,\varepsilon}(t-s)d\mathscr{M}_{S} ^{N,\varepsilon}(s)\right\|_{\infty}\leq\delta\right\},\quad B_{t,\delta}= \left\{\left\|\int_{0}^{t}T_{I,\varepsilon}(t-s)d\widetilde{\mathscr{M}}_{I}^{N,\varepsilon}(s)\right\|_{\infty}\leq\delta\right\},\]
with
\[\mathscr{M}_{S}^{N,\varepsilon}(t,x_{\varepsilon})=\sum_{y_{\varepsilon}\sim x_{\varepsilon}}\frac{1}{N}M_{S}^{y_{\varepsilon},x_{\varepsilon}}\left(N\int_{0}^{t}\frac{\nu_{S}}{\varepsilon^{2}}\overline{S}^{N,\varepsilon}(s,y_{\varepsilon})ds\right)-\sum_{y_{\varepsilon}\sim x_{\varepsilon}}\frac{1}{N}M_{S}^{x_{\varepsilon},y_{\varepsilon}}\left(N\int_{0}^{t}\frac{\nu_{S}}{\varepsilon^{2}}\overline{S}^{N,\varepsilon}(s,x_{\varepsilon})ds\right),\] \[\widetilde{\mathscr{M}}_{I}^{N,\varepsilon}(t,x_{\varepsilon})=\mathscr{M}_{I}^{N,\varepsilon}(t,x_{\varepsilon})+\mathscr{M}_{SI}^{N,\varepsilon}(t,x_{\varepsilon}),\quad\text{ where }\] \[\mathscr{M}_{I}^{N,\varepsilon}(t,x_{\varepsilon})=\sum_{y_{\varepsilon}\sim x_{\varepsilon}}\frac{1}{N}M_{I}^{y_{\varepsilon},x_{\varepsilon}}\left(N\int_{0}^{t}\frac{\nu_{I}}{\varepsilon^{2}}\overline{I}^{N,\varepsilon}(s,y_{\varepsilon})ds\right)-\sum_{y_{\varepsilon}\sim x_{\varepsilon}}\frac{1}{N}M_{I}^{x_{\varepsilon},y_{\varepsilon}}\left(N\int_{0}^{t}\frac{\nu_{I}}{\varepsilon^{2}}\overline{I}^{N,\varepsilon}(s,x_{\varepsilon})ds\right),\] \[\mathscr{M}_{SI}^{N,\varepsilon}(t,x_{\varepsilon})=\frac{1}{N}\int_{0}^{t}\int_{0}^{\infty}\mathds{1}_{u\leq S^{N,\varepsilon}(s^{-},x_{\varepsilon})\overline{\Gamma}^{N,\varepsilon}(s^{-},x_{\varepsilon})}\overline{Q}^{x_{\varepsilon}}(ds,du).\]
\(\overline{Q}^{x_{\varepsilon}}(ds,du):=Q^{x_{\varepsilon}}(ds,du)-dsdu\) is the compensated PRM associated with \(Q^{x_{\varepsilon}}(ds,du)\), and we have used the notations
\[M_{S}^{x_{\varepsilon},y_{\varepsilon}}(t)=P_{S}^{x_{\varepsilon},y_{ \varepsilon}}(t)-t,\quad M_{I}^{x_{\varepsilon},y_{\varepsilon}}(t)=P_{I}^{x _{\varepsilon},y_{\varepsilon}}(t)-t.\]
Let \(\bar{c}:=\dfrac{\lambda^{*}\beta^{*}\left\|\overline{I}^{N,\varepsilon}(t)\right\|_{\infty}}{c^{\gamma}}\), where \(c\) stands for the bound in Lemma 4.2. We define the stopping time
\[\tau^{N,\varepsilon}=\inf\left\{t>0\,,\,\left\|\int_{0}^{t}e^{(t-s)\left( \Delta_{\varepsilon}^{S}-\bar{c}I_{d}\right)}d\widetilde{\mathscr{M}}_{S}^{ N,\varepsilon}(s)\right\|_{\infty}\geq\frac{c}{8}\right\},\]
where \(I_{d}\) is the identity operator on \(\mathrm{H}^{\varepsilon}\), and \(\widetilde{\mathscr{M}}_{S}^{N,\varepsilon}(t,x_{\varepsilon}):=\mathscr{M}_{ S}^{N,\varepsilon}(t,x_{\varepsilon})-\mathscr{M}_{SI}^{N,\varepsilon}(t,x_{ \varepsilon})\).
In the proof of the next Proposition, we shall need the following Lemma.
**Lemma 5.4**: _As \((N,\varepsilon)\to(\infty,0)\) in such a way that \(N\varepsilon^{d}\to\infty\), \(\left\|\overline{S}^{N,\varepsilon}(0,.)-\overline{S}^{\varepsilon}(0,.)\right\|_{\infty}\longrightarrow 0\) in \(L^{2}(\Omega)\)._
**Proof.** We have
\[\overline{S}^{N,\varepsilon}(0,x_{\varepsilon})-\overline{S}^{\varepsilon}(0,x_{\varepsilon}) = \frac{1}{N}\sum_{j=1}^{S^{N,\varepsilon}(0)}\mathds{1}_{X_{j}=x_{\varepsilon}}-\mathbb{P}\left(X=x_{\varepsilon}\right)\overline{S}^{\varepsilon}(0)\] \[= \overline{S}^{\varepsilon}(0)\frac{1}{N\overline{S}^{\varepsilon}(0)}\sum_{j=1}^{S^{N,\varepsilon}(0)}\left[\mathds{1}_{X_{j}=x_{\varepsilon}}-\mathbb{P}\left(X=x_{\varepsilon}\right)\right]+\frac{\mathbb{P}\left(X=x_{\varepsilon}\right)}{N}\left[S^{N,\varepsilon}(0)-N\overline{S}^{\varepsilon}(0)\right].\]
\[\mathbb{E}\left[\left|\overline{S}^{N,\varepsilon}(0,x_{ \varepsilon})-\overline{S}^{\varepsilon}(0,x_{\varepsilon})\right|^{2}\right] \leq \frac{2}{N^{2}}\sum_{j=1}^{S^{N,\varepsilon}(0)}Var\left[ \mathds{1}_{X=x_{\varepsilon}}\right]+\frac{2\left[\mathbb{P}\left(X=x_{ \varepsilon}\right)\right]^{2}}{N^{2}}\] \[\leq \frac{\overline{S}^{\varepsilon}(0)}{N}\frac{C}{c}\varepsilon^{d} +\frac{C\varepsilon^{2d}}{N^{2}}\leq\frac{C^{\prime}}{N}+\frac{C\varepsilon^{2 d}}{N^{2}}.\]
Then
\[\mathbb{E}\left[\sup_{x_{\varepsilon}\in\mathrm{D}_{\varepsilon}} \left|\overline{S}^{N,\varepsilon}(0,x_{\varepsilon})-\overline{S}^{\varepsilon}(0,x_{\varepsilon})\right|^{2}\right] \leq \frac{C^{\prime}}{N\varepsilon^{d}}+\frac{C\varepsilon^{d}}{N^{2}}.\]
The result follows. \(\square\)
**Proposition 5.1**: _For all \(T>0\), there exists \(C\) such that for \(N\) large enough if \(t\leq\sigma^{N,\varepsilon}\wedge T\), then \(\left\|\overline{S}^{N,\varepsilon}(t)\right\|_{\infty}\leq C\) and \(\left\|\overline{I}^{N,\varepsilon}(t)\right\|_{\infty}\leq C\), for all \(\varepsilon>0\). Moreover there exists \(\varepsilon_{0}>0\) and \(c_{0}>0\) such that if \(t\leq\sigma^{N,\varepsilon}\wedge\tau^{N,\varepsilon}\wedge T\), \(\overline{B}^{N,\varepsilon}(t,x_{\varepsilon})\geq c_{0}\), for all \(x_{\varepsilon}\in\mathrm{D}_{\varepsilon}\), provided \(\varepsilon\leq\varepsilon_{0}\)._
**Proof.** Let us first treat the term \(\left\|\overline{S}^{N,\varepsilon}(t)\right\|_{\infty}\).
Using the Duhamel formula, we have
\[\overline{S}^{N,\varepsilon}(t,x_{\varepsilon})\leq\left(T_{S,\varepsilon}(t) \overline{S}^{N,\varepsilon}(0,.)\right)(x_{\varepsilon})+\int_{0}^{t}\left(T_ {S,\varepsilon}(t-s)d\mathscr{M}_{S}^{N,\varepsilon}(s,.)\right)(x_{ \varepsilon}).\]
Since \(\overline{S}^{N,\varepsilon}(0,x_{\varepsilon})\leq C\), for all \(x_{\varepsilon}\in\mathrm{D}_{\varepsilon}\), we obtain that for \(t\leq\sigma^{N,\varepsilon}\wedge T\),
\[\left\|\overline{S}^{N,\varepsilon}(t)\right\|_{\infty}\leq C+\delta\,.\]
We now consider the term \(\left\|\overline{I}^{N,\varepsilon}(t)\right\|_{\infty}\). Arguing as in the proof of Lemma 4.1, we have for \(t\leq\sigma^{N,\varepsilon}\wedge T\),
\[\left\|\overline{I}^{N,\varepsilon}(t)\right\|_{\infty} \leq e^{Ct}\left(C+\sup_{0\leq t\leq T}\,\left\|\int_{0}^{t}T_{I,\varepsilon}(t-s)d\widetilde{\mathscr{M}}_{I}^{N,\varepsilon}(s)\right\|_{\infty}\right)\] \[\leq\left(C+\delta\right)e^{CT}.\]
We finally consider the term \(\overline{B}^{N,\varepsilon}(t,x_{\varepsilon})\). By Lemma 5.4, \(\left\|\overline{S}^{N,\varepsilon}(0,.)-\overline{S}^{\varepsilon}(0,.)\right\|_{\infty}\longrightarrow 0\), and by Lemma 4.2, \(\overline{S}^{\varepsilon}(0,x_{\varepsilon})\geq c\) for all \(x_{\varepsilon}\in\mathrm{D}_{\varepsilon}\); hence for \(N\) large enough, \(\mathbb{P}\left(\inf_{x_{\varepsilon}}\overline{S}^{N,\varepsilon}(0,x_{\varepsilon})\geq\frac{c}{2}\right)\) is close to \(1\). Let \(T_{c}^{N,\varepsilon}=\inf\left\{t\,,\inf_{x_{\varepsilon}}\overline{S}^{N,\varepsilon}(t,x_{\varepsilon})<\frac{c}{4}\right\}\). On the interval \([0\,,T_{c}^{N,\varepsilon})\), \(\overline{S}^{N,\varepsilon}(t,x_{\varepsilon})\geq\frac{c}{4}\), \(\forall x_{\varepsilon}\in\mathrm{D}_{\varepsilon}\). For all \(t\leq T_{c}^{N,\varepsilon}\wedge\sigma^{N,\varepsilon}\wedge T\), we have
\[\overline{\Gamma}^{N,\varepsilon}(t,x_{\varepsilon})=\frac{1}{\left\lfloor \overline{B}^{N,\varepsilon}(t,x_{\varepsilon})\right\rfloor^{\gamma}}\sum_{ y_{\varepsilon}}\beta_{\varepsilon}^{x_{\varepsilon},y_{\varepsilon}}(t) \overline{\mathfrak{F}}^{N,\varepsilon}(t,y_{\varepsilon})\leq\frac{4^{ \gamma}\lambda^{*}\beta^{*}\left\|\overline{T}^{N,\varepsilon}(t)\right\|_{ \infty}}{c^{\gamma}}=\bar{c}\]
and then, if moreover \(t\leq\tau^{N,\varepsilon}\),
\[\overline{S}^{N,\varepsilon}(t,x_{\varepsilon}) \geq \left(e^{(\Delta_{\varepsilon}^{S}-\overline{c}I_{d})t}\overline{ S}^{N,\varepsilon}(0)\right)(x_{\varepsilon})+\int_{0}^{t}\left(e^{(t-s)(\Delta_{ \varepsilon}^{S}-\overline{c}I_{d})}d\widetilde{\mathscr{M}}_{S}^{N,\varepsilon }(s)\right)(x_{\varepsilon})\] \[\geq \frac{c}{2}e^{-\overline{c}t}-\frac{c}{8}. \tag{5.6}\]
We note that \(\frac{c}{2}e^{-\overline{c}t}\geq\frac{c}{4}\quad\text{iff}\quad t\leq\frac{ \log 2}{\bar{c}}=T_{\bar{c}}\).
So, on the event \(\tau^{N,\varepsilon}\wedge\sigma^{N,\varepsilon}\wedge T\geq T_{\bar{c}}\), \(\overline{S}^{N,\varepsilon}(t,x_{\varepsilon})\geq\frac{c}{8},\quad\forall \,0\leq t\leq T_{\bar{c}}\).
\[\text{For }t>T_{\bar{c}},\quad\overline{I}^{N,\varepsilon}(t,x_{\varepsilon})\geq\left(T_{I,\varepsilon}(t)\overline{I}^{N,\varepsilon}(0)\right)(x_{\varepsilon})+\int_{0}^{t}\left(T_{I,\varepsilon}(t-s)d\mathscr{M}_{I}^{N,\varepsilon}(s)\right)(x_{\varepsilon}).\]
We choose an arbitrary \(T>T_{\bar{c}}\). We know from the proof of Lemma 4.2 that there exists \(\varepsilon_{0}\) and \(\underline{c}\) such that \(\overline{I}^{\varepsilon}(t,x_{\varepsilon})\geq\underline{c}\) for all \(\varepsilon\leq\varepsilon_{0}\), \(x_{\varepsilon}\in\mathrm{D}_{\varepsilon}\) and \(\frac{\log 2}{\bar{c}}\leq t\leq T\). If we now choose \(\delta=\frac{\underline{c}}{2}\) in the definition of \(\sigma^{N,\varepsilon}\), we deduce that for any \(\varepsilon\leq\varepsilon_{0}\), \(x_{\varepsilon}\in\mathrm{D}_{\varepsilon}\), \(T_{\bar{c}}\leq t\leq\sigma^{N,\varepsilon}\wedge T\), \(\overline{I}^{N,\varepsilon}(t,x_{\varepsilon})\geq\frac{\underline{c}}{2}\).
\(\square\)
From now on, we decree that \(\sigma^{N,\varepsilon}=0\) whenever \(\inf_{x_{\varepsilon}}\overline{S}^{N,\varepsilon}(0,x_{\varepsilon})<\frac{c}{2}\), or \(\varepsilon>\varepsilon_{0}\).
**Lemma 5.5**: _Given \(T>0\), there exists \(C>0\) such that for any \(t<\tau^{N,\varepsilon}\wedge\sigma^{N,\varepsilon}\), we have_
\[\begin{split}\left\|\,\overline{S}^{N,\varepsilon}(t)\overline{\Gamma}^{N,\varepsilon}(t)-\overline{S}^{\varepsilon}(t)\overline{\Gamma}^{\varepsilon}(t)\,\right\|_{\infty}&\leq C\Bigg{(}\,\left\|\,\overline{S}^{N,\varepsilon}(t)-\overline{S}^{\varepsilon}(t)\,\right\|_{\infty}\\ &+\,\left\|\overline{\mathfrak{F}}^{N,\varepsilon}(t)-\overline{\mathfrak{F}}^{\varepsilon}(t)\,\right\|_{\infty}+\,\left\|\,\overline{I}^{N,\varepsilon}(t)-\overline{I}^{\varepsilon}(t)\,\right\|_{\infty}\Bigg{)}. \end{split} \tag{5.7}\]
**Proof.** Note that, using the map \(\mathscr{H}\) defined in Remark 4.1, with a slight modification of the constants, we have
\[\overline{S}^{N,\varepsilon}(t,x_{\varepsilon})\overline{\Gamma}^{N, \varepsilon}(t,x_{\varepsilon})-\overline{S}^{\varepsilon}(t,x_{\varepsilon })\overline{\Gamma}^{\varepsilon}(t,x_{\varepsilon})=\mathscr{H}\left( \overline{S}^{N,\varepsilon},\overline{I}^{N,\varepsilon},\overline{ \mathfrak{F}}^{N,\varepsilon}\right)(t,x_{\varepsilon})-\mathscr{H}\left( \overline{S}^{\varepsilon},\overline{I}^{\varepsilon},\overline{\mathfrak{F} }^{\varepsilon}\right)(t,x_{\varepsilon}),\]
and the result then follows from the fact that \(\mathscr{H}\) is bounded and globally Lipschitz.
\(\square\)
We define \(\omega^{N,\varepsilon}(t)=\omega^{N,\varepsilon}_{S}(t)+\omega^{N,\varepsilon }_{I}(t)+\omega^{N,\varepsilon}_{\mathfrak{F}}(t)\), with
\[\begin{split}\omega^{N,\varepsilon}_{S}(t)=&\, \left\|\overline{S}^{N,\varepsilon}(0)-\overline{S}^{\varepsilon}(0)\, \right\|_{\infty}+\,\left\|\int_{0}^{t}T_{S,\varepsilon}(t-s)d\widetilde{ \mathscr{H}}^{N,\varepsilon}_{S}(s)\,\right\|_{\infty},\\ \omega^{N,\varepsilon}_{I}(t)=&\,\left\|\,\bar{I}^{ N,\varepsilon}(0)-\overline{I}^{\varepsilon}(0)\,\right\|_{\infty}+\,\left\|\int_{0}^{t}T _{I,\varepsilon}(t-s)d\widetilde{\mathscr{M}}^{N,\varepsilon}_{I}(s)\,\right\| _{\infty},\\ \omega^{N,\varepsilon}_{\mathfrak{F}}(t)=&\,\left\| \overline{\mathfrak{F}}^{N,\varepsilon}_{0}(t)-\overline{\mathfrak{F}}^{ \varepsilon}_{0}(t)\,\right\|_{\infty}+\,\left\|\,\mathscr{M}^{N,\varepsilon }_{\mathfrak{F}}(t)\,\right\|_{\infty},\end{split} \tag{5.8}\]
where
\[\mathscr{M}^{N,\varepsilon}_{\mathfrak{F}}(t,x_{\varepsilon})=\frac{1}{N}\sum _{y_{\varepsilon}}\int_{0}^{t}\int_{0}^{\infty}\int_{\mathbf{D}}\int_{\mathbf{ D}}\lambda(t-s)\mathds{1}_{u\leq S^{N,\varepsilon}(s^{-},y_{\varepsilon}) \overline{\Gamma}^{N,\varepsilon}(s^{-},y_{\varepsilon})}\mathds{1}_{Y^{s,y \varepsilon}(t)=x_{\varepsilon}}\overline{Q}^{y_{\varepsilon}}(ds,du,d \lambda,dY).\]
Note that \(\mathscr{M}^{N,\varepsilon}_{\mathfrak{F}}\) is not a martingale.
**Lemma 5.6**: _As \((N,\varepsilon)\to(\infty,0)\), in such a way that \(N\varepsilon^{d}\to\infty\),_
\[\sup_{0\leq t\leq T}\,\mathbb{E}\left(\mathds{1}_{t\leq\sigma^{N,\varepsilon}\wedge\tau^{N,\varepsilon}\wedge T}\left[\omega^{N,\varepsilon}(t)\right]^{2}\right)\to 0.\]
**Proof.** We shall use the following notation
\[\left\|\Phi^{\varepsilon}\right\|_{\mathtt{H}^{\varepsilon}}:=\left[\sum_{x_ {\varepsilon}}\left|\Phi^{\varepsilon}_{x_{\varepsilon}}\right|^{2}\right]^{ 1/2},\]
for any step function \(\Phi^{\varepsilon}\) (\(\Phi^{\varepsilon}_{x_{\varepsilon}}\) denoting the value of \(\Phi^{\varepsilon}\) on the cell \(V_{\varepsilon}(x_{\varepsilon})\)).
Thanks to Theorem 2.1 in P. Kotelenez [7], we have
\[\begin{split}\mathbb{E}\left[\sup_{t\leq\sigma^{N,\varepsilon}\wedge\tau^{N,\varepsilon}\wedge T}\,\left\|\,\int_{0}^{t}T_{S,\varepsilon}(t-s)d\mathscr{M}^{N,\varepsilon}_{SI}(s)\,\right\|_{\mathtt{H}^{\varepsilon}}^{2}\right]&\leq C\mathbb{E}\left[\,\left\|\,\mathscr{M}^{N,\varepsilon}_{SI}(\sigma^{N,\varepsilon}\wedge\tau^{N,\varepsilon}\wedge T)\,\right\|_{\mathtt{H}^{\varepsilon}}^{2}\right]\\ &\leq\frac{C}{N}\sum_{x_{\varepsilon}}\mathbb{E}\left(\int_{0}^{T}\overline{S}^{N,\varepsilon}(s\wedge\sigma^{N,\varepsilon}\wedge\tau^{N,\varepsilon},x_{\varepsilon})\overline{\Gamma}^{N,\varepsilon}(s\wedge\sigma^{N,\varepsilon}\wedge\tau^{N,\varepsilon},x_{\varepsilon})ds\right).\end{split}\]
Provided \(t\leq\sigma^{N,\varepsilon}\wedge\tau^{N,\varepsilon}\wedge T\), \(\overline{\Gamma}^{N,\varepsilon}(t,x_{\varepsilon})\leq C(\lambda^{*}, \beta^{*})\) and \(\overline{S}^{N,\varepsilon}(t,x_{\varepsilon})\leq C\). Then
\[\mathbb{E}\left[\sup_{t\leq\sigma^{N,\varepsilon}\wedge\tau^{N,\varepsilon} \wedge T}\,\left\|\int_{0}^{t}T_{S,\varepsilon}(t-s)d\mathscr{M}^{N,\varepsilon }_{SI}(s)\,\right\|_{\mathtt{H}^{\varepsilon}}^{2}\right]\leq C(\lambda^{*}, \beta^{*})\frac{1}{N\varepsilon^{d}}.\]
Since the \(L^{\infty}\) norm is bounded by the \(\mathtt{H}^{\varepsilon}\) norm, as \((N,\varepsilon)\to(\infty,0)\), provided \(N\varepsilon^{d}\to\infty\),
\[\mathbb{E}\left[\sup_{t\leq\sigma^{N,\varepsilon}\wedge\tau^{N, \varepsilon}\wedge T}\bigg{\|}\int_{0}^{t}T_{S,\varepsilon}(t-s)d\mathscr{M}_{ SI}^{N,\varepsilon}(s)\bigg{\|}_{\infty}^{2}\right]\longrightarrow 0. \tag{5.9}\]
The same argument can be used for the term \(\bigg{\|}\int_{0}^{t}T_{S,\varepsilon}(t-s)d\mathscr{M}_{S}^{N,\varepsilon}(s)\bigg{\|}_{\infty}\). We conclude that as \((N,\varepsilon)\longrightarrow(\infty,0)\), in such a way that \(N\varepsilon^{d}\to\infty\),
\[\sup_{t\leq\sigma^{N,\varepsilon}\wedge\tau^{N,\varepsilon}\wedge T}\omega_{S }^{N,\varepsilon}(t)\longrightarrow 0\ \ \text{in}\ L^{2}(\Omega)\,. \tag{5.10}\]
A similar proof establishes that
\[\sup_{t\leq\sigma^{N,\varepsilon}\wedge\tau^{N,\varepsilon}\wedge T}\omega_{I }^{N,\varepsilon}(t)\longrightarrow 0\ \ \text{in}\ L^{2}(\Omega)\,. \tag{5.11}\]
We now consider \(\omega_{\mathfrak{F}}^{N,\varepsilon}(t)\). The convergence to zero of the first term has been established in Lemma 5.3. We now consider the second term. We have
\[\sup_{t\leq T}\mathbb{E}\left(\mathds{1}_{t\leq\sigma^{N,\varepsilon}\wedge\tau^{N,\varepsilon}\wedge T}\sup_{x_{\varepsilon}}\Big{|}\mathscr{M}_{\mathfrak{F}}^{N,\varepsilon}(t,x_{\varepsilon})\Big{|}^{2}\right)\] \[\qquad\qquad\qquad=\frac{1}{N^{2}}\sup_{t\leq T}\mathbb{E}\left[\mathds{1}_{t\leq\sigma^{N,\varepsilon}\wedge\tau^{N,\varepsilon}\wedge T}\sup_{x_{\varepsilon}}\left(\sum_{y_{\varepsilon}}\int_{0}^{t}\int_{0}^{\infty}\int_{\mathbf{D}}\int_{\mathbf{D}}\lambda(t-s)\mathds{1}_{u\leq S^{N,\varepsilon}(s^{-},y_{\varepsilon})\overline{\Gamma}^{N,\varepsilon}(s^{-},y_{\varepsilon})}\right.\right.\] \[\qquad\qquad\qquad\qquad\times\mathds{1}_{Y^{s,y_{\varepsilon}}(t)=x_{\varepsilon}}\overline{Q}^{y_{\varepsilon}}(ds,du,d\lambda,dY)\Big{)}^{2}\right]\] \[\qquad\qquad\leq\frac{1}{N^{2}}\sum_{x_{\varepsilon},y_{\varepsilon}}\mathbb{E}\int_{0}^{\sigma^{N,\varepsilon}\wedge\tau^{N,\varepsilon}\wedge T}\lambda^{2}(t-s)S^{N,\varepsilon}(s,y_{\varepsilon})\overline{\Gamma}^{N,\varepsilon}(s,y_{\varepsilon})q_{\varepsilon}^{y_{\varepsilon},x_{\varepsilon}}(s,t)ds\] \[\qquad\qquad\qquad\leq\frac{(\lambda^{*})^{2}}{N}\sum_{x_{\varepsilon}}\mathbb{E}\left[\int_{0}^{\sigma^{N,\varepsilon}\wedge\tau^{N,\varepsilon}\wedge T}\sup_{y_{\varepsilon}}\left|\overline{S}^{N,\varepsilon}(s,y_{\varepsilon})\overline{\Gamma}^{N,\varepsilon}(s,y_{\varepsilon})\right|\sum_{y_{\varepsilon}}q_{\varepsilon}^{y_{\varepsilon},x_{\varepsilon}}(s,t)ds\right]\] \[\qquad\qquad\qquad\leq C(\lambda^{*})\frac{T}{N\varepsilon^{d}}. \tag{5.12}\]
The result follows. Note that since \(\mathscr{M}_{\mathfrak{F}}^{N,\varepsilon}(t,x_{\varepsilon})\) is not a martingale, the result for \(\omega_{\mathfrak{F}}^{N,\varepsilon}(t)\) is weaker than (5.10) and (5.11).
Lemma 5.6 clearly implies
**Lemma 5.7**: _As \((N,\varepsilon)\longrightarrow(\infty,0)\) in such a way that \(N\varepsilon^{d}\to\infty\), \(\mathds{1}_{t\leq\sigma^{N,\varepsilon}\wedge\tau^{N,\varepsilon}\wedge T}\int_{0}^{t}\omega^{N,\varepsilon}(s)ds\longrightarrow 0\) in probability._
It remains to establish the next result.
**Lemma 5.8**: _As \((N,\varepsilon)\to(\infty,0)\) in such a way that \(N\varepsilon^{d}\to\infty\), \(\mathbb{P}\left(\sigma^{N,\varepsilon}<T\right)\longrightarrow 0\) and \(\mathbb{P}\left(\tau^{N,\varepsilon}<T\right)\longrightarrow 0\)._
**Proof.** We have
\[\mathbb{P}\left(\sigma^{N,\varepsilon}<T\right) \leq\mathbb{P}\left(\sup_{t\leq\sigma^{N,\varepsilon}\wedge T} \bigg{\|}\int_{0}^{t}T_{S,\varepsilon}(t-s)d\mathscr{M}_{S}^{N,\varepsilon}( s)\bigg{\|}_{\infty}\geq\delta/2\right) \tag{5.13}\] \[\qquad\qquad+\mathbb{P}\left(\sup_{t\leq\sigma^{N,\varepsilon} \wedge T}\bigg{\|}\int_{0}^{t}T_{I,\varepsilon}(t-s)d\widetilde{\mathscr{M}}_{ I}^{N,\varepsilon}(s)\bigg{\|}_{\infty}\geq\delta/2\right).\]
We consider the second term only. The first one is treated similarly.
\[\left\|\,\int_{0}^{t}T_{I,\varepsilon}(t-s)d\widetilde{\mathscr{M}}_{I}^{N, \varepsilon}(s)\right\|_{\infty}\leq\left\|\,\int_{0}^{t}T_{I,\varepsilon}(t-s )d\mathscr{M}_{SI}^{N,\varepsilon}(s)\,\right\|_{\infty}+\left\|\,\int_{0}^{t }T_{I,\varepsilon}(t-s)d\mathscr{M}_{I}^{N,\varepsilon}(s)\,\right\|_{\infty},\]
and, from Proposition 3.2 of [8], we have
\[\mathbb{P}\left(\sup_{t\leq\sigma^{N,\varepsilon}\wedge T}\left\|\,\int_{0}^{ t}T_{I,\varepsilon}(t-s)d\mathscr{M}_{I}^{N,\varepsilon}(s)\,\right\|_{ \infty}\geq\frac{\delta}{2}\right)\leq 4\varepsilon^{-d-2}\exp\left(- \mathtt{a}\frac{\delta^{2}}{16}N\right) \tag{5.14}\]
Since we assume that \(N\varepsilon^{d}\longrightarrow\infty\), the right-hand side, hence also the left-hand side of (5.14), tends to \(0\). By Chebyshev's inequality, we have
\[\mathbb{P}\left(\sup_{t\leq\sigma^{N,\varepsilon}\wedge T}\left\|\,\int_{0}^ {t}T_{I,\varepsilon}(t-s)d\mathscr{M}_{SI}^{N,\varepsilon}(s)\,\right\|_{ \mathtt{H}^{\varepsilon}}\geq\frac{\delta}{2}\right)\leq\frac{4}{\delta^{2}} \mathbb{E}\left[\sup_{t\leq\sigma^{N,\varepsilon}\wedge T}\,\left\|\,\int_{0} ^{t}T_{I,\varepsilon}(t-s)d\mathscr{M}_{SI}^{N,\varepsilon}(s)\,\right\|_{ \mathtt{H}^{\varepsilon}}^{2}\right].\]
The right hand side tends to \(0\) as shown in the proof of Lemma 5.6. Since the \(L^{\infty}\) norm is bounded by the \(\mathtt{H}^{\varepsilon}\) norm, this finishes the proof that \(\mathbb{P}\left(\sigma^{N,\varepsilon}<T\right)\to 0\). A similar proof establishes the same result for \(\tau^{N,\varepsilon}\).
\(\square\)
We now extend our stochastic process to the whole space \(\mathbb{T}^{d}\). So, we define
\[\overline{\mathbf{S}}^{\,N,\varepsilon}(t,x):=\sum_{x_{\varepsilon}}\overline{S}^{\,N,\varepsilon}(t,x_{\varepsilon})\mathds{1}_{V_{\varepsilon}(x_{\varepsilon})}(x),\quad\overline{\mathbf{I}}^{\,N,\varepsilon}(t,x):=\sum_{x_{\varepsilon}}\overline{I}^{\,N,\varepsilon}(t,x_{\varepsilon})\mathds{1}_{V_{\varepsilon}(x_{\varepsilon})}(x)\] \[\overline{\mathbf{B}}^{\,N,\varepsilon}(t,x):=\sum_{x_{\varepsilon}}\overline{B}^{\,N,\varepsilon}(t,x_{\varepsilon})\mathds{1}_{V_{\varepsilon}(x_{\varepsilon})}(x),\quad\overline{\mathbf{F}}^{\,N,\varepsilon}(t,x):=\sum_{x_{\varepsilon}}\overline{\mathfrak{F}}^{\,N,\varepsilon}(t,x_{\varepsilon})\mathds{1}_{V_{\varepsilon}(x_{\varepsilon})}(x)\]
and set \(\overline{\mathbf{X}}^{\,N,\varepsilon}:=(\overline{\mathbf{S}}^{\,N, \varepsilon}\,,\,\overline{\mathbf{F}}^{\,N,\varepsilon}\,,\,\overline{ \mathbf{I}}^{\,N,\varepsilon})\).
Let us recall the following Gronwall's lemma.
**Lemma 5.9**: _Let \(\phi\) and \(\psi\) be two nonnegative Borel measurable locally bounded functions on an interval \([0,T)\), with \(T<\infty\) and \(C\) a non-negative constant. If for all \(t\in[0,T)\), the following inequality is satisfied :_
\[\phi(t)\leq C\int_{0}^{t}\phi(s)ds+\psi(t), \tag{5.15}\]
_then \(\phi(t)\leq C\int_{0}^{t}e^{C(t-s)}\psi(s)ds+\psi(t)\) for all \(t\leq T\)._
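A quick numerical sanity check of Lemma 5.9 (with forcing and constants chosen by us purely for illustration): we build a \(\phi\) that essentially saturates the hypothesis (5.15) and verify that it stays below the stated bound.

```python
import numpy as np

# Numerical illustration of Lemma 5.9: if phi(t) <= C*int_0^t phi(s) ds + psi(t),
# then phi(t) <= C*int_0^t exp(C(t-s)) psi(s) ds + psi(t)   (illustrative functions).
C, T, n = 2.0, 1.0, 2000
dt = T / n
t = np.linspace(0.0, T, n + 1)

psi = 0.3 + 0.1 * np.sin(5 * t)            # assumed forcing term
phi = np.empty_like(t)                     # construct a phi saturating the hypothesis
phi[0] = psi[0]
for i in range(1, n + 1):
    phi[i] = C * dt * phi[:i].sum() + psi[i]     # phi(t_i) ~ C*int_0^{t_i} phi + psi(t_i)

bound = np.array([C * dt * (np.exp(C * (t[i] - t[:i])) * psi[:i]).sum() + psi[i]
                  for i in range(n + 1)])
print("Gronwall bound satisfied:", bool(np.all(phi <= bound + 1e-6)))
```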
**Theorem 5.1**: _Let us assume that \((N,\varepsilon)\rightarrow(\infty,0)\), in such a way that \(N\varepsilon^{d}\rightarrow\infty\). Then_
\[\left\|\,\overline{\mathbf{X}}^{\,N,\varepsilon}(t)-\overline{\mathbf{X}}^{ \,\varepsilon}(t)\,\right\|_{\infty}\longrightarrow 0,\text{ in probability},\ \ \forall\,t\geq 0. \tag{5.16}\]
**Proof.** Since \(\left\|\,\overline{\mathbf{X}}^{\,N,\varepsilon}(t)-\overline{\mathbf{X}}^{ \,\varepsilon}(t)\,\right\|_{\infty}=\left\|\,\overline{X}^{\,N,\varepsilon}( t)-\overline{X}^{\,\varepsilon}(t)\,\right\|_{\infty}\), it suffices to show that
\[\left\|\,\overline{X}^{\,N,\varepsilon}(t)-\overline{X}^{\,\varepsilon}(t)\, \right\|_{\infty}\longrightarrow 0,\ \text{ in probability},\text{for all }t\geq 0.\]
We first consider
\[\overline{\mathfrak{F}}^{N,\varepsilon}(t,x_{\varepsilon})=\frac{1}{N}\sum_{j=1}^{I^{N,\varepsilon}(0)}\lambda_{-j}(t)\mathds{1}_{Y_{j}(t)=x_{\varepsilon}}+\sum_{y_{\varepsilon}}\int_{0}^{t}\overline{\lambda}(t-s)\overline{S}^{\,N,\varepsilon}(s,y_{\varepsilon})\overline{\Gamma}^{N,\varepsilon}(s,y_{\varepsilon})q_{\varepsilon}^{y_{\varepsilon},x_{\varepsilon}}(s,t)ds+\mathscr{M}_{\mathfrak{F}}^{\,N,\varepsilon}(t,x_{\varepsilon}),\]
\[\left\|\overline{S}^{N,\varepsilon}(t)-\overline{S}^{\varepsilon}(t)\right\|_{\infty}\leq C\int_{0}^{t}\left\|\overline{X}^{N,\varepsilon}(s)-\overline{X}^{\varepsilon}(s)\right\|_{\infty}ds+\omega_{S}^{N,\varepsilon}(t) \tag{5.20}\] \[\left\|\overline{I}^{N,\varepsilon}(t)-\overline{I}^{\varepsilon}(t)\right\|_{\infty}\leq C\int_{0}^{t}\left\|\overline{X}^{N,\varepsilon}(s)-\overline{X}^{\varepsilon}(s)\right\|_{\infty}ds+\omega_{I}^{N,\varepsilon}(t).\]
It follows that
\[\sup_{0\leq t\leq\sigma^{N,\varepsilon}\wedge\tau^{N,\varepsilon} \wedge T}\left\|\overline{S}^{N,\varepsilon}(t)-\overline{S}^{\varepsilon}(t )\right\|_{\infty} \leq\sup_{0\leq t\leq\sigma^{N,\varepsilon}\wedge\tau^{N, \varepsilon}\wedge T}C\int_{0}^{t}\left\|\overline{X}^{N,\varepsilon}(s)- \overline{X}^{\varepsilon}(s)\right\|_{\infty}ds\] \[+\sup_{0\leq t\leq\sigma^{N,\varepsilon}\wedge\tau^{N, \varepsilon}\wedge T}\omega_{S}^{N,\varepsilon}(t).\]
A similar estimate holds for \(\left\|\overline{\mathfrak{F}}^{N,\varepsilon}(t)-\overline{\mathfrak{F}}^{\varepsilon}(t)\right\|_{\infty}\), with \(\omega_{\mathfrak{F}}^{N,\varepsilon}(t)\) in place of \(\omega_{S}^{N,\varepsilon}(t)\), thanks to Lemma 5.5. Summing the three estimates, we obtain, for all \(t\leq\sigma^{N,\varepsilon}\wedge\tau^{N,\varepsilon}\), \(\left\|\overline{X}^{N,\varepsilon}(t)-\overline{X}^{\varepsilon}(t)\right\|_{\infty}\leq C\int_{0}^{t}\left\|\overline{X}^{N,\varepsilon}(s)-\overline{X}^{\varepsilon}(s)\right\|_{\infty}ds+\omega^{N,\varepsilon}(t)\), so that, by Lemma 5.9 (Gronwall's lemma), for all \(t\leq\sigma^{N,\varepsilon}\wedge\tau^{N,\varepsilon}\),
\[\left\|\overline{X}^{N,\varepsilon}(t)-\overline{X}^{\varepsilon}(t)\right\| _{\infty}\leq Ce^{Ct}\int_{0}^{t}\omega^{N,\varepsilon}(s)ds+\omega^{N, \varepsilon}(t). \tag{5.21}\]
So we deduce from Lemmas 5.6, 5.7 and 5.8 and (5.10) that
\[\sup_{0\leq t\leq T}\left\|\overline{S}^{N,\varepsilon}(t)-\overline{S}^{ \varepsilon}(t)\right\|_{\infty}\longrightarrow 0\ \ \mbox{in probability as}\ (N,\varepsilon)\longrightarrow(\infty,0),\]
and the same is true for \(\overline{I}^{N,\varepsilon}(t)-\overline{I}^{\varepsilon}(t)\). Thus the claim follows. \(\square\)
The above argument in fact gives a convergence which is uniform in time for the \(S\) and \(I\) components.
**Theorem 5.2**: _For all \(T>0\), as \((N,\varepsilon)\longrightarrow(\infty,0)\) in such a way that \(N\varepsilon^{d}\to\infty\),_
\[\sup_{0\leq t\leq T}\Bigg{(}\left\|\,\overline{S}^{N,\varepsilon}(t)-\overline{S}^{\varepsilon}(t)\,\right\|_{\infty}+\left\|\overline{I}^{N,\varepsilon}(t)-\overline{I}^{\varepsilon}(t)\,\right\|_{\infty}\Bigg{)}\longrightarrow 0\text{ in probability}.\]
We can now state our main result.
**Theorem 5.3**: _For all \(T>0\), as \((N,\varepsilon)\longrightarrow(\infty,0)\) in such a way that \(N\varepsilon^{d}\to\infty\), we have,_
\[\forall\,t\in[0,T],\quad\left\|\overline{\mathbf{F}}^{N,\varepsilon}(t)- \overline{\mathbf{F}}(t)\right\|_{\infty}\longrightarrow 0,\quad\text{in probability},\]
_and_
\[\sup_{0\leq t\leq T}\Bigg{(}\left\|\,\overline{\mathbf{S}}^{N,\varepsilon}(t) -\overline{\mathbf{S}}(t)\,\right\|_{\infty}+\left\|\overline{\mathbf{I}}^{N, \varepsilon}(t)-\overline{\mathbf{I}}(t)\,\right\|_{\infty}\Bigg{)} \longrightarrow 0\text{ in probability}\]
_as \((N,\varepsilon)\to(\infty,0)\) in such a way that \(N\varepsilon^{d}\to\infty\)._
**Proof.** By using the triangle inequality, the first statement follows from Theorem 4.1 and Theorem 5.1, and the second statement from Theorem 4.1 and Theorem 5.2.
\(\square\)
|
2303.00388 | Gravitation with modified fluid Lagrangian: Variational principle and an
early dark energy model | Variational principle is the main approach to obtain complete and
self-consistent field equations in gravitational theories. This method works
well in pure field cases such as $f(R)$ and Horndeski gravities. However,
debates exist in the literature over the modification of perfect fluid. This
paper aims to clarify this issue. For a wide class of modified fluid
Lagrangian, we show that the variational principle is unable to give complete
field equations. One additional equation is required for completeness. Adopting
the local energy conservation equation gives the modified fluid a good
thermodynamic interpretation. Our result is the first modified fluid theory
that can incorporate energy conservation. As an application of this framework,
we propose a specific modified fluid model to realize early dark energy
triggered by cosmic radiation-matter transition. This model naturally explains
why early dark energy occurs around matter-radiation equality and is useful in
erasing the Hubble tension. | S. X. Tian, Zong-Hong Zhu | 2023-03-01T10:14:23Z | http://arxiv.org/abs/2303.00388v2 | # Gravitation with modified fluid Lagrangian: Variational principle
###### Abstract
Variational principle is the main approach to obtain complete and self-consistent field equations in gravitational theories. This method works well in pure field cases such as \(f(R)\) and Horndeski gravities. However, debates exist in the literature over the modification of perfect fluid. This paper aims to clarify this issue. For a wide class of modified fluid Lagrangian, we show that the variational principle is unable to give complete field equations. One additional equation is required for completeness. Adopting the local energy conservation equation gives the modified fluid a good thermodynamic interpretation. Our result is the first modified fluid theory that can incorporate energy conservation. As an application of this framework, we propose a specific modified fluid model to realize early dark energy triggered by cosmic radiation-matter transition. This model naturally explains why early dark energy occurs around matter-radiation equality and is useful in erasing the Hubble tension.
## I Introduction
Generally speaking, modified gravities belong to classical field theory, in which the variational principle is an important tool to derive the field equations [1; 2]. Fluid is an important source of gravity that describes the Universe, galaxies and stars [3]. The equations of fluid motion are generally given by microscopic particle physics, not by the variational principle. In gravitational theories, the variational principle of general fluid is still controversial, which hinders progress in modifying gravity from the fluid side. Taub [4] first constructed the Lagrangian of perfect fluid, and later Schutz [5] gave a different but also reasonable result. Gonner [6; 7] first discussed the gravitational theory with nonminimal coupling between spacetime and fluid. Two such theories that have been widely discussed recently are \(f(R,\mathcal{L}_{\rm m})\) gravity [8; 9; 10; 11; 12] and \(f(R,T)\) gravity [13; 14; 15]. A comment on the \(f(R,T)\) gravity says that the pure fluid part \(f(T)\) has no physical significance and the resulting theory is exactly perfect fluid [16; 17]. Harko and Moras [18] refute this comment. In addition, energy is generally not conserved in \(f(R,\mathcal{L}_{\rm m})\) and \(f(R,T)\) theories. Gravitational particle creation process is needed to explain the corresponding thermodynamics [19; 20]. Is there a way to generalize the perfect fluid that preserves energy conservation? If such a theory exists, then it can be consistent with conventional thermodynamics, which makes the theory more attractive. The debate on the \(f(R,T)\) gravity and the energy conservation issue are the first two motivations for this paper.
The third motivation is an early dark energy (EDE) model we proposed in [21]. The EDE present at matter-radiation equality (redshift \(\sim 3400\)) can be used to erase the Hubble tension [22; 23; 24; 25; 26; 27; 28]. However, a coincidence problem arises in the scenario -- why the energy scale of EDE is in coincidence with that of matter-radiation equality when their underlying physics seems unrelated [29]. Sakstein _et al._[29; 30] proposed a solution to this coincidence problem based on neutrino physics. Their starting point is that the neutrino mass is close to \(1\,\mathrm{eV}/c^{2}\), which is exactly the energy (temperature) scale of matter-radiation equality. Using such neutrino to trigger the EDE could explain the coincidence. In [21], we proposed a new idea that EDE may be triggered by radiation-matter transition to solve the coincidence problem. We discussed that \(k\)-essence [31] is unable to realize a viable model, and nonminimal coupling between spacetime and matter may be required. Analysis of this possibility requires a complete framework for gravitational theories with modified fluid. In this paper, we will propose a much more simple purely fluid model to realize the desired EDE.
This paper is organized as follows. Section II presents the general framework of our approach to modifying the fluid and a demonstration in cosmology. We emphasize that we do not consider the nonminimal coupling of spacetime geometry and fluid matter in this paper. Section III discusses the similarities and differences between our results and the minimal coupling cases of \(f(R,\mathcal{L}_{\rm m})\) gravity [10] and \(f(R,T)\) gravity [13]. Section IV presents the desired modified fluid model for EDE. Conclusions are presented in Sec. V.
## II General theory
We adopt the simplest spacetime dynamics and focus on generalizing perfect fluid. The action takes the form [32]
\[S=S_{\rm EH}+S_{\rm F}=\int\mathrm{d}^{4}x\sqrt{-g}\left[\frac{R}{2\kappa}+ \mathcal{L}_{\rm F}\right], \tag{1}\]
where \(\kappa=8\pi G/c^{4}\), \(g=\det(|g_{\mu\nu}|)\), and \(\mathcal{L}_{\textsc{f}}\) is a general modified fluid Lagrangian. Variation of the Einstein-Hilbert action with respect to the metric gives \(\delta S_{\textsc{EH}}=\int\mathrm{d}^{4}x\sqrt{-g}G_{\mu\nu}\delta g^{\mu\nu}/ (2\kappa)\)[3]. Variation of the fluid action can be written formally as \(\delta S_{\textsc{f}}=-\int\mathrm{d}^{4}x\sqrt{-g}T_{\mu\nu}\delta g^{\mu\nu} /2\), where \(T_{\mu\nu}\) is the energy-momentum tensor. These variations give the Einstein field equations \(G_{\mu\nu}=\kappa T_{\mu\nu}\), which in turn give \(\nabla_{\nu}T^{\mu\nu}=0\) based on the Bianchi identity.
More properties of the fluid are needed to derive an explicit expression for \(T_{\mu\nu}\). We assume that \(\mathcal{L}_{\textsc{f}}\) satisfies \(\delta\mathcal{L}_{\textsc{f}}=(\mathrm{d}\mathcal{L}_{\textsc{f}}/\mathrm{d}n)\delta n\) and that the fluid satisfies particle number conservation
\[\nabla_{\mu}(nu^{\mu})=0, \tag{2}\]
where \(n\) is the particle number density and \(u^{\mu}\) is the four-velocity of the fluid. The first assumption is used to emphasize that no derivative term of \(\delta n\) appears in \(\delta\mathcal{L}_{\textsc{f}}\). These two assumptions or their equivalents are widely used to derive the energy-momentum tensors of perfect fluid [4; 33] and beyond [10; 13]. Hawking and Ellis [33] present a simple way to derive \(\delta n\). They start by rewriting Eq. (2) as \((1/\sqrt{-g})\times\partial(\sqrt{-g}nu^{\mu})/\partial x^{\mu}=0\), which means \(\delta(\sqrt{-g}nu^{\mu})=0\). Then the variation of \(n^{2}c^{2}=g^{-1}(\sqrt{-g}nu^{\mu}\sqrt{-g}nu^{\nu})g_{\mu\nu}\) gives
\[\delta n=\frac{n}{2}(g_{\mu\nu}+\frac{u_{\mu}u_{\nu}}{c^{2}})\delta g^{\mu\nu}. \tag{3}\]
Considering the expressions of \(\delta\sqrt{-g}\) and \(\delta\mathcal{L}_{\textsc{f}}\), we obtain
\[T_{\mu\nu}=-n\frac{\mathrm{d}\mathcal{L}_{\textsc{f}}}{\mathrm{d}n}\frac{u_{ \mu}u_{\nu}}{c^{2}}+(\mathcal{L}_{\textsc{f}}-n\frac{\mathrm{d}\mathcal{L}_{ \textsc{f}}}{\mathrm{d}n})g_{\mu\nu}. \tag{4}\]
In principle, how the fluid participates in gravitational interactions is determined by \(\mathcal{L}_{\textsc{f}}\). We can directly specify an expression for \(\mathcal{L}_{\textsc{f}}(n)\), such as \(\mathcal{L}_{\textsc{f}}\propto n\). In this case, the fluid participates in gravitational interactions in the form of particle number. Alternatively, we can also assume that \(\mathcal{L}_{\textsc{f}}\) directly depends on other thermodynamic quantities, such as \(\mathcal{L}_{\textsc{f}}\propto\rho\), where \(\rho\) is the fluid mass density [32]. In this case, the source of the gravitational interaction is \(\rho\) and other related quantities, rather than \(n\) as in the previous case. As we show later, this case requires an additional equation to determine the dependence of \(\rho\) on \(n\), and this equation cannot be given by the variational principle. Note that both cases satisfy \(\delta\mathcal{L}_{\textsc{f}}=(\mathrm{d}\mathcal{L}_{\textsc{f}}/\mathrm{d}n)\delta n\) formally. We now discuss the theoretical self-consistency of the above two cases.
Neglecting the spacetime dynamics, if we specify an explicit expression for \(\mathcal{L}_{\textsc{f}}(n)\), then there are five variables \(\{n,u^{\mu}\}\) to describe the fluid but six evolution or constraint equations \(\{\nabla_{\nu}T^{\mu\nu}=0\), \(u^{\mu}u_{\mu}=-c^{2}\), Eq. (2)}. The system is overdetermined as there are more equations than unknowns. However, the system is still self-consistent as these six equations are not independent of each other. To see this, we start from \(u_{\mu}\nabla_{\nu}T^{\mu\nu}=0\). Substituting Eq. (4) into this equation, we obtain
\[0=u_{\mu}\nabla_{\nu}T^{\mu\nu}=\nabla_{\nu}(T^{\mu\nu}u_{\mu}) -T^{\mu\nu}\nabla_{\nu}u_{\mu},\] \[=\nabla_{\nu}(\mathcal{L}_{\textsc{f}}u^{\nu})-(\mathcal{L}_{ \textsc{f}}-n\frac{\mathrm{d}\mathcal{L}_{\textsc{f}}}{\mathrm{d}n})\nabla_{ \nu}u^{\nu},\] \[=u^{\nu}\nabla_{\nu}\mathcal{L}_{\textsc{f}}+n\frac{\mathrm{d} \mathcal{L}_{\textsc{f}}}{\mathrm{d}n}\nabla_{\nu}u^{\nu},\] \[=\frac{\mathrm{d}\mathcal{L}_{\textsc{f}}}{\mathrm{d}n}\nabla_{ \nu}(nu^{\nu}), \tag{5}\]
where the second line uses \(u^{\mu}u_{\mu}=-c^{2}\) and its derivative \(u^{\mu}\nabla_{\nu}u_{\mu}=0\)[3], and the fourth line uses the chain rule \(\nabla_{\nu}\mathcal{L}_{\textsc{f}}=(\mathrm{d}\mathcal{L}_{\textsc{f}}/ \mathrm{d}n)\nabla_{\nu}n\). Therefore, Eq. (2) can be derived from \(\{\nabla_{\nu}T^{\mu\nu}=0,u^{\mu}u_{\mu}=-c^{2}\}\). For gravitational theories with fluid models given by explicit \(\mathcal{L}_{\textsc{f}}(n)\), the Einstein field equations together with \(u^{\mu}u_{\mu}=-c^{2}\) are complete and self-consistent. Note that, in this case, it is not necessary to introduce other fluid thermodynamic quantities such as mass density \(\rho\) and pressure \(p\).
However, other thermodynamic quantities, e.g., \(\rho\), are needed to describe perfect fluid [4; 33]. If we introduce such a quantity into the fluid Lagrangian, then we have one more variable to describe the fluid. At the same time, we need one more equation to determine the motion of the fluid. This equation cannot be obtained from the gravitational field equations or variational principle. For clarity, here we assume that \(\mathcal{L}_{\textsc{f}}\) is an explicit function of \(\rho\), then there are six variables \(\{\rho,n,u^{\mu}\}\) to describe the fluid but only five independent equations \(\{\nabla_{\nu}T^{\mu\nu}=0\), \(u^{\mu}u_{\mu}=-c^{2}\}\). Note that one can repeat the proof given by Eq. (5) as long as \(\mathrm{d}\rho/\mathrm{d}n\) exists. In principle, the additional equation can be arbitrary since the existing equations are underdetermined. In order to be consistent with conventional thermodynamics, we can adopt the local energy conservation equation [3; 33]
\[n\frac{\mathrm{d}\rho}{\mathrm{d}n}=\frac{p}{c^{2}}+\rho, \tag{6}\]
where \(p=p(\rho)\) is given by the ordinary known equation of state (EOS) of the fluid. Here we only consider the isentropic fluid. This is widely used in studies of modified fluid [10; 13; 18], and is reasonable in many gravitational processes involving fluid, such as big bang nucleosynthesis [35], cosmic recombination [36], and neutron stars [37]. We would like to highlight that \(p\) appearing in Eq. (6) is an auxiliary variable introduced to complete the equation, rather than a quantity given directly by the variational principle. Adopting Eq. (6) allows us to discard the possible gravitational particle creation process [19; 20] in our framework. Note that particles cannot be created in classical field theory; particle creation is a quantum process. We believe that the modified fluid theory is classical, rather than quantum. This is the key reason for our pursuit of energy conservation. For gravitational theories with fluid models given by explicit \(\mathcal{L}_{\textsc{f}}(\rho)\), the equations \(\{G_{\mu\nu}=\kappa T_{\mu\nu}\), \(u^{\mu}u_{\mu}=-c^{2}\), Eq. (6)} are complete and self-consistent. The above discussion demonstrates our core strategy for modifying fluid theory. More complex fluid Lagrangians will be discussed later and compared with existing methodologies in the literature.
In order to demonstrate the principle discussed above more intuitively, here we present a cosmological application. The Universe is assumed to be described by the flat Friedmann-Lemaitre-Robertson-Walker metric \(\mathrm{d}s^{2}=-c^{2}\mathrm{d}t^{2}+a^{2}\mathrm{d}\mathbf{x}^{2}\), where \(a=a(t)\), and the four-velocity \(u^{\mu}=(1,0,0,0)\). Substituting these results into the Einstein field equations with Eq. (4), we obtain
\[H^{2}=-\frac{\kappa c^{2}}{3}\mathcal{L}_{\textsc{f}}, \tag{7a}\] \[\frac{\ddot{a}}{a}=-\frac{\kappa c^{2}}{3}\left(\mathcal{L}_{\textsc{f}}-\frac{3n}{2}\frac{\mathrm{d}\mathcal{L}_{\textsc{f}}}{\mathrm{d}n}\right), \tag{7b}\]
where the Hubble parameter \(H\equiv\dot{a}/a\) and an overdot denotes \(\mathrm{d}/\mathrm{d}t\). Independent of \(\mathcal{L}_{\textsc{f}}\), Eq. (7) gives \(\dot{n}+3Hn=0\), which is exactly Eq. (2). If \(\mathcal{L}_{\textsc{f}}=\mathcal{L}_{\textsc{f}}(n)\), then Eq. (7) is complete as there are two equations and two variables \(\{a,n\}\). If \(\mathcal{L}_{\textsc{f}}=\mathcal{L}_{\textsc{f}}(\rho)\), then Eq. (7) is not complete as no equation determines the evolution of \(\rho\). In this case, one equation such as Eq. (6) is required. For the photon gas contained in the Universe, regardless of the expression of \(\mathcal{L}_{\textsc{f}}(\rho)\), we can adopt Eq. (6) with \(p=\rho c^{2}/3\) so that \(n\propto a^{-3}\) and \(\rho\propto a^{-4}\). Therefore, such a fluid is consistent with conventional thermodynamics.
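For concreteness, the closure of this system for a photon gas can be checked with a few lines of Python (a minimal sketch of ours, not code from the paper):

```python
import numpy as np

# Minimal sketch: integrate particle-number conservation, Eq. (2), and local
# energy conservation, Eq. (6), for a photon gas (p = rho c^2 / 3) against the
# e-folding number N = ln(a).  In e-foldings,
#   dn/dN   = -3 n                  (from n-dot + 3 H n = 0)
#   drho/dN = -3 (rho + p / c**2)   (from Eq. (6) with dn/n = -3 dN)
c = 1.0                      # units with c = 1
w = 1.0 / 3.0                # photon-gas equation of state

N = np.linspace(0.0, 5.0, 20001)
dN = N[1] - N[0]
n, rho = np.empty_like(N), np.empty_like(N)
n[0], rho[0] = 1.0, 1.0

for i in range(len(N) - 1):                       # forward Euler steps
    p = w * rho[i] * c**2
    n[i + 1] = n[i] + dN * (-3.0 * n[i])
    rho[i + 1] = rho[i] + dN * (-3.0 * (rho[i] + p / c**2))

a = np.exp(N)
print("n   * a^3 (approx. constant):", n[-1] * a[-1]**3)    # ~1 up to Euler error
print("rho * a^4 (approx. constant):", rho[-1] * a[-1]**4)  # ~1 up to Euler error
```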
## III \(\mathbf{f(\chi)}\) fluid
The perfect fluid is the main gravitational source in general relativity. Its Lagrangian can be written as \(\mathcal{L}_{\textsc{f}}=-\rho c^{2}\)[32; 33] and the energy-momentum tensor is generally written as \(T_{\mu\nu}^{(\mathrm{pr})}=(\rho+p/c^{2})u_{\mu}u_{\nu}+pg_{\mu\nu}\)[38]. We emphasize that all we obtain from the variational principle is Eq. (4). The appearance of \(p\) in \(T_{\mu\nu}\) comes from substituting Eq. (6) into Eq. (4) with \(\mathcal{L}_{\textsc{f}}=-\rho c^{2}\). The essence of \(u_{\mu}\nabla_{\nu}T^{\mu\nu}=0\) is particle number conservation, Eq. (2), as shown by Eq. (5), rather than energy conservation as widely believed in the literature.
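As a quick check of this statement (a worked substitution, not an additional result): with \(\mathcal{L}_{\textsc{f}}=-\rho c^{2}\), Eq. (6) gives \(n\,\mathrm{d}\mathcal{L}_{\textsc{f}}/\mathrm{d}n=-c^{2}\,n\,\mathrm{d}\rho/\mathrm{d}n=-(p+\rho c^{2})\), and substituting this into Eq. (4) yields
\[T_{\mu\nu}=(p+\rho c^{2})\frac{u_{\mu}u_{\nu}}{c^{2}}+\left(-\rho c^{2}+p+\rho c^{2}\right)g_{\mu\nu}=\left(\rho+\frac{p}{c^{2}}\right)u_{\mu}u_{\nu}+p\,g_{\mu\nu},\]
which is exactly the perfect fluid form \(T_{\mu\nu}^{(\mathrm{pr})}\).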
One generalization of the perfect fluid is to write the Lagrangian as \(\mathcal{L}_{\textsc{f}}=f(\chi)\), where \(\chi\) is a scalar related to the fluid, e.g., \(n\), \(\rho\) or the trace of the conventional energy-momentum tensor \(T^{(\mathrm{pr})}\equiv g^{\mu\nu}T_{\mu\nu}^{(\mathrm{pr})}=3p-\rho c^{2}\). In our framework, the gravitational field equations of the first two cases have been discussed before, and case \(\chi=T^{(\mathrm{pr})}\) is formally identical to case \(\chi=\rho\).
This generalization includes the minimal coupling cases of \(f(R,\mathcal{L}_{\rm m})\) gravity [10] and \(f(R,T)\) gravity [13]. Here is a comparison of our results with those given in the literature [8; 9; 10; 11; 12; 13; 14; 15]. In the series of works on \(f(R,\mathcal{L}_{\rm m})\) gravity [8; 9; 10; 11], the authors used \(\rho\) to denote _rest_ mass density [32], and obtained \(\delta\rho\) from rest mass conservation. This is essentially the same as our discussion of Eqs. (2) and (3). They then analyzed gravitational applications by treating \(\mathcal{L}_{\rm m}\) as an explicit function of \(\rho\), which is similar to the case of \(\mathcal{L}_{\textsc{f}}=\mathcal{L}_{\textsc{f}}(n)\) in our discussions. For the case of minimal coupling between spacetime and matter, they obtained a result similar to our Eq. (4), and then rewrote the result in the form of \(T_{\mu\nu}^{(\mathrm{pr})}\) with redefined mass/energy density and pressure. Finally, a given EOS can be used to reconstruct the explicit expression of \(\mathcal{L}_{\rm m}(\rho)\) (see Sec. II in [11] for an example). In summary, their result suggests that one \(\mathcal{L}_{\rm m}(\rho)\) corresponds to one specific EOS if the fluid is still perfect. Note that this procedure aims to reconstruct the Lagrangian of the perfect fluid, not to generalize the fluid. This is self-consistent, and the result should be equivalent to those given directly in the perfect fluid case. Here we illustrate this equivalence with an example. In our conventions, Eq. (4) and the form of \(T_{\mu\nu}^{(\mathrm{pr})}\) give the redefined mass density \(\tilde{\rho}=-\mathcal{L}_{\textsc{f}}/c^{2}\) and pressure \(\tilde{p}=\mathcal{L}_{\textsc{f}}-n\mathrm{d}\mathcal{L}_{\textsc{f}}/\mathrm{d}n\). Here the tilde represents redefinition. These redefined quantities satisfy \(n\frac{\mathrm{d}\tilde{\rho}}{\mathrm{d}n}=\frac{\tilde{p}}{c^{2}}+\tilde{\rho}\) as \(u^{\mu}\nabla^{\nu}T_{\mu\nu}^{(\mathrm{pr})}=0\). If the EOS \(w(n)\equiv\tilde{p}/(\tilde{\rho}c^{2})\) is known, then \(\mathcal{L}_{\textsc{f}}(n)\) is determined by
\[\frac{n}{\mathcal{L}_{\textsc{f}}}\frac{\mathrm{d}\mathcal{L}_{\textsc{f}}}{\mathrm{d}n}=w+1. \tag{8}\]
For the photon gas (\(w=1/3\)), the above equation gives \(\mathcal{L}_{\textsc{f}}\propto n^{4/3}\), which is consistent with the result obtained in the conventional perfect fluid framework (cf. the standard analysis of a photon gas in an expanding Universe).
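The reconstruction in Eq. (8) is easy to verify symbolically; the following short sympy sketch (ours, for illustration) solves Eq. (8) for a constant \(w\):

```python
import sympy as sp

# Minimal check (our own script, not code from the paper): solve Eq. (8),
#   (n / L) dL/dn = w + 1,  for a constant equation of state w.
n = sp.symbols("n", positive=True)
w = sp.Rational(1, 3)                 # photon gas
L = sp.Function("L")

ode = sp.Eq(n / L(n) * sp.diff(L(n), n), w + 1)
sol = sp.dsolve(ode, L(n))
print(sol)   # L(n) = C1 * n**(4/3), i.e. L_F proportional to n^(4/3) for w = 1/3
```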
Considering the above discussion and the composition of functions, one might guess that any fluid Lagrangian can be regarded as \(\mathcal{L}_{\textsc{f}}(n)\), so that Eq. (1) can only describe the perfect fluid. In this view, the physical mass density and pressure should be redefined as discussed above Eq. (8), and the redefined quantities satisfy conventional conservation laws. This is essentially the core of the comment on \(f(R,T)\) gravity given by [16; 17]. However, in our opinion, this is not true. In principle, the minimal coupling case of \(f(R,T)\) gravity is intended to modify the perfect fluid, rather than reconstruct its Lagrangian. The core of modifying the fluid lies in the relationship between \(\mathcal{L}_{\textsc{f}}\) and the physical mass density \(\rho\). We can still generalize the perfect fluid by modifying \(\mathcal{L}_{\textsc{f}}(\rho)\) as we discussed earlier. We agree with the reply given by [18] that the prior \(\rho\) has a physical thermodynamic interpretation, and the mass density should not be redefined based on a conservation law. In particular, there is a counterexample to [16; 17]. In our framework, both the prior \(\rho\) and the redefined \(\tilde{\rho}\) formally satisfy the conservation law Eq. (6) even if \(\mathcal{L}_{\textsc{f}}(\rho)\) is general. There is no reason to define the physical mass density by the _latter_ one, as done in [16; 17]. Compared with the minimal coupling case of \(f(R,T)\) gravity [13], our theory can naturally incorporate the conservation law Eq. (6), and no gravitational particle creation process [19; 20] is required.
## IV EDE in \(\mathbf{f(\rho,w)}\) fluid
Similar to the \(f(R,w)\) gravity that we mentioned but did not analyze in [21], here we use an \(f(\rho,w)\) fluid to realize the EDE triggered by the cosmic radiation-matter transition. We adopt the Lagrangian
\[\mathcal{L}_{\textsc{f}}=-\rho c^{2}\times\left[1+\alpha\sin^{\beta}(3w\pi)\right], \tag{9}\]
where the dimensionless parameters are \(\alpha=\mathcal{O}(0.1)\) and \(\beta=\mathcal{O}(1)\), and \(w\equiv p/(\rho c^{2})\) is the conventional fluid EOS. For our EDE purpose, the fluid here includes neutrinos, photons, baryons and dark matter. The function \(\sin(3w\pi)\) is chosen such that the modification vanishes at \(w=0\) and \(1/3\). The parameters \(\alpha\) and \(\beta\) control the amplitude and width of \(\Omega_{\text{\tiny EDE}}\), respectively. This realization does not need to specify any energy scale. For the gravitational theory with Eq. (9), the complete and self-consistent field equations are \(\{G_{\mu\nu}=\kappa T_{\mu\nu}\) with Eq. (4), \(u^{\mu}u_{\mu}=-c^{2}\), Eq. (6)}. Note that here \(\frac{\mathrm{d}\mathcal{L}_{\textsc{f}}}{\mathrm{d}n}=\frac{\partial\mathcal{L}_{\textsc{f}}}{\partial\rho}\frac{\mathrm{d}\rho}{\mathrm{d}n}+\frac{\partial\mathcal{L}_{\textsc{f}}}{\partial w}\frac{\mathrm{d}w}{\mathrm{d}n}\).
For the flat Universe, the complete cosmic evolution equations can be chosen as Eqs. (2), (6) and (7a). Here \(w\) is a given variable that characterizes the fluid, and Eq. (7b) can be derived from this set of equations. The Friedmann equation (7a) gives the relative energy density of EDE
\[\Omega_{\text{\tiny EDE}}=\frac{\alpha\sin^{\beta}(3w\pi)}{1+\alpha\sin^{ \beta}(3w\pi)}. \tag{10}\]
We define the e-folding number \(N\equiv\ln(a/a_{0})\), where \(a_{0}\) is the cosmic scale factor today. Then \(w=(1/3)/[1+\exp(N-N_{\text{eq}})]\) for a real Universe containing radiation and pressureless matter [21], where \(N_{\text{eq}}=-8.13\) corresponds to matter-radiation equality [39]. Figure 1 plots the cosmic evolutions of \(w\), \(\Omega_{\text{\tiny EDE}}\) and the density \(\rho_{i}\). The parameter \(\alpha=0.1\) roughly corresponds to \(\Omega_{\text{\tiny EDE}}\approx 10\%\) at matter-radiation equality, which is the preferred value given by cosmological parameter constraints [22; 23; 24; 25; 26]. After the equality, we require that EDE dilute away at least as fast as radiation, which corresponds to \(\beta\geq 1\) (see the bottom part of Fig. 1). The model with \(\beta\geq 1\) also behaves well in the radiation-dominated era. This figure confirms that Eq. (9) fully realizes the idea of EDE triggered by the radiation-matter transition, and solves the relevant coincidence problem.
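For readers who want to reproduce the background behavior of this model, the following short Python sketch (our own illustration, not the code used for Fig. 1) evaluates Eqs. (9)-(10) along the \(w(N)\) history given above:

```python
import numpy as np

# Minimal sketch of the background quantities of the f(rho, w) EDE model,
# Eqs. (9)-(10); parameter values follow the text (alpha = 0.1, beta = 1,
# N_eq = -8.13).  This is an illustrative script, not the authors' code.
alpha, beta, N_eq = 0.1, 1.0, -8.13

N = np.linspace(-15.0, 0.0, 1501)            # e-folding number, N = ln(a/a0)
w = (1.0 / 3.0) / (1.0 + np.exp(N - N_eq))   # radiation -> matter transition
mod = alpha * np.sin(3.0 * np.pi * w)**beta  # modification factor in Eq. (9)
Omega_ede = mod / (1.0 + mod)                # Eq. (10)

i_eq = np.argmin(np.abs(N - N_eq))
print(f"w(N_eq)         = {w[i_eq]:.3f}")          # ~1/6 at equality
print(f"Omega_EDE(N_eq) = {Omega_ede[i_eq]:.3f}")  # ~alpha/(1+alpha), i.e. ~9%
```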
In the limit of \(w\to 0\), we obtain the pressureless perfect fluid from Eq. (9), and then \(\nabla_{\nu}T^{\mu\nu}=0\) gives the geodesic equations \(u^{\nu}\nabla_{\nu}u^{\mu}=0\)[40]. In the solar system, a planet can be regarded as a pressureless fluid element. Therefore, planets move along geodesics even though the fluid Lagrangian reads Eq. (9). A non-zero \(w\) may affect the motion of stars, e.g., neutron stars. This effect may leave an imprint on the gravitational waveforms of binary neutron star mergers. There is another mechanism leading to similar influences: the \(w\)-modification can affect the structure of neutron stars and thus the gravitational waves from binaries through tidal interactions [41; 42; 43; 44; 45; 46; 47]. These effects may be observable by future gravitational wave detectors with optimum sensitivity ranging from decihertz [48] to kilohertz [49]. Analysis of these issues will be presented in future work.
## V Conclusions
A general framework to modify the perfect fluid is presented in this paper. The proof given by Eq. (5) paves the way for constructing complete and self-consistent field equations, and allows the modified fluid to satisfy energy conservation. Comparisons between our results and previous work are discussed in detail. Our variational method and result for \(T_{\mu\nu}\) are similar to those in [18]. The difference is that we highlight that Eq. (6) needs to be introduced separately and cannot be given by the variational principle. Our \(\mathcal{L}_{\textsc{f}}(n)\) case is equivalent to the minimal coupling case of \(f(R,\mathcal{L}_{\rm m})\) gravity [10]. For the debate on \(f(R,T)\) gravity [13], our \(f(\chi)\) case provides evidence against [16; 17] and supports [18], and we conclude that there is no reason to redefine the physical mass density based on the modified fluid Lagrangian or the formal conservation law. Unlike the minimal coupling case of \(f(R,T)\) gravity [13], the energy conservation law Eq. (6) is naturally incorporated in our framework. The nonminimal coupling of spacetime and fluid was not discussed in this paper. This generalization within our framework and a more comprehensive comparison with \(f(R,T)\) gravity will be studied in future work.
As an application, we propose the \(f(\rho,w)\) fluid with Eq. (9) to complete the idea of EDE triggered by the radiation-matter transition [21] -- one way to solve the EDE coincidence problem. There are other ways to address the EDE coincidence, e.g., neutrino-triggered EDE [29; 30; 50; 51], dark matter-triggered EDE [52; 53], and multiple scaling fields [54]. Compared with these models, our model does not require any energy scale, and only introduces two dimensionless parameters of order \(\mathcal{O}(0.1)\) and \(\mathcal{O}(1)\). Such a property may make the theory more natural.
Figure 1: Cosmological evolution of the EDE triggered by the cosmic radiation-matter transition and realized in the \(f(\rho,w)\) fluid. The \(\rho_{i}\) denotes the density of radiation (neutrino and photon, \(\propto a^{-4}\)), matter (baryon and dark matter, \(\propto a^{-3}\)) and EDE [\(=(\rho_{r}+\rho_{\text{m}})\times\alpha\sin^{\beta}(3w\pi)\)], and is rescaled by the matter density at equality \(\rho_{\text{m,eq}}\). The \(\Omega_{\text{\tiny EDE}}\) and \(w\) can be found in the main text. The top axis denotes the cosmological redshift.
In the future, gravitational waves from binary neutron star mergers [41; 42; 43; 44; 45; 46; 47] may provide a cross-check for our EDE model. A positive result from such a cross-check would lead to a robust statement about the existence of the \(w\)-modification of the perfect fluid.
## Acknowledgements
This work was supported by the National Natural Science Foundation of China under Grants No. 12021003, No. 11920101003 and No. 11633001, and the Strategic Priority Research Program of the Chinese Academy of Sciences under Grant No. XDB23000000. S. X. T. was also supported by the Initiative Postdocs Supporting Program under Grant No. BX20200065 and China Postdoctoral Science Foundation under Grant No. 2021M700481.
|
2306.12152 | Exploiting Multimodal Synthetic Data for Egocentric Human-Object
Interaction Detection in an Industrial Scenario | In this paper, we tackle the problem of Egocentric Human-Object Interaction
(EHOI) detection in an industrial setting. To overcome the lack of public
datasets in this context, we propose a pipeline and a tool for generating
synthetic images of EHOIs paired with several annotations and data signals
(e.g., depth maps or segmentation masks). Using the proposed pipeline, we
present EgoISM-HOI a new multimodal dataset composed of synthetic EHOI images
in an industrial environment with rich annotations of hands and objects. To
demonstrate the utility and effectiveness of synthetic EHOI data produced by
the proposed tool, we designed a new method that predicts and combines
different multimodal signals to detect EHOIs in RGB images. Our study shows
that exploiting synthetic data to pre-train the proposed method significantly
improves performance when tested on real-world data. Moreover, to fully
understand the usefulness of our method, we conducted an in-depth analysis in
which we compared and highlighted the superiority of the proposed approach over
different state-of-the-art class-agnostic methods. To support research in this
field, we publicly release the datasets, source code, and pre-trained models at
https://iplab.dmi.unict.it/egoism-hoi. | Rosario Leonardi, Francesco Ragusa, Antonino Furnari, Giovanni Maria Farinella | 2023-06-21T09:56:55Z | http://arxiv.org/abs/2306.12152v2 | Exploiting Multimodal Synthetic Data for Egocentric Human-Object Interaction Detection in an Industrial Scenario
###### Abstract
In this paper, we tackle the problem of Egocentric Human-Object Interaction (EHOI) detection in an industrial setting. To overcome the lack of public datasets in this context, we propose a pipeline and a tool for generating synthetic images of EHOIs paired with several annotations and data signals (e.g., depth maps or instance segmentation masks). Using the proposed pipeline, we present _EgoISM-HOI_ a new multimodal dataset composed of synthetic EHOI images in an industrial environment with rich annotations of hands and objects. To demonstrate the utility and effectiveness of synthetic EHOI data produced by the proposed tool, we designed a new method that predicts and combines different multimodal signals to detect EHOIs in RGB images. Our study shows that exploiting synthetic data to pre-train the proposed method significantly improves performance when tested on real-world data. Moreover, the proposed approach outperforms state-of-the-art class-agnostic methods. To support research in this field, we publicly release the datasets, source code, and pre-trained models at [https://iplab.dmi.unict.it/egoism-hoi](https://iplab.dmi.unict.it/egoism-hoi).
## 1 Introduction
In recent years, wearable devices have become increasingly popular as they offer a first-person perspective of how users interact with the world around them. One of the advantages of wearable devices is that they allow the collection and processing of visual information without requiring users to hold any devices with their hands, enabling them to perform their activities in a natural way. Intelligent systems can analyze this visual information to provide services to support humans in different domains such as activities of daily living (Damen et al., 2014, 2018; Grauman et al., 2021), cultural sites (Farinella et al., 2019) and industrial scenarios (Sener et al., 2022; Mazzamuto et al., 2023). In particular, egocentric vision can be adopted in the industrial context to understand workers' behavior, improve workplace safety, and increase overall productivity. For example, by detecting the hands of the workers and determining which objects they are interacting with, it is possible to monitor object usage, provide information on the procedures to be carried out, and improve the safety of workers by issuing reminders when dangerous objects are manipulated.
Previous works have investigated the problem of Human-Object Interaction detection (HOI) considering either third-person (Gkioxari et al., 2018; Liao et al., 2020) or first-person (Liu et al., 2022; Zhang et al., 2022) points of view. While these works have considered generic scenarios (e.g., COCO objects) or class-agnostic settings (Shan et al., 2020), their use in industrial contexts is still understudied due to the limited availability of public datasets (Ragusa et al., 2021; Sener et al., 2022). To develop a system capable of detecting Egocentric Human-Object Interactions (EHOI) in this context, it is generally required to collect and label large amounts of domain-specific data, which can be expensive in terms of cost and time and is not always possible due to privacy constraints in industrial sectors (Ragusa et al., 2021).
In this paper, we investigate whether the use of synthetic data in first-person vision can mitigate the need for labeled real domain-specific data in model training, which would greatly reduce the cost of gathering a suitable dataset for model development. We propose a pipeline (see Fig. 1) and a tool that, leveraging 3D models of the target environment and objects, produces a large number of synthetic EHOI image examples, automatically labeled with several annotations, such as hand-object 2D-3D bounding boxes, object categories, hand information (i.e., hand side, contact state, and associated active objects) as well as multimodal signals such as depth maps and instance segmentation masks.
Exploiting the proposed pipeline, we present _EgoISM-HOI_ (Egocentric Industrial Synthetic Multimodal dataset for Human-Object Interaction detection), a new photo-realistic dataset of EHOIs in an industrial scenario with rich annotations of hands, objects, and active objects (i.e., the objects the user is interacting with), including class labels, depth maps, and instance segmentation masks (see Fig. 1 (b)). To assess the suitability of the synthetic data generated with the proposed protocol to tackle the EHOI detection task on target real data, we further acquired and labeled 42 real egocentric videos in an industrial laboratory in which different subjects perform test and repair operations on electrical boards1. We annotated all EHOIs
instances in the images, identifying the frames in which interactions occur and annotating all active objects with a bounding box associated with the related object class. In addition, we labeled the hands and all the objects in the images.
We investigated the potential of using the generated synthetic multimodal data, including depth maps and instance segmentation masks, to improve the performance of EHOI detection methods. Specifically, we designed an EHOI detection approach based on the method proposed in Shan et al. (2020) which makes use of the different multimodal signals available within our dataset. Experiments show that the proposed method outperforms baseline approaches based on the exploitation of class-agnostic models trained on out-of-domain real-world data. Indeed, the proposed method achieves good performance when trained with our synthetic data and a very small amount of real-world data. Additional experiments show that, by leveraging multimodal signals, the accuracy and robustness of our EHOI detection system increased.
The contributions of this study are the following: 1) we propose a pipeline that exploits 3D models of real objects and environments to generate thousands of domain-specific synthetic egocentric human-object interaction images paired with several labels and modalities; 2) we present _EgoISM-HOI_, a new multimodal dataset of synthetic EHOIs in an industrial scenario with rich annotations of hands and objects. To test the ability of models to generalize to real-world data, we acquire and manually labeled real-world images of EHOIs in the target environment; 3) we design a new method for EHOI detection that exploits additional modalities, such as depth maps and instance segmentation maps to enhance the performance of classic HOI detection approaches; 4) we perform extensive evaluations to highlight the benefit of using synthetic data to pre-train EHOI detection methods, mainly when a limited set of real data is available, and report improvements of our approach over classical class-agnostic state-of-the-art methods; 5) we release the dataset and code publicly at the following link: [https://iplab.dmi.unict.it/egoism-hoi](https://iplab.dmi.unict.it/egoism-hoi).
The remainder of this paper is organized as follows. Section 2 provides a detailed summary of the related work. Section 3 details the proposed data generation pipeline. Section 4 describes the proposed dataset. Section 5 introduces our multimodal EHOI detection method. Section 6 reports and discusses the performed experiments and ablation studies. Finally, Section 7 concludes the paper.
## 2 Related Work
In this Section, we discuss datasets and state-of-the-art methods for detecting human-object interactions from images and videos acquired from both third (TPV) and first-person vision (FPV).
### Datasets for Human-Object Interaction Detection from Third Person View
Previous works have proposed benchmark datasets to study human-object interactions from a third-person point of view. _PASCAL VOC_(Everingham et al., 2009) was one of the first datasets focusing on understanding human behavior from images. This dataset has been used for several tasks related to human-object interaction understanding, such as object classification, object detection, and static action classification. Gupta and Malik (2015) introduced _The Verbs in COCO_ (V-COCO) dataset, an extension of the _COCO_ dataset (Lin et al., 2014) that includes 26 verb classes, along with bounding box annotations of humans and objects involved in interactions. Chao et al. (2015) presented _Humans Interacting with COmmon Objects_ (HICO), a dataset for detecting human-object interactions that comprises more than 600 categories of human-object interactions across 117 activities and 80 common objects. Chao et al. (2018) extended _HICO_ to _HICO-DET_, adding more than 150,000 annotated instances of human-object interaction pairs. Li et al. (2020) proposed _AmbiguousHOI_, a benchmark that includes hard ambiguous images of HOI instances selected from existing datasets such as _HICO-DET_(Chao et al., 2018), _V-COCO_(Gupta and Malik, 2015), and _OpenImage_(Kuznetsova et al., 2020). The _Human-Object Interaction for Application_ (HOI-A) dataset has been proposed by Liao et al. (2020). It includes 38,668 annotated images with 11 different types of objects, and 10 action categories, comprising 43,820 human instances, 60,438 object instances, and 96,160 interaction instances. Recently, the human-object interaction detection task has been studied by exploiting multimodal signals. The _BEHAVE_ dataset (Bhatnagar et al., 2022) is a multi-view RGB-D dataset of human-object interactions acquired in natural environments, which contains 3D human and object annotations,
instance segmentation masks, and contact-state labels. Most related to our study, _100 Days of Hands_(Shan et al., 2020) is a large-scale dataset of human-object interactions containing more than 131 days of video footage acquired from both third and first-person points of view. The authors extracted 100K frames and annotated with bounding boxes 189.6K hands and 110.1K objects involved in interactions. Moreover, for each hand, they annotated the contact state considering five different classes (i.e., _none, self, other-person, non-portable object_, and _portable object_).
Figure 1: Synthetic EHOI images generation pipeline. (a) We use 3D scanners to acquire 3D models of the objects and environment. (b) We hence use the proposed data generation tool to create the synthetic dataset.
The aforementioned works focused mostly on a third-person point of view. Our study focuses on understanding human-object interactions from a first-person point of view.
### Datasets for Human-Object Interaction Detection from First Person View
Owing to the aforementioned vantage point given by wearable cameras, previous works have proposed datasets to study human-object interactions from first-person vision. _EgoHands_(Bambach et al., 2015) is a dataset composed of egocentric video pairs of people interacting with their hands in different daily-life contexts, where they are involved in four social situations (i.e., playing cards, playing chess, solving puzzles, and playing Jenga). It is composed of 130,000 frames and 4,800 pixel-level segmentation masks of hands. _EPIC-KITCHENS-100_(Damen et al., 2021) contains over 100 hours, 20 million frames, and 90,000 actions in 700 variable-length videos of unscripted activities in 45 kitchen environments. The authors provide spatial annotations of (1) instance segmentation masks using Mask R-CNN (He et al., 2017) and (2) hand and active object bounding boxes labeled with the system introduced in Shan et al. (2020). Darkhalil et al. (2022) proposed _VISOR_, an extension of _EPIC-KITCHENS-100_, which comprises pixel annotations and a benchmark suite for segmenting hands and active objects in egocentric videos. It contains 272,000 manually segmented semantic masks of 257 object classes, 9.9 million interpolated dense masks, and 67,000 hand-object relations. _EGTEA Gaze+_(Li et al., 2021) contains more than 28 hours of egocentric video acquired by subjects performing different meal preparation tasks. The authors provide several annotations, including binocular gaze tracking data, frame-level action annotations, and 15K hand segmentation masks. Recognizing EHOIs could be particularly useful in industrial scenarios, for example, to optimize production processes or to increase workplace safety. _MECCANO_(Ragusa et al., 2021, 2022) is a multimodal dataset of FPV videos for human behavior understanding collected in an industrial-like scenario. It includes gaze signals, depth maps, and several annotations. MECCANO has been explicitly annotated to study EHOIs with bounding boxes around the hands and active objects, and verbs that describe the interactions. _Assembly101_(Sener et al., 2022) is a multi-view action dataset of people assembling and disassembling 101 toy vehicles. It contains 4321 video sequences acquired simultaneously from 8 TPV and 4 FPV cameras, 1M fine-grained action segments, and 18 million 3D hand poses. _Ego4D_(Grauman et al., 2021) is a multimodal video dataset to study egocentric perception. The dataset contains more than 3,500 video hours of daily life activity captured by 931 subjects and additional modalities such as eye gaze data, audio, and 3D mesh of environments. EGO4D has been annotated with bounding boxes around the hands and objects involved in the interactions. _HOI4D_(Liu et al., 2022) is a large-scale 4D egocentric dataset for human-object interaction detection. _HOI4D_ contains more than 2 million RGB-D egocentric video frames in different indoor environments of people interacting with 800 object instances.
Unlike these works, we aim to study the usefulness of synthetic data for training models which need to be deployed in a specific environment. To this aim, we provide _EgoISM-HOI_, a photo-realistic multimodal dataset of synthetic images for understanding human-object interactions acquired in an industrial scenario, paired with labeled real-world images of egocentric human-object interactions in the same target environment. Our dataset contains RGB-D images and rich automatically labeled annotations of hands, objects, and active objects, including bounding boxes, object categories, instance segmentation masks, and interaction information (i.e., hand contact state, hand side, and hand-active object relationships).
### Human-Object Interaction simulators and synthetic datasets
This line of research focused on providing 3D simulators which are able to generate automatically labeled synthetic data (Kolve et al., 2017; Savva et al., 2019; Xia et al., 2020; Hwang et al., 2020; Quattrocchi et al., 2023). While these tools allow simulating an agent that navigates in an indoor environment, there are fewer choices for simulating object interaction. Mueller et al. (2017) proposed a data generation framework that tracks and combines real human hands with virtual objects to generate photorealistic images of hand-object interactions. Using the proposed tool, the authors introduced _SynthHands_, a dataset that contains around 200K RGB-D images of hand-object interactions acquired from 5 FPV virtual cameras. _ManipulaTHOR_(Ehsani et al., 2021) is an extension of the _AI2-THOR_ framework (Kolve et al., 2017) that adds a robotic arm to virtual agents, enabling the interaction with objects. Thanks to this framework, the authors introduced the _Arm POINTNAV_ dataset, which contains interactions in 30 kitchen scenes, 150 object categories, and 12 graspable object categories. Hasson et al. (2019) introduced the _ObMan_ dataset, a large-scale synthetic image dataset of hand-object interactions. The peculiarity of this work is that the authors used the _GraspIt_ software (Miller and Allen, 2004) to improve the photo-realism of the generated interactions. The generated dataset contains more than 20,000 hand-object interactions in which the background is randomized by choosing images from the _LSUN_(Yu et al., 2015) and _ImageNet_(Russakovsky et al., 2015) datasets. Wang et al. (2022) introduced _DexGraspNet_, a large-scale synthetic dataset for robotic dexterous grasping containing 1.32M grasps of 5355 objects among 133 object categories. Ye et al. (2023) proposed an approach for synthesizing virtual human hands interacting with real-world objects from RGB images.
Differently from these works, our generation pipeline has been specifically designed to obtain accurate 3D reconstructions of a target environment and the objects it contains. 3D models of the target environment and objects are used by our
tool to generate realistic egocentric hand-object interactions that integrate coherently with the surrounding environment. Moreover, our tool allows the customization of several parameters of the virtual scene, for example, by randomizing the light points, the position of the virtual object in the environment, or the virtual agent's clothing. In addition, the proposed tool is able to output several annotations automatically labeled and data signals, such as 2D-3D bounding boxes, hand labels (i.e., hand contact state and hand side), instance segmentation masks, and depth maps. Another difference with respect to the aforementioned works is that our tool is designed to automatically generate interactions from a first-person point of view without using any additional real-world data or specific hardware devices other than 3D models.
### Methods for Detecting Human-Object Interactions
In past years, the human-object interaction detection task has been studied from the third-person point of view (Gupta and Malik, 2015; Chao et al., 2015, 2018). Gkioxari et al. (2018) proposed a method for detecting human-object interactions in the form of \(<\)_human, verb, object\(>\)_ triplets, where bounding boxes around objects and humans are also predicted. Specifically, they extended the state-of-the-art object detector Faster R-CNN (Ren et al., 2015) with an additional human-centric branch that uses the features extracted by the backbone to predict a score for candidate human-object pairs and an action class. Liao et al. (2020) proposed a method called _PPDM_ (Parallel Point Detection and Matching) that defines an HOI as a triplet \(<\)_human point, interaction point, object point\(>\)_ composed of three points associated with the human, the active object and the interaction location. Recently, several works addressed the HOI detection task by proposing transformer-based models. Zhang et al. (2022) proposed a new two-stage detector based on a transformer architecture to detect interactions. Wu et al. (2022) proposed an approach for learning a body-part saliency map, which contains informative cues of the person involved in the interaction and other persons in the image, in order to boost HOI detection methods (Chao et al., 2018; Gao et al., 2018). Ma et al. (2023) introduced a transformer-based human-object interaction detector that uses a multi-scale feature extractor and a multi-scale sampling strategy to predict the HOI instances from images with noisy backgrounds in the form of a \(<b_{h},b_{o},c_{o},c_{v}>\) quadruplet, where \(b_{h}\) and \(b_{o}\) represent the human and object boxes, and \(c_{o}\) and \(c_{v}\) the object class and the verb class. While the previous works all modeled HOIs by detecting a bounding box around the human, Shan et al. (2020) addressed the HOI detection task by predicting information about human hands, such as hand location, side, contact state, and, in case of an interaction, a box around the object touched by the hand. Zhang et al. (2022) proposed to use a contact boundary, i.e. the contact region between the hand and the interacting object, to model the interaction relationship between hands and objects. Fu et al. (2022) designed an approach for HOI detection that introduced a new pixel-wise voting function for improving the active object bounding box estimation. Benavent-Lledo et al. (2022) proposed an architecture for human-object interaction detection based on two YOLOv4 object detectors (Bochkovskiy et al., 2020) and an attention-based method. Recently, some works have investigated the use of additional modalities, such as 6DOF hand poses or semantic segmentation masks, to learn more robust representations of human-object interactions. Lu and Mayol-Cuevas (2021) introduced an approach that uses contextual information, i.e. hand pose, hand mask, and object mask, to improve the performance of HOI detection systems.
In this work, we focused on detecting human-object interactions from FPV, where, in most cases, the hands are the only portion of the body visible in the images. To this aim, we designed an approach for detecting egocentric human-object interactions using different multimodal signals available within our _EgoISM-HOI_ dataset. Similar to Shan et al. (2020), our method detects hands from RGB images using a two-stage object detector and predicts some attributes of the latter, such as hands side, hands contact state, and the objects involved in the interactions. Additionally, our approach is able to detect all objects present in the image and infer their category. Similar to Lu and Mayol-Cuevas (2021); Zhang et al. (2022), we exploit multi-modal signals (i.e., depth maps and hand segmentation masks) to predict the hand contact state.
## 3 Proposed EHOI Generation Pipeline
To study the egocentric human-object interaction detection task in a realistic industrial scenario, we have set up a laboratory called _ENIGMA Lab_ (Figure 2) that contains different types of work tools and equipment. Specifically, we considered the following 19 object categories: _power supply, oscilloscope, welder station, electric screwdriver, screwdriver, pliers, welder probe tip, oscilloscope probe tip, low voltage board, high voltage board, register, electric screwdriver battery, working area, welder base, socket, left red button, left green button, right red button_, and _right green button_. Figure 3 shows the acquired 3D models of all the objects considered for the experiments. Note that the categories _left red button, left green button, right red button_, and _right green button_, refer to each button of the electrical panel shown in the bottom-left corner of Figure 3.
We propose a pipeline for generating and labeling synthetic human-object interactions from a first-person point of view using 3D models of the target environment and objects, which can be cheaply collected using commercial scanners. Figure 1 shows the overall scheme of our EHOI data generation pipeline,
which consists of two main phases: 1) the collection of the 3D models, and 2) the generation of EHOI synthetic images using the proposed tool.
Figure 2: A picture of the ENIGMA Lab.
In our study, we noted that high-quality object reconstructions are necessary to generate realistic EHOIs, while high accuracy is not required for environment reconstruction. We used two different 3D scanners to create 3D models. Specifically, we used the structured-light 3D scanner _Artec Eva2_ for scanning the objects, and a _MatterPort3_ device for the environment.
Footnote 2: [https://www.artec3d.com/portable-3d-scanners/artec-eva-v2](https://www.artec3d.com/portable-3d-scanners/artec-eva-v2)
Footnote 3: [https://matterport.com/](https://matterport.com/)
We developed a tool based on the Unity4 engine which exploits 3D models of the objects and the environment to generate synthetic egocentric human-object interaction images together with the following data: 1) RGB images (see Fig. 4 - left), 2) depth maps (see Fig. 4 - right), 3) instance segmentation masks (see Fig. 4 - center), 4) bounding boxes for hands and objects including the object categories, 5) EHOI's metadata, such as information about associations between hands and objects in contact (which hand is in contact with which object), and hand attributes (i.e., hand side, and hand contact state).
Footnote 4: [https://unity.com/](https://unity.com/)
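For concreteness, a single generated frame together with the annotations listed above can be pictured as a record like the following Python sketch (the field and class names are ours, chosen for illustration; they are not the tool's actual export schema):

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative per-frame annotation record: 2D/3D boxes, object categories,
# hand side / contact state, and hand-active-object associations, plus the
# paths of the paired depth map and instance segmentation mask.
@dataclass
class ObjectAnnotation:
    category: str                      # one of the 19 object classes
    box_2d: List[float]                # [x_min, y_min, x_max, y_max], pixels
    box_3d: List[float]                # 3D bounding box parameters
    instance_id: int                   # links to the segmentation mask

@dataclass
class HandAnnotation:
    side: str                          # "left" or "right"
    in_contact: bool                   # hand contact state
    box_2d: List[float]
    active_object_id: Optional[int] = None   # instance_id of the touched object

@dataclass
class FrameAnnotation:
    rgb_path: str
    depth_path: str                    # per-pixel depth map
    segmentation_path: str             # instance segmentation mask
    objects: List[ObjectAnnotation] = field(default_factory=list)
    hands: List[HandAnnotation] = field(default_factory=list)
```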
Our system exploits the _Unity Perception package_ (Unity Technologies, 2020), which offers different tools for generating large-scale synthetic datasets. This package allows randomizing some aspects of the virtual scene, such as the intensity and the color of the lights, the object textures, the presence and amount of motion blur, as well as visual effects like noise; this makes the virtual scene more realistic and adds further diversity to the generated dataset, making it more representative of the real-world environment. In addition, to include further randomized aspects, we created the following randomizers:
* _SurfaceObjectPlacementRandomizer_: Randomizes the position of a group of objects on a flat surface;
* _CustomRotationRandomizer_: Randomizes object rotation by respecting the constraints of each rotation axis;
* _PlayerPlacementRandomizer_: Randomizes the location of the virtual agent in the environment;
* _TextureShirtRandomizer_: Randomizes the texture and color of the virtual agent's shirt;
* _CameraRandomizer_: Randomizes the observed point of the FPV camera;
Examples of randomization are shown in Figure 5.
The Unity Perception package provides a component called _Scenario_ which allows controlling the execution flow of the simulation by setting standard simulation parameters, such as the number of iterations, the seed of the randomizers, and the number of frames to acquire for each iteration. We have extended the basic _Scenario_ by adding the following parameters: 1) the
probability that an interaction will occur in the current iteration, 2) the target object with which the virtual agent will interact in the current interaction (chosen randomly from a list of objects), 3) the probability that two hands are visible from the camera at the same time, and 4) the hand that will interact with the object (right or left).
Figure 4: Examples of synthetic images (left) with the corresponding annotations (center) and depth maps (right) generated with the proposed tool.
Figure 5: Our tool is able to randomize different aspects of the virtual scene, such as the camera and user positions or the shirt's texture and color.
Figure 3: 3D models of the 19 objects considered for the experiments.
Moreover, we used a Unity asset called _Auto Hand - VR Physics Interaction5_ to improve the physics of the agent when it interacts with the objects. This asset provides a Virtual Reality (VR) interaction system that automatically determines an appropriate hand pose during object manipulation. We have integrated this system into our virtual agent by extending it to automate the grabbing process and adding special types of interactions, such as pressing buttons. Examples of the generated images and poses are reported in Figure 4.
Footnote 5: [https://assettstore.unity.com/packages/tools/game-tooklits/auto-hand-vr-physics-interaction-165323](https://assettstore.unity.com/packages/tools/game-tooklits/auto-hand-vr-physics-interaction-165323)
## 4 EgoISM-HOI dataset
We present a new multimodal dataset of EHOIs in the aforementioned industrial scenario called _EgoISM-HOI_. It is composed of two parts: 1) a generated synthetic set of images, and 2) a real-world set of data. Henceforth, we will refer to the synthetic set as _EgoISM-HOI-Synth_, whereas we refer to the real-world data as _EgoISM-HOI-Real_.
_EgoISM-HOI-Synth_. We adopted our EHOI generation pipeline to generate _EgoISM-HOI-Synth_. It contains a total of 23,356 images with associated depth maps and instance segmentation masks, 35,672 hand instances of which 18,884 are involved in an interaction, and 148,024 object instances across the 19 object categories reported in Figure 3. Examples of the data which composes the dataset are reported in Figure 4, while Table 1 reports statistics about the dataset, including the total number of images, hands, objects, and EHOIs.
_EgoISM-HOI-Real_. For _EgoISM-HOI-Real_, we collected and labeled 42 real egocentric videos in the ENIGMA Laboratory. In these videos, subjects performed testing and repairing operations on electrical boards using laboratory tools. To simplify data collection and to make it more consistent, we developed an application for Microsoft Hololens 26 that guides the users through audio and images, suggesting the next steps to perform during the acquisition. We defined 8 procedures composed of several steps, in which we vary the tools and electrical boards interacted by the users. Nineteen subjects participated in the data collection. Two were women and seventeen were men. For privacy reasons, we made sure that no other people are visible in the videos, and all the subjects removed any personal object that might reveal their identities (e.g., rings or wristwatches). We acquired 18 hours, 48 minutes, and 13 seconds of video recordings, with an average duration of 26 minutes and 51 seconds, at a resolution of 2272x1278 pixels and a framerate of 30fps. Table 2 summarizes statistics about the collected data. From these videos, we manually annotated 15,948 images following this strategy: 1) we annotated the first frame in which the hand touches an object (i.e., contact frame), and 2) we annotated the first frame after the hand released the object (i.e., end of contact frame). Finally, we assigned the following attributes: 1) hands and objects bounding boxes, 2) hand side (Left/Right), 3) hand contact state (Contact/No contact), 4) hand-object relationships (e.g., hand \(x\) touches object \(y\)), and 5) object categories. Figure 6 shows some images from this set of data along with the related annotations.
Footnote 6: [https://www.microsoft.com/hololens](https://www.microsoft.com/hololens)
## 5 Proposed approach
Inspired by Shan et al. (2020), our system extends a two-stage object detector with additional modules specialized to recognize human-object interactions. The proposed method is able to exploit different data signals, such as instance segmentation maps and depth maps, to improve the performance of classic HOI detection approaches. Moreover, our method is able to recognize the class of all the objects in the scene. We believe that this knowledge could be used for other downstream tasks.
Figure 7 shows a diagram of the overall architecture of the method. Firstly, the input RGB image is passed to the _backbone_ component to extract the image features. These features are used by the _object detector branch_ and the _instance segmentation branch_ to detect, recognize and generate segmentation masks of all the objects and hands in the image. Simultaneously, the _monocular depth estimation branch_ predicts a depth map of the scene from the RGB image. Then, using the hand boxes predicted by the _object detector branch_ and the features map produced by the backbone, the hand feature vectors are extracted with _RoI pooling_ and sent to the following modules: 1) the _hand side classifier_, 2) _hand state classifier_, and 3) _offset vector regressor_. These modules predict several hand attributes that will be detailed later. Furthermore, the RGB image, the depth map, and the instance segmentation mask of each hand
\begin{table}
\begin{tabular}{l c c c c c c} \hline Set & \#images & \#hands & \#EHOIs & \#left hands & \#right hands & \#objects \\ \hline Train & 20,788 & 31,790 & 16,786 & 16,019 & 15,771 & 131,968 \\ Val & 2,568 & 3,912 & 2,098 & 1,989 & 1,923 & 16,056 \\ Total & 23,356 & 35,672 & 18,884 & 18,008 & 17,694 & 148,024 \\ \hline \end{tabular}
\end{table}
Table 1: Statistics of _EgoISM-HOI-Synth_.
Figure 6: Examples of _EgoISM-HOI-Real_ images with the corresponding EHOI annotations.
are combined using an early fusion strategy and passed to the _multimodal hand state classifier_ component to predict the hand contact state. As the last step, the resulting outputs of the previous modules are combined and passed to a _matching algorithm_ to predict EHOIs in the form of _<hand, contact state, active object>_ triplets. The various modules composing our system are described in detail in the following.
_backbone._ This component consists of a ResNet-101 backbone (He et al., 2016) with a Feature Pyramid Network (FPN) (Lin et al., 2017). It takes an RGB image as input and returns a feature map.
_object detector branch._ We used Faster-RCNN (Ren et al., 2015)7, which uses two branches that take as input the features extracted by a backbone to detect and recognize objects and hands in the image.
_instance segmentation branch._ We followed Mask-RCNN (He et al., 2017) and added a branch to predict instance segmentation masks from the features extracted by a backbone.
_monocular depth estimation branch._ We used the system presented in (Ranftl et al., 2022), called _MiDaS_, to build the monocular depth estimation branch. Given a single RGB image as input, this component estimates the 3D distance to the camera of each pixel. To make the prediction scale of the depth values uniform in our domain, we fine-tuned _MiDaS_8, redefining the loss function as follows:
Footnote 8: We used the model _midas.v21.384_ available in the following repository: [https://github.com/isl-org/MiDaS](https://github.com/isl-org/MiDaS)
\[\mathcal{L}_{depth}(d,d^{*})=\alpha\mathcal{L}_{ssim}(e,e^{*})+\beta\mathcal{ L}_{ssim}(d,d^{*})+\gamma\mathcal{L}_{l1}(d,d^{*}) \tag{1}\]
where \(d,d^{*}\) are the prediction and ground truth depth maps, and \(e,e^{*}\) represent the edge maps of \(d,d^{*}\). \(\mathcal{L}_{ssim}\) denotes the _SSIM loss function_, which is used to learn the structure of the depth map, and \(\mathcal{L}_{l1}\) is the standard _L1 Loss function_ used to learn the depth values of each pixel. Finally, the factors \(\alpha\), \(\beta\), and \(\gamma\) are used to regulate the scale of the \(\mathcal{L}_{depth}\) components. During our experiments, we set these factors as follows: \(\alpha=0.85\), \(\beta=0.9\), and \(\gamma=0.9\).
Differently from the loss proposed in (Ranftl et al., 2022), which standardizes the scale of the depth maps across various datasets, the loss in Equation (1) allows the prediction of values convertible into a real 3D distance. Some examples of the considered depth maps are reported in Figure 8.
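The following minimal PyTorch sketch illustrates how a composite loss of this form can be assembled; the finite-difference edge operator and the external `ssim` implementation (here taken from the `pytorch_msssim` package) are assumptions, since the text does not specify them.

```python
import torch
import torch.nn.functional as F
from pytorch_msssim import ssim  # assumed external SSIM implementation


def edge_map(d: torch.Tensor) -> torch.Tensor:
    # Finite-difference edge magnitude of a (N, 1, H, W) depth map.
    # The exact edge operator is not specified in the text; this is an assumption.
    dx = F.pad(d[..., :, 1:] - d[..., :, :-1], (0, 1))
    dy = F.pad(d[..., 1:, :] - d[..., :-1, :], (0, 0, 0, 1))
    return torch.sqrt(dx ** 2 + dy ** 2 + 1e-8)


def depth_loss(d_pred, d_gt, alpha=0.85, beta=0.9, gamma=0.9):
    # L_depth = alpha * L_ssim(e, e*) + beta * L_ssim(d, d*) + gamma * L_l1(d, d*)
    l_ssim_edges = 1.0 - ssim(edge_map(d_pred), edge_map(d_gt), data_range=1.0)
    l_ssim_depth = 1.0 - ssim(d_pred, d_gt, data_range=1.0)
    l_l1 = F.l1_loss(d_pred, d_gt)
    return alpha * l_ssim_edges + beta * l_ssim_depth + gamma * l_l1
```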
_hand side classifier._ This module is a Multi-Layer Perceptron (MLP) with a hidden fully connected layer that takes as input an RoI-pooled feature vector of the hand crop to predict the hand side (_left/right_).
_hand state classifier._ This module classifies the contact state of the detected hands through an additional MLP with a hidden fully connected layer. It takes as input the hand feature vector, extracted from the hand box enlarged by 30% to include information about the surrounding context (e.g., nearby objects), and predicts the hand contact state (_no contact/in contact_).
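As a rough sketch, both heads can be implemented as small MLPs on top of the RoI-pooled hand features; the feature dimensionality (1024) and hidden width (512) below are assumptions, as the text only specifies a single hidden fully connected layer.

```python
import torch.nn as nn

# Hand side head: left/right (2 classes); hand state head: no contact/in contact (2 classes).
feat_dim, hidden_dim = 1024, 512  # assumed dimensions

hand_side_classifier = nn.Sequential(
    nn.Linear(feat_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, 2)
)
hand_state_classifier = nn.Sequential(
    nn.Linear(feat_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, 2)
)
```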
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline Set & \#videos & \#subjects & \#procedures & cumulative video length & \#images & \#hands & \#EHOIs & \#left hands & \#right hands & \#objects \\ \hline Train & 2 & 1 & 2 & 1h:00m:52s & 1,010 & 1,686 & 1,262 & 758 & 928 & 6,689 \\ Val & 10 & 7 & 6 & 4h:35m:28s & 3,717 & 5,622 & 3,867 & 2,577 & 3,045 & 20,916 \\ Test & 30 & 15 & 8 & 13h:11m:51s & 11,221 & 16,850 & 11,403 & 7,743 & 9,107 & 62,356 \\ Total & 42 & 19 & 8 & 18h:48m:13s & 15,948 & 24,158 & 16,532 & 11,078 & 13,080 & 89,961 \\ \hline \end{tabular}
\end{table}
Table 2: Statistics of _EgoISM-HOI-Real_ data. Since we mainly want to use synthetic data to train models, we used most of the real-world data for testing.
Figure 7: **Overall architecture of the proposed Multimodal EHOI detection system. First, the _backbone_ extracts image features from the input RGB image. Then, the _object detector branch_ and the _instance segmentation branch_ detect and generate segmentation masks for all hands and objects in the image. At the same time, the _monocular depth estimation branch_ predicts a depth map of the scene. Next, the hand feature vectors obtained through _RoI Pooling_ are sent to the following modules for predicting hand attributes: 1) the _hand side classifier_, 2) _hand state classifier_, and 3) _offset vector regressor_. Simultaneously, the RGB image, depth map, and instance segmentation mask of each hand are combined and passed to the _multimodal hand state classifier_ module to predict the hand contact state. Finally, the outputs from the previous components are combined and passed to a _matching algorithm_ to predict EHOIs.**
_multimodal hand state classifier._ This component is based on the EfficientNetV2 architecture (Tan and Le, 2021). It takes as input a combination of RGB, depth map (inferred by the _monocular depth estimation branch_), and instance segmentation mask (predicted by the _instance segmentation branch_) of each hand to estimate the hand contact state. The output of this module is combined with the output of the _hand state classifier_ to obtain the final prediction of the hand contact state.
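A minimal sketch of the early-fusion input for this component is shown below; the channel-wise stacking and the resulting 5-channel layout are assumptions, since the text only states that the three modalities are combined.

```python
import torch


def multimodal_hand_input(rgb_crop, depth_crop, mask_crop):
    """Stack the RGB hand crop (3, H, W), the predicted depth crop (1, H, W) and
    the hand segmentation mask (1, H, W) into a single 5-channel tensor that is
    fed to the EfficientNetV2-based classifier (channel layout is an assumption)."""
    return torch.cat([rgb_crop, depth_crop, mask_crop], dim=0)
```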
_offset vector regressor._ This module infers a vector that links the center of the bounding box of each hand to the center of the bounding box of the candidate active object (i.e., the object touched by the hand). This module consists of an MLP which takes as input the ROI-pooled feature vectors of the hands to predict \(<\)\(v_{x}\), \(v_{y}\), \(m\)\(>\) triplets, where \((v_{x},v_{y})\) represent the direction of the vector and \(m\) its magnitude.
_matching algorithm._ The final module of our system is a matching algorithm that exploits the outputs of the previous modules to predict EHOIs as _<hand, contact state, active object_> triplets. For each detected hand, the algorithm calculates an interaction point (\(p_{\textit{ehost}}\)) using the bounding box center of the hand and the corresponding offset vector. \(p_{\textit{ehost}}\) represents the prediction of the bounding box center of the candidate active object. Finally, the object whose center is closest to \(p_{\textit{ehost}}\) is chosen as the active object.
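The matching step can be sketched as follows; the dictionary fields (`box`, `offset_dir`, `offset_magnitude`, `contact_state`) are hypothetical names used only for illustration.

```python
import numpy as np


def match_active_objects(hands, objects):
    """For every detected hand, compute the interaction point from its box center
    and offset vector, then pick the object whose box center is closest to it."""
    triplets = []
    for hand in hands:
        x1, y1, x2, y2 = hand["box"]
        hand_center = np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0])
        direction = np.array(hand["offset_dir"])          # (v_x, v_y)
        p_int = hand_center + hand["offset_magnitude"] * direction
        if hand["contact_state"] == "in contact" and objects:
            centers = np.array([[(o["box"][0] + o["box"][2]) / 2.0,
                                 (o["box"][1] + o["box"][3]) / 2.0] for o in objects])
            idx = int(np.argmin(np.linalg.norm(centers - p_int, axis=1)))
            triplets.append((hand, "in contact", objects[idx]))
        else:
            triplets.append((hand, "no contact", None))
    return triplets
```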
To optimize our system during the training phase, we used the standard Faster R-CNN loss (Ren et al., 2015) for the _object detector branch_, while we utilized the definition of (He et al., 2017) for the _instance segmentation branch_. As previously discussed, to optimize the _monocular depth estimation branch_ we exploited the loss function in Equation (1). We used the standard _binary cross-entropy loss_ for the _hand side classifier_,
Figure 8: Comparison of the depth maps predicted by our _monocular depth estimation branch_. The first row shows RGB video frames, while the second and third rows contain depth maps predicted by two different models fine-tuned, respectively, by using the losses described in Ranftl et al. (2022) and the proposed one in Equation (1). The results of the third row are more uniform, while the predicted depth values of the second row vary considerably between similar frames (e.g., the background of (3) and (4) or the object in contact with the left hand of (1) and (2)).
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline EgoISM-HOI-Synth & EgoISM-HOI-Real \% & AP Hand & AP H.+Side & AP H.+State & \(\mathrm{mAP\ H.+Obj}\) & \(\mathrm{mAP\ H.+All}\) \\ \hline Yes & 0 & 90.02 & 84.72 & 31.85 & 23.92 & 23.28 \\ Yes & 10 & 90.53 & 89.34 & 46.64 & 30.90 & 30.65 \\ Yes & 25 & 90.66 & 89.71 & 48.31 & 31.76 & 31.33 \\ Yes & 50 & 90.69 & 90.00 & 54.79 & 34.12 & 33.12 \\ Yes & 100 & **90.73** & 89.99 & **56.88** & **35.94** & **35.47** \\ No & 10 & 90.08 & 88.57 & 45.69 & 18.19 & 17.48 \\ No & 25 & 90.43 & 89.45 & 43.73 & 18.72 & 18.31 \\ No & 50 & 90.43 & 89.57 & 52.74 & 19.17 & 19.06 \\ No & 100 & 90.54 & **90.06** & 56.34 & 22.31 & 21.76 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results of the proposed approach on _EgoISM-HOI-Real_ test data. The _EgoISM-HOI-Synth_ column indicates whether the _EgoISM-HOI-Synth_ training set was used for pre-training models. The _EgoISM-HOI-Real_ % column shows the percentage of real-world data used for fine-tuning.
whereas for the _offset vector regressor_ we used the _mean squared error loss_. We optimized the _hand state classifier_ and the _multimodal hand state classifier_ according to the following equation:
\[\mathcal{L}_{cs}(cs,cs^{*})=\mathcal{L}_{bce}(cs_{rgb},cs^{*})+\mathcal{L}_{bce}(cs_{mm},cs^{*})+\mathcal{L}_{bce}(cs_{lf},cs^{*}) \tag{2}\]
where \(cs,cs^{*}\) are the prediction and ground truth hand contact states, while \(cs_{rgb}\), \(cs_{mm}\), and \(cs_{lf}\) denote, respectively, the hand contact state predictions of the _hand state classifier_, of the _multimodal hand state classifier_, and the combination of these two predictions. \(\mathcal{L}_{bce}\) denotes the standard _binary cross-entropy loss_. The final loss of our system is the sum of all the aforementioned losses.
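A possible PyTorch realization of Equation (2) is sketched below; averaging the two branch logits to obtain the fused prediction \(cs_{lf}\) is an assumption, since the text only states that the two predictions are combined.

```python
import torch
import torch.nn.functional as F


def contact_state_loss(logits_rgb, logits_mm, cs_gt):
    """logits_rgb / logits_mm are the hand state classifier and multimodal hand
    state classifier outputs; logits_lf is their (assumed) average fusion."""
    logits_lf = 0.5 * (logits_rgb + logits_mm)
    bce = F.binary_cross_entropy_with_logits
    return bce(logits_rgb, cs_gt) + bce(logits_mm, cs_gt) + bce(logits_lf, cs_gt)
```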
## 6 Experimental results
We conducted a series of experiments to 1) assess how much the generated synthetic data are useful in training models able to generalize to the real-world domain (Section 6.2), 2) highlight the contribution of multimodal signals to tackle the EHOI detection task (Section 6.3), and 3) compare the proposed method with a set of baselines based on state-of-the-art class-agnostic approaches (Section 6.4). Section 6.5 further reports additional results on pre-training our method with external data and improvements obtained by our approach on the object detection task.
### Experimental Settings
_Dataset._ We performed experiments on the proposed _EgoISM-HOI_ dataset. Since we want to exploit synthetic data to train models to detect EHOIs when few or zero real-world data are available, we used the splits reported in Table 1 and Table 2 for the synthetic and real data respectively.
_Evaluation Metrics_. Following Shan et al. (2020), we evaluated our method using metrics based on standard _Average Precision_, which assess the models' ability to detect hands and objects as well as the correctness of some attributes such as the hand state, the hand side, and whether an object is active (i.e., it is involved in an interaction). In addition, since our model predicts active object classes, we computed the _mean Average Precision_ (mAP) to consider the correctness of the predicted object classes. Specifically, we used the following metrics: 1) _AP Hand_: _Average Precision_ of the hand detections, 2) _AP Hand_+_Side_: _Average Precision_ of the hand detections considering the correctness of the hand side, 3) _AP Hand_+_State_: _Average Precision_ of the hand detections considering the correctness of the hand state, 4) _mAP Hand_+_Obj_: _mean Average Precision_ of the _<hand, active object_> detected pairs, and 5) _mAP Hand_+_All_: combinations of _AP Hand_+_Side_, _AP Hand_+_State_, and _mAP Hand_+_Obj_ metrics.
_Training Details._ To perform all the experiments we used a machine with a single _NVIDIA A30_ GPU and an _Intel Xeon Silver 4310_ CPU. We scaled images for both the training and inference phases to a resolution of 1280x720 pixels. We trained models on _EgoISM-HOI-Synth_ with _Stochastic Gradient Descent_ (SGD) for 80,000 iterations with an initial learning rate equal to 0.001, which is decreased by a factor of 10 after 40,000 and 60,000 iterations, and a minibatch size of 4 images. Instead, to fine-tune the models with _EgoISM-HOI-Real_ training data, we froze the _monocular depth estimation branch_ and _instance segmentation branch_ modules. Finally, we trained the models for 20,000 iterations and decreased the initial learning rate (0.001) by a factor of 10 after 12,500 and 15,000 iterations.
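The optimization schedule described above can be reproduced with a standard step decay; the momentum value and the placeholder model below are assumptions.

```python
import torch

model = torch.nn.Linear(1024, 2)  # placeholder for the trainable modules
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)  # momentum assumed
# 80,000 iterations on EgoISM-HOI-Synth, with the learning rate divided by 10
# after 40,000 and 60,000 iterations; call scheduler.step() once per iteration.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[40000, 60000], gamma=0.1)
```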
### The Impact of Synthetic Data on System Performance
The goal of this set of experiments is to show the ability of a model trained with synthetic data to generalize to real-world data. Specifically, we want to demonstrate how the synthetic data generated by the proposed tool can be used to represent realistic human-object interactions.
We compared models pre-trained on the _EgoISM-HOI-Synth_ training split and fine-tuned using different amounts of _EgoISM-HOI-Real_ training data (i.e., 0%, 10%, 25%, 50%, and 100%) with models trained only with _EgoISM-HOI-Real_ data. Since the _multimodal hand state classifier_, _monocular depth estimation branch_, and _instance segmentation branch_ modules need to be trained with labels available only on synthetic data, we deactivated these components in all the models in this set of experiments for a fair comparison.
Table 3 reports EHOI detection results on the _EgoISM-HOI-Real_ test set. Models pre-trained with _EgoISM-HOI-Synth_ data (rows 1-5) outperform all the corresponding models trained using only _EgoISM-HOI-Real_ data (rows 6-9) by consistent margins according to all evaluation metrics, except for the _AP Hand_+_Side_ measure, in which they obtain the second-best result (row 4). Considering the two models fine-tuned using 100% of the real-world training set (rows 5 and 9), the improvements of the model pre-trained with _EgoISM-HOI-Synth_ data are significant in the metrics affected by active objects, obtaining +13.63% (35.94 vs 22.31) for the _mAP Hand_+_Obj_ and +13.71% (35.47 vs 21.76) for the _mAP Hand_+_All_. These improvements are also evident if we compare the models trained with smaller portions of the real-world data. However, for the metrics _AP Hand_ and _AP Hand_+_State_, there is only a small
Figure 9: **Performance comparison of the proposed system on our _EgoISM-HOI-Real_ test data in terms of _mAP Hand_+_All_. The blue curve reports the results of the models pre-trained on _EgoISM-HOI-Synth_ and fine-tuned at different percentages of the _EgoISM-HOI-Real_ training set, while the red curve reports the results of the models trained on real-world data only.**
boost in the performance of the model pre-trained on _EgoISM-HOI-Synth_ (row 5) compared to the model trained using only _EgoISM-HOI-Real_ data (row 9), i.e., +0.19% (90.73 vs 90.54) and +0.54% (56.88 vs 56.34). These results suggest that using synthetic data for pre-training models enhances the method's ability to detect active objects, which are susceptible to frequent occlusions by the hands. In addition, it is worth noting that the model trained using only the _EgoISM-HOI-Synth_ data (row 1) outperforms the best model that used only the real-world data for the evaluation measures influenced by the active objects, obtaining +1.61% (23.92 vs 22.31) and +1.52% (23.28 vs 21.76) for the _mAP Hand+Obj_ and _mAP Hand+All_ metrics, respectively. Figure 9 further illustrates the results in terms of _mAP Hand+All_ considering different amounts of _EgoISM-HOI-Real_ training data in the fine-tuning.
### Impact of Multimodal training
This set of experiments aims to highlight the contribution of the different modalities involved in our approach. For these experiments, we consider the full architecture illustrated in Figure 7 comprising the _backbone_, the _object detector branch_, the _instance segmentation branch_, the _monocular depth estimation branch_, and the _multimodal hand state classifier_. As a baseline, we considered a model trained by deactivating the _multimodal hand state classifier_, _monocular depth estimation branch_, and _instance segmentation branch_ modules. We compare this baseline with several versions of the proposed architecture in which the _hand contact state_ is estimated using different subsets of modalities (i.e., RGB, Depth, and Mask) and modules (i.e., _multimodal hand state classifier_, and _hand state classifier_). As these modules only affect the prediction of hand contact state, Table 4 reports only the metrics affected by these predictions (i.e., _AP Hand+State_ and _mAP Hand+All_). Note that all the models used in this experiment were pre-trained using _EgoISM-HOI-Synth_ and then fine-tuned using 100% of the _EgoISM-HOI-Real_ training set.
Combining the predictions of the _multimodal hand state classifier_ and _hand state classifier_ modules (rows 2-5) leads to general improvements in the system performance over the models that use only a single branch to predict the _hand contact state_ (rows 1 and 6), with maximum improvements over the baseline (row 5 vs row 1) of +1.52% (58.40 vs 56.88) for the _AP Hand+State_ and +1.04% (36.51 vs 35.47) for the _mAP Hand+All_. Fusing RGB with Depth signals (row 3) brings a small improvement of +0.21% (35.92 vs 35.71) for the _mAP Hand+All_ over the model which uses only the RGB signal (row 2). Interestingly, combining RGB with Mask (row 4) improves the result by +1.42% (58.30 vs 56.88) over the baseline (row 1) in terms of _AP Hand+State_ but leads to a worsening performance of -0.13% (35.34 vs 35.47) considering the _mAP Hand+All_ measure. This suggests that the method is unable to benefit from segmentation masks in the absence of the depth signal. Finally, fusing all the modalities (row 5) leads to the best performance, bringing an improvement over the second-best result (RGB+DEPTH, row 3) of +0.59% (36.51 vs 35.92) for the _mAP Hand+All_ metric. Figure 10 shows some qualitative results obtained with the full proposed architecture.
### Comparison with class-agnostic baselines
Table 5 compares our system with different instances of the class-agnostic method introduced in Shan et al. (2020). Henceforth, we will refer to this method as _Hands In Contact_ (HIC). Since HIC is class agnostic, to compare our method with it, we extend it to recognize the active object classes following two different approaches. In the first approach, we used a Resnet-18 CNN (He et al., 2016) to classify image patches extracted from the active object bounding boxes. We trained the classifier with four different sets of data: 1) _BS1_: we sampled 20,000 frames from 19 videos where a single object of each class is shot at a time. This collection provides a minimal training set that can be collected with a modest labeling effort (comparable with the time needed for acquiring 3D models of the objects in our pipeline); 2) _BS2_: we used images from the proposed _EgoISM-HOI-Real_ training set; 3) _BS3_: we used images from the proposed _EgoISM-HOI-Synth_ training set; 4) _BS4_: we used all _EgoISM-HOI_ data. The second approach (_BS5_) exploits a YOLOv5 object detector, trained to recognize the considered objects (see Fig. 3), to assign a label to the active objects predicted by HIC. Specifically, for each active object prediction, we select the class of the object with the highest _IoU_ among those predicted by the YOLOv5 object detector or discard the proposal if there are no box intersections. It is worth noting that HIC was pre-trained on the large-scale dataset _100DOH_, which contains over 100K labeled frames of HOIs.
Footnote 5: YOLOv5: [https://github.com/ultralytics/yolov5](https://github.com/ultralytics/yolov5)
The best model of the proposed EHOI detection method (row 3) outperforms all the baselines (rows 4-8) with significant improvements ranging from +12.92% (36.51 vs 23.59) to
\begin{table}
\begin{tabular}{l c c c} \hline Contact state & MHS Input Modalities & AP H+State & mAP H+All \\ \hline HS & - & 56.88 & 35.47 \\ HS+MHS & RGB & 58.29 & 35.71 \\ HS+MHS & RGB+DEPTH & 58.37 & 35.92 \\ HS+MHS & RGB+MASK & 58.30 & 35.34 \\ HS+MHS & RGB+DEPTH+MASK & **58.40** & **36.51** \\ MHS & RGB+DEPTH+MASK & 57.56 & 35.81 \\ \hline \end{tabular}
\end{table}
Table 4: Experiments to evaluate the impact on system performance of the different modalities and components involved in our architecture. The _Contact state_ column indicates the branches used to predict the _hand contact state_, i.e., the _multimodal hand state classifier_ (MHS) and the _hand state classifier_ (HS), while the _MHS Input Modalities_ column indicates the modalities passed as input to the _multimodal hand state classifier_. The best results are highlighted in bold, whereas the second-best results are underlined.
\begin{table}
\begin{tabular}{l c c c} \hline Method & _EgoISM-HOI-Synth_ & EgoISM-HOI-Real \% & mAP Hand+All \\ \hline Proposed (Base) & Yes & 0 & 23.28 \\ Proposed (Base) & Yes & 10 & 30.65 \\ Proposed (Full) & Yes & 100 & **36.51** \\ HIC+RESNET (BS1) & No & 100* & 09.92 \\ HIC+RESNET (BS2) & No & 100 & 22.18 \\ HIC+RESNET (BS3) & Yes & 0 & 16.39 \\ HIC+RESNET (BS4) & Yes & 100 & 23.59 \\ HIC+YOLOv5 (BS5) & Yes & 100 & 20.62 \\ \hline \end{tabular}
\end{table}
Table 5: Comparison between the proposed system and different baseline approaches based on HIC (Shan et al., 2020).
+26.59% (36.51 vs 9.92). The approach based on ResNet-18 (rows 4-7) leads to better performance compared to the method based on the YOLOv5 object detector (row 8). Indeed, considering only the baselines (rows 4-8), the best result is achieved by BS4 (row 7), which was pre-trained using synthetic and real-world _EgoISM-HOI_ data, with an improvement of +2.97% (23.59 vs 20.62) over BS5 (row 8). Interestingly, even BS2 (row 5), which did not use synthetic data during training, obtained a result +1.56% higher (22.18 vs 20.62) than BS5 (row 8). These results suggest the limits of this simple approach. In addition, it is worth noting that the model pre-trained on _EgoISM-HOI-Synth_ and fine-tuned using 10% of the _EgoISM-HOI-Real_ training set (row 2) outperforms all the baseline approaches (rows 4-8), with an improvement of +7.06% (30.65 vs 23.59) over BS4 (row 7). It is also worth mentioning that the model trained only on _EgoISM-HOI-Synth_ (row 1) achieves comparable results to the best baseline approach (row 7).
### Additional results
In this section, we show an additional set of experiments with the aim of 1) demonstrating how using domain-specific synthetic data improves the performance of a system pre-trained on an out-of-domain large-scale dataset (Section 6.5.1), 2) showing the potential of using synthetic data for the related task of _Object Detection_ (Section 6.5.2). Similar to the set of experiments in Section 6.2, we deactivated the _multimodal hand state classifier_, _monocular depth estimation branch_, and _instance segmentation branch_ modules for a fair comparison.
#### 6.5.1 Pre-training on 100 Days Of Hands
To further demonstrate the utility of synthetic data, we performed an additional experiment in which we pre-trained different models on the large-scale 100DOH dataset and then fine-tuned them on our EgoISM-HOI dataset. The goal of this experiment is to demonstrate how the use of domain-specific synthetic data further increases the performance of a system pre-trained on a large amount of out-of-domain real data.
Using synthetic and real-world training data (row 3) leads to the best or second-best results for all the evaluation metrics. In particular, the improvements over the model which uses only _EgoISM-HOI-Real_ data (row 2) are significant in the metrics affected by the active objects, with +20.44% (38.54 vs 18.10) for the _mAP Hand_+_Obj_ and +19.68% (37.37 vs 17.69) for the _mAP Hand_+_All_. Considering the _mAP Hand_+_All_ metric, it is worth noting that the model trained only on _EgoISM-HOI-Synth_ (row 1) surpasses the model trained on the _EgoISM-HOI-Real_ training data (row 2) with an improvement of +5.5% (23.19 vs 17.69).
#### 6.5.2 Object Detection
We performed an additional experiment to assess the utility of using synthetic data for the related task of _Object Detection_. The _mean Average Precision_ metric with an _IoU_ threshold of 0.5 (_mAP@50_) was used as the evaluation criterion.
Figure 10: **Qualitative results of the proposed multimodal EHOI detection system on the _EgoISM-HOI-Real_ test data.**
The results are shown in Table 7. The models trained using synthetic and real-world data (rows 1-5) outperform all the corresponding models trained only on the real-world training set (rows 6-9). In particular, the best result of 81.06% was obtained by the model pre-trained on _EgoISM-HOI-Synth_ training set and fine-tuned with 100% of _EgoISM-HOI-Real_ training data (row 5), with an improvement of +7.73% (81.06 vs 73.33) over the model which obtains the best results among the ones trained only on _EgoISM-HOI-Real_ (row 8). Furthermore, it is worth noting that the model pre-trained using _EgoISM-HOI-Synth_ and fine-tuned with only 10% of the _EgoISM-HOI-Real_ training set (row 2) surpasses all the models fine-tuned using only _EgoISM-HOI-Real_.
## 7 Conclusion
We studied egocentric human-object interactions in an industrial domain. Due to the cost of collecting and labeling real in-domain data in the considered context, we proposed a pipeline and a tool that leverage 3D models of the objects and of the considered environment to automatically generate labeled synthetic images of EHOIs, together with additional data signals such as depth maps and instance segmentation masks. Exploiting our pipeline, we presented _EgoISM-HOI_, a new multimodal dataset of synthetic and real EHOI images in an industrial scenario with rich annotations of hands and objects. We investigated the potential of using multimodal synthetic data to pre-train an EHOI detection system and demonstrated that our proposed method outperforms class-agnostic baselines based on the state-of-the-art method of Shan et al. (2020). Future work will investigate how the knowledge inferred by our method can be valuable for other related tasks such as next active object detection or action recognition. To encourage research on the topic, we publicly released the datasets and the source code of the proposed system, together with pre-trained models, on our project web page: [https://iplab.dmi.unict.it/egoism-hoi](https://iplab.dmi.unict.it/egoism-hoi).
## Acknowledgments
This research is supported by Next Vision11 s.r.l., by MISE - PON I&C 2014-2020 - Progetto ENIGMA - Prog n. F/190050/02/X44 - CUP: B61B19000520008, and by the project Future Artificial Intelligence Research (FAIR) - PNRR MUR Cod. PE0000013 - CUP: E63C22001940006.
Footnote 11: Next Vision: [https://www.nextvisionlab.it/](https://www.nextvisionlab.it/)
|
2307.05707 | MoP-CLIP: A Mixture of Prompt-Tuned CLIP Models for Domain Incremental
Learning | Despite the recent progress in incremental learning, addressing catastrophic
forgetting under distributional drift is still an open and important problem.
Indeed, while state-of-the-art domain incremental learning (DIL) methods
perform satisfactorily within known domains, their performance largely degrades
in the presence of novel domains. This limitation hampers their
generalizability, and restricts their scalability to more realistic settings
where train and test data are drawn from different distributions. To address
these limitations, we present a novel DIL approach based on a mixture of
prompt-tuned CLIP models (MoP-CLIP), which generalizes the paradigm of
S-Prompting to handle both in-distribution and out-of-distribution data at
inference. In particular, at the training stage we model the features
distribution of every class in each domain, learning individual text and visual
prompts to adapt to a given domain. At inference, the learned distributions
allow us to identify whether a given test sample belongs to a known domain,
selecting the correct prompt for the classification task, or from an unseen
domain, leveraging a mixture of the prompt-tuned CLIP models. Our empirical
evaluation reveals the poor performance of existing DIL methods under domain
shift, and suggests that the proposed MoP-CLIP performs competitively in the
standard DIL settings while outperforming state-of-the-art methods in OOD
scenarios. These results demonstrate the superiority of MoP-CLIP, offering a
robust and general solution to the problem of domain incremental learning. | Julien Nicolas, Florent Chiaroni, Imtiaz Ziko, Ola Ahmad, Christian Desrosiers, Jose Dolz | 2023-07-11T18:17:50Z | http://arxiv.org/abs/2307.05707v1 | # MoP-CLIP: A Mixture of Prompt-Tuned CLIP Models for Domain Incremental Learning
###### Abstract
Despite the recent progress in incremental learning, addressing catastrophic forgetting under distributional drift is still an open and important problem. Indeed, while state-of-the-art domain incremental learning (DIL) methods perform satisfactorily within known domains, their performance largely degrades in the presence of novel domains. This limitation hampers their generalizability, and restricts their scalability to more realistic settings where train and test data are drawn from different distributions. To address these limitations, we present a novel DIL approach based on a mixture of prompt-tuned CLIP models (MoP-CLIP), which generalizes the paradigm of S-Prompting to handle both in-distribution and out-of-distribution data at inference. In particular, at the training stage we model the features distribution of every class in each domain, learning individual text and visual prompts to adapt to a given domain. At inference, the learned distributions allow us to identify whether a given test sample belongs to a known domain, selecting the correct prompt for the classification task, or from an unseen domain, leveraging a mixture of the prompt-tuned CLIP models. Our empirical evaluation reveals the poor performance of existing DIL methods under domain shift, and suggests that the proposed MoP-CLIP performs competitively in the standard DIL settings while outperforming state-of-the-art methods in OOD scenarios. These results demonstrate the superiority of MoP-CLIP, offering a robust and general solution to the problem of domain incremental learning.
## 1 Introduction
In machine learning, it is a common practice to assume that both training and test data follow the same underlying distribution. In real-world scenarios, however, this strong assumption is rarely met, leading to substantial performance degradation when the trained model is evaluated on test samples under a distributional drift. A simple solution to alleviate this issue is to train the model on the labeled samples from the new domain. However, when the learning is performed in a sequential manner on multiple domains, contemporary deep learning models tend to suffer from the phenomenon of _catastrophic forgetting_, wherein the acquired knowledge from previous domains is typically erased.
A simple strategy to address this issue consists in training different models, one per single domain. However, this approach is suboptimal, as all these models must be stored for future usage and the domain identity is not necessarily known at test time. To tackle the issue of forgetting learned knowledge, domain incremental learning (DIL) has recently emerged as an appealing alternative that alleviates the need to store multiple domain-specific networks. Among the different DIL approaches, rehearsal [3, 2, 17, 34] and distillation-based [1, 23, 16] methods, which leverage a buffer of stored exemplars from old domains, dominate the literature. Nevertheless, from a privacy and storage standpoint, _exemplar-free_ DIL approaches may offer a better solution in practical settings.
An appealing alternative to mitigate knowledge forgetting is prompt-learning, which is driving progress in a wide span of transfer learning problems [22, 48]. In this approach, domain-specific knowledge is preserved in the form of textual and visual prompts, alleviating the need of storing exemplars per domain. While some methods advocate for the joint learning of prompts across tasks [13, 40], the recent work in [38] instead favors the learning of the prompts independently, suggesting that this leads to the best performance per domain. This learning paradigm, referred to as S-Prompting [38], circumvents the issue of using expensive buffers by optimizing per-domain prompts, which are leveraged at testing time. In particular, centroids for each domain are obtained during training by applying K-Means on the training image features, which are generated with the fixed pre-trained transformer without using any prompts. Then, during inference, the standard KNN algorithm is used to identify the nearest centroid to the test image, whose associated domain prompt is added to the image tokens for classification. Despite the empirical performance gains observed by these approaches [13, 38, 40], a current limitation hampering their generalization is that they perform satisfactorily in _known_ domains, but typically fail when _unseen_ domains are presented (see Fig. 1). This is particularly important in real-world scenarios where training and testing data of the _a priori_ same domain may present distributional drifts that degrade the model performance. In the case of S-Prompts [38], we argue that a potential reason behind this suboptimal performance stems from forcing the model to select a single domain (i.e., the closest one), which might indeed be far in the feature space.
Motivated by these limitations, we introduce a novel _exemplar-free_ DIL solution, based on prompt learning, which generalizes the recent S-liPrompts approach [38] for both in-distribution and out-of-distribution data. Specifically, our contributions can be summarized as follows:
* We first expose that existing state-of-the-art domain incremental learning approaches suffer in the presence of distributional shift between samples used for adaptation and testing, which hampers their generalization to unseen domains (Fig. 1).
* Based on these observations, we present a novel DIL strategy based on a mixture of prompt-tuned (MoP) CLIP models, generalizing the recent S-liPrompts approach [38] to work with both in-distribution and out-of-distribution data. In particular, the proposed approach learns class-wise features distributions for each domain, allowing to detect whether a given sample comes from a known domain.
* The proposed approach is _exemplar-free_, reducing the computational burden compared to conventional methods, and _agnostic to the sequence order_.
* Extensive experiments demonstrate that our approach performs at par with state-of-the-art DIL methods on known domains, while largely outperforming them under distributional drifts.
## 2 Related Work
**Domain-Incremental learning (DIL)** refers to continual learning scenarios in which the distribution of instances from fixed classes changes between domains. These real-world scenarios include, for example, the recognition of objects where new instances from varying environments appear in each new domain [27], or autonomous driving, where the car is exposed to ever-changing conditions. We focus on the domain-agnostic scenario, where the sample's domain remains unknown at inference time. The major challenge of this task is to find a good trade-off to adapt to the distribution of new instances without deteriorating performance for samples of the previous distributions (i.e., alleviating _catastrophic forgetting_). The literature on this subject is abundant, where the main approaches are based on weight regularization [7, 21, 45], knowledge distillation in a teacher-student setting using current examples [25] or a memory buffer [8], and methods using or generating latent features [31, 36] or gradient exemplars [8, 28, 30]. Nevertheless, these approaches require the use of _exemplars_ from seen domains, which may result in storage, security and privacy issues. In contrast, the proposed approach only requires the storage of a single prototype per class and domain, which largely alleviates these issues.
**Prompt learning.** Driven by the advances in Natural Language Processing, prompt learning has emerged as an appealing learning strategy to adapt large scale pre-trained models to downstream tasks. While initial attempts to adapt language-vision models have centered on carefully designing handcrafted prompts [4], recent works focus on learning a task-specific continuous vector, which is optimized via gradients during fine-tuning [19, 48, 49, 29]. An underlying limitation of these approaches arises from the inherent disparity between language and vision modalities, and thus fine-tuning only text prompts for visual recognition tasks may yield suboptimal performance. Motivated by this, visual prompt tuning (VPT) [18] was proposed as a powerful alternative to text prompting. In this approach, authors propose to optimize task-specific learnable prompts in either the input or visual embedding space. Following the satisfactory results achieved by VPT, fine-tuning visual prompts
Figure 1: **Performance degradation under the presence of domain shift** between adaptation and testing samples, which shows that so DIL approaches do not generalize well. We employ S-Prompts [38] as use-case. The red line represents the performance across each test domain, when all domains have been seen by the model. In contrast, the blue dotted line shows the performance of the same model when the test domain remains unknown, highlighting the performance degradation under distributional shift.
has gained popularity recently, particularly for adapting pretrained models to novel unseen categories [9, 37, 42, 42].
**Prompt tuning in domain incremental learning.** This paradigm protects against catastrophic forgetting by optimizing a small set of learnable prompts. This contrasts with classical approaches which modify all the network parameters (or a subset), or store _exemplars_ in a buffer. Despite the success observed in other tasks, the literature on prompt tuning for domain incremental learning remains underexplored, with just a handful of works addressing this problem [13, 38, 40]. For example, S-Prompts [38] learns in isolation a set of prompts per domain, and dynamically selects which set to use at test-time using a fixed key/value dictionary where the keys are computed with K-Means and the values represent the sets of prompts. L2P [40] uses an incrementally learnable key/value mechanism to select which prompts to prepend to the input image tokens at test-time, hence breaking the isolation between domains, which contrasts with our work, as it learns domain prompts independently. A main difference with these and conventional DIL approaches is that the proposed approach explicitly tackles generalization performance in domain incremental learning, while maintaining at par accuracy in known domains, which remains underexplored.
**Domain generalization (DG).** Existing literature on DG strongly relies on supervised knowledge from source domain data, regardless of whether it originates from a single domain [39] or multiple domains [10, 43, 47, 46], which may not be realistic in continually changing scenarios, as knowledge comes in a sequential manner. Additionally, in scenarios involving distributional shifts, DG approaches primarily focus on the target domain, increasing the potential risk of catastrophic forgetting on previously learned domains [26].
## 3 Method
An overview of MoP-CLIP is illustrated in Fig. 3, which contains two phases: _i)_ learning of in-distribution domain-specific visual and text prompts (sec. 3.2) and _ii)_ selection of optimal prompts for a given test sample (sec. 3.3).
### Problem definition
Let us denote as \(\mathcal{S}=\left\{\mathcal{D}_{s}\right\}_{s=1}^{N}\) the sequence of datasets presented to the model in our incremental learning scenario, with \(N\) being the final number of domains. Each dataset is defined as \(\mathcal{D}_{s}=\{\mathbf{x}_{i}^{s},\mathbf{y}_{i}^{s}\}_{i=1}^{|\mathcal{D}_{s}|}\), where \(\mathbf{x}_{i}\in\mathbb{R}^{W\times H\times C}\) represents an image of size \(W\times H\) and \(C\) channels, and \(\mathbf{y}_{i}\in\{0,1\}^{K}\) is its corresponding one-hot label for \(K\) target classes. In this setting, we have access to only one domain \(\mathcal{D}_{s}\) at a time and storing samples from previously seen domains, commonly referred to as _exemplars_, is not allowed. Each time a new domain \(\mathcal{D}_{s}\) becomes accessible, DIL aims to improve the model's performance on \(\mathcal{D}_{s}\), while avoiding the loss of knowledge for past domains, \(\mathcal{D}_{s-1},\mathcal{D}_{s-2},...,\mathcal{D}_{1}\). In the proposed setting, and in contrast to most existing literature on DIL, we assume that the model should also generalize well on unseen datasets, i.e., \(\mathcal{D}_{s+1},\mathcal{D}_{s+2},...,\mathcal{D}_{N}\) (Fig. 2). In other words, our learning scenario leverages _backward transfer_ to avoid catastrophic forgetting on seen domains, while optimizing _forward transfer_ to facilitate knowledge transfer to new tasks/domains. Our motivation
Figure 2: **Proposed generalization scenario for domain incremental learning** Standard problem (_left_): Only in-domain examples are encountered at test time. Addressed problem (_right_): Both in-domain and out-of-domain examples are presented at test time.
behind this bi-directional performance assessment relies on the realistic assumption that a distributional drift between training and testing data always exists.
### Prompts Learning
Following the setting in [38], we define \(f_{\theta}\) as the pre-trained vision transformer that generates a visual embedding \(\mathbf{z}^{v}=f_{\theta}(\mathbf{x}_{\mathrm{tok}})\in\mathbb{R}^{L}\), where \(\mathbf{x}_{\mathrm{tok}}\in\mathbb{R}^{WH/R^{2}\times M^{v}}\) corresponds to the image tokens (or patches), \(WH/R^{2}\) is the number of tokens, \(R\) is the width/height of the (square) patch and \(M^{v}\) is the dimension of the image tokens embedding. We also define \(f_{\phi}\), a pre-trained text transformer that generates text embeddings of dimension \(M^{t}\) from class names tokens \(\mathbf{c}_{k}\) for \(k\in\{1,...,K\}\). For each new domain \(\mathcal{D}_{s}\) in the sequence \(\mathcal{S}\), we can adapt the model by learning a visual prompt \(\mathbf{p}_{s}^{v}\in\mathbb{R}^{L^{v}\times M^{v}}\) and a text prompt \(\mathbf{p}_{s}^{t}\in\mathbb{R}^{L^{t}\times M^{t}}\), following [38]. In particular, these prompts are a set of continuous learnable parameters, where \(L^{v},L^{t}\) are the visual and text prompt length. Thus, for the set of domains \(\mathcal{S}\), we have a set of domain-specific visual and text prompts, denoted as \(\mathcal{P}^{v}=\{\mathbf{p}_{1}^{v},...,\mathbf{p}_{N}^{v}\}\) and \(\mathcal{P}^{t}=\{\mathbf{p}_{1}^{t},...,\mathbf{p}_{N}^{t}\}\). Now, with the domain-specific prompts, we can modify the embeddings that will be provided to the visual and text encoders, \(f_{\theta}\) and \(f_{\phi}\). Concretely, for an image of domain \(s\) and class \(k\), the input of the visual transformer is defined as \(\tilde{\mathbf{x}}^{v}=[\mathbf{x}_{\mathrm{tok}},\mathbf{p}_{s}^{v},\mathbf{ x}_{\mathrm{cls}}]\) with \(\mathbf{x}_{\mathrm{cls}}\) the classification token of the ViT. Similarly, the input of the text transformer is defined as \(\tilde{\mathbf{c}}_{k}^{t}=[\mathbf{p}_{s}^{t},\mathbf{c}_{k}]\). We then denote as \(\tilde{\mathbf{z}}^{v}=f_{\theta}(\tilde{\mathbf{x}}^{v})\) and \(\tilde{\mathbf{z}}_{k}^{t}=f_{\phi}(\tilde{\mathbf{c}}_{k}^{t})\) the embeddings of these inputs. The posterior probability of a given image \(\mathbf{x}_{i}\) from \(\mathcal{D}_{s}\) belonging to class \(k\) can be therefore defined as:
\[p(\mathbf{y}_{k}|\mathbf{x},s)=\frac{e^{\cos(\tilde{\mathbf{z}}^{v},\tilde{ \mathbf{z}}_{k}^{t})}}{\sum_{j=1}^{K}e^{\cos(\tilde{\mathbf{z}}^{v},\tilde{ \mathbf{z}}_{j}^{t})}}, \tag{1}\]
where \(\cos(\mathbf{a},\mathbf{b})=\frac{\mathbf{a}\cdot\mathbf{b}}{\|\mathbf{a}\| \|\mathbf{b}\|}\) is the cosine similarity between vectors \(\mathbf{a}\) and \(\mathbf{b}\).
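A compact sketch of this prompted classification step is given below; tensor shapes are illustrative, and CLIP's temperature scaling is omitted since it does not appear in Equation (1).

```python
import torch
import torch.nn.functional as F


def prompted_visual_input(x_tok, p_v, x_cls):
    """Visual input for domain s: [image tokens, visual prompt, class token]."""
    return torch.cat([x_tok, p_v, x_cls], dim=0)


def class_posterior(z_v, z_t):
    """Eq. (1): softmax over cosine similarities between the prompted image
    embedding z_v of shape (L,) and the K prompted class-text embeddings z_t of
    shape (K, L)."""
    sims = F.cosine_similarity(z_v.unsqueeze(0), z_t, dim=-1)  # (K,)
    return F.softmax(sims, dim=-1)
```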
### Inference
At test time, the domain of the images to classify remains unknown. In S-liPrompts [38], the domain \(s^{*}\) closest to a given test sample is selected by finding the minimum distance between the visual embeddings and prototypes computed with K-Means over the domains \(\mathcal{S}\). This strategy is generally effective in finding the closest domain when \(\mathbf{x}\in\mathcal{D}_{s}\) and \(\mathcal{D}_{s}\) has been already presented to the model. In this setting, \(p(\mathbf{y}_{k}|\mathbf{x},s)\) yields satisfying predictions, as the domain of the sample \(\mathbf{x}\) can be easily inferred and the scenario becomes a classification task under in-distribution data. Nevertheless, when the model has not been exposed to \(\mathcal{D}_{s}\) during training or adaptation, the selection of an existing closest domain (other than \(\mathcal{D}_{s}\)) might not match with the real distribution of the new domain. In this case, the strategy used in S-liPrompts may actually move the test sample away from its original distribution. To overcome this issue, we propose to enhance the domain selection mechanism in two separate ways: _i)_ dynamically allowing the model to select \(n\) close domains and _ii)_ leveraging per-domain predictions in an ensembling scheme for samples of unseen domains.
Figure 3: **Overview of MoP-CLIP.** The training phase (_left_): class-wise prototypes are identified from in-distribution domains. Inference (_middle_ and _right_): domain selection and ensembling (Mixture of Prompts), respectively, for in-distribution and out-of-distribution samples. For simplicity, we depict the pipeline for 2 classes (Real _vs_ Fake). However, the procedure for multiple classes (e.g., DomainNet or CoRE50) is exactly the same.
To select the right prompt, we propose a strategy based on a set of class-specific prototypes for each domain, \(\mathcal{E}_{s}=\{\mathbf{m}_{s}^{k}\}_{k=1}^{K}\), instead of the prototypes obtained with K-Means as in [38]. Let \(\mathcal{D}_{s}^{k}\subset\mathcal{D}_{s}\) be the samples of domain \(\mathcal{D}_{s}\) belonging to class \(k\); we compute the prototype of class \(k\) for domain \(\mathcal{D}_{s}\) by averaging the visual embeddings of the examples in \(\mathcal{D}_{s}^{k}\):
\[\mathbf{m}_{s}^{k}=\frac{1}{|\mathcal{D}_{s}^{k}|}\sum_{\{\mathbf{z}^{v}\,|\,\mathbf{x}\in\mathcal{D}_{s}^{k}\}}\mathbf{z}^{v}\]
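A minimal sketch of the prototype computation and of a nearest-prototype domain selection is given below; since the remainder of the inference procedure (Gaussian modelling of the features, threshold \(q\), ensembling weights) is only described at a high level here, the selection function is restricted to the in-distribution case.

```python
import torch


def class_prototypes(z_v, labels, num_classes):
    """m_s^k: mean (prompt-free) visual embedding of the samples of class k in
    domain s. z_v: (n, L) embeddings, labels: (n,) integer class ids."""
    return torch.stack([z_v[labels == k].mean(dim=0) for k in range(num_classes)])


def nearest_domain(z_test, prototypes_per_domain):
    """Return the index of the domain whose closest class prototype is nearest
    to the test embedding (in-distribution prompt selection)."""
    dists = torch.stack([
        torch.cdist(z_test.unsqueeze(0), protos).min()
        for protos in prototypes_per_domain
    ])
    return int(dists.argmin())
```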
### Comparison methods.
We benchmark MoP-CLIP against several state-of-the-art DIL methods. These include **non-prompting** approaches (EWC [21], LwF [25], ER [8], GDumb [33], BiC [41], DER++ [5] and Co\({}^{2}\)L [6]), **prompting-based** methods (L2P [40], DyTox [13] and S-liPrompts [38]) and a **self-supervised** learning method, CaSSLe [14], following the experimental set-up in [38]. For OOD experiments, we only evaluate those methods that are in direct competition with our approach, in terms of _exemplars_ buffer use. In particular, we compare to the following methods, whose respective codes are publicly available: EWC1, LwF2, DyTox3, L2P4, and S-liPrompts5.
Footnote 1: [https://github.com/G-U-N/PyCIL/](https://github.com/G-U-N/PyCIL/)
Footnote 2: [https://github.com/G-U-N/PyCIL/](https://github.com/G-U-N/PyCIL/)
Footnote 3: [https://github.com/arthurdouillard/dytox](https://github.com/arthurdouillard/dytox)
Footnote 4: [https://github.com/JH-LEE-RR/12p-pytorch](https://github.com/JH-LEE-RR/12p-pytorch)
Footnote 5: [https://github.com/iamwangyabin/S-Prompts](https://github.com/iamwangyabin/S-Prompts)
### Evaluation metrics and protocol.
To assess the performance of the proposed approach, we resort to standard metrics in the incremental learning literature. **In-domain setting:** On DomainNet and CDDB-Hard we follow the original work in [24] and employ the average classification accuracy (AA), as well as the average forgetting degree (AF), which is the mean of the popular backward transfer degradation (BWT). We formally define the average accuracy as \(AA=\frac{1}{N}\sum_{i=1}^{N}A_{i,N}\) with \(A_{i,N}\) the accuracy on domain \(i\) measured after having trained on \(N\) domains. This metric is computed at the end, i.e., after having seen all the domains, e.g., on CDDB: GauGAN \(\rightarrow\) BigGAN \(\rightarrow\) WildDeepfake \(\rightarrow\) WhichFaceReal \(\rightarrow\) SAN. Furthermore, the average forgetting degree on CDDB can be defined as \(\frac{1}{N-1}\sum_{i=1}^{N-1}BWT_{i}\) with \(BWT_{i}=\frac{1}{N-i-1}\sum_{j=i+1}^{N}(A_{i,j}-A_{i,i})\) as originally proposed in [24] (i.e., the forgetting degree is computed for each domain at each adaptation step, then averaged). **Out-of-domain setting:** We follow [27] to compute the AA on CORe50 on the fixed test set, which contains 3 hold-out splits that can be considered as OOD with respect to the training set. Furthermore, as in [38], we compute the AA on 3 unseen domains (Glow, StarGAN and CycleGAN) in CDDB-Hard. Last, as no independent hold-out subset of unseen domains exists for DomainNet, we propose using the Cumulative Accuracy on the unseen domains during the incremental learning of the model (i.e., average accuracy on the unseen domains averaged on all the steps), defined as follows: \(CA=\frac{1}{N-1}\sum_{j=1}^{N-1}\frac{1}{N-j}\sum_{i=j+1}^{N}A_{i,j}\).
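The three metrics can be computed from the full accuracy matrix as sketched below (a 0-indexed version of the \(A_{i,j}\) notation above); the normalization simply averages the available terms.

```python
import numpy as np


def dil_metrics(A):
    """A[i, j]: accuracy on domain i after training on the first j+1 domains."""
    N = A.shape[0]
    AA = float(A[:, N - 1].mean())  # average accuracy after the last domain
    # Average forgetting: for each domain i, average drop w.r.t. A[i, i] over later steps.
    AF = float(np.mean([np.mean([A[i, j] - A[i, i] for j in range(i + 1, N)])
                        for i in range(N - 1)]))
    # Cumulative accuracy on the domains that are still unseen at each step j.
    CA = float(np.mean([A[j + 1:, j].mean() for j in range(N - 1)]))
    return AA, AF, CA
```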
### Implementation details
We use the same setting as [38], i.e., we use ViT-B/16 [12] as our base image encoder and the text encoder of CLIP, both initialized by CLIP pretraining on ImageNet [35]. We follow [38] and use the same image encoder model as a backbone (i.e., ViT-B/16 [12] pretrained on ImageNet [35]) across all the compared methods, for a fair comparison. As suggested in [38], we use a more advanced backbone (i.e., ConViT pretrained on ImageNet [35]) for DyTox [13], as it underperforms a random model with ViT-B/16 as backbone. We empirically fix \(q=0.94\) for the 3 datasets, based on the ablation study in Figure 5, such that we do not deteriorate ID performance while improving OOD performance on CDDB-Hard. For EWC, LwF and CaSSLe, we use the same hyperparameters as in the original papers, whereas we keep the hyperparameters reported in [38] for DyTox, L2P and S-Prompts.
### Results
**In-domain distributions.** We first evaluate the proposed approach in the standard DIL scenario where the testing samples are drawn from the same distribution as the training/adaptation images. These results, which are reported under the _Seen-Domains_ columns of Tables 1 and 2, demonstrate that the proposed MoP-CLIP approach yields superior performance than existing _exemplar-free_ methods. In particular, MoP-CLIP outperforms the very recent approaches DyTox [13] and L2P [40] by large margin, with improvement gains of around 20-30% in terms of average classification accuracy under the same storage conditions. Furthermore, the degree of knowledge forgetting is also largely reduced, going from -45.85 in DyTox to -0.79 in our approach. Furthermore, if storing exemplars is allowed, DyTox [13] significantly improves its performance, but still underperforms our approach yet incurring a non-negligible overhead. Last, it is noteworthy to highlight that the proposed approach reaches similar performance than S-liPrompts [38] in this scenario, with at par values in the CDDB-Hard dataset and remarkable performance gains in DomainNet. Note that this result is somehow expected, as our approach is a generalization of S-liPrompts for the OOD scenario, and differences in the in-distribution setting may come from the domain prompt selected.
An interesting observation is that prompting-based methods, which do not store exemplars from old tasks, typically outperform their buffer-storage counterparts. For example, S-liPrompts [38] and MoP-CLIP bring considerable improvements compared to LUCIR (between 6-8%) or iCaRL (ranging from 9 to 15%). We hypothesize that this phenomenon comes from the absence of interference between domains when doing the adaptation. In this scenario, the knowledge from previously learned domains remains isolated in the form of optimized domain prompts, and the only knowledge shared is derived from pre-trained transformers.
**Performance under domain distributional shift.** We now want to assess the benefits of the proposed approach when the testing dataset presents a distributional drift over the training data. In particular, we advocated that the proposed approach is a generalization of [38] to be able to handle samples coming from an unseen distribution. To support
this claim, and to demonstrate the superiority of our approach on unseen domains, we resort to the OOD experiments, which are reported in the right-most columns of Tables 1 and 2, as well as Table 3. From these results, we can observe that excluding S-liPrompts, the performance gains brought by the proposed approach are substantial compared to other _exemplar-free_ methods, ranging from 17% (EWC in CORe50) to 40% (L2P [40] in DomainNet). Even when comparing to state-of-the-art competitors that store exemplars (e.g., DyTox [13] or Co\({}^{2}\)L [6] in CORe50), MoP-CLIP yields considerable improvements, ranging from 11% to nearly 17%. The clear superiority of our approach lies in the isolation of the different domains during learning, which does not degrade the generalization capabilities brought by the pre-trained transformers. Furthermore, when comparing the proposed MoP-CLIP to S-liPrompts [38], we observe that our method outperforms the latter by around 6%, 2% and 3% on the CDDB-Hard, DomainNet and CORe50 benchmarks, respectively. These performance gains on OOD samples likely come from the flexibility of MoP-CLIP in selecting a subset of similar domains for a given test sample, which allows the model to properly weight the contribution of each domain prompt. In contrast, S-liPrompts [38] forces the model to select only one domain from the seen domains, which impedes its scalability to novel distributions, as empirically shown in these results, as well as in Figure 1.
**On the impact of the different components.** The empirical study in Table 4 justifies the need for employing the proposed approach over the strong baseline S-liPrompts [38], as well as showcases the impact of each choice. In a practical scenario, it is unrealistic to assume that the test samples always follow the same distribution as the data used for adaptation. Furthermore, the domain of each sample typically remains unknown. Thus, to align with real-world conditions, we will consider the average of in-distribution and out-of-distribution performance as our metric of reference to evaluate the impact of the different choices. We can observe that in nearly all the cases, the use of an ensembling strategy results in consistent improvements over the single model predictions (considering the same distances). An interesting observation is that distances related to the L\({}_{2}\)-norm typically degrade the performance on ID samples. We observe that in this scenario, the distributions overlap considerably and \(p(s|\mathbf{x})\) (derived from the Gaussian mixture) is too far from 1 for most ID samples, making the discrimination of samples by these distance measures difficult. Nevertheless, this behavior is reversed in the presence of OOD samples. In particular, our simplification assumes an isotropic Gaussian distribution of the points around the prototypes and therefore reduces the noise in the coordinate-wise variances (which can explain the performance degradation observed when using the Mahalanobis distance), replacing it with distance-wise variances. Thus, the proposed approach combines the best of both worlds, leading to the best average performance across all the configurations.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Prompts} & \multirow{2}{*}{Buffer size} & \multicolumn{2}{c}{Seen-Domains} & Unseen-Domains \\ & & & AA (\(\uparrow\)) & AF (\(\uparrow\)) & AA (\(\uparrow\)) \\ \hline LRCIL [31] & ✗ & & 76.39 & -4.39 & - \\ iCaRL [36] & ✗ & 100ex/class & 79.76 & -8.73 & - \\ LUCIR [17] & ✗ & & 82.53 & -5.34 & - \\ \hline LRCIL [31] & ✗ & & 74.01 & -8.62 & - \\ iCaRL [30] & ✗ & 50ex/class & 73.98 & -14.50 & - \\ LUCIR [17] & ✗ & & 80.77 & -7.85 & - \\ DyTox [13] & ✓ & & 86.21 & -1.55 & - \\ \hline EWC [21] & ✗ & & 50.59 & -42.62 & - \\ LwF [25] & ✗ & & 60.94 & -13.53 & 50.05 \\ DyTox [13] & ✓ & _No_ buffer & 51.27 & -65.85 & 50.46 \\ L2P [40] & ✓ & & 61.28 & -9.23 & 57.34 \\ S-liPrompts [38] & ✓ & & **88.65** & **-0.09** & 76.79 \\
**MoP-CLIP (ours)** & ✓ & & 88.54 & **-0.79** & **82.02** \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Results on CDDB-Hard for both ID and OOD scenarios.** Evaluation of existing state-of-the-art DIL methods in the standard _seen-domain_ setting and more challenging _unseen-domain_ scenario. For the unseen-domain experiments, we only reproduced the results for related (i.e., _exemplar-free_) methods. Best results are highlighted in **bold**.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Method & Prompt & Buffer size & AA \\ \hline GDumb\({}_{\text{ECCV}}\)20 [33] & ✗ & & 74.92 \\ BiC\({}_{\text{CVPR}}\)19 [41] & ✗ & & 79.28 \\ DER++ NeurPS20 [5] & ✗ & _50ex/class_ & 79.70 \\ Co\({}^{2}\)L\({}_{\text{CCV}}\)21 [6] & ✗ & & 79.75 \\ DyTox\({}_{\text{CVPR}}\)22 [13] & ✓ & & 79.21 \\ L2P\({}_{\text{CVPR}}\)22 [40] & ✓ & & 81.07 \\ \hline EWC\({}_{\text{PNAS}}\)17 [21] & ✗ & & 74.82 \\ LwF\({}_{\text{TPAMI}}\)17 [25] & ✗ & & 75.45 \\ L2P\({}_{\text{CVPR}}\)22 [40] & ✓ & _No buffer_ & 78.33 \\ S-liPrompts\({}_{\text{NeurIPS}}\)22 [38] & ✓ & & 89.06 \\
**MoP-CLIP (Ours)** & ✓ & & **91.43** \\
**MoP-CLIP (Ours)*** & ✓ & & **92.29** \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Results on CORe50.** Note that CORe50 already provides separate training and testing domains, and thus results can only be computed on the **OOD scenario**. Results are reported as the Acc metric, where the best values are highlighted in **bold**. In our method, we use the same \(q\) as in the other datasets, whereas * indicates that \(q\) is fixed based on the validation set of CORe50, as typically done in all the other approaches.
**Strategy to select the domain prompts.** As emphasized in Sec. 3.3, [38] uses K-Means over the features extracted with a pre-trained ViT to compute the prototypes which are used to dynamically select which prompt to use at test time. While this strategy is memory efficient, it lacks flexibility, as the number of clusters needs to be adjusted according to the dataset employed. To alleviate this issue, we instead use class-wise prototypes as a _hyperparameter-free_ alternative to compute representative prototypes. The effect of using either k-Means or class-prototypes is depicted in Fig. 4. From these results, we empirically observe that this choice improves performance in both in-distribution and out-of-distribution domains, leading to a higher average performance. Furthermore, it is noteworthy to mention that using class-wise prototypes makes the distribution of points around prototypes Gaussian, which explains the satisfactory performance of MoP-CLIP, particularly on samples from unseen domains.
**How much trade-off is sufficient?** The influence of the threshold \(q\) from our simple out-of-distribution criterion (Sec. 3.3) to select between seen and unseen domains is shown in Figure 5. As stressed earlier, we aim for a compromise between ID and OOD performance, in order to provide generalizable models. As target domains should remain unknown at inference, we selected a fixed \(q\) value that provided the optimal average performance across both settings. Nevertheless, these plots reveal two interesting findings. First, the average performance of the model is not very sensitive to the choice of \(q\). For example, the performance of ID samples decreases as \(q\) decreases, whereas OOD performance improves. On the other hand, if \(q\) increases, the accuracy in the ID scenario increases, while it decreases for OOD samples. And second, if prior knowledge about the target domain is available -an assumption made by all existing DIL literature- the performance of MoP-CLIP is further increased, enlarging the gap with SOTA methods.
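A minimal sketch of the resulting hybrid inference rule is given below; the L2 distance, the inverse-distance ensemble weights and all names are illustrative assumptions on our part, since the precise criterion and weighting are those of Sec. 3.3.

```python
import numpy as np

def route_prompt(feat, prototypes_per_domain, q):
    """Decide between a single domain prompt (ID) and a prompt ensemble (OOD).

    feat: (d,) test feature; prototypes_per_domain: list of (n_c, d) arrays;
    q: distance threshold separating seen-domain from unseen-domain samples.
    """
    # distance of the sample to the closest prototype of each stored domain
    dists = np.array([np.linalg.norm(protos - feat, axis=1).min()
                      for protos in prototypes_per_domain])
    if dists.min() <= q:
        # in-distribution: use the prompt of the closest domain only
        return "single", int(dists.argmin())
    # out-of-distribution: ensemble over all domain prompts,
    # here weighted by inverse distance (an illustrative choice)
    w = 1.0 / (dists + 1e-8)
    return "ensemble", w / w.sum()
```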
## 5 Conclusion
Findings from this work reveal that the existing literature on domain incremental learning suffers under the presence of distributional drift, hampering its scalability to practical scenarios. To overcome this issue, we have proposed a generalization of the recent S-liPrompts [38] approach, which further handles out-of-distribution samples. In addition to outperforming the current state of the art, particularly in the unseen-domain setting, our method brings several interesting benefits compared to most existing DIL methods. First, MoP-CLIP is _exemplar-free_, eliminating the limitations of conventional DIL approaches in terms of storage and privacy. Furthermore, as prompts are learned independently on each domain, and the model parameters remain fixed during the adaptation, the performance of our approach is insensitive to the ordering of the seen domains. This contrasts with a whole body of the literature, where the choice of the sequence order can significantly impact the final performance. Our comprehensive evaluation shows the empirical gains provided by MoP-CLIP, pointing to visual prompt tuning as an appealing alternative for general domain incremental learning. Finally, we stress that while powerful, the proposed approach retains the spirit of S-liPrompts [38], which advocates for a simple yet elegant method.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Method & Ensembling & Distance & \begin{tabular}{l} Seen \\ Domains \\ \end{tabular} &
\begin{tabular}{l} Unseen \\ Domains \\ \end{tabular} & Mean \\ \hline \hline S-liPrompts [38] & ✗ & L1 & 88.65 & 76.79 & 82.72 \\ \hline \multicolumn{5}{l}{MoP-CLIP - no ens. (a)} & ✗ & L2 & **89.88** & 76.95 & 83.22\({}_{(+5.30)}\uparrow\) \\ - & ✗ & Maha & 80.45 & 76.66 & 78.36\({}_{(+5.40)}\downarrow\) \\ - & ✗ & L2-GMM & 75.72 & 75.76 & 75.24\({}_{(+5.00)}\downarrow\) \\ \hline \multicolumn{5}{l}{} & ✓ & Uniform & 67.55 & 83.61 & 75.58\({}_{(+5.40)}\downarrow\) \\ - & ✓ & L1 & 89.29 & 80.05 & 84.57\({}_{(+5.10)}\downarrow\) \\ - & ✓ & L2 & 68.37 & 84.07 & 76.22\({}_{(+5.00)}\downarrow\) \\ - & ✗ & Maha & 80.48 & 77.56 & 79.02\({}_{(-5.30)}\downarrow\) \\ \hline \multicolumn{5}{l}{MoP-CLIP - ens. (b)} & ✓ & L2-GMM & 72.51 & **89.21** & 80.05\({}_{(-1.30)}\downarrow\) \\ \hline \multicolumn{5}{l}{**MoP-CLIP (Proposed)**} & Hybrid & ID(ii)/ OOD (i) & 88.54 & 82.02 & **83.28\({}_{(+5.20)}\uparrow\)** \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Impact of each design choice of MoP-CLIP.**_Maha_ denotes the Mahalanobis distance, whereas GMM is used for a Gaussian Mixture Model. Furthermore, _Hybrid_ denotes the nature of our approach, which uses an ensembling for OOD samples and a single domain prompt for ID samples. Results (on CDDB-Hard) show the average accuracy (AA), with the deviation from the baseline S-liPrompts [38] in brackets. Best results in **bold**.
Figure 4: **k-Means or class prototypes as domain centroids?** Ablation study that demonstrates the benefits of using class prototypes (our approach) rather than k-Means prototypes, as in [38].
Figure 5: **A controllable trade-off between in-domain and out-of-domain prediction performances.** Impact of the threshold \(q\) (Sec. 3.3) on the accuracy, evaluated on CDDB-Hard.
_Potential Negative Impact:_ Language-vision models and prompt tuning heavily rely on pre-training data, including different corpus, which may contain biases and reinforce existing societal prejudices. The use of text prompt tuning might amplify these biases and contribute to biased classification results.
|
2304.13353 | Muon anomalous magnetic dipole moment in a low scale type I see-saw
model | Recent experimental results on muon anomalous magnetic dipole moment have
shown a $4.2\sigma$ tension with the SM prediction, which has blown a fresh
wind into the elementary particle physics community. The problem is believed to
be explained only by physics beyond the standard model. Current work considers
the anomalous moment in a scenario of models with mirror symmetry and type I
see-saw mechanism at low energy scale of electroweak interactions. After a
brief introduction to the model, a detailed numerical analysis of muon
anomalous phenomenology will be carefully performed. Analysis results show that
the model is not successful in explaining the muon moment problem, however the
contributions of channels involving neutral Higgs scalars, including both the
light and heavy ones, might provide sizable corrections to the discrepancy. | D. N. Dinh | 2023-04-26T07:47:29Z | http://arxiv.org/abs/2304.13353v2 | ###### Abstract
###### Abstract
Recent experimental results on muon anomalous magnetic dipole moment have shown a \(4.2\sigma\) tension with the SM prediction, which has blown a fresh wind into the elementary particle physics community. The problem is believed to be explained only by physics beyond the standard model. Current work considers the anomalous moment in a scenario of models with mirror symmetry and type I see-saw mechanism at low energy scale of electroweak interactions. After a brief introduction to the model, a detailed numerical analysis of muon anomalous phenomenology will be carefully performed.
**Muon anomalous magnetic dipole moment in a low scale type I see-saw model**
D. N. Dinh
_Institute of Physics, Vietnam Academy of Science and Technology,_
_10 Dao Tan, Ba Dinh, Hanoi, Vietnam._
## 1 Introduction
In this letter, we are interested in the class of extended versions of the standard model with mirror symmetry and light active neutrino masses generated by the type I see-saw mechanism at the low energy scale of electroweak interactions [1]. A fermion mirror sector is proposed by introducing a corresponding mirror partner for each standard model fermion with the same quantum numbers but opposite chirality. The presence of mirror partners of the left-handed neutrinos, i.e., right-handed neutrinos, provides a necessary condition for the type I see-saw mechanism to operate [2, 3, 4, 5]. In contrast to the canonical type I see-saw, which operates at an ultra-high energy scale, it has been shown that the new physics scale for the model under consideration might be as low as 100 GeV, and thus at the electroweak interaction scale.
We work on an updated version of the class of models that was introduced to accommodate the 125 GeV SM-like Higgs scalar discovery [6, 7]. In contrast to the original ones, an additional Higgs doublet has been introduced, so there are two Higgs doublets, which are respectively responsible for mass generation in the normal and mirror sectors. Two candidates among the neutral scalars appearing after spontaneous symmetry breaking are shown to have signals in agreement with ATLAS and CMS results [7]. Besides, the model also has to confront the precision measurements of the electroweak processes, especially the effects of extra chiral doublets. A large parameter space is validated to be available after being constrained by EW precision data [8].
It is well known that the combination of the Brookhaven E821 result [9] and the \((g-2)_{\mu}\) experiment at Fermilab [10] for the muon anomalous magnetic moment has yielded the result
\[\Delta a_{\mu}=a_{\mu}^{EXP}-a_{\mu}^{SM}=(251\pm 59)\times 10^{-11}, \tag{1}\]
which is a \(4.2\sigma\) discrepancy with the SM prediction [11]. Although the result does not take into account the recent lattice QCD calculations for the hadronic vacuum polarization [12] and the latest measurement of \(e^{+}e^{-}\to\) hadrons [13], it nevertheless provides strong evidence of new physics beyond the standard model.
From the theoretical perspective, a large number of studies have investigated the problem of the muon anomalous magnetic dipole moment in various scenarios of physics beyond the standard model [14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]. In the class of models with mirror symmetry under consideration, the muon problem has also been briefly mentioned in some studies [6, 25], involving the channel with the participation of the light neutral scalar. However, a more detailed analysis should be performed, which also takes into account the contributions provided by the other channels with heavy neutral and singly charged scalars.
This research discusses the phenomenology of muon anomalous magnetic dipole moment in the scenario of an extended version of the standard model with mirror symmetry, accommodating the 125 GeV SM-like scalar discovery. The contents are arranged as follows: besides the introduction in this section, in Sect. 2 we briefly introduce the model and required vertices for further calculations. In Sect. 3, we derive explicit form factors and algebraic expressions for muon anomalous magnetic dipole moment. Then, numerical analysis is also performed in this section. Finally, we give the conclusion in Sect. 4.
## 2 A review of the model
### The model content
The extended version of the EW-scale \(\nu_{R}\) model under consideration is constructed based on the symmetry group \(SU(2)\times U(1)_{Y}\times U(1)_{SM}\times U(1)_{MF}\), in which \(SU(2)\times U(1)_{Y}\) is the gauge symmetry, and \(U(1)_{SM}\times U(1)_{MF}\) is a global symmetry introduced to forbid some unwanted interactions. The arrangement of the scalar and matter fields under the gauge group and their transformations under the global symmetry are shown in detail in Table 1. Here, the transformation of a given field \(\Psi\) under \(U(1)_{SM}\times U(1)_{MF}\) is defined as \(\Psi\to e^{i\alpha_{SM}n_{SM}+i\alpha_{MF}n_{MF}}\Psi\).
Note that the right-handed neutrinos in this model are components of \(SU(2)\times U(1)_{Y}\) doublets; therefore they are non-sterile and take part in the weak interaction. Moreover, the heavy right-handed neutrinos naturally occur in the mirror sector, accompanying the light active ones to fulfill the conditions required for type I see-saw neutrino mass generation to function. Five Higgs multiplets (two doublets, two triplets and a singlet) are introduced to give masses to the fermions; their roles are discussed later in the paper.
Before writing down the Yukawa couplings, let us briefly discuss the Higgs scalar sector. Evidently, the global symmetry defined above only allows \(\Phi_{2}\) to couple to the SM fermions, while \(\Phi_{2M}\) couples to their mirror partners. The singlet \(\phi_{S}\), carrying nontrivial \(n_{SM}\), \(n_{MF}\) charges, couples a normal field to a mirror field. Finally, \(\chi\) is responsible for introducing a term that violates lepton number by two units, which is needed for generating Majorana masses for the heavy neutrinos. The detailed expressions of the Yukawa couplings are:
\[{\cal L}_{Y}^{\ell}=g_{\ell}\bar{\ell}_{L}\Phi_{2}e_{R}+g_{\ell}^{M}\bar{\ell }_{R}^{M}\Phi_{2M}e_{L}^{M}+g_{\ell s}\bar{\ell}_{L}\phi_{s}\ell_{R}^{M}+h.c., \tag{2}\]
\[{\cal L}_{Y}^{q}=g_{u}\bar{q}_{L}\tilde{\Phi}_{2}u_{R}+g_{d}\bar{q}_{L}\Phi_{2 }d_{R}+g_{u}^{M}\bar{q}_{R}^{M}\tilde{\Phi}_{2M}u_{L}^{M}+g_{d}^{M}\bar{q}_{R} ^{M}\Phi_{2M}d_{L}^{M}+g_{qs}\bar{q}_{L}\phi_{S}q_{R}^{M}+h.c., \tag{3}\]
\[\mathcal{L}_{\nu_{R}}=g_{M}l_{R}^{M,T}\,\sigma_{2}\,\tilde{\chi}\,l_{R}^{M}\,, \tag{4}\]
where \(\sigma_{2}\) is the second Pauli matrix, \(\tilde{\Phi}_{2}=i\sigma_{2}\Phi_{2}^{*}\), \(\tilde{\Phi}_{2M}=i\sigma_{2}\Phi_{2M}^{*}\), and the form of the complex Higgs triplet \(\tilde{\chi}\) with \(Y=2\) is
\[\tilde{\chi}=\frac{1}{\sqrt{2}}\vec{\tau}.\vec{\chi}=\left(\begin{array}{cc} \frac{1}{\sqrt{2}}\chi^{+}&\chi^{++}\\ \chi^{0}&-\frac{1}{\sqrt{2}}\chi^{+}\end{array}\right). \tag{5}\]
### Symmetry breaking and mass generations
We discuss in this subsection the mechanism of mass generation for the fermion and scalar particles in this model when the symmetry is spontaneously broken. Let us suppose that the Higgs fields develop their vacuum expectation values (VEVs) as follows: \(\langle\Phi_{2}\rangle=(0,v_{2}/\sqrt{2})^{T}\), \(\langle\Phi_{2M}\rangle=(0,v_{2M}/\sqrt{2})^{T}\), \(\langle\chi^{0}\rangle=v_{M}\), and \(\langle\phi_{S}\rangle=v_{S}\).
The charged lepton mass matrix that can be easily obtained from eq. (2), is explicitly expressed as
\[M_{\ell}=\left(\begin{array}{cc}m_{\ell}&m_{\ell}^{D}\\ (m_{\ell}^{D})^{\dagger}&m_{\ell M}\end{array}\right)\,, \tag{6}\]
where \(m_{\nu}^{D}=m_{\ell}^{D}=g_{\ell s}v_{S}\), \(m_{\ell}=g_{\ell}v_{2}/\sqrt{2}\), and \(m_{\ell M}=g_{\ell}^{M}v_{2M}/\sqrt{2}\). Based on the current experimental status of searches for new fermions beyond the standard model, one expects the masses of the mirror partners to be much heavier than those of their normal counterparts; thus it is reasonable to assume \(m_{\ell M}\gg m_{\ell}\) and \(m_{\ell M},m_{\ell}\gg m_{\ell}^{D}\). This assumption allows us to approximately block-diagonalize \(M_{\ell}\) in the same way as is usually done for the type I see-saw neutrino mass matrix; we then obtain:
\[\tilde{m}_{\ell}=m_{\ell}-\frac{(m_{\ell}^{D})^{2}}{m_{\ell M}-m_{\ell}} \approx m_{\ell},\quad\tilde{m}_{\ell M}=m_{\ell M}+\frac{(m_{\ell M}^{D})^{2 }}{m_{\ell M}-m_{\ell}}\approx m_{\ell M}, \tag{7}\]
\[\left(\begin{array}{c}\ell_{L(R)}\\ \ell_{L(R)}^{M}\end{array}\right)\,=\left(\begin{array}{cc}U_{\ell L(R)}&- R_{\ell}U_{\ell L(R)}^{M}\\ R_{\ell}^{I}U_{\ell L(R)}&U_{\ell L(R)}^{M}\end{array}\right)\,\left(\begin{array} []{c}\ell_{L(R)}^{\prime}\\ \ell_{L(R)}^{M^{\prime}}\end{array}\right), \tag{8}\]
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Multiplets & \(SU(2)\times U(1)_{Y}\) & \(n_{SM}\) & \(n_{MF}\) \\ \hline \(\ell_{L}=(\nu_{L},\ e_{L})^{T}\), \(q_{L}=(u_{L},d_{L})^{T}\) & \((2,-1)\), \((2,1/3)\) & \(1\) & \(0\) \\ \(\ell_{R}^{M}=(\nu_{R},\ e_{R}^{M})^{T}\), \(q_{R}^{M}=(u_{R}^{M},\ d_{R}^{M})^{T}\) & \((2,-1)\), \((2,1/3)\) & \(0\) & \(1\) \\ \hline \(e_{R}\), \(u_{R}\), \(d_{R}\) & \((1,-2)\), \((1,4/3)\), \((1,-2/3)\) & \(1\) & \(0\) \\ \(e_{L}^{M}\), \(u_{L}^{M}\), \(d_{L}^{M}\) & \((1,-2)\), \((1,4/3)\), \((1,-2/3)\) & \(0\) & \(1\) \\ \hline \(\Phi_{2}=(\phi_{2}^{+},\phi_{2}^{0})\) & (2,2) & \(1\) & \(0\) \\ \(\Phi_{2M}=(\phi_{2M}^{+},\phi_{2M}^{0})\) & (2,2) & \(0\) & \(1\) \\ \hline \(\chi=(\chi^{++},\chi^{+},\chi^{0})\) & (3,2) & \(0\) & \(0\) \\ \hline \(\xi=(\xi^{+},\xi^{0},\xi^{-})\) & (3,0) & \(0\) & \(0\) \\ \hline \(\phi_{S}\) & (1,0) & \(1\) & -1 \\ \hline \end{tabular}
\end{table}
Table 1: Model’s field content and their transformations under gauge and global symmetries
where \(\ell^{\prime}_{L(R)}\), \(\ell^{M^{\prime}}_{L(R)}\) are respectively the normal and mirror charged leptons in the mass basis; \(R_{\ell}\approx\frac{m_{\ell}^{D}}{m_{\ell M}}\ll 1\), and \(\tilde{m}_{\ell}=U_{\ell L}m_{\ell}^{d}U_{\ell R}^{\dagger}\), \(\tilde{m}_{\ell M}=U_{\ell L}^{M}m_{\ell M}^{d}U_{\ell R}^{M\dagger}\), in which \(m_{\ell}^{d}\) and \(m_{\ell M}^{d}\) are diagonal matrices.
After the gauge symmetry is spontaneously broken, the neutral leptons acquire their masses through a matrix of the canonical form of the type-I see-saw mechanism. By denoting that \(M_{R}=g_{M}v_{M}\), one obtains
\[M_{\nu}=\left(\begin{array}{cc}0&m_{\nu}^{D}\\ (m_{\nu}^{D})^{T}&M_{R}\end{array}\right)\,. \tag{9}\]
Approximately block-diagonalizing (9), while keeping in mind that \(M_{R}\gg m_{\nu}^{D}\), the result reads
\[\tilde{m}_{\nu}\approx-\frac{(m_{\nu}^{D})^{2}}{M_{R}}=-\frac{(g_{\ell s}v_{ S})^{2}}{g_{M}v_{M}},\,\,\,\,\tilde{m}_{\nu R}\approx M_{R}. \tag{10}\]
We briefly comment on the light neutrino mass matrix \(\tilde{m}_{\nu}\) defined in (10), which is experimentally constrained to be of sub-eV order. In a canonical scenario, such a small value of \(\tilde{m}_{\nu}\) implies that \(M_{R}\) should be very heavy, \(\sim 10^{9}\) GeV or higher. However, in the current model under consideration, if \((g_{\ell s}^{2}/g_{M})\sim O(1)\) and \(v_{S}\sim O(10^{5}\;eV)\), \(M_{R}\) can be much lower, at the electroweak scale. Note that this is the most interesting scenario that could be designed for this model to be testable at the LHC. However, \(M_{R}\) is not forbidden to take values in other ranges, depending on both the magnitude of \(v_{M}\) and the interaction strength \(g_{M}\). In fact, \(g_{\ell s}\) is constrained by some rare processes, for instance the \(\mu\to e\gamma\) decay, which has been studied in detail in [27], leading to \((g_{\ell s}^{2}/g_{M})\ll 1\). In this case, \(v_{S}\) should be adjusted to be higher (it might reach a few GeV) to give the correct masses for the light neutrinos if \(M_{R}\) is fixed in the range of a hundred GeV.
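As a back-of-the-envelope numerical check of this scaling (the benchmark numbers below are just the illustrative values quoted above):

```python
# Light-neutrino mass from the low-scale see-saw relation of Eq. (10),
# |m_nu| ~ (g_ls * v_S)^2 / (g_M * v_M), for the benchmark quoted in the text.
g_ls, g_M = 1.0, 1.0            # (g_ls^2 / g_M) ~ O(1)
v_S = 1.0e5                     # eV, i.e. ~0.1 MeV
M_R = 100.0e9                   # eV, i.e. g_M * v_M ~ 100 GeV (EW scale)
m_nu = (g_ls * v_S) ** 2 / M_R  # = 0.1 eV, i.e. sub-eV as required
print(f"|m_nu| ~ {m_nu} eV")
```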
Let \(R_{\nu}\approx\frac{m_{\nu}^{D}}{M_{R}}\) be the ratio of the neutrino Dirac and Majorana mass matrices. Assuming that the light and heavy (mirror) neutrino mass matrices are diagonalized respectively by \(\tilde{m}_{\nu}=U_{\nu}^{*}m_{\nu}^{d}U_{\nu}^{\dagger}\), \(\tilde{m}_{\nu R}={U_{\nu}^{M}}^{*}m_{\nu M}^{d}{U_{\nu}^{M}}^{\dagger}\), where \(m_{\nu}^{d}\) and \(m_{\nu M}^{d}\) are diagonal matrices, we can obtain the relations between the gauge and mass eigenstates of neutrinos as the following
\[\left(\begin{array}{c}\nu_{L}\\ (\nu_{R})^{c}\end{array}\right)\,=\left(\begin{array}{cc}U_{\nu}&-R_{\nu}U_{ \nu}^{M}\\ R_{\nu}^{\dagger}U_{\nu}&U_{\nu}^{M}\end{array}\right)\,\left(\begin{array}{c }\chi_{\nu}\\ \chi_{M}\end{array}\right). \tag{11}\]
Discussion on the mechanism of mass generation of the quark sector will not be performed in this letter, because it does not involve in the phenomenology of physical quantity under consideration in this research.
Before we introduce the masses and mass states of the new physical scalars that appear after spontaneous symmetry breaking, let us briefly explain why we need two Higgs triplets in this model. It is well known that when triplets are introduced, the tree-level result \(\rho=1\), which is precisely measured by experiment, will be violated. Fortunately, it is also shown in [26] that, in a scenario of two triplets with appropriate hyper-charges, they can combine to form a \((3,3)\) representation under the global \(SU(2)_{L}\otimes SU(2)_{R}\) symmetry. After symmetry breaking, the custodial \(SU(2)\) symmetry is preserved and \(\rho=1\). Thus, along with the triplet \(\tilde{\chi}\), we need to add a real Higgs triplet with \(Y=0\), denoted by \((\xi^{+},\xi^{0},\xi^{-})\). The combinations
of two triplets \((3,3)\) and two doublets \((2,2)\) under the global symmetry can be respectively expressed as:
\[\chi=\left(\begin{array}{ccc}\chi^{0}&\xi^{+}&\chi^{++}\\ \chi^{-}&\xi^{0}&\chi^{+}\\ \chi^{--}&\xi^{-}&\chi^{0*}\end{array}\right)\,, \tag{12}\]
\[\Phi_{2}=\left(\begin{array}{ccc}\phi_{2}^{0,*}&\phi_{2}^{+}\\ \phi_{2}^{-}&\phi_{2}^{0}\end{array}\right),\quad\Phi_{2M}=\left(\begin{array} []{ccc}\phi_{2M}^{0,*}&\phi_{2M}^{+}\\ \phi_{2M}^{-}&\phi_{2M}^{0}\end{array}\right). \tag{13}\]
Then the proper vacuum alignment for breaking gauge symmetry from \(SU(2)_{L}\times U(1)_{Y}\) to \(U(1)_{em}\) can be easily written down:
\[\langle\chi\rangle=\left(\begin{array}{ccc}v_{M}&0&0\\ 0&v_{M}&0\\ 0&0&v_{M}\end{array}\right)\,, \tag{14}\]
\[\langle\Phi_{2}\rangle=\left(\begin{array}{ccc}v_{2}/\sqrt{2}&0\\ 0&v_{2}/\sqrt{2}\end{array}\right),\quad\langle\Phi_{2M}\rangle=\left( \begin{array}{ccc}v_{2M}/\sqrt{2}&0\\ 0&v_{2M}/\sqrt{2}\end{array}\right). \tag{15}\]
Due to the experimental constraint on the \(W_{\mu}\) mass, VEVs of the real components of \(\Phi_{2}\), \(\Phi_{2M}\) and \(\chi\) denoted in (14) and (15), satisfy the conditions:
\[v_{2}^{2}+v_{2M}^{2}+8\,v_{M}^{2}=v^{2}\,, \tag{16}\]
where \(v\approx 246\ GeV\). Related to the above VEVs and for further discussions, the following notations are used:
\[s_{2}=\frac{v_{2}}{v};\ \ s_{2M}=\frac{v_{2M}}{v};\ \ s_{M}=\frac{2\sqrt{2}\ v_{M }}{v}\,. \tag{17}\]
We analyze the physical scalar spectrum in this model after the gauge symmetry and the L-R global symmetry of the Higgs potential are spontaneously broken to the custodial \(SU(2)_{D}\). Out of the seventeen degrees of freedom of the two Higgs triplets (one real and one complex) and two Higgs doublets, three of them are eaten to give masses to W's and Z, while the rest are rearranged to form new physical Higgs bosons. Those that are mass-degenerate are grouped in the same physical scalar multiplets of the global custodial symmetry. Thus we have a five-plet (quintet) \((H_{5}^{\pm\pm},\ H_{5}^{\pm},\ H_{5}^{0})\), two triplets \((H_{3}^{\pm},\ H_{3}^{0})\), \((H_{3M}^{\pm},\ H_{3M}^{0})\) and three singlets \(H_{1}^{0},\ H_{1M}^{0},\ H_{1}^{0\prime}\).
We will not introduce any specific form of the Higgs potential in this research, but the above discussion on the physical scalars applies to any potential that possesses the \(SU(2)_{L}\otimes SU(2)_{R}\) global symmetry, including the cases that have been considered in detail in [7]. It is reasonable to assume that the scalars mentioned earlier have masses at the electroweak scale, in the range of a hundred to a few hundred GeV, because they are remnants of the gauge symmetry breaking mechanism. Recall that the scalars that are members of a multiplet (not a singlet) have the same masses, while the three singlets \(H_{1}^{0},\ H_{1M}^{0},\ H_{1}^{0\prime}\) are not physical states, in general. These gauge states are linear combinations of the mass eigenstates \((\tilde{H}_{1}^{0},\ \tilde{H}_{2}^{0},\ \tilde{H}_{3}^{0}\,)\) as \(H_{1}^{0}=\sum_{i}^{3}\alpha_{i}\tilde{H}_{i}\), \(H_{1M}^{0}=\sum_{i}^{3}\alpha_{i}^{M}\tilde{H}_{i}\), where \(\sum_{i}^{3}|\alpha_{i}|^{2}=1\) and \(\sum_{i}^{3}|\alpha_{i}^{M}|^{2}=1\). The SM-like Higgs scalar discovered by the LHC with mass 125 GeV is one of these three mass states [7].
Finally, let us discuss \(\phi_{s}^{0}\), which is the remaining degree of freedom of the Higgs sector, originating from \(\phi_{S}\). As a singlet, \(\phi_{S}\) does not participate in the gauge symmetry breaking mechanism, so its VEV \(v_{S}\) can take a large range of values, from keV to a few GeV. In this research, we will consider the \(\phi_{s}^{0}\) mass to be of the same order of magnitude as \(v_{S}\).
### The LFV vertexes
In the SM, there is no flavor changing of the neutral current at tree-level in the lepton sector, so there is no LFV vertex, because the charged lepton mass matrix and the matrix of Yukawa couplings are simultaneously diagonal, and the vector gauge bosons only interact with the left-handed components of the matter fields. However, these properties do not hold in this model, because the matter content has been enlarged with mirror fermions, and the vector fields also interact with the right-handed components of the mirror sector. Therefore, LFV interactions occur at tree-level for both the charged currents and Yukawa couplings. Their detailed expressions in the gauge basis can be found in [7].
To facilitate further discussion, we present below the LFV couplings in this model in the mass eigenstate basis. For consistency with the current experimental observations and for simplicity, we assume that the charged lepton and mirror charged lepton mixing matrices are real (so all the complex phases involved are neglected) and \(U_{\ell L}=U_{\ell R}=U_{\ell}\), \(U_{\ell L}^{M}=U_{\ell R}^{M}=U_{\ell}^{M}\). After dropping subleading terms of second order in \(R_{\nu(\ell)}\) and higher, the LFV couplings are given in Table 2 and Eqs. (18) to (23):
\[(\bar{e}_{R}^{\prime}e_{L}^{M^{\prime}}\tilde{H}_{i}^{0}) -i\frac{g}{2}Y_{\tilde{H}_{i}^{0}}^{ML}=-i\frac{g}{2M_{W}}\left[ \frac{\alpha_{i}}{s_{2}}m_{\ell}^{d}\tilde{R}_{\ell}+\frac{\alpha_{i}^{M}}{s_{2 M}}\tilde{R}_{\ell}m_{\ell M}^{d}\right], \tag{18}\] \[(\bar{e}_{L}^{\prime}e_{R}^{M^{\prime}}\tilde{H}_{i}^{0}) -i\frac{g}{2}Y_{\tilde{H}_{i}^{0}}^{MR}=-i\frac{g}{2M_{W}}\left[ \frac{\alpha_{i}}{s_{2}}m_{\ell}^{d}\tilde{R}_{\ell}+\frac{\alpha_{i}^{M}}{s_{ 2M}}\tilde{R}_{\ell}m_{\ell M}^{d}\right], \tag{19}\]
\begin{table}
\begin{tabular}{|c|c|} \hline Vertices & Couplings \\ \hline \((\bar{e}_{L}^{\prime}\gamma^{\mu}\chi_{L})W_{\mu}^{-}\) & \(-i\frac{g}{\sqrt{2}}U_{W_{\mu}}^{LL}=-i\frac{g}{\sqrt{2}}U_{PMNS}\) \\ \hline \((\bar{e}_{L}^{\prime}\gamma^{\mu}\chi_{L}^{M})W_{\mu}^{-}\) & \(-i\frac{g}{\sqrt{2}}U_{W_{\mu}}^{ML}=i\frac{g}{\sqrt{2}}R_{\nu}\left(U_{PMNS}^{M }\right)^{*}\) \\ \hline \((\bar{e}_{R}^{\prime}\gamma^{\mu}\chi_{L}^{\prime})W_{\mu}^{-}\) & \(-i\frac{g}{\sqrt{2}}U_{W_{\mu}}^{LR}=-i\frac{g}{\sqrt{2}}R_{\nu}^{T}\left(U_{ PMNS}\right)^{*}\) \\ \hline \(\bar{e}_{R}^{\prime}\chi_{L}H_{3}^{-}\) & \(-i\frac{g}{2}Y_{\tilde{H}_{3}^{-}}^{L}=-i\frac{g\frac{g}{M_{W}}}{2M_{W}c_{M}}m_ {\ell}^{d}U_{PMNS}\) \\ \hline \(\bar{e}_{R}^{\prime}\chi_{L}^{M}H_{3}^{-}\) & \(-i\frac{g}{2}Y_{\tilde{H}_{3}^{-}}^{ML}=i\frac{g\frac{g}{M_{M}}}{2M_{W}c_{M}}m_ {\ell}^{d}R_{\nu}\left(U_{PMNS}^{M}\right)^{*}\) \\ \hline \(\bar{e}_{L}^{\prime}\chi_{L}^{MC}H_{3}^{-}\) & \(-i\frac{g}{2}Y_{\tilde{H}_{3}^{-}}^{ML}=-i\frac{g\frac{g}{M_{W}}}{2M_{W}c_{M}}m_ {\ell}^{d}U_{PMNS}^{M}\) \\ \hline \(\bar{e}_{R}^{\prime}\chi_{L}^{\prime}H_{3M}^{-}\) & \(-i\frac{g}{2}Y_{\tilde{H}_{3}^{-}}^{ML}=-i\frac{g\frac{g}{M_{W}}}{2M_{W}s_{2}c_{ M}}m_{\ell}^{d}R_{\nu}\left(U_{PMNS}^{M}\right)^{*}\) \\ \hline \(\bar{e}_{L}^{\prime}\chi_{L}^{MC}H_{3M}^{-}\) & \(-i\frac{g}{2}Y_{\tilde{H}_{3M}^{-}}^{ML}=-i\frac{g\frac{g}{M_{W}}}{2M_{W}s_{2}c_{ M}}m_{\ell}^{d}R_{\nu}^{M}U_{PMNS}^{M}\) \\ \hline \(\bar{e}_{R}^{\prime}e_{L}^{M^{\prime}}\phi_{s}^{0}\) & \(-i\frac{g}{2}Y_{\phi_{s}^{0}}^{ML}=-iU_{\ell R}^{\prime}g_{\ell s}U_{\ell L}^{M }=-i\tilde{g}_{\ell s}\) \\ \hline \(\bar{e}_{L}^{\prime}e_{R}^{M^{\prime}}\phi_{s}^{0}\) & \(-i\frac{g}{2}Y_{\phi_{s}^{0}}^{ML}=-i\ U_{\ell L}^{\prime}g_{\ell s}U_{\ell R}^{ M}=-i\tilde{g}_{\ell s}\) \\ \hline \end{tabular}
\end{table}
Table 2: Vertexes involving muon anomalous magnetic dipole moment in the mass eigenstate basis.
\[(\bar{e}^{\prime}_{R}e^{{M^{\prime}}}_{L}H^{0}_{3}) -i\frac{g}{2}Y^{ML}_{H^{0}_{3}}=-i\frac{g}{2M_{W}}\left[\frac{s_{M} }{c_{M}}m^{d}_{\ell}\tilde{R}_{\ell}+\frac{s_{M}}{c_{M}}\tilde{R}_{\ell}m^{d}_{ \ell M}\right], \tag{20}\] \[(\bar{e}^{\prime}_{L}e^{{M^{\prime}}}_{R}H^{0}_{3}) -i\frac{g}{2}Y^{MR}_{H^{0}_{3}}=-i\frac{g}{2M_{W}}\left[-\frac{s_ {M}}{c_{M}}m^{d}_{\ell}\tilde{R}_{\ell}-\frac{s_{M}}{c_{M}}\tilde{R}_{\ell}m^{ d}_{\ell M}\right],\] (21) \[(\bar{e}^{\prime}_{R}e^{{M^{\prime}}}_{L}H^{0}_{3M}) -i\frac{g}{2}Y^{ML}_{H^{0}_{3M}}=-i\frac{g}{2M_{W}}\left[-\frac{s _{2M}}{s_{2}}m^{d}_{\ell}\tilde{R}_{\ell}-\frac{s_{2}}{s_{2M}}\tilde{R}_{\ell} m^{d}_{\ell M}\right],\] (22) \[(\bar{e}^{\prime}_{L}e^{{M^{\prime}}}_{R}H^{0}_{3M}) -i\frac{g}{2}Y^{MR}_{H^{0}_{3M}}=-i\frac{g}{2M_{W}}\left[\frac{s _{2M}}{s_{2}}m^{d}_{\ell}\tilde{R}_{\ell}+\frac{s_{2}}{s_{2M}}\tilde{R}_{\ell} m^{d}_{\ell M}\right]. \tag{23}\]
Here one has used the notations \(U_{PMNS}=U^{\dagger}_{\ell}U_{\nu}\), which is the famous PMNS mixing matrix, \(U^{M}_{PMNS}=U^{M\dagger}_{\ell}U^{M}_{\nu}\) and \(\tilde{R}_{\ell(\nu)}=U^{\dagger}_{\ell}R_{\ell(\nu)}U^{M}_{\ell}\).
## 3 Phenomenology of muon anomalous magnetic dipole moment
### One-loop form-factors and muon anomalous magnetic dipole moment
In the considered model, the muon anomalous magnetic moment and the \(\mu\to e+\gamma\) decay rate are related to loop integral factors. In previous research, one-loop diagrams with various kinds of internal lines have been calculated [27, 28, 29, 30, 31]. The current work takes into account the effective charged lepton flavor-changing operators arising at one loop, where the virtual particles running inside are either physical Higgs scalars (singly charged and neutral ones, both heavy and light) or W gauge bosons, accompanied by the relevant leptons. The final result can be summarized as follows:
\[{\cal L}_{eff}=-4\frac{eG_{F}}{\sqrt{2}}\left[(m_{\ell}A_{R}+m_{ \ell^{\prime}}A_{L})\bar{\ell^{\prime}}\sigma_{\mu\nu}P_{R}\ell+(m_{\ell}A_{L }+m_{\ell^{\prime}}A_{R})\bar{\ell^{\prime}}\sigma_{\mu\nu}P_{L}\ell\right]F^{ \mu\nu}. \tag{24}\]
Here \(A_{L,R}\) are the form factors:
\[A_{R}= -\sum_{H^{Q},k}\frac{M_{W}^{2}}{64\pi^{2}M_{H}^{2}}\left[\left(Y_ {H}^{L}\right)_{\mu k}\left(Y_{H}^{L}\right)^{*}_{ek}G_{H}^{Q}(\lambda_{k})+ \frac{m_{k}}{m_{\mu}}\left(Y_{H}^{R}\right)_{\mu k}\left(Y_{H}^{L}\right)^{*}_ {ek}\times R_{H}^{Q}(\lambda_{k})\right] \tag{25}\] \[+\frac{1}{32\pi^{2}}\sum_{k}\left[\left(U_{W_{\mu}}^{L}\right)_ {\mu k}\left(U_{W_{\mu}}^{L}\right)^{*}_{ek}G_{\gamma}(\lambda_{k})-\left(U_{ W_{\mu}}^{R}\right)_{\mu k}\left(U_{W_{\mu}}^{L}\right)^{*}_{ek}\frac{m_{k}}{m_{ \mu}}R_{\gamma}(\lambda_{k})\right],\]
\[A_{L}= -\sum_{H^{Q},k}\frac{M_{W}^{2}}{64\pi^{2}M_{H}^{2}}\left[\left(Y_ {H}^{R}\right)_{\mu k}\left(Y_{H}^{R}\right)^{*}_{ek}G_{H}^{Q}(\lambda_{k})+ \frac{m_{k}}{m_{\mu}}\left(Y_{H}^{L}\right)_{\mu k}\left(Y_{H}^{R}\right)^{*}_ {ek}R_{H}^{Q}(\lambda_{k})\right] \tag{26}\] \[+\frac{1}{32\pi^{2}}\sum_{k}\left[\left(U_{W_{\mu}}^{R}\right)_{ \mu k}\left(U_{W_{\mu}}^{R}\right)^{*}_{ek}G_{\gamma}(\lambda_{k})-\left(U_{W_{ \mu}}^{L}\right)_{\mu k}\left(U_{W_{\mu}}^{R}\right)^{*}_{ek}\frac{m_{k}}{m_{ \mu}}R_{\gamma}(\lambda_{k})\right],\]
where \(H^{Q}=\phi_{S}^{0},\tilde{H}_{i}^{0}\) (\(i=1,2,3\)), \(H_{3}^{0},H_{3M}^{0}\), \(H_{3}^{+},H_{3M}^{+}\), and \(m_{k}\) are the masses of associated fermions that accompany with either \(H^{Q}\) or \(W_{\mu}\) in the loops. The functions \(G_{H}^{Q}(x)\), \(R_{H}^{Q}(x)\), \(G_{\gamma}(x)\), and \(R_{\gamma}(x)\) appearing in eqs. (25) and (26) are defined as:
\[G_{H}^{Q}(x) = -\frac{(3Q-1)x^{2}+5x-3Q+2}{12(x-1)^{3}}+\frac{1}{2}\frac{x(Qx-Q+1 )}{2(x-1)^{4}}\log(x), \tag{27}\] \[R_{H}^{Q}(x) = \frac{(2Q-1)x^{2}-4(Q-1)x+2Q-3}{2(x-1)^{3}}-\frac{Qx-(Q-1)}{(x-1)^ {3}}\log(x),\] (28) \[G_{\gamma}(x) = \frac{10-43x+78x^{2}-49x^{3}+4x^{4}+18x^{3}\log(x)}{12(x-1)^{4}},\] (29) \[R_{\gamma}(x) = -\frac{x^{2}+x-8}{2(x-1)^{2}}+\frac{3x(x-2)}{(x-1)^{3}}\log(x), \tag{30}\]
where \(\lambda_{k}=m_{k}^{2}/M_{W_{\mu}(H^{Q})}^{2}\) has been denoted.
Note that the functions \(G_{H}^{Q}(x)\), \(R_{H}^{Q}(x)\), \(G_{\gamma}(x)\), and \(R_{\gamma}(x)\) introduced in the above equations are valid for \(x\) varying in the interval \([0,+\infty)\), and take finite values at the special points \(x=0,\ 1\) and when \(x\) tends to infinity. Compared with the original expression introduced in an earlier publication [30], \(G_{\gamma}(x)\) has been divided by 4 to be consistent with the factor \(1/(32\pi^{2})\) in the definitions of \(A_{L}\) and \(A_{R}\).
The expression for muon anomalous magnetic dipole moment can be easily extracted from the effective Lagrangian (24), which arrives at
\[\Delta a_{\mu}=\frac{4\pi\alpha}{\sin^{2}\theta_{w}}\frac{m_{\mu}^{2}}{M_{W}^ {2}}(A_{L}+A_{R}), \tag{31}\]
where \(\alpha\simeq 1/137\) is the fine-structure constant. Note that formula (31) should not include the contributions of the light neutrino and \(W_{\mu}\) loops, which have already been taken into account in the standard model.
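For the numerical work that follows, these loop functions can be evaluated directly. The Python sketch below transcribes the gauge-boson functions of Eqs. (29)-(30) and the prefactor of Eq. (31); the scalar functions of Eqs. (27)-(28) can be coded analogously, and the default input values (\(\alpha\), \(\sin^{2}\theta_{w}\), \(m_{\mu}\), \(M_{W}\)) are standard numbers quoted here only for illustration.

```python
import numpy as np

def G_gamma(x):
    """Eq. (29): loop function for the W-boson channel."""
    return (10 - 43*x + 78*x**2 - 49*x**3 + 4*x**4
            + 18*x**3*np.log(x)) / (12*(x - 1)**4)

def R_gamma(x):
    """Eq. (30): loop function multiplying the chirally enhanced (m_k/m_mu) term."""
    return -(x**2 + x - 8) / (2*(x - 1)**2) + 3*x*(x - 2)*np.log(x) / (x - 1)**3

def delta_a_mu(A_L, A_R, alpha=1/137.0, sin2_thw=0.231,
               m_mu=0.10566, M_W=80.379):
    """Eq. (31): assemble Delta a_mu from the form factors A_L and A_R (GeV units)."""
    return (4*np.pi*alpha/sin2_thw) * (m_mu**2/M_W**2) * (A_L + A_R)

# Analytically these functions have finite limits at x -> 0 and x -> 1
# (e.g. G_gamma -> 5/6 as x -> 0); those points should be handled by their
# limits rather than by direct evaluation of the formulas above.
```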
### Numerical analysis of muon anomalous magnetic dipole moment
In this section, we perform a numerical analysis of the muon anomalous magnetic dipole moment expressed by (31), using current experimental data. To better understand the role of each kind of diagram, and for convenience, we separately consider the contributions of one-loop diagrams with a virtual \(W\) gauge boson, neutral and singly charged Higgs scalars to the quantity. Moreover, for simplicity in the further numerical discussions, we assume that the three heavy neutrinos are degenerate in mass, denoted as \(m_{\chi}^{M}\). Similarly, we make the same assumption for the three mirror charged lepton masses \(m_{\ell}^{M}\).
Before performing a detailed discussion, let us make a rough estimate of the magnitudes of \(R_{\nu(\ell)}\), which play significant roles in the phenomenology of lepton flavour violation and of the muon anomalous magnetic dipole moment in this model. Starting from the light neutrino mass matrix, we can easily derive that
\[\tilde{m}_{\nu}=\frac{(m_{\nu}^{D})^{2}}{M_{R}}\sim 10^{-10}\ {\rm GeV}\Rightarrow R_{ \nu}=\frac{m_{\nu}^{D}}{M_{R}}\sim 10^{-5}\sqrt{\frac{1{\rm GeV}}{M_{R}}}. \tag{32}\]
For \(M_{R}\sim 100\) GeV, \(|R_{\nu}|\) has a value of order \(10^{-6}\). The same magnitude \(|R_{\ell}|\sim 10^{-6}\) is obtained analogously if the mirror charged lepton mass matrix is assumed not to be larger than the EW scale. Note that it is also reasonable to estimate \(|\tilde{R}_{\ell}|=|U_{\ell}^{\dagger}R_{\ell}U_{\ell}^{M}|\), as well as \(|\tilde{R}_{\nu}|=|U_{\ell}^{\dagger}R_{\nu}U_{\ell}^{M}|\), to be of the same order as \(|R_{\ell(\nu)}|\sim 10^{-6}\), since the basis transformation matrices \(U_{\ell}\) and \(U_{\ell}^{M}\) are normalized.
Before carrying on with discussions that are specific to the current model, let us re-obtain the contribution of the light active neutrino and W-boson one-loop diagrams by applying Eq. (31) to the corresponding interaction couplings, while keeping in mind the unitarity of the PMNS matrix. The outcome is
\[\Delta a_{\mu}^{\rm SM}(\chi_{L})=\frac{\alpha}{8\pi\sin^{2}\theta_{w}}\frac{m _{\mu}^{2}}{M_{W}^{2}}G_{\gamma}(\lambda_{\chi_{L}})=\frac{G_{F}m_{\mu}^{2}}{4 \sqrt{2}\pi^{2}}G_{\gamma}(\lambda_{\chi_{L}})\simeq\frac{G_{F}m_{\mu}^{2}}{8 \sqrt{2}\pi^{2}}\left(\frac{5}{3}+O(\lambda_{\chi_{L}})\right), \tag{33}\]
which entirely coincides with the result given in the PDG book [32].
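For orientation, the leading term of Eq. (33) can be evaluated numerically; the input values of \(G_{F}\) and \(m_{\mu}\) below are the usual ones and are quoted here only for illustration.

```python
import numpy as np

G_F = 1.1664e-5      # GeV^-2
m_mu = 0.10566       # GeV
# leading term of Eq. (33): (G_F m_mu^2 / (8 sqrt(2) pi^2)) * 5/3
a_SM_W_nu = G_F * m_mu**2 / (8*np.sqrt(2)*np.pi**2) * 5/3
print(a_SM_W_nu)     # ~ 1.9e-9, the light-neutrino + W one-loop piece
```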
Other contributions to the magnetic dipole moment from new physics under the considered scenario that involve light neutrino and W-boson interactions are very small and therefore negligible. This fact is easy to see by looking at these three contributions, which are the last term in (25) and the two last terms in (26). The interference terms, which contain the ratio \(m_{k}/m_{\mu}\), are strongly suppressed by this tiny ratio (of order \(10^{-9}\) for \(m_{k}<0.1\) eV, the light active neutrino mass, and \(m_{\mu}=106\) MeV); the remaining term gets a tiny value because it is proportional to \(U_{W_{\mu}}^{R\dagger}U_{W_{\mu}}^{R}\sim\tilde{R}_{\nu}^{*}\tilde{R}_{\nu}^{T}\ll U_{W_{\mu}}^{L\dagger}U_{W_{\mu}}^{L}\).
In Figure 1, we show the correlation between \(\left(U_{W}^{ML\dagger}U_{W}^{ML}\right)_{\mu\mu}\) and the heavy neutrino mass \(m_{\chi}^{M}\) when the muon anomalous magnetic dipole moment is set to its current experimental
Figure 1: The correlation between \(\left(U_{W}^{ML\dagger}U_{W}^{ML}\right)_{\mu\mu}\) and heavy neutrino mass \(m_{\chi}^{M}\) when muon anomalous magnetic dipole moment \(\Delta a_{\mu}\) is set at its current best fit value; the channel of virtual W-boson and heavy neutrinos.
best-fit value \(\Delta a_{\mu}=251\times 10^{-11}\), for the channels with the participation of a virtual W boson and heavy neutrinos. The figure shows that the channel contribution is significant only if \(\left(U_{W}^{ML\dagger}U_{W}^{ML}\right)_{\mu\mu}\) has a magnitude of about 0.1 or larger; however, the real value is extremely tiny since \(U_{W}^{ML\dagger}U_{W}^{ML}\sim\tilde{R}_{\nu}^{\dagger}\tilde{R}_{\nu}\sim 10^{-12}\).
We continue next with the contributions of one-loop diagrams with a virtual singly charged Higgs \(H^{-}\). In contrast to the previously considered cases, the interference terms are enhanced by a factor \(m_{k}/m_{\mu}\sim 1000\), where \(m_{k}\sim 100\) GeV and \(m_{\mu}=106\) MeV, and therefore strongly dominate over the others. To gain a more intuitive understanding, we present the correlations between the relevant Yukawa couplings and the masses involved in the channels in which the virtual singly charged Higgs takes part in the loops, with the muon anomalous dipole moment fixed at the best-fit value. These correlations are presented for the cases in which only the first terms of (25) and (26) are taken into account (Fig. 2) and in which all terms are considered (left panel, Fig. 3). The absolute values of the Yukawa couplings obtained in Fig. 3 are about three orders of magnitude smaller than those in Fig. 2, which is certainly consistent with the remarks made in earlier parts. The negative sign on the horizontal axis of Fig. 3 means that the interference terms, and thus the channel, would give subtractions to the muon anomalous dipole moment if the Yukawa couplings involved are positive.
Ignoring the negative sign, the smallest absolute values of the Yukawa couplings, which are easily seen to occur at \(m_{H^{-}}=70\) GeV, obtained from the left panel of Fig. 3, are
\[|((Y_{H^{-}}^{ML})^{\dagger}Y_{H^{-}}^{MR}+(Y_{H^{-}}^{MR})^{\dagger}Y_{H^{-}}^ {ML})_{\mu\mu}|\simeq 1.30\times 10^{-4}(1.80\times 10^{-4}), \tag{34}\]
for \(m_{\chi_{L}^{M}}=80\) (200) GeV, respectively. These results apply to both cases, \(H_{3}^{-}\) and \(H_{3M}^{-}\). The magnitude of \(|((Y_{H^{-}}^{ML})^{\dagger}Y_{H^{-}}^{MR}+(Y_{H^{-}}^{MR})^{\dagger}Y_{H^{-}}^{ML})_{\mu\mu}|\) can, in fact, be estimated based on the model scheme; supposing that the mirror charged lepton masses are about 100 GeV, the calculation implies
\[|((Y_{H^{-}}^{ML})^{\dagger}Y_{H^{-}}^{MR}+(Y_{H^{-}}^{MR})^{\dagger}Y_{H^{-}} ^{ML})_{\mu\mu}|\sim 10^{-12}\times\left(\frac{s_{M}}{s_{2}c_{M}^{2}}\right) \frac{6m_{\mu}m_{\ell}^{M}}{M_{W}^{2}}\sim 7.0\times 10^{-15}\left(\frac{s_{M}}{s_{ 2}c_{M}^{2}}\right), \tag{35}\]
Figure 2: \((Y_{H^{-}}^{\dagger}Y_{H^{-}})_{\mu\mu}\) as function of physical singly charged Higgs scalar mass at some specific values of mirror neutrino masses; if only left or right sector is taken into account, \(\Delta a_{\mu}\) is set at the present best fit value.
for the case of \(H_{3M}^{-}\), and \(\sim 7.0\times 10^{-15}\left(\frac{s_{M}}{c_{M}}\right)^{2}\) if the negatively charged scalar participating in the loops is \(H_{3}^{-}\). Thus the real values are extremely tiny (\(<10^{-8}\)), even if the mixing-angle factors \(s_{2}\) and \(c_{M}\) are as small as 0.01. The above analysis points to the fact that the contributions of the singly charged Higgs scalar channels to the muon anomalous dipole moment are too small to explain the experimental results.
The correlations between the Yukawa couplings involving the heavy neutral Higgs scalars (which are \(\tilde{H}_{i}^{0}\) (i=1,2,3), \(H_{3}^{0}\) and \(H_{3M}^{0}\)) and either the Higgs or the mirror charged lepton masses are shown in the left (or right) panel of Fig. 4, respectively. The discussion of the light Higgs channel will be presented in a later, separate part due to the enormous difference in mass hierarchies. As in the previously considered case of the \(H^{-}\) channels, the contributions of the diagrams involving neutral scalars (including both light and heavy ones) are dominated by the mixing terms, owing to the heaviness of the mirror charged lepton masses. To explain the muon anomalous dipole moment, the magnitude of \(2(Y_{H^{0}}^{ML\dagger}Y_{H^{0}}^{MR})_{\mu\mu}\) requires a value within the \(10^{-4}-10^{-3}\) range, which slightly increases with increasing neutral Higgs scalar mass (Fig. 4, left
Figure 4: The correlations between Yukawa couplings and : i, Heavy neutral Higgs scalar masses (left-panel); ii, Mirror charged lepton masses (right-panel) for diagrams, whose particles running inside loops are heavy physical scalars and mirror charged leptons, \(\Delta a_{\mu}=251\times 10^{-11}\) is fixed.
Figure 3: The correlations between Yukawa couplings and : i, Singly charged Higgs mass (left-panel) for \(m_{\chi_{L}^{M}}=80\) (200) GeV, blue (red) lines; ii, Heavy neutrino masses (right-panel) for \(m_{H^{-}}=70\) (300) GeV, blue (red) lines, when \(\Delta a_{\mu}=251\times 10^{-11}\) is fixed.
panel). At the initial points of the lines, corresponding to \(m_{H^{0}}\simeq 50\)GeV, we have
\[2(Y_{H^{0}}^{ML\dagger}Y_{H^{0}}^{MR})_{\mu\mu}\simeq 6.21\times 10^{-5}(1.33\times 1 0^{-4}), \tag{36}\]
for \(m_{\ell}^{M}=80\) (200) GeV, respectively. Carrying on with the same strategy as in the earlier part, the theoretical estimation can be performed using Eqs. (18) to (23). The result is
\[2(Y_{H^{0}}^{ML\dagger}Y_{H^{0}}^{MR})_{\mu\mu}\sim 2\alpha^{2}\frac{\left(\tilde{R }_{\ell}^{\dagger}(m_{\ell M}^{d})^{2}\tilde{R}_{\ell}\right)_{\mu\mu}}{M_{W}^ {2}}\sim 9.38\alpha^{2}\times 10^{-12}, \tag{37}\]
where the mirror charged lepton masses are taken to be about 100 GeV, and \(\alpha\) denotes \(\frac{\alpha_{i}}{s_{2}}\), \(\frac{s_{M}}{c_{M}}\) or \(\frac{s_{2}}{s_{2M}}\), corresponding to \(\tilde{H}_{i}^{0}\) (i=1,2,3), \(H_{3}^{0}\) or \(H_{3M}^{0}\), respectively. Note that to obtain Eq. (37), the first terms of the Yukawa couplings defined in (18) to (23), which contain \(m_{\ell}^{d}\) and are thus subdominant compared to the second ones with \(m_{\ell M}^{d}\), are reasonably neglected. Equation (37) also means that the currently considered channels might provide contributions to the muon anomalous magnetic dipole moment about three orders of magnitude higher than those of the singly charged scalars. In fact, the real values of the mirror charged lepton masses can be larger, of the order of several hundred GeV. For instance, if \(m_{\ell}^{M}=500\) GeV is taken, \(2(Y_{H^{0}}^{ML\dagger}Y_{H^{0}}^{MR})_{\mu\mu}\sim 2.34\alpha^{2}\times 10^{-10}\sim 2.34\times 10^{-6}\) for \(\alpha=100\), which occurs at \(s_{2}=0.01\), \(c_{M}=0.01\), or \(s_{2M}=0.01\), corresponding to the case of \(\tilde{H}_{i}^{0}\) (i=1,2,3), \(H_{3}^{0}\) or \(H_{3M}^{0}\), respectively. Therefore, the contributions of the heavy neutral Higgs scalar channels are not able to explain the muon anomalous magnetic dipole moment, but they might provide sizable corrections.
The same contents as in Fig. 4 are presented in Fig. 5, in which the heavy neutral Higgs masses are replaced by those of the light one, at a lower scale from keV to the order of GeV. At a given value of \(m_{\ell}^{M}\), the magnitude of \(2((Y_{\phi_{s}^{0}}^{L})^{\dagger}Y_{\phi_{s}^{0}}^{R})_{\mu\mu}\) does not change with increasing light Higgs scalar mass until about 10 GeV, and then slowly increases. From the left panel of Fig. 5, we easily obtain from the constant lines
\[2((Y_{\phi_{0}^{0}}^{L})^{\dagger}Y_{\phi_{s}^{0}}^{R})_{\mu\mu}\simeq 4.97 \times 10^{-5}(1.24\times 10^{-4}), \tag{38}\]
corresponding to \(m_{\ell}^{M}=80\) (200) GeV, respectively. After a simple computation, the result leads to the following relation for the Yukawa coupling matrix
\[|\tilde{g}_{\ell s}^{\dagger}\tilde{g}_{\ell s}|_{\mu\mu}\simeq 1.56\times 10^{-5 }(3.88\times 10^{-5}), \tag{39}\]
Figure 5: The same content as Fig.4, for the case of diagrams with loops formed by light Higgs scalar and mirror charged leptons.
which are five orders of magnitude larger than the upper constraints obtained from the current experimental bound on the \(\mu\to e\gamma\) decay [27]1
Footnote 1: Here we have recast Eq. (51) of [27] into a form similar to (39) for more convenient comparison.
\[|\tilde{g}^{\dagger}_{\ell s}\tilde{g}_{\ell s}|_{\mu e}\simeq 5.29\times 10^{-10}(1. 30\times 10^{-9}). \tag{40}\]
The above outcome does not mean that the contributions of the light neutral Higgs scalar channel are too small to be taken into account. However, Eqs. (39) and (40) can both be concurrently fulfilled if \(\tilde{g}_{\ell s}\) is proportional to a matrix that is close to unitary form. The measurement of the muon anomalous magnetic dipole moment, therefore, can be used to determine the magnitude of \(|\tilde{g}_{\ell s}^{\dagger}\tilde{g}_{\ell s}|_{\mu\mu}\).
## 4 Conclusion
Recent experimental measurements of the muon magnetic dipole moment show substantial discrepancies with the standard model predictions. These differences might be understood in scenarios of physics beyond the standard model with new particles and interactions. In this work, we have derived algebraic formulas and performed a numerical analysis of the muon anomalous magnetic dipole moment at the one-loop approximation in an extended model with mirror symmetry, in which the light neutrino masses are generated by the type-I see-saw at the low scale of the electroweak interactions. We have shown that the contributions provided by the channels with neutrinos and either W bosons or singly charged Higgs scalars are too small to be taken into account. Moreover, the channel of heavy neutrinos and the singly charged Higgs also gives a contribution opposite in sign to that of the rest. For the case in which the particles running inside the loops are heavy neutral Higgs scalars and mirror charged leptons, although the contribution of this channel is not able to explain the experimental results, it might provide a sizable amount of correction to the muon anomalous magnetic dipole moment if at least one of the quantities \(s_{2}\), \(c_{M}\), or \(s_{2M}\) has a value as small as 0.01 or less. As for the most promising case, the channel involving the light neutral Higgs scalar might give an excellent explanation of the muon anomaly problem. In return, the muon anomalous magnetic moment measurement would be an important experiment for determining the magnitude of \(|\tilde{g}^{\dagger}_{\ell s}\tilde{g}_{\ell s}|_{\mu\mu}\) in the scenario of the model under consideration. For instance, if \(\Delta a_{\mu}=251\times 10^{-11}\), the current experimental best-fit value, we can then infer \(|\tilde{g}^{\dagger}_{\ell s}\tilde{g}_{\ell s}|_{\mu\mu}\simeq 1.56\times 10^{-5}(3.88\times 10^{-5})\) for \(m_{\ell}^{M}=80\) (200) GeV, respectively.
## Acknowledgments
This research is funded by Vietnam National Foundation for Science and Technology Development (NAFOSTED) under grant number 103.01-2019.307.
|
2306.05348 | Athermal quasistatic cavitation in amorphous solids: effect of random
pinning | Amorphous solids are known to fail catastrophically via fracture, wherein
cavitation at nano-metric scales is known to play a significant role.
Micro-alloying via inclusions is often used as a means to increase the fracture
toughness of amorphous solids. Modeling such inclusions as randomly pinned
particles that move only affinely and do not participate in plastic relaxation,
we study how the pinning influences the process of cavitation-driven fracture
in an amorphous solid. Using extensive numerical simulations and probing in the
athermal quasistatic limit, we show that just by pinning a very small fraction
of particles, the tensile strength is increased and also the cavitation is
delayed. Further, the cavitation that is expected to be spatially heterogeneous
becomes spatially homogeneous by forming a large number of small cavities
instead of a dominant cavity. | Umang A. Dattani, Smarajit Karmakar, Pinaki Chaudhuri | 2023-06-08T16:51:52Z | http://arxiv.org/abs/2306.05348v1 | # Athermal quasistatic cavitation in amorphous solids: effect of random pinning
###### Abstract
Amorphous solids are known to fail catastrophically via fracture, wherein cavitation at nano-metric scales is known to play a significant role. Micro-alloying via inclusions is often used as a means to increase the fracture toughness of amorphous solids. Modeling such inclusions as randomly pinned particles that move only affinely and do not participate in plastic relaxation, we study how the pinning influences the process of cavitation-driven fracture in an amorphous solid. Using extensive numerical simulations and probing in the athermal quasistatic limit, we show that just by pinning a very small fraction of particles, the tensile strength is increased and also the cavitation is delayed. Further, the cavitation that is expected to be spatially heterogeneous becomes spatially homogeneous by forming a large number of small cavities instead of a dominant cavity.
## I Introduction
The mechanical properties of amorphous solids are utilized in diverse applications in industries and our daily lives [1; 2]. Therefore, mechanical failure of these materials is an area of concern. Hence, understanding the physical processes that lead to the failure of these structurally disordered solids is a domain of current active research, with the primary goal being to figure out design pathways that can sustain against such failures [3; 4]. Cavitation, i.e., the formation of nano-cavities within the solid, has been identified as a precursor to eventual failure via fracture [5; 6]. In experiments, the fracture in amorphous solids has been shown to propagate via the coalescence of cavities in the solid along the direction of the crack [7], which has motivated numerical investigations to analyze the mechanisms underlying the cavitation process [8; 9; 10; 11; 12]. Recently, it has been demonstrated that the plasticity associated with cavitation shows the same universal characteristic elastoplastic response as amorphous solids undergoing failure, both via cavitation under uniform expansion [13] and on exploring a combination of loading scenarios where cavitation can occur [14]. In Ref. [14], we demonstrated that a combination of deformation processes, for example, uniform expansion followed by oscillatory shear of a certain amplitude, can enhance the formation of cavities at densities that are much higher than the cavitation density observed in uniform expansion only. These results suggest that in a natural deformation process in which various forms of deformations will be coupled together, the material can show unpredictable failure behavior, which will be difficult to control. Thus a systematic study of how such a cavity-dominated failure process in an amorphous solid can be mitigated effectively will be of significant interest for practical applications.
In the quest of making glasses with higher fracture toughness, the seeding of the amorphous solids with micro-alloyed inclusions has gained a lot of popularity in the last few years [15; 16; 17; 18; 19; 20]. In the numerical modeling, a minimal model describes these micro-alloyed inclusions as pinned/frozen particles [21; 22; 23; 24]. In the context of probing their mechanical behavior, it is assumed that they only move affinely during the deformation and do not actually undergo non-affine motion. Using such a model system, systematic studies investigating the response of a pinned amorphous solid to a shear-deformation have explored the microscopic theories [21], yielding mechanisms [22], suppression of shear-banding [23], development of intrinsic length-scales [22; 24] etc. As these studies focus on the shear-deformation of high-density amorphous solids, they do not access the region where cavitation instabilities occur, i.e., under axial tension [10; 13; 14]. Due to the importance of cavitation instabilities in fracture, it becomes important to study the response of an amorphous solid under deformation modes where cavitation can occur.
In this work, we, therefore, study the response of amorphous solids with micro-alloyed inclusions, modeled as pinned particles, to uniform expansive deformation under athermal quasistatic conditions. We find that, even in the presence of a very small fraction of pinned particles, cavitation becomes more spatially homogeneous and the cavitation point shifts to lower densities and lower pressures, implying a higher load-bearing capacity of the pinned solid. The sharp brittle-yielding-like transition seen in unpinned solids becomes more gradual, with significantly smaller sizes of plastic events, due to which the system-size dependence becomes very weak. On tracking the eigenvalues of the Hessian near cavitation instabilities, we find that, on the potential energy landscape, cavitation occurs via a saddle-node bifurcation, and the average spatial decay of displacements in the plastic eigenmodes away from the plastic center reveals a length-scale in pinned solids that parallels the length-scale set by the average distance between two pinned sites. The presence of a length-scale explains the absence of system-size effects and the drastic decrease in the mean sizes of plastic events. Our findings thus reveal how micro-alloyed inclusions can suppress cavitation and how the presence of a length-scale of plasticity controls the deformation response of such a pinned solid.
The manuscript is organised as follows. After initial introductory discussion in Section I, we provide in Section II a brief overview of the model amorphous solid that we consider for our study and the methodology of our simulations as well as analysis. In Section III, we discuss the detailed findings of our investigations regarding the presence of random pinning and its influence on the cavitation process. Finally, we provide a concluding discussion in Section IV.
## II Model and methods
### Model details and initial states
We use the well-characterized two-dimensional model consisting of two species labelled \(A\) and \(B\) at \(65:35\) concentration ratio, interacting via pairwise Lennard-Jones potential. The interaction parameters are - \(\sigma_{AA}=1.0\), \(\sigma_{BB}=0.88\), \(\sigma_{AB}=0.8\), \(\epsilon_{AA}=1.0\), \(\epsilon_{BB}=0.5\), \(\epsilon_{AB}=1.5\)[25]. With this model, we smoothen the interaction potential up to first two derivatives. The form of the interactions between \(i^{th}\) and \(j^{th}\) particle becomes:
\[\phi(r_{ij})=4\epsilon_{\alpha\beta}\left[\left(\frac{\sigma_{\alpha\beta}}{ r_{ij}}\right)^{12}-\left(\frac{\sigma_{\alpha\beta}}{r_{ij}}\right)^{6} \right]+u(r_{ij}) \tag{1}\]
where,
\[u(r_{ij})=C_{0}+C_{2}\left(\frac{r_{ij}}{\sigma_{\alpha\beta}}\right)^{2}+C_{ 4}\left(\frac{r_{ij}}{\sigma_{\alpha\beta}}\right)^{4} \tag{2}\]
Here, \(\alpha\) and \(\beta\) correspond to either of the labels \(A\) or \(B\). The constants \(C_{0}\), \(C_{2}\) and \(C_{4}\) are determined by requiring the potential and its first two derivatives to be zero at the cutoff \(r=2.5\sigma_{\alpha\beta}\). The simulations have been performed for a variety of system sizes ranging from \(N=10^{3}\) to \(N=10^{5}\).
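These three conditions form a small linear system for \(C_{0}\), \(C_{2}\), \(C_{4}\); the sketch below (our own illustrative NumPy code, not the production implementation) solves it for a given pair type.

```python
import numpy as np

def smoothing_coeffs(eps, sigma, rc_factor=2.5):
    """C0, C2, C4 such that the smoothed LJ potential of Eqs. (1)-(2) and its
    first two derivatives vanish at the cutoff rc = rc_factor * sigma."""
    rc = rc_factor * sigma
    s = rc / sigma
    # bare LJ part and its first two radial derivatives at the cutoff
    v   = 4*eps*((sigma/rc)**12 - (sigma/rc)**6)
    vp  = 4*eps*(-12*sigma**12/rc**13 + 6*sigma**6/rc**7)
    vpp = 4*eps*(156*sigma**12/rc**14 - 42*sigma**6/rc**8)
    # u(r) = C0 + C2*(r/sigma)**2 + C4*(r/sigma)**4 and its derivatives at rc
    A = np.array([[1.0, s**2,         s**4],
                  [0.0, 2*s/sigma,    4*s**3/sigma],
                  [0.0, 2/sigma**2,  12*s**2/sigma**2]])
    b = -np.array([v, vp, vpp])
    return np.linalg.solve(A, b)   # C0, C2, C4

# e.g. the AA pair: smoothing_coeffs(1.0, 1.0)
```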
### Initial states
To prepare initial states for our study, we first equilibrate the system at \(T=1.0\) (in LJ units), which is in the liquid regime, followed by cooling at a constant rate of \(10^{-4}\) per MD timestep to a final temperature of \(T=0.01\)[22], which is in the glassy regime. The corresponding glass transition temperature of the model system is at \(T=0.44\)[25]. The athermal states used in our study are generated by obtaining inherent structure states corresponding to the glassy configurations at \(T=0.01\), via conjugate gradient (CG) minimization[26].
### Athermal Quasistatic Expansion
Starting from a spatially homogeneous high density state (\(\rho=1.2\) for KABLJ) having positive barostatic pressure, we study the athermal quasi-static response (i.e. in the absence of any thermal effects and in the limit of vanishing driving rates) of this system to isotropic expansion[10]. In each expansion step, a constant volume strain is applied on the system by rescaling the length of the box by a factor \((1+\epsilon)\) along with affine transformation of particle coordinates, followed by minimization of the energy of this strained configuration using the conjugate gradient algorithm[26]. The values of \(\epsilon\) are varied from \(\epsilon=10^{-4}\) to \(\epsilon=10^{-9}\). The AQE simulations are done using LAMMPS[27].
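Schematically, one AQE step can be sketched as follows; the `minimize` routine stands in for the conjugate-gradient minimization performed in LAMMPS, and all names here are placeholders.

```python
import numpy as np

def aqe_step(pos, box, eps, minimize):
    """One athermal quasistatic expansion step.

    pos: (N, 2) particle coordinates; box: (2,) box lengths;
    eps: strain increment per step (box lengths rescaled by 1 + eps);
    minimize(pos, box): relaxes the unpinned particles to the nearest energy
    minimum while pinned particles stay at their affinely displaced positions.
    """
    box = box * (1.0 + eps)       # rescale the box
    pos = pos * (1.0 + eps)       # affine transformation of all coordinates
    pos = minimize(pos, box)      # conjugate-gradient (or similar) relaxation
    return pos, box

# expansion run, repeated until the target (low) density is reached:
# while N / np.prod(box) > rho_target:
#     pos, box = aqe_step(pos, box, eps, minimize)
```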
### Pinning
We choose a small fraction of particles, \(c=0.01\) to \(c=0.05\), in the generated solid and freeze their motion. The particles are chosen randomly, as long as no two pinned particles lie within the cutoff of the interaction between them [22]. This helps avoid the scenario in which two close-by pinned sites would increase the energy of the system. The pinned particles only move affinely when the strain is applied. During the energy minimization, these pinned particles are not allowed to move.
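A minimal sketch of such a selection procedure is given below (illustrative NumPy code; the variable names and the rejection loop are our own, and `r_cut` is taken as the largest pair cutoff).

```python
import numpy as np

def choose_pinned(pos, box, c, r_cut, seed=0):
    """Randomly pick a fraction c of particles to pin, rejecting any candidate
    lying within r_cut of an already pinned site, so that no two pinned
    particles interact directly."""
    rng = np.random.default_rng(seed)
    n_pin = int(c * len(pos))
    pinned = []
    for idx in rng.permutation(len(pos)):
        if len(pinned) == n_pin:
            break
        if pinned:
            d = pos[np.array(pinned)] - pos[idx]
            d -= box * np.round(d / box)          # minimum-image convention
            if np.linalg.norm(d, axis=1).min() < r_cut:
                continue
        pinned.append(idx)
    return np.array(pinned, dtype=int)
```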
### Hessian of potential energy
LAPACKE[28] is used for doing the stability analysis of the local minima states, by computing eigenvalues and eigenvectors of the Hessian matrix \(\mathcal{H}_{ij}^{\alpha\beta}\), which is defined as
\[\mathcal{H}_{ij}^{\alpha\beta}=\frac{\partial^{2}U\left(\left\{\mathbf{r}_{i} \right\}\right)}{\partial r_{i}^{\alpha}\partial r_{j}^{\beta}}, \tag{3}\]
where \(U\left(\left\{\mathbf{r}_{i}\right\}\right)\) is the potential energy of the system and \(\mathbf{r}_{i}\) is the position vector of particle \(i\). The indices \(\alpha,\beta\in\left\{x,y\right\}\) whereas \(i,j\in\left\{1,\ldots,N\right\}\).
If we now consider a system of \(N\) particles, where particles \(i=1,\cdots,m\) are free and particles \(i=m+1,\cdots,N\) are pinned, then the potential energy of such a system can be expressed as
\[U(r)=\frac{1}{2}\left[\sum_{i,j=1;i\neq j}^{m}\phi_{ij}+2\cdot\sum_{i=1}^{m} \sum_{j=m+1}^{N}\phi_{ij}\right], \tag{4}\]
where the first term comes from the interactions between unpinned particles and the second term arises from the interactions between the pinned and the unpinned particles. Note that the term due to interactions between pinned sites is set to zero because of our pinning protocol.
By substituting Eq. (4) in Eq. (3), it can be shown that the first term in the sum of Eq. (4) gives a contribution,
\[H_{\alpha\beta}^{ij}=-\sum_{k\neq i}\left[\left(\frac{\phi_{r}}{(r^{ki})^{3}}-\frac{\phi_{rr}}{(r^{ki})^{2}}\right)r_{\alpha}^{ki}r_{\beta}^{ki}-\delta_{\alpha\beta}\frac{\phi_{r}}{r^{ki}}\right]\left(\delta^{ji}-\delta^{jk}\right). \tag{5}\]
The second term of the sum in Eq.(4) gives,
\[H_{\alpha\beta}^{ij}=-2\sum_{k=1}^{m}\sum_{l=m+1}^{N}\left[\frac{\phi_{r}\,r_{\alpha}^{kl}\,r_{\beta}^{kl}}{(r^{kl})^{3}}-\frac{\phi_{rr}\,r_{\alpha}^{kl}\,r_{\beta}^{kl}}{(r^{kl})^{2}}-\delta_{\alpha\beta}\frac{\phi_{r}}{r^{kl}}\right]\delta^{ij}, \tag{6}\]
where \(\phi_{r}\) and \(\phi_{rr}\) are the first and second derivatives of the pair potential with respect to \(r\), respectively.
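As a cross-check on these analytic expressions, the Hessian restricted to the free particles can also be obtained by finite differences and diagonalized numerically (NumPy's symmetric eigensolvers call LAPACK under the hood); a minimal sketch, assuming an `energy` callable that returns the total potential energy for the flattened free coordinates:

```python
import numpy as np

def numerical_hessian(energy, x0, h=1e-5):
    """Central finite-difference Hessian of `energy` with respect to the
    flattened coordinates of the free (unpinned) particles only."""
    n = len(x0)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (energy(x0 + ei + ej) - energy(x0 + ei - ej)
                       - energy(x0 - ei + ej) + energy(x0 - ei - ej)) / (4.0 * h * h)
            H[j, i] = H[i, j]
    return H

# eigvals = np.linalg.eigvalsh(numerical_hessian(energy, x0))
# lambda_min = eigvals[0]  # for unpinned solids, skip the two translational zero modes
```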
## III Results
### Yielding and spatial ramifications
Upon expanding the amorphous solid isotropically under quasistatic loading, as discussed in previous works [10; 13], the pressure of the solid decreases monotonically, eventually reaching negative values. After a certain threshold, a sharp jump in the pressure accompanies the cavitation of the solid. Upon further expansion, the cavities grow and merge, leading to system-spanning fracture of the solid [13]. Here, we expand the same amorphous solid isotropically but with a small fraction of particles pinned (frozen) and compare it with the case without pinning [10; 13]. As shown in Fig. 1(a), compared to the unpinned solid, the pinned solid, on average, does not show the large pressure jump that is seen around the first cavitation event. The location of the yield point, which is usually marked by a turn in the pressure-density curves, shifts to lower and lower densities with increasing concentration of pinned particles, \(c\), demonstrating an increase in the load-bearing capacity of pinned solids. In the pinned solids (\(c\neq 0\)), the turning of the curve occurs through small pressure jumps that accumulate gradually, leading to a smooth average pressure vs density (\(P-\rho\)) curve, as opposed to the ensemble-averaged trajectories of an unpinned solid (\(c=0\)), which show an abrupt jump. A similar trend is reflected in the per-particle energy vs density (\(U/N-\rho\)) plots in Fig. 1(b): no large energy drop shows up in the pinned solid, unlike the unpinned case, and in fact the energy per particle of the pinned solid keeps increasing with increasing pinning concentration. The nature of critical-like behaviour in the (\(P-\rho\)) or (\(U/N-\rho\)) plots can be studied by measuring the fluctuations of pressure and energy at a given value of density across ensembles, as characterized by the following susceptibilities [13],
\[\chi_{p}(\rho) = N\,\left(\langle P^{2}(\rho)\rangle-\langle P(\rho)\rangle^{2} \right),\quad\text{and} \tag{7}\] \[\chi_{u}(\rho) = (1/N)\,\left(\langle U^{2}(\rho)\rangle-\langle U(\rho)\rangle^{2}\right) \tag{8}\]
and shown in Fig. 1(c) & (d), respectively. The sharp susceptibility peak around the large pressure jump / energy drop seen in the unpinned solid [13] is suppressed by pinning. For higher pinning concentrations, a clear peak around yielding is not seen, implying that the yielding of the solid is more localised and gradual. The gradual nature of cavitation in pinned solids is also echoed in the average size of pressure jumps \(\langle\Delta P\rangle\) and energy drops \(\langle\Delta U\rangle\) encountered during expansion (see Fig. 1(e) & (f)) for different pinning concentrations \(c\). The average size of avalanches, \(\langle\Delta P\rangle\) & \(\langle\Delta U\rangle\), decreases drastically with increasing values of \(c\), implying suppression of large pressure jumps and energy drops.
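A minimal sketch of how these susceptibilities can be evaluated from an ensemble of independent expansion runs interpolated onto a common density grid (array names are illustrative):

```python
import numpy as np

def susceptibilities(P_runs, U_runs, n_particles):
    """chi_p and chi_u of Eqs. (7)-(8) from ensemble fluctuations at fixed density.

    P_runs, U_runs: arrays of shape (n_samples, n_densities) holding the pressure
    and total potential energy of each independent run on a common density grid."""
    chi_p = n_particles * (np.mean(P_runs ** 2, axis=0) - np.mean(P_runs, axis=0) ** 2)
    chi_u = (np.mean(U_runs ** 2, axis=0) - np.mean(U_runs, axis=0) ** 2) / n_particles
    return chi_p, chi_u
```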
To probe the spatial nature of cavitation in pinned solids, we look at the coarse-grained spatial density maps for different values of \(c\) at the same value of density (shown in Fig. 2). They suggest that, for \(c\neq 0\), cavitation occurs gradually at multiple sites in the solid over the course of expansion, instead of the heterogeneous cavitation starting with a single big cavity seen for \(c=0\). The creation of a large number of cavities in the system also explains the increase in energy with expansion seen in Fig. 1(b), as the presence of a large number of particles on the surfaces is expected to increase the energy of pinned solids. So, to summarise, pinning smooths out the sharp, brittle yielding-like cavitation transition seen in amorphous solids without any micro-alloying and causes less heterogeneous cavitation. This scenario is consistent with a previous study on the effect of pinning on the yielding transition under simple shear [22], where pinning made the yielding more spatially homogeneous.
### System-size effects
In Ref. [13], strong system size effects were observed in the \(P-\rho\) and \(\chi_{p}\) curves for athermal quasistatic expansion of the amorphous solid. Hence, it is important to probe the dependence on system size for pinned solids as well. The \(P-\rho\) and \(\chi_{p}-\rho\) plots for different system sizes across the pinning concentrations are shown in the top and bottom panels of Fig. 3, respectively. Unlike the case of unpinned solids, the \(P-\rho\) and \(\chi_{p}-\rho\) curves for pinned solids show little to no dependence on the system size. This occurs due to the emergence of an intrinsic length scale of plasticity \(\xi\ll L\) in these systems due to the imposed random pinning constraints, which is discussed in the subsequent paragraphs.
### Irreversible plastic events on the potential energy landscape
Under athermal quasistatic shear, near a plastic instability, the lowest non-zero eigenvalue (apart from the two zero modes in 2D) of the Hessian matrix of the potential energy, Eq. 3, is known to vanish as the square root of the strain difference from the point of instability [29; 30], i.e., \(\lambda_{min}\sim\sqrt{\gamma_{c}-\gamma}\), where \(\lambda_{min}\) refers to the minimum
Figure 1: For \(N=10^{5}\) and different pinning concentrations, \(c\), as marked: Variation with density \(\rho\) of (a) Pressure \(P\) (b) Energy per particle \(U/N\) (c) Pressure susceptibility \(\chi_{P}\) (d) Energy susceptibility \(\chi_{U}\). Change in (e) average size of pressure jump \(\Delta P\), and (f) average size of energy drops \(\Delta U\), with pinning concentration.
Figure 2: Density field for different pinning concentrations, \(c=0.00\) (a), \(0.01\) (b), \(0.02\) (c), \(0.05\) (d), measured at \(\rho=0.982\).
eigenvalue. This occurs due to a saddle-node bifurcation on the potential energy landscape (PEL), where the local minimum in which the system resides becomes unstable in one direction. One of the possible ways to arrive at such a square-root power law has been discussed in Ref. [31], where it has been shown how the nature of non-affine displacements in amorphous solids near a plastic instability with only \(\lambda_{min}\to 0\) gives rise to the square-root singularity. The vanishing of only the lowest non-zero eigenvalue near the plastic instability ensures that the spatial map of the eigenmode corresponding to the vanishing eigenvalue dominates the displacement field on the approach to such a plastic instability [30]. The same scenario of a square-root singularity with only one vanishing eigenvalue was shown to hold under athermal quasistatic expansion on approach to a cavitation instability as well [13].
In the current context of the athermal quasistatic expansion of pinned solids, we find that the same square-root singularity scenario holds. In Fig. 4(a), we show one such trajectory corresponding to \(c=0.05\) & \(N=2500\). Fig. 4(b) shows the square-root singularity with which the lowest eigenvalue of the Hessian vanishes, \(\lambda_{min}\sim\sqrt{(\rho-\rho_{c})/\rho_{c}}\), on approach to the plastic instabilities at the points marked in Fig. 4(a), where \(\rho_{c}\) is the point at which the plastic instability occurs. Figs. 4(c)-(e) show the eigenmode on the approach to the plastic instability, the displacement field on the approach to the plastic instability, and the displacement field across the pressure drop, respectively, for one of the pressure jumps in Figs. 4(a) & (b). As evident from the vector-field maps, the displacement fields on the approach to instability are predicted by the eigenvector of \(\lambda_{min}\), but the displacement fields across the pressure jump do not have a high overlap with the eigenmode on the approach to the instability. This occurs because of the cascade/avalanche nature of the plastic jump. These avalanches are also more localized spatially due to pinning, unlike those seen in unpinned solids, which can be system spanning [32; 33; 13]. All in all, it is not surprising that the mechanism of plasticity on the PEL is analogous to that of unpinned solids, because pinning only blocks certain pathways of relaxation on the PEL of unpinned solids [34].
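A minimal sketch of extracting \(\rho_{c}\) by fitting the measured \(\lambda_{min}(\rho)\) to the square-root form (assuming the eigenvalues have been collected on the approach to a single instability; the initial guesses are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def sqrt_law(rho, A, rho_c):
    """lambda_min = A * sqrt((rho - rho_c) / rho_c) on approach to the instability."""
    return A * np.sqrt(np.clip((rho - rho_c) / rho_c, 0.0, None))

def fit_instability(rho, lam_min):
    """Fit lambda_min versus density to extract the instability density rho_c."""
    p0 = (1.0, rho.min() - 1e-4)   # crude guess: rho_c lies just below the sampled data
    (A, rho_c), _ = curve_fit(sqrt_law, rho, lam_min, p0=p0, maxfev=10000)
    return A, rho_c
```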
### Spatial decay of plastic modes and a length scale of plasticity
Plasticity under shear deformation is known to occur via localised rearrangements of particles in a shear transformation zone [35; 36; 37]. These localised rearrangements (and their eigenmodes) have a quadrupolar shape [38; 39], and the radial part of the displacements of the medium decays as \(r^{-(d-1)}\), where \(r\) is the distance from the center of rearrangement and \(d\) is the spatial dimension. Even though the rearrangements are local, the displacement fields have a long-range character, i.e., in 2D, they decay as \(1/r\) from the center. The eigenmodes, \(\vec{e}\), corresponding to the vanishing eigenvalue \(\lambda_{min}\) are known to decide the direction of failure on the PEL for both shear and expansion [30; 40; 41; 13].
In the context of pinning, we too look at the radial decay profile of displacements, away from the plastic center [42], of the eigenmode \(\vec{e}\) just before the plastic event, i.e., at a density distance of \(\delta\rho\approx 10^{-7}\) from it, in the pre-yield regime. We choose to study the spatial profiles in the pre-yield regime, where density inhomogeneities (which can interfere with the profile shapes) are largely absent. The decay profiles averaged over 10 plastic events for each pinning concentration are shown in Fig. 5(a). For the unpinned case, as one would expect, the decay profile follows \(e(r)\sim 1/r\). For the pinned solids, interestingly, we find two regimes. For the smaller pinning concentrations, _viz_. \(c=0.01\), \(c=0.02\) and \(c=0.05\), we see that the decay profile fits a screened power-law function \(e(r)\sim\exp(-r/\xi)/r^{\eta}\), with \(\eta\approx 0.42\) for \(c=0.01\) and \(c=0.02\), whereas \(\eta=0.13\) for \(c=0.05\). For the larger pinning concentration, _viz_. \(c=0.08\), we find that the decay profile is well fitted by an exponential form \(e(r)\sim\exp(-r/\xi)\), suggesting a crossover from a screened power-law decay to a purely exponential decay. The scale-dependent exponential part of the decay profile allows us to extract a length scale \(\xi\) from the fit parameters. The values of \(\xi_{fit}\) for different pinning concentrations are shown in red in Fig. 5(b).
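A minimal sketch of such a fit with SciPy (initial guesses are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def screened_power_law(r, a, xi, eta):
    """e(r) ~ a * exp(-r/xi) / r**eta; eta -> 0 recovers a pure exponential, and
    xi -> infinity with eta = 1 recovers the 1/r decay of the unpinned solid."""
    return a * np.exp(-r / xi) / r ** eta

def fit_decay_profile(r, e_r):
    """Extract the plasticity length scale xi (and exponent eta) from the radial
    decay of the eigenmode magnitude around the plastic center."""
    popt, _ = curve_fit(screened_power_law, r, e_r,
                        p0=(e_r[0] * r[0], 5.0, 0.5), maxfev=10000)
    a, xi, eta = popt
    return xi, eta
```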
For a fixed pinning concentration \(c\), one naturally obtains a length scale \(\xi_{pin}=\sqrt{1/(c\cdot\rho)}\), which denotes the average distance between any two pinned sites. Since the eigenmode decay profiles shown in Fig. 5(a) are sampled at different densities along the expansion trajectory, the blue curve in Fig. 5(b) shows \(\xi_{pin}=\sqrt{1/(c\cdot\rho)}\) evaluated at the sampled densities. The length scale extracted from the fits in Fig. 5(a) thus shows reasonable parallels with the length scale introduced by pinning, as seen in Fig. 5(b). The decay of \(\xi_{pin}\) with pinning concentration deviates slightly from an exponent of \(0.5\), primarily because the decay profiles are obtained at different densities for each pinning concentration.
The exponential nature of the decay profiles and the extracted length scales thus suggest that, with increasing pinning concentration, the displacements in the eigenmode become more and more localized. This implies that a plastic event in a pinned solid has reduced spatial consequences, unlike in the unpinned solid, where it has a non-local power-law character. Hence, pinning restricts the size of cascades/avalanches, which are caused by the triggering of multiple such plastic modes at different spatial locations [43]. This is also consistent with the data in Fig. 1(e) & (f), where the magnitude of the drops in pressure/energy decreases with an increase in pinning concentration. The presence of a plasticity length scale in pinned solids also explains why there are negligible system-size effects in the data shown in Fig. 3 as long as \(\xi\ll L\), which is the case for the system sizes that we report, since the length scale \(\xi\) prevents the system from acting as a whole on lengths \(l>\xi\).
Figure 4: (a) Pressure \(P\) vs density \(\rho\) for an expansion trajectory, using \(N=2500\), corresponding to \(c=0.05\). (b) Demonstration of the square-root singularity for the lowest eigenvalue of the Hessian, \(\lambda_{\rm min}\) (appropriately scaled by fit parameter \(A\)), computed at the density points in (a) corresponding to the occurrence of plastic instabilities; \(\rho_{c}\) is the estimated density location of the event in each case. (c) Eigenmode, (d) displacement field just before the drop and (e) displacement fields across the plastic drop, occurring at one such plastic event, near \(\rho\approx 1.03277\). (f)-(j) Evolution of the density field across the expansion trajectory shown in (a).
Figure 3: (Top) Variation of ensemble averaged pressure (\(P\)) with density (\(\rho\)) for different system sizes, as labelled, at \(c=0.00\) (a), \(0.01\) (b), \(0.02\) (c), \(0.05\) (d). (Bottom) Corresponding variation of pressure susceptibility \(\chi_{p}\) with density \(\rho\), in each case.
## IV Summary & Discussion
To summarize, using extensive numerical simulations, we have investigated how random inclusions, at the level of atoms/particles, influence cavitation failure in amorphous solids. Our studies are done in the athermal quasistatic limit, i.e., where effects originating from thermal fluctuations or finite deformation rates are absent. As in previous studies, we model the inclusions as randomly pinned particles that only undergo affine motion during the mechanical deformation process but do not undergo any non-affine motion during the plastic relaxation.
We demonstrate how an amorphous solid with a small concentration of micro-inclusions, intended to make the solid stronger, can delay cavitation and increase the tensile strength. The glass obtained with the inclusions is thus not only stronger but also less susceptible to catastrophic fracture failure via cavitation. From a thermodynamic point of view, the delay in cavitation and, thus, the increase in tensile strength is likely a result of a change in the densities of the coexisting solid and gas phases in the temperature-density phase diagram [44; 45] of the amorphous solid. The sharp yielding-like cavitation transition observed in amorphous solids [10; 13], accompanied by an extensive peak in the pressure fluctuations characterized via the susceptibility \(\chi_{p}\), is suppressed by particle pinning. Particle pinning not only increases the tensile strength but also decreases the size of the avalanches that lead to cavitation in amorphous solids by restricting them spatially; as a result, cavitation occurs more homogeneously in the pinned solid. The expansion response of a pinned solid is somewhat analogous to the shear response of a ductile solid, as the sharp macroscopic drop leading to yielding is absent. But the analogy with a ductile solid does not hold completely, as yielding via cavitation is delayed in pinned solids, whereas in a ductile solid the yielding under shear would occur at lower strains and bulk stress [46]. Also, with increasing system sizes, a ductile solid shows brittle behaviour under shear via shear-banding-like events, as claimed in a recent work [47]. In our range of probing, the finite-size effects in the pressure \(P\) vs \(\rho\) curves and the pressure fluctuation \(\chi_{p}\) vs \(\rho\) curves do not show such a brittle-like character for the large system sizes. The absence of finite-size effects in the \(P-\rho\) and \(\chi_{p}-\rho\) curves, along with the decreasing sizes of avalanches, is explained using a length scale extracted from the spatial decay of displacements in plastic eigenmodes.
We have modeled these micro-alloyed inclusions via random pinning, which is a drastic simplifying approximation. Future studies can perhaps create more realistic models of inclusions in amorphous solids and study their deformation response. In Ref. [48], it was found that particles that are larger than the typical particle size of the host medium can act as temporary pinning sites (termed "soft pinning") over timescales relevant to the medium's relaxation time. Similarly, particles having different geometric shapes, like rods, can have significantly lower mobility due to their enhanced caging effect in a crowded environment and can also act as possible micro-alloying agents. The recent experimental realization of this "soft pinning" concept in colloidal glasses [49], molecular glasses [50], and glassy polymer mixtures [51] is indeed very encouraging, indicating a clear possibility to test some of our results in experiments.
## V Acknowledgements
We thank the HPC facility at IMSc-Chennai for providing computational resources. S.K. would like to acknowledge the support from Swarna Jayanti Fellowship Grants No. DST/SJF/PSA-01/2018-19 and No. SB/SFJ/2019-20/05 and Core Research Grant from SERB via grant CRG/2019/005373. PC acknowledges funding from SERB via grant MTR/2022/001034.
|
2301.08813 | Representations of Materials for Machine Learning | High-throughput data generation methods and machine learning (ML) algorithms
have given rise to a new era of computational materials science by learning
relationships among composition, structure, and properties and by exploiting
such relations for design. However, to build these connections, materials data
must be translated into a numerical form, called a representation, that can be
processed by a machine learning model. Datasets in materials science vary in
format (ranging from images to spectra), size, and fidelity. Predictive models
vary in scope and property of interests. Here, we review context-dependent
strategies for constructing representations that enable the use of materials as
inputs or outputs of machine learning models. Furthermore, we discuss how
modern ML techniques can learn representations from data and transfer chemical
and physical information between tasks. Finally, we outline high-impact
questions that have not been fully resolved and thus, require further
investigation. | James Damewood, Jessica Karaguesian, Jaclyn R. Lunger, Aik Rui Tan, Mingrou Xie, Jiayu Peng, Rafael Gómez-Bombarelli | 2023-01-20T22:05:54Z | http://arxiv.org/abs/2301.08813v1 | # Representations of Materials for Machine Learning+
###### Abstract
High-throughput data generation methods and machine learning (ML) algorithms have given rise to a new era of computational materials science by learning relationships among composition, structure, and properties and by exploiting such relations for design. However, to build these connections, materials data must be translated into a numerical form, called a representation, that can be processed by a machine learning model. Datasets in materials science vary in format (ranging from images to spectra), size, and fidelity. Predictive models vary in scope and property of interests. Here, we review context-dependent strategies for constructing representations that enable the use of materials as inputs or outputs of machine learning models. Furthermore, we discuss how modern ML techniques can learn representations from data and transfer chemical and physical information between tasks. Finally, we outline high-impact questions that have not been fully resolved and thus, require further investigation.
###### Contents
* 1 INTRODUCTION
* 2 STRUCTURAL FEATURES FOR ATOMISTIC GEOMETRIES
* 2.1 Local Descriptors
* 2.2 Global Descriptors
* 2.3 Topological Descriptors
* 3 LEARNING ON PERIODIC CRYSTAL GRAPHS
* 4 CONSTRUCTING REPRESENTATIONS FROM STOICHIOMETRY
* 5 DEFECTS, SURFACES, AND GRAIN BOUNDARIES
* 6 TRANSFERABLE INFORMATION BETWEEN REPRESENTATIONS
* 7
If, for instance, accurately predicting a property calculated by density functional theory (DFT) with ML requires input descriptors obtained from DFT on the same structure and at the same level of theory, then the machine learning model does not offer any benefit.
A practicing materials scientist will notice a number of key barriers to forming property-informative representations that satisfy these criteria. First, describing behavior often involves quantifying structure-to-property relationships across length scales. The diversity of possible atomistic structure types considered can vary over space groups, supercell size, and disorder parameters. This challenge motivates researchers to develop flexible representations capable of capturing local and global information based on atomic positions. Beyond this idealized picture, predicting material performance relies upon understanding the presence of defects, the characteristics of the microstructure, and reactions at interfaces. Addressing these concerns requires extending previous notions of structural similarity or developing new specialized tools. Furthermore, atomistic structural information is not available without experimental validation or extensive computational effort [15, 16]. Therefore, when predictions are required for previously unexplored materials, models must rely on more readily available descriptors such as those based on elemental composition and stoichiometry. Lastly, due to experimental constraints, datasets in materials science can often be scarce, sparse, and restricted to relatively few and self-similar examples. The difficulty in constructing a robust representation in these scenarios has inspired strategies to leverage information from high-quality representations built for closely related tasks through transfer learning.
In this review, we will analyze how representations of solid-state materials (**Figure 1**) can be developed given constraints on the format, quantity, and quality of available data. We will discuss the justification, benefits, and trade-offs of different approaches. This discussion is meant to highlight methods of particular interest rather than provide exhaustive coverage of the literature. We will discuss current limitations and open problems whose solutions would have high impact. In summary, we intend to provide readers with an introduction to the current state of the field and exciting directions for future research.
## 2 Structural Features for Atomistic Geometries
Simple observations in material systems (e.g. higher ductility of face-centered cubic metals compared to body-centered cubic metals) have made it evident that material properties are highly dependent on crystal structure--from coordination and atomic ordering to broken symmetries and porosity. For a computational materials scientist, this presents the question of how to algorithmically encode information from a set of atom types \((a_{1},a_{2},a_{3},...)\), positions \((x_{1},x_{2},x_{3},...)\), and primitive cell parameters into a feature set that can be effectively utilized in machine learning.
For machine learning methods to be effective, it is necessary that the machine-readable representation of a material's structure fulfills the criteria as outlined in the introduction [10, 11, 12, 13, 14]. Notably, scalar properties (such as heat capacity or reactivity) do not change when translations, rotations, or permutations of atom indexing are applied to the atomic coordinates. Therefore, to ensure representations reflect the similarities between atomic structures, the representations should also be invariant to those symmetry operations.
### Local Descriptors
One strategy to form a representation of a crystal structure is to characterize the local environment of each atom and consider the full structure as a combination of local representations. This concept was applied by Behler and Parrinello[26], who proposed the atom-centered symmetry functions (ACSF). ACSF descriptors (**Figure 2a**) can be constructed using radial, \(G_{i}^{1}\), and angular, \(G_{i}^{2}\), symmetry
functions centered on atom \(i\),
\[G_{i}^{1}=\sum_{j\neq i}^{\text{neighbors}}e^{-\eta(R_{ij}-R_{s})^{2}}f_{c}(R_{ij}) \tag{1}\]
\[G_{i}^{2}=2^{1-\zeta}\sum_{j,k\neq i}^{\text{neighbors}}(1+\lambda\cos\theta_{ ijk})^{\zeta}e^{-\eta(R_{ij}^{2}+R_{ik}^{2}+R_{jk}^{2})}f_{c}(R_{ij})f_{c}(R_{ ik})f_{c}(R_{jk}) \tag{2}\]
with the tunable parameters \(\lambda\), \(R_{s}\), \(\eta\), and \(\zeta\). \(R_{ij}\) is the distance between the central atom \(i\) and atom \(j\), and \(\theta_{ijk}\) corresponds to the angle between the vector from the central atom to atom \(j\) and the vector from the central atom to atom \(k\). The cutoff function \(f_{c}\) screens out atomic interactions beyond a specified cutoff radius and ensures locality of the atomic interactions. Because symmetry functions rely on relative distances and angles, they are rotationally and translationally invariant. Local representations can be constructed from many symmetry functions of the type \(G_{i}^{1}\) and \(G_{i}^{2}\) with multiple settings of the tunable parameters to probe the environment at varying distances and angular regions. With the set of localized symmetry functions, neural networks can then predict local contributions to a particular property and approximate global properties as the sum of local contributions. The flexibility of this approach allows for modification of the \(G_{i}^{1}\) and \(G_{i}^{2}\) functions [27, 28] or the use of higher-capacity neural networks for element-wise prediction [28].
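As a minimal sketch, the radial fingerprint of Eq. 1 can be evaluated for one central atom as below; the text does not fix the form of \(f_{c}\), so the common cosine cutoff is assumed here, and the parameter values are illustrative:

```python
import numpy as np

def cutoff(r, r_c):
    """Cosine cutoff function; the precise form of f_c is a modelling choice."""
    return np.where(r <= r_c, 0.5 * (np.cos(np.pi * r / r_c) + 1.0), 0.0)

def g1(distances, eta, r_s, r_c):
    """Radial symmetry function G^1_i for one central atom, given the distances
    R_ij to all of its neighbours (Eq. 1)."""
    d = np.asarray(distances, dtype=float)
    return float(np.sum(np.exp(-eta * (d - r_s) ** 2) * cutoff(d, r_c)))

# A small radial fingerprint: several R_s values probe different neighbour shells
neighbour_distances = [1.0, 1.1, 1.6, 2.3]
fingerprint = [g1(neighbour_distances, eta=4.0, r_s=r_s, r_c=3.0)
               for r_s in np.linspace(0.8, 2.8, 6)]
```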
Figure 1: Summary of representations for perovskite SrTiO\({}_{3}\). **Top Left.** 2D cross section of Voronoi decomposition. Predictive features can be constructed from neighbors and geometric shape of cells [17]. **Middle Left.** Crystal graph of SrTiO\({}_{3}\) constructed assuming periodic boundary conditions and used as input to graph neural networks [18]. **Bottom Left.** Compositional data including concentrations and easily accessible atomic features including electronegativities and atomic radii [19]. Data taken from Reference [20]. **Top Right.** Deviations on a pristine bulk structure induced by an oxygen vacancy to predict formation energy [21]. **Middle Right.** Representations can be learned from large repositories using deep neural networks. The latent physical and chemical information can be leveraged in related but data-scare tasks. **Bottom Right.** Training of generative models capable of proposing new crystal structures by placing atoms in discretized volume elements [22, 23, 24, 25].
In search of a representation with fewer hand-tuned parameters and a more rigorous definition of similarity, Bartok et al. [12] proposed a rotationally invariant kernel for comparing environments based on local atomic density. Given a central atom, the Smooth Overlap of Atomic Positions (SOAP) defines the atomic density function \(\rho(\mathbf{r})\) as a sum of Gaussian functions centered at each neighboring atom within a cutoff radius (**Figure 2b**). The choice of Gaussian function is motivated by the intuition that representations should be continuous such that small changes in atomic positions should result in correspondingly small changes in the metric between two configurations. With a basis of radial functions \(g_{n}(\mathbf{r})\) and spherical harmonics \(Y_{lm}(\theta,\phi)\), \(\rho(\mathbf{r})\) for central atom \(i\) can be expressed as:
\[\rho_{i}(\mathbf{r})=\sum_{j}\exp\left(-\frac{|\mathbf{r}-\mathbf{r}_{ij}|^{2}}{2\sigma^{2}}\right)=\sum_{nlm}c_{nlm}g_{n}(\mathbf{r})Y_{lm}(\mathbf{\hat{r}}) \tag{3}\]
and the kernel can be computed[12, 29]:
\[K(\rho,\rho^{\prime})=\mathbf{p}(\mathbf{r})\cdot\mathbf{p}^{\prime}(\mathbf{ r}) \tag{4}\]
\[\mathbf{p}(\mathbf{r})\equiv\sum_{m}c_{nlm}(c_{n^{\prime}lm})^{*} \tag{5}\]
where \(c_{nlm}\) are the expansion coefficients in **Equation 3**. In practice, \(\mathbf{p}(\mathbf{r})\) can be used as a vector descriptor of the local environment and is also referred to as a power spectrum [12]. SOAP has demonstrated extraordinary versatility for materials applications both as a tool for measuring similarity [30] and as a descriptor for machine learning algorithms [31]. Furthermore, the SOAP kernel can be used to compare densities of different elements by adding an additional factor that provides a definition for similarity between atoms, where for instance, atoms in the same group could have higher similarity [29]. The mathematical connections between different local atomic density representations including ACSFs and SOAP are elucidated by a generalized formalism introduced by Willatt et al. [32], offering a methodology through which the definition of new variants can be clarified.
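A minimal sketch of turning the expansion coefficients of Eq. 3 into the invariant power spectrum of Eq. 5 and a similarity kernel (the normalization and the exponent \(\zeta\) follow common SOAP practice and are assumptions of this illustration):

```python
import numpy as np

def power_spectrum(c_nlm, n_max, l_max):
    """Rotationally invariant power spectrum p_{n n' l} = sum_m c_{nlm} c*_{n'lm}.
    c_nlm[n][l] holds the (2l+1) expansion coefficients of Eq. 3 for one atom."""
    p = []
    for n in range(n_max):
        for n2 in range(n_max):
            for l in range(l_max + 1):
                p.append(np.sum(c_nlm[n][l] * np.conj(c_nlm[n2][l])).real)
    return np.array(p)

def soap_kernel(p1, p2, zeta=2):
    """Normalised similarity between two local environments (Eq. 4),
    typically raised to a small integer power zeta."""
    k = (p1 @ p2) / np.sqrt((p1 @ p1) * (p2 @ p2))
    return k ** zeta
```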
Instead of relying on the density of nearby atoms, local representations can be derived from a Voronoi tessellation of a crystal structure. The Voronoi tessellation segments space into cells such that each cell contains one atom along with all points in space for which that atom is the closest (**Figure 2c**). From these cells, Ward et al. [17] identified a set of descriptive features including an effective coordination number computed using the area of the faces, the lengths and volumes of nearby cells, ordering of the cells based on elements, and atomic properties of nearest neighbors weighted by the area of the intersecting face. When combined with compositional features [19], their representation results in better performance on predictions of formation enthalpy for the ICSD than partial radial distribution functions [33] (**Figure 1** in Reference [17]). In subsequent work, these descriptors have facilitated the prediction of experimental heat capacities in MOFs [34]. Similarly, Isayev et al. [35] replaced faces of the Voronoi tessellation with virtual bonds and separated the resulting framework into sets of linear (up to four atoms) and shell-based (up to nearest neighbors) fragments. Additional features related to the atomic properties of constituent elements were associated with each fragment, and the resulting vectors were concatenated with attributes of the supercell. In addition to demonstrating accurate predictive capabilities, models could be interpreted through the properties of the various fragments. For instance, predictions of band gap could be correlated with the difference in ionization potential in two-atom linear fragments, a trend that could be exploited to design materials properties through tuning of composition [35].
### Global Descriptors
Alternatively, to more explicitly account for interactions beyond a fixed cutoff, atom types and positions can be encoded into a global representation that reflects geometric and physical insight. Inspired by the
importance of electrostatic interactions in chemical stability, Rupp et al. [39] proposed the Coulomb matrix (**Figure 2d**), which models the potential between electron clouds:
\[M_{i,j}=\begin{cases}Z_{i}^{2.4}&\text{for }i=j\\ \frac{Z_{i}Z_{j}}{|r_{i}-r_{j}|}&\text{for }i\neq j\end{cases} \tag{6}\]
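A minimal sketch of building this matrix for a finite cluster of atoms (periodic variants modify the off-diagonal terms as discussed below):

```python
import numpy as np

def coulomb_matrix(Z, R):
    """Coulomb matrix of Eq. 6 for atomic numbers Z and Cartesian positions R."""
    Z = np.asarray(Z, dtype=float)
    R = np.asarray(R, dtype=float)
    n = len(Z)
    M = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                M[i, j] = Z[i] ** 2.4                      # self-interaction term
            else:
                M[i, j] = Z[i] * Z[j] / np.linalg.norm(R[i] - R[j])
    return M
```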
Due to the fact that off-diagonal elements are only dependent on relative distances, Coulomb matrices are rotation and translation invariant. However, the representation is not permutation invariant since changing the labels of the atoms will rearrange the elements of the matrix. While originally developed for molecules, the periodicity of crystal structures can be added to the representation by considering images of atoms in adjacent cells, replacing the \(\frac{1}{|r_{i}-r_{j}|}\) dependence with another function with the same small distance limit and periodicity that matches the parent lattice, or using an Ewald sum to account for long range interactions [11]. BIGDML [40] further improved results by restricting predictions from the representation to be invariant to all symmetry operations within the space group of the parent lattice and demonstrated effective implementations on tasks ranging from H interstitial diffusion in Pt to phonon density of states. While this approach has been able to effectively model long-range physics, these representations rely on a fixed supercell and may not be able to achieve the
Figure 2: **(a)** Examples of radial, \(G_{i}^{1}\), and angular, \(G_{i}^{2}\), symmetry functions from the local atom-centered symmetry function descriptor proposed by Behler and Parrinello[26]. **(b)** In the Smooth Overlap of Atomic Positions (SOAP) descriptor construction, the atomic neighborhood density of a central atom is defined by a sum of Gaussian functions around each neighboring atom. A kernel function can then be built to compare the different environments by computing the density overlap of the atomic neighborhood functions. Figure is reprinted from reference [36]. **(c)** Voronoi tessellation in two and three dimensions. Yellow circles and spheres show particles while the blue lines divide equidistantly the space between two neighboring particles. Polygonal spaces encompassed by the blue lines are the Voronoi cells. Figure is reprinted from reference [37]. **(d)** Illustration of a Coulomb matrix where each element in the matrix shows the Coulombic interaction between the labeled particles in the system on the left. Diagonal elements show self-interactions. **(e)** The births and deaths of topological holes in a point cloud (left) are recorded on a persistence diagram (right). Persistent features lie far from the parity line and indicate more significant topological features. Figure is reprinted from reference [38].
same chemical generality as local environments [40].
Global representations have also been implemented with higher-order tensors. Partial radial distribution functions (PRDF) are 3D non-permutation invariant matrices \(g_{\alpha\beta r}\) whose elements correspond to the density of element \(\beta\) in the environments of element \(\alpha\) at radius \(r\)[33]. The many-body tensor representation (MBTR) provides a more general framework [10] that can quantify k-body interactions and account for chemical similarity between elements. The MBTR is translationally, rotationally, and permutation invariant and can be applied to crystal structures by only summing over atoms in the primitive cell. While MBTR exhibited better performance than SOAP or Coulomb matrices for small molecules, its accuracy may not extend to larger systems [10].
Another well-established method for representing crystal structures in materials science is the cluster expansion. Given a parent lattice and a decoration \(\sigma\) defining the element that occupies each site, Sanchez et al. sought to map this atomic ordering to material properties and proposed evaluating the correlations between sites through a set of cluster functions. Each cluster function \(\Phi\) is constructed from a product of basis functions \(\phi\), over a subset of sites [41]. To ensure the representation is appropriately invariant, symmetrically equivalent clusters are grouped into classes denoted by \(\alpha\). The characteristics of the atomic ordering can be quantified by averaging cluster functions over the decoration \(<\Phi_{\alpha}>_{\sigma}\), and properties \(q\) of the configuration can be predicted as:
\[q(\sigma)=\sum_{\alpha}J_{\alpha}m_{\alpha}<\Phi_{\alpha}>_{\sigma} \tag{7}\]
where \(m_{\alpha}\) is a multiplicity factor that accounts for the rate of appearance of different cluster types, and \(J_{\alpha}\) are parameters referred to as effective cluster interactions that must be determined from fits to data [42]. While cluster expansions have been constructed for decades and provided useful models for configurational disorder and alloy thermodynamics [42], cluster expansions assume the structure of the parent lattice and models cannot generally be applied across different crystal structures [43, 44]. Furthermore, due to the increasing complexity of selecting cluster functions, implementations are restricted to binary and ternary systems without special development [45]. Additional research has extended the formalism to continuous environments (Atomic Cluster Expansion) by treating \(\sigma\) as pairwise distances instead of site-occupancies and constructing \(\phi\) from radial functions and spherical harmonics [46]. The Atomic Cluster Expansion framework has provided a basis for more sophisticated deep learning approaches [47].
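As a toy illustration of Eq. 7, a binary 1D ring (occupations encoded as \(\pm 1\)) with only the empty, point, and nearest-neighbour pair clusters can be fit to a few made-up configuration energies; this is a minimal sketch, not a production cluster-expansion workflow:

```python
import numpy as np

def correlations(sigma):
    """Averaged cluster functions <Phi_alpha>_sigma for a 1D ring with binary
    occupations encoded as +/-1: empty, point, and nearest-neighbour pair."""
    sigma = np.asarray(sigma, dtype=float)
    return np.array([1.0, sigma.mean(), np.mean(sigma * np.roll(sigma, 1))])

def predict(sigma, eci):
    """q(sigma) = sum_alpha J_alpha <Phi_alpha>_sigma, with multiplicities
    folded into the effective cluster interactions (ECIs) here."""
    return correlations(sigma) @ eci

# Fit the ECIs to toy (configuration, energy) training data
configs = [np.array([1, 1, 1, 1]), np.array([1, -1, 1, -1]), np.array([1, 1, -1, -1])]
energies = np.array([-4.0, 4.0, 0.0])          # illustrative values only
X = np.array([correlations(s) for s in configs])
eci, *_ = np.linalg.lstsq(X, energies, rcond=None)
```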
### Topological Descriptors
Topological data analysis (TDA) has found favor over the past decade in characterizing structure in complex, high-dimensional datasets. When applied to the positions of atoms in amorphous or crystalline structures, topological methods reveal underlying geometric features that inform behavior in downstream predictions such as phase changes, reactivity, and separations. In particular, persistent homology (PH) is able to identify significant structural descriptors that are both machine readable and physically interpretable. The data can be probed at different length scales (formally called filtrations) by computing a series of complexes that each include all sets of points where all pairwise distances are less than the corresponding length [48]. Analysis of complexes by homology in different dimensions reveals holes or voids in the data manifold, which can be described by the range of length scales they are observed at (persistences), as well as when they are produced (births) and disappear (deaths). Emergent features with significant persistence values are less likely to be caused by noise in the data or as an artifact of the chosen length scales. In practice, multiple persistences, births, and deaths produced from a single material can be represented together by persistent diagrams (**Figure 2e**) or undergo additional feature engineering to generate a variety of descriptors as machine learning inputs [49].
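As an illustration, persistence diagrams for a point cloud of atomic positions can be computed with the open-source `ripser.py` package (assumed to be available); the snippet below is a minimal sketch on a toy point cloud rather than a real structure:

```python
import numpy as np
from ripser import ripser   # assumes the open-source ripser.py package is installed

# Toy "structure": points on a ring, mimicking atoms surrounding a pore
theta = np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False)
points = np.column_stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)])

# Vietoris-Rips filtration; dgms[k] lists (birth, death) pairs for k-dimensional holes
dgms = ripser(points, maxdim=1)["dgms"]

for dim, dgm in enumerate(dgms):
    persistence = dgm[:, 1] - dgm[:, 0]
    finite = persistence[np.isfinite(persistence)]
    longest = finite.max() if len(finite) else 0.0
    print(f"H{dim}: {len(dgm)} features, most persistent finite feature {longest:.2f}")
```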
While persistent homology has been applied to crystal structures in the Open Quantum Materials Database [50], the method is particularly useful in the analysis of porous materials. The identified features (births, deaths, persistences) hold direct physical relevance to traditional structural features used to describe the pore geometries. For instance, persistent 2D deaths represent the largest sphere that can be included inside the pores of the materials. Krishnapriyan et al. showed that these topological descriptors outperform traditional structural descriptors when predicting carbon dioxide adsorption under varying conditions for metal-organic frameworks [38], as did Lee et al. for methane storage capacities in zeolites [51]. Representative cycles can trace the topological features back to the atoms that are responsible for the hole or the void, creating a direct relationship between structure and predicted performance (**Figure 4** in Reference [38]). Similarity methods for comparing barcodes can then be used to identify promising novel materials with similar pore geometries for targeted applications.
A caveat is that PH does not inherently account for system size and is thus size-dependent. The radius cutoff, or the supercell size, needs to be carefully considered to encompass all significant topological features and allow comparison across systems of interest. In the worst case scenario, the computation cost per filtration for a structure is \(O(N^{3})\), where \(N\) is the number of sets of points defining a complex. Although the cost is alleviated by the sparsity of the boundary matrix [52], the scaling is poor for structures whose geometric features exceed unit cell lengths. The benefit of using PH features to capture more complex structural information has to be carefully balanced with the cost of generating these features.
## 3 Learning on Periodic Crystal Graphs
In the previous section, we described many physically-inspired descriptors that characterize materials and can be used to efficiently predict properties. The use of differentiable graph-based representations in convolutional neural networks, however, mitigates the need for manual engineering of descriptors [53, 54]. Indeed, advances in deep learning and the construction of large-scale materials databases [4, 5, 6, 7, 8, 9] have made it possible to learn representations directly from structural data. From a set of atoms \(a_{1},a_{2},...\) located at positions \(x_{1},x_{2},x_{3},...\), materials can be converted to a graph \(G(V,E)\) defined as the set of atomic nodes \(V\) and the set of edges \(E\) connecting neighboring atoms. Many graph-based neural network architectures were originally developed for molecular systems, with edges representing bonds. By considering periodic boundary conditions and defining edges as connections between neighbors within a cutoff radius, graphical representations can be leveraged for crystalline systems. The connectivity of the crystal graph thus naturally encodes local atomic environments [18].
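A minimal sketch of constructing the edge list of such a periodic crystal graph with a radius cutoff (only the first shell of image cells is searched, which assumes the cutoff is smaller than the cell dimensions):

```python
import numpy as np

def periodic_edges(frac_coords, lattice, r_cut):
    """Edges (i, j) of a periodic crystal graph whose distance, including
    neighbouring-cell images, is below r_cut.

    frac_coords: (N, 3) fractional coordinates.
    lattice: (3, 3) matrix with the cell vectors as rows."""
    frac = np.asarray(frac_coords, dtype=float)
    lat = np.asarray(lattice, dtype=float)
    shifts = np.array([[i, j, k] for i in (-1, 0, 1)
                       for j in (-1, 0, 1) for k in (-1, 0, 1)])
    edges, distances = [], []
    for i in range(len(frac)):
        for j in range(len(frac)):
            for s in shifts:
                if i == j and not s.any():
                    continue  # skip self-interaction within the home cell
                d = np.linalg.norm((frac[j] + s - frac[i]) @ lat)
                if d < r_cut:
                    edges.append((i, j))
                    distances.append(d)
    return np.array(edges), np.array(distances)
```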
When used as input to machine learning algorithms, the graph nodes and edges are initialized with an associated set of features. Nodal features can be as simple as a one-hot vector of the atomic number or can explicitly include other properties of the atomic species (e.g. electronegativity, group, period). Edge features are typically constructed from the distance between the corresponding atoms. Subsequently, a series of convolutions parameterized by neural networks modify node and/or edge features based on the current state of their neighborhood (**Figure 3a**). As the number of convolutions increases, interactions from further away in the structure can propagate, and graph features become tuned to reflect the local chemical environment. Finally, node and edge features can be pooled to form a single vector representation for the material [53, 55].
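A schematic sketch of one such convolution pass followed by permutation-invariant pooling is shown below; this is not the exact CGCNN or MEGNet update, and all array names are illustrative:

```python
import numpy as np

def conv_layer(node_feats, edges, edge_feats, W_self, W_nbr):
    """One schematic graph convolution: each node aggregates its neighbours'
    features, gated by a simple function of the edge (distance) feature,
    followed by a shared linear map and nonlinearity."""
    updated = node_feats @ W_self
    for (i, j), d in zip(edges, edge_feats):
        gate = np.exp(-d)                          # toy distance-based gate
        updated[i] += gate * (node_feats[j] @ W_nbr)
    return np.tanh(updated)

def readout(node_feats):
    """Pool node features into a single crystal-level representation."""
    return node_feats.mean(axis=0)

# Example: 3 atoms with 4-dimensional features and random weights
rng = np.random.default_rng(0)
h = rng.normal(size=(3, 4))
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]
dists = [1.9, 1.9, 2.1, 2.1]
W1, W2 = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
crystal_vector = readout(conv_layer(h, edges, dists, W1, W2))
```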
Crystal Graph Convolutional Neural Networks (CGCNN) [18] and Materials Graph Networks (MEGNet) [56] have become benchmark algorithms capable of predicting properties across solid-state materials domains including bulk, surfaces, disordered systems, and 2D materials [57, 58]. The Atomistic Line Graph Neural Network (ALIGNN) extended these approaches by including triplet three-body features in addition to nodes and edges and exhibited superior performance to CGCNN over a broad range of regression tasks including formation energy, bandgap, and shear modulus [59]. Other variants have
used information from Voronoi polyhedra to construct graphical neighborhoods and augment edge features [60] or initialized node features based on the geometry and electron configuration of nearest neighbor atoms [61].
While these methods have become widespread for property prediction, graph convolution updates based only on the local neighborhood may limit the sharing of information related to long-range interactions or extensive properties. Gong et al. demonstrated that these models can struggle to learn materials properties reliant on periodicity, including characteristics as simple as primitive cell lattice parameters (**Figure 3c**)[63]. As a result, while graph-based learning is a high-capacity approach, performance can vary substantially by the target use case. In some scenarios, methods developed primarily for molecules can be effectively implemented "out-of-the-box" with the addition of periodic boundary conditions, but especially in the case of long-range physical phenomena, optimal results can require specialized modeling.
Various strategies to account for this limitation have been proposed. Gong et al. found that if the pooled representation after convolutions was concatenated with human-tuned descriptors, errors could be reduced by 90% for related predictions, including phonon internal energy and heat capacity [63]. Algorithms have attempted to more explicitly account for long-range interactions by modulating convolutions with a mask defined by a local basis of Gaussians and a periodic basis of plane waves
Figure 3: **(a)** General architecture of graph convolutional neural networks for property prediction in crystalline systems. Three-dimensional crystal structure is represented as a graph with nodes representing atoms and edges representing connections between nearby atoms. Features (e.g. nodal, edge, angular) within local neighborhoods are convolved, pooled into a crystal-wide vector, then mapped to the target property. Figure adapted from [62]. **(b)** Information loss in graphs built from pristine structures. Geometric distortions of ground-state crystal structures are captured as differing edge features in graphical representations. This information is lost in graphs constructed from corresponding unrelaxed structures. **(c)** Graph-based models can struggle to capture periodicity-dependent properties, such as cell lattice parameters. \(R^{2}\) scores presented here were reported by Gong et al. for lattice parameter, \(a\), predictions in short and long 1D single carbon chain toy structures. Figure adapted from [63]. **(d)** Ability of graphical representations to distinguish toy structures. Assuming a sufficiently small cutoff radius, the invariant representation—using edge lengths and/or angles—cannot distinguish the two toy arrangements, while the equivariant representation with directional features can. Figure adapted from [64].
[65], employing a unique global pooling scheme that could include additional context such as stoichiometry [66], or constructing additional features from the reciprocal representation of the crystal [67]. Other strategies have leveraged assumptions about the relationships among predicted variables, such as representing phonon spectra using a Gaussian mixture model [68].
Given the promise and flexibility of graphical models, improving the data-efficiency, accuracy, generalizability, and scalability of these representations are active areas of research. While our previous discussion of structure-based material representations relied on the invariance of scalar properties to translation and rotation, this characteristic does not continue to hold for higher-order tensors. Consider a material with a net magnetic moment. If the material is rotated \(180^{\circ}\) around an axis perpendicular to the magnetization, the net moment then points in the opposite direction. The moment was not invariant to the rotation but instead transformed alongside the operation in an equivariant manner [69]. For a set of transformations described by group \(G\), equivariant functions \(f\) satisfy \(g*f(x)=f(g*x)\) for every input \(x\) and every group element \(g\) [69, 70]. Recent efforts have shown that by introducing higher-order tensors to node and edge features (**Figure 3d**) and restricting the update functions such that intermediate representations are equivariant to the group E(3) (encompassing translations, rotations, and reflections in \(\mathbb{R}^{3}\)), models can achieve state-of-the-art accuracy on benchmark datasets and even exhibit comparable performance to structural descriptors in low-data (\(\sim\)100 datapoints) regimes [71, 69, 64]. Further accuracy improvements can be made by explicitly considering many-body interactions beyond edges [72, 73, 47]. Such models, developed for molecular systems, have since been extended to solid-state materials and shown exceptional performance. Indeed, Chen et al. trained an equivariant model to predict phonon density of states and were able to screen for high heat capacity targets [74], tasks identified to be particularly challenging for baseline CGCNN and MEGNet models [63]. Therefore, equivariant representations may offer a more general alternative to the specialized architectures described above.
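The equivariance condition above can be checked numerically on a toy example. In the sketch below, the "model" is simply the centroid of a set of atomic positions, which is genuinely rotation-equivariant; the specific rotation matrix and coordinates are arbitrary choices for illustration.

```python
import numpy as np

def f(points):
    """A toy vector-valued 'model': the centroid of a set of atomic positions.
    This function is equivariant to rotations about the origin."""
    return points.mean(axis=0)

# rotation by 180 degrees about the z-axis (one group element g)
g = np.array([[-1.0,  0.0, 0.0],
              [ 0.0, -1.0, 0.0],
              [ 0.0,  0.0, 1.0]])

points = np.array([[1.0, 2.0, 0.5],
                   [0.0, 1.0, 1.5],
                   [2.0, 0.0, 0.0]])

lhs = g @ f(points)           # g * f(x)
rhs = f(points @ g.T)         # f(g * x): rotate every input position first
assert np.allclose(lhs, rhs)  # the equivariance condition g*f(x) = f(g*x) holds
```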
A major restriction of these graph-based approaches is the requirement for the positions of atomic species to be known. In general, ground-state crystal structures exhibit distortions that allow atoms to break symmetries, which are computationally modeled with expensive DFT calculations. Graphs generated from pristine structures lack representation of relaxed atomic coordinates (**Figure 3b**) and resulting model accuracy can degrade substantially [75, 76]. These graph-based models are therefore often most effective at predicting properties of systems for which significant computational resources have already been invested, thus breaking advice (3) from Section 1. As a result, their practical usage often remains limited when searching broad regions of crystal space for an optimal material satisfying a particular design challenge.
Strategies have therefore been developed to bypass the need for expensive quantum calculations and use unrelaxed crystal prototypes as inputs. Gibson et al. trained CGCNN models on datasets composed of both relaxed structures and a set of perturbed structures that map to the same property value as the fully relaxed structure. This data augmentation incentivizes the CGCNN model to predict similar properties within some basin of the fully relaxed structure and was demonstrated to improve prediction accuracy on an unrelaxed test set [76]. Alternatively, graph-based energy models can be used to modify unrelaxed prototypes by searching through a fixed set of possibilities [77] or using Bayesian optimization [78] to find structures with lower energy. Lastly, structures can be relaxed using a cheap surrogate model (e.g. a force field) before a final prediction is made. The accuracy and efficiency of such a procedure will fundamentally rely on the validity and compositional generalizability of the surrogate relaxation approach [75].
## 4 Constructing Representations From Stoichiometry
The phase, crystal system, or atomic positions of materials are not always available when modeling materials systems, rendering structural and graphical representations impossible to construct. In the absence of this data, material representations can also be built purely from stoichiometry (the concentration of the constituent elements) and without knowledge of the geometry of the local atomistic environments. Despite their lack of structural information and apparent simplicity, these methods provide unique benefits for materials science researchers. First, descriptors used to form compositional representations such as common atomic properties (e.g. atomic radii, electronegativity) do not require computational overhead and can be readily found in existing databases [19]. In addition, effective models can often be built using standard algorithms for feature selection and prediction that are implemented in freely available libraries [79], increasing accessibility to non-experts when compared with structural models. Lastly, when used as tools for high-throughput screening, compositional models identify a set of promising elemental concentrations. Compared with the suggestion of particular atomistic geometries, stoichiometric approaches may be more robust, as they make weaker assumptions about the outcomes of attempted syntheses.
Compositional-based rules have long contributed to efficient materials design. Hume-Rothery and Linus Pauling designed rules for determining the formation of solid solutions and crystal structures that include predictions based on atomic radii and electronic valence states [80, 81]. However, many exceptions to their predictions can be found [82].
Machine learning techniques offer the ability to discover and model relationships between properties and physical descriptors through statistical means. Meredig et al. demonstrated that a decision tree ensemble trained using a feature set of atomic masses, positions in the periodic table, atomic numbers, atomic radii, electronegativities, and valence electrons could outperform a conventional heuristic on predicting whether ternary compositions would have formation energies \(<100\) meV/atom [83]. Ward et al. significantly expanded this set to 145 input properties, including features related to the distribution and compatibility of the oxidation states of constituent atoms [19]. Their released implementation, MagPie, can be a useful benchmark or starting point for the development of further research methods [79, 84, 85]. Furthermore, if a fixed structural prototype (e.g. elpasolite) is assumed, these stoichiometric models can be used to analyze compositionally-driven variation in properties [86, 87].
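A minimal sketch of this style of featurization is given below: composition-weighted statistics over a small table of elemental properties, in the spirit of MagPie-like descriptor sets. The property table, the choice of statistics, and the example composition are illustrative placeholders, not reference data or the exact MagPie feature set.

```python
import numpy as np

# illustrative elemental property table (placeholder values, not reference data):
# electronegativity, atomic radius (angstrom), number of valence electrons
ELEMENT_PROPS = {
    "Ti": [1.54, 1.76, 4],
    "Fe": [1.83, 1.56, 8],
    "O":  [3.44, 0.48, 6],
}

def composition_features(composition):
    """Composition-weighted mean, spread, minimum, and maximum of elemental
    properties, loosely in the spirit of MagPie-style descriptor sets."""
    elems, amounts = zip(*composition.items())
    fracs = np.array(amounts, dtype=float) / sum(amounts)
    props = np.array([ELEMENT_PROPS[e] for e in elems], dtype=float)
    mean = fracs @ props
    spread = fracs @ np.abs(props - mean)      # mean absolute deviation
    return np.concatenate([mean, spread, props.min(axis=0), props.max(axis=0)])

print(composition_features({"Fe": 1, "Ti": 1, "O": 3}))   # e.g. FeTiO3
```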
Even more subtle yet extremely expressive low-dimensional descriptors can be obtained by initializing a set with standard atomic properties and computing successive algebraic combinations of features, with each calculation being added to the set and used to compute higher order combinations in the next round. While the resulting set will grow exponentially, compressive sensing can then be used to identify the most promising descriptors from sets that can exceed \(10^{9}\) possibilities [88, 89]. Ghiringhelli et al. found descriptors that could accurately predict whether a binary compound would form in a zincblende or rocksalt structure [90], and Bartel et al. identified an improved tolerance factor \(\tau\) for the formation of perovskite systems [91] (**Table 1**). While these approaches do not derive their results from a known mechanism, they do provide enough interpretability to enable the extraction of physical insights for the screening and design of materials.
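The sketch below illustrates the flavor of this procedure on synthetic data: two rounds of algebraic feature expansion followed by a sparse linear fit, with LASSO standing in for the compressive-sensing selection step. The hidden target relationship, the expansion operations, and the regularization strength are assumptions chosen for demonstration and do not reproduce the referenced implementations.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import Lasso

def expand(features, names):
    """One round of algebraic combinations: sums, differences, products, and
    ratios of every feature pair are appended to the candidate set."""
    cols, labels = list(features.T), list(names)
    for i, j in combinations(range(features.shape[1]), 2):
        a, b = features[:, i], features[:, j]
        cols += [a + b, a - b, a * b, a / (b + 1e-12)]
        labels += [f"({names[i]}+{names[j]})", f"({names[i]}-{names[j]})",
                   f"({names[i]}*{names[j]})", f"({names[i]}/{names[j]})"]
    return np.array(cols).T, labels

rng = np.random.default_rng(0)
X = rng.uniform(1, 2, size=(200, 3))             # primary atomic descriptors
y = X[:, 0] / X[:, 1] - X[:, 2]                  # hidden "physical law" to recover
Xc, names_c = expand(*expand(X, ["x0", "x1", "x2"]))   # two expansion rounds
Xc = (Xc - Xc.mean(0)) / (Xc.std(0) + 1e-12)     # standardize before sparse fitting
coefs = Lasso(alpha=1e-3, max_iter=50_000).fit(Xc, y).coef_
top = np.argsort(np.abs(coefs))[::-1][:3]        # LASSO as a stand-in for compressive sensing
print([names_c[i] for i in top])
```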
When large datasets are available, deep neural networks tend to outperform traditional approaches, and that is also the case for compositional representations. The size of modern materials science databases has enabled the development of information-rich embeddings that map elements or compositions to vectors as well as the testing and validation of deep learning models. Chemically meaningful embeddings can be constructed by counting all compositions in which a given element appears in the Materials Project [92] or learned through the application of natural language processing to previously reported results in the scientific literature [93]. These data-hungry methods were able to demonstrate that their representations could be clustered based on atomic group [92] and could be used to suggest
new promising compositions based on similarity with the best known materials [93]. The advantages of training deep learning algorithms with large datasets are exemplified by ElemNet, which only uses a vector of fractional stoichiometry as input. Despite its apparent simplicity, when \(>3,000\) training points were available, ElemNet performed better than a MagPie-based model at predicting formation enthalpies [94].
While the applicability of ElemNet is limited to problem domains with \(O(10^{3})\) datapoints, more recent methods have significantly reduced this threshold. ROOST [95] represented each composition as a fully-connected graph with nodes as elements, and properties were predicted using a message-passing scheme with an attention mechanism that relied on the stoichiometric fraction of each element. ROOST substantially improved on ElemNet, achieving better performance than MagPie in cases with only hundreds of training examples. Meanwhile, CrabNet [96] forms element-derived matrices as a sum of embeddings of each element's identity and stoichiometric fraction. This approach achieves similar performance to ROOST by updating the representation using self-attention blocks. The fractional embedding can take log-scale data as input such that even dopants in small concentrations can have a significant effect on predictions. Despite the inherent challenges of predicting properties purely from composition, these recent and significant modeling improvements suggest that continued algorithmic development could be an attractive and impactful direction for future research projects.
Compositional models have the advantage that they can suggest new systems to experimentalists without requiring a specific atomic geometry and, likewise, can learn from experimental data without necessitating an exact crystal structure [97]. Owing to their ability to incorporate experimental findings into ML pipelines and provide suggestions with fewer experimental requirements (e.g. synthesis of a particular phase), compositional models have become attractive methods for materials design. Zhang et al. trained a compositional model using atomic descriptors on previous experimental data to predict Vickers hardness and validated their model by synthesizing and testing eight metal disilicides [97]. Oliynik et al. identified new Heusler compounds, while also verifying their approach on negative cases where they predicted a synthesis would fail [87]. Another application of their approach enabled the prediction of the crystal structure prototype of ternary compounds with greater than \(96\%\) accuracy. By training their model to predict the probability associated with each structure, they were able to experimentally engineer a system (TiFeP) with multiple competing phases [98].
While researchers have effectively implemented compositional models as methods for materials design, their limitations should be considered when selecting a representation for ML studies. Fundamentally, compositional models will only provide a single prediction for each stoichiometry regardless of the number of synthesizable polymorphs. While training models to only predict properties of the lowest-energy structure is physically justifiable [99], extrapolation to technologically relevant meta-stable systems may still be limited. Additionally, graph-based structural models such as CGCNN [18] or MEGNet [56] generally outperform compositional models [84]. Therefore, compositional models are most practically applicable when atomistic resolution of materials is unavailable, and thus structural
\begin{table}
\begin{tabular}{|c|c|c|} \hline Descriptor & Prediction & Variables \\ \hline \(\frac{IP(B)-EA(B)}{r_{p}(A)^{2}}\) & Ordering in AB Compound & IP-Ionization Potential \\ & & EA-Electron Affinity \\ & & \(r_{p}\)-Radius of Maximum Density of p-Orbital \\ \hline \(\frac{r_{X}}{r_{B}}-n_{A}(n_{A}-\frac{r_{A}/r_{B}}{\ln[r_{A}/r_{B}]})\) & Stability of ABX\({}_{3}\) Perovskite & \(n_{Y}\)-Oxidation State of Y \\ & & \(r_{Y}\)-Ionic Radius of Y \\ \hline \end{tabular}
\end{table}
Table 1: Example Descriptors Determined through Compressive Sensing
representations cannot be effectively constructed.
## 5 Defects, Surfaces, and Grain Boundaries
Mapping the structure of small molecules and unit cells to materials properties has been a reasonable starting point for many applications of materials science modeling. However, materials design often requires understanding of larger length scales beyond the small unit cell, such as in defect and grain boundary engineering, and in surface science [100]. In catalysis, for example, surface activity is highly facet dependent and cannot be modeled using the bulk unit cell alone. It has been shown that the (100) facet of RuO\({}_{2}\), a state-of-the-art catalyst for the oxygen evolution reaction (OER), has an order of magnitude higher current for OER than the active site on the thermodynamically stable (110) facet [101]. Similarly, small unit cells are not sufficient for modeling transport properties, where size, orientation, and characteristics of grain boundaries play a large role. In order to apply machine learning to practical materials design, it is therefore imperative to construct representations that can characterize environments at the relevant length scales.
Defect engineering offers a common and significant degree of freedom through which materials can be tuned. Data science can contribute to the design of these systems as fundamental mechanisms are often not completely understood even in long-standing cases such as carbon in steels [103]. Dragoni et al. [31] developed a Gaussian Approximation Potential (GAP) [104] using SOAP descriptors for face-centered cubic iron that could probe vacancies, interstitials, and dislocations, but their model was confined to a single phase of one element and required DFT calculations incorporating \(O(10^{6})\) unique environments to build the interpolation.
Considering that even a small number of possible defects significantly increases combinatorial complexity, a general approach for predicting properties of defects from pristine bulk structure representations could accelerate computation by orders of magnitude (**Figure 4a**). For example, Varley et al. observed simple and effective linear relationships between vacancy formation energy and descriptors derived from the band structure of the bulk solid [102]. While their model only considered one type of defect, their implementation limits computational expense by demonstrating that only DFT calculations on the pristine bulk were required [102]. Structure- and composition-aware descriptors of the pristine bulk have additionally been shown to be predictive of vacancy formation in metal oxides
Figure 4: **(a)** Point defect properties are learned from a representation of the pristine bulk structure and additional relevant information on conduction and valence band levels. **(b)** Surface properties are learned from a combination of pristine bulk structure representation, miller index, and density of states information. **(c)** Local environments of atoms near a grain boundary versus atoms in the pristine bulk are compared to learn grain boundary properties. **Figure 4a** adapted from [102].
[105, 106] and site/antisite defects in AB intermetallics [107]. To develop an approach that can be used over a broad range of chemistries and defect types, Frey et al. formed a representation by considering relative differences in characteristics (atomic radii, electronegativity, etc.) of the defect structure compared to the pristine parent [21]. Furthermore, because reference bulk properties could be estimated using surrogate ML models, no DFT calculations were required for prediction of either formation energy or changes in electronic structure [21]. We also note that in some cases it may be judicious to design a model that does not change significantly in the presence of defects. For these cases, representations based on simulated diffraction patterns are resilient to site-based vacancies or displacements [108].
Like in defect engineering, machine learning for practical design of catalyst materials requires representations beyond the single unit cell. Design of catalysts with high activity crucially depends on interactions of reaction intermediates with materials surfaces based on the Sabatier principle, which argues that activity is greatest when intermediates are bound neither too weakly nor too strongly [109]. From a computational perspective, determining adsorption energies involves searches over possible adsorption active sites, surface facets, and surface rearrangements, leading to a combinatorial space that can be infeasible to exhaustively cover with DFT. Single dimension descriptors based on electronic structure have been established that can predict binding strengths and provide insight on tuning catalyst compositions, such as metal d-band center for metals [110] and oxygen 2p-band center for metal oxides [111]. Additional geometric approaches include describing the coordination of the active site (generalized coordination number in metals, adjusted generalized coordination number in metal oxides) [112]. Based on the success of these simple descriptors, machine learning models have been developed to learn binding energy using the density of states and geometric descriptors of the pristine bulk structure as features (**Figure 4b**) [113].
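As an example of how simple such electronic-structure descriptors can be, the sketch below evaluates the d-band center as the first moment of a d-projected density of states relative to the Fermi level. The Gaussian model DOS, the energy window, and the cutoff are placeholder assumptions for illustration, not data for a particular metal.

```python
import numpy as np

def d_band_center(energies, dos_d, e_fermi=0.0, e_max=2.0):
    """First moment of the d-projected DOS relative to the Fermi level,
    evaluated up to a cutoff above E_F (uniform energy grid assumed)."""
    mask = energies <= e_max
    e, rho = energies[mask] - e_fermi, dos_d[mask]
    return np.sum(e * rho) / np.sum(rho)

# placeholder d-DOS: a Gaussian band centered roughly 1.5 eV below the Fermi level
E = np.linspace(-10.0, 5.0, 1500)
rho_d = np.exp(-0.5 * ((E + 1.5) / 1.2) ** 2)
print(f"d-band center: {d_band_center(E, rho_d):+.2f} eV")
```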
However, these structural and electronic descriptors are often not generalizable across chemistries [110, 114], limiting the systems over which they can be applied and motivating the development of more sophisticated machine learning techniques. To reduce the burden on high-throughput DFT calculations, active learning with surrogate models using information from pure metals and active-site coordination has been used to identify alloy and adsorbate pairs that have the highest likelihood of producing near-optimal binding energies [115]. Furthermore, when sufficient data (\(>10,000\) examples) is available, modifications of graph-convolutional models have also predicted binding energies with high accuracy even in datasets with up to 37 elements, enabling discovery without detailed mechanistic knowledge [114]. To generalize these results, the release of Open Catalyst 2020 and its related competitions [6, 9] has provided both over one million DFT energies for training new models and a benchmark through which new approaches can be evaluated [75]. While significant advancements have been made, state-of-the-art models still exhibit high errors for particular adsorbates and non-metallic surface elements, constraining the chemistry over which effective screening can be conducted [75]. Furthermore, the complexity of the design space relevant for ML models grows considerably when accounting for interactions between adsorbates and different surface facets [116].
Beyond atomistic interactions, the mechanical and thermal behavior of materials can be significantly modulated by processing conditions and the resulting microstructure. Greater knowledge of local distortions introduced at varying grain boundary incident angles would give computational materials scientists a more complete understanding of how experimentally chosen chemistries and synthesis parameters will translate into device performance. Strategies to quantify characteristics of grain boundary geometry have included reducing computational requirements by identifying the most promising configurations with virtual screening [117], estimating grain boundary free volume as a function of temperature and bulk composition [118], treating the microstructure as a graph of nodes connected across grain boundaries [119, 54], and predicting the energetics, and hence feasibility, of solute segregation [120]. While the previous approaches did not include features based on the constituent atoms and were only benchmarked on systems with up to three elements, recent work has demonstrated that
the excess energy of the grain boundary relative to the bulk can be approximated across compositions with five variables defining its orientation and the bond lengths within the grain boundary (**Figure 4c**)[121].
Further research has tried to map local grain boundary structure to function. Algorithmic approaches to grain boundary structure classification have been developed (see for example VoroTop [122]), but such approaches typically rely on expert users and do not provide a continuous representation that can smoothly interpolate between structures [123]. To eliminate these challenges, Rosenbrock et al. proposed computing SOAP descriptors for all atoms in the grain boundary, clustering vectors into classes, and identifying grain boundaries through their local environment classes. The representation was not only predictive of grain boundary energy, temperature-dependent mobility, and shear coupling but also provided interpretable effects of particular structures within the grain boundary [124]. A related approach computed SOAP vectors relative to the bulk structure when analyzing thermal conductivity [125]. Representations based on radial and angular structure functions can also quantify the mobility of atoms within a grain boundary [126]. When combined, advancing models for grain boundary stability as well as structure to property relationships opens the door for functional design of grain boundaries.
## 6 Transferable Information Between Representations
Applications of machine learning to materials science are limited by the scope of compositions and structures over which algorithms can maintain sufficient accuracy. Thus, building large-scale, diverse datasets is the most robust strategy to ensure trained models can capture the relevant phenomena. However, in most contexts, materials scientists are confronted with sparsely distributed examples. Ideally, models can be trained to be generalizable and exhibit strong performance across chemistries and configurations even with few to no data points in a given domain. In order to achieve this, representations and architectures must be chosen such that models can learn to extrapolate beyond the space observed in the training set. Effective choices often rely on inherent natural laws or chemical features that are shared between the training set and extrapolated domain such as physics constraints [127, 128], the geometric [129, 130] and electronic [131, 132] structure of local environments, and positions of elements in the periodic table [133, 134]. For example, Li et al. were able to predict adsorption energies on high entropy alloy surfaces after training on transition metal data by using the coordination number and electronic properties of neighbors at the active site [129]. While significant advancements have been made in the field, extrapolation of machine learning models across materials spaces typically requires specialized research methods and is not always feasible.
Likewise, it is not always practical for a materials scientist to improve model generality by just collecting more data. In computational settings, some properties can only be reliably estimated with more expensive, higher levels of theory, and for experimentalists, synthetic and characterization challenges can restrict throughput. The deep learning approaches that have demonstrated exceptional performance over a wide range of test cases discussed in this review can require at least \(\sim 10^{3}\) training points, putting them seemingly out of the realm of possibility for many research projects. Instead, predictive modeling may fall back on identifying relationships between a set of human-engineered descriptors and target properties.
Alternatively, the hidden, intermediate layers of deep neural networks can be conceptualized as a learned vector representation of the input data. While this representation is not directly interpretable, it must still contain physical and chemical information related to the prediction task, which downstream layers of the network utilize to generate model outputs. Transfer learning leverages these learned representations from task A and uses them in the modeling of task B. Critically, task A can be chosen to be one for which a large number of data points are accessible (e.g. predicting all DFT formation
energies in the Materials Project), and task B can be of limited size (e.g. predicting experimental heats of formation of a narrow class of materials). In principle, if task A and task B share an underlying physical basis (the stability of the material), the features learned when modeling task A may be more informationally-rich than a human-designed representation [135]. With this more effective starting point, subsequent models for task B can reach high accuracy with relatively few new examples.
The most straightforward methods to implement transfer learning in the materials science community follow a common procedure: (1) train a neural network model to predict a related property (task A) for which \(>O(1,000)\) data points are available (pretraining), (2) fix parameters of the network up to a chosen depth \(d\) (freezing), and (3) given the new dataset for task B, _either_ retrain the remaining layers, where parameters can be initialized randomly or from the task A model (finetuning), _or_ treat the output of the model at depth \(d\) as an input representation to another ML algorithm (feature extraction) [136, 137]. The robustness of this approach has been demonstrated across model classes including those using composition only (ElemNet [135, 137], ROOST [95]), crystal graphs (CGCNN) [138], and equivariant convolutions (GemNet) [139]. Furthermore, applications of task B range from experimental data [135, 95] to DFT-calculated surface adsorption energies [139].
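A minimal PyTorch-style sketch of steps (1)-(3) is shown below. The network architecture, layer sizes, freezing depth, and optimizer settings are illustrative assumptions; the referenced works use their own model classes and training procedures.

```python
import torch
import torch.nn as nn

# (1) pretraining: a small illustrative network for task A (e.g. DFT formation energies)
model = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),   # early layers: general-purpose representation
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 1),                # task-A head
)
# ... assume `model` has been trained on the large task-A dataset here ...

# (2) freezing: fix all parameters up to a chosen depth d
d = 4                                 # keep the first two Linear+ReLU blocks fixed
for layer in list(model.children())[:d]:
    for p in layer.parameters():
        p.requires_grad = False

# (3a) finetuning: swap in a fresh task-B head and retrain only the unfrozen part
model[4] = nn.Linear(256, 1)
optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4)

# (3b) feature extraction: activations at depth d become a fixed input representation
feature_extractor = nn.Sequential(*list(model.children())[:d])
with torch.no_grad():
    z = feature_extractor(torch.randn(8, 128))   # representation for 8 task-B samples
```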
The sizes of the datasets for task A and task B will determine the effectiveness of a transfer learning approach in two ways. First, the quality and robustness of the representation learned for task A will increase as the number of observed examples (the size of dataset A) increases. Secondly, as the size of dataset B decreases, data becomes too sparse for a ML model to learn a reliable representation alone and prior information from the solution to task A can provide an increasingly useful method to interpolate between the few known points. Therefore, transfer learning typically exhibits the greatest boosts in performance when task A has orders of magnitude more data than task B [135, 138].
In addition, the quality of information sharing through transfer learning depends on the physical relationship between task A and task B. Intuitively, the representation from task A provides a better guide for task B if the tasks are closely related. For example, Kolluru et al. demonstrated that transfer learning from models trained on the Open Catalyst Dataset [6] exhibited significantly better performance when applied to adsorption of new species than energies of less-related small molecules [139]. While it is difficult to choose the optimal task A for a given task B a priori, shotgun transfer learning [136] has demonstrated that the best pairing can be chosen experimentally by empirically validating a large pool of possible candidates and selecting top performers.
The depth \(d\) from which features should be extracted from task A to form a representation can also be task dependent. Kolluru et al. provided evidence that to achieve optimal performance more layers of the network should be allowed to be retrained in step (3) as the connection between task A and task B becomes more distant [139]. Gupta et al. arrived at a similar conclusion that the early layers of deep neural networks learned more general representations and performed better in cross-property transfer learning [137]. Inspired by this observation that representations at different neural network layers contain information with varying specificity to a particular prediction task, representations for transfer learning that combine activations from multiple depths have been proposed [139, 140].
When tasks are sufficiently different, freezing neural network weights may not be the optimal strategy and instead representations for task B can include predictions for task A as descriptors. For instance, Cubuk et al. observed that structural information was critical to predict Li conductivity but was only available for a small set of compositions for which crystal structures were determined. By training a separate surrogate model to predict structural descriptors from composition and using those approximations in subsequent Li conductivity models, the feasible screening domain was expanded by orders of magnitude [141]. Similarly, Greenman et al. [142] used \(O(10,000)\) TD-DFT calculations to train a graph neural network whose estimates could be used as an additional descriptor for a model predicting experimental peaks in absorption spectra. Representations have also been sourced from the output of generative models. Kong et al. trained a Generative Adversarial Network (GAN) to sample electronic density of states (DOS) given a particular material composition. Predictions of
absorption spectra of a particular composition were improved by concatenating stoichiometric data with the average DOS sampled from the generative model [143].
## 7 Generative Models for Inverse Design
While, in principle, machine learning methods can significantly reduce the time required to compute materials properties, and material scientists can employ these models to screen for a set of target systems by rapidly estimating the stability and performance, the space of feasible materials precludes a naive global optimization strategy in most cases. Generative models including Variational Autoencoders (VAE) [144, 1], Generative Adversarial Networks (GAN) [145, 146], and diffusion models [147, 148] can be trained to sample from a target distribution and have proved to be capable strategies for optimization in high-dimensional molecular spaces [1, 149]. While some lessons can be drawn from the efforts of researchers in the computational chemistry community, generative models face unique challenges for proposing crystals [150, 151]. First, the diversity of atomic species increases substantially when compared with small organic molecules. In addition, given a composition, a properly defined crystal structure requires both the positions of the atoms within the unit cell as well as the lattice vectors and angles that determine the system's periodicity. This definition is not unique, and the same material can be described after rotations or translations of atomic coordinates as well as integer scaling of the original unit cell. Lastly, many state-of-the-art materials for catalysis (e.g. zeolites, metal organic frameworks) can have unit cells including \(>100\) atoms, increasing the dimensionality of the optimization problem [150, 151].
One attempt to partially address the challenges of generative modeling for solid materials design is a voxel representation [150], in which unit cells are divided into volume elements and models are built using techniques from computer vision. Hoffman et al. represented unit cells using a density field that could be further segmented into atomic species and were able to generate crystals with realistic atomic spacings. However, atoms could be mistakenly decoded into other species with nearby atomic
Figure 5: Approaches for crystal structure generative models. **(Left)** Initial models based on voxel representations defined positions of atoms by discretizing space into finite volume elements but were not applied generally over the space of crystal structures [23, 22, 24, 25]. **(Center)** Restricting the generation process to be invariant to permutation, translation, and rotations, through an appropriately constrained periodic decoder (PGNN\({}_{Dec}\)) results in sampling structures exhibiting more diversity and stability. **(Right)** When features of the material can be assumed, such as a finite number of possible topologies connecting substructures, the dimensionality of the problem can be substantially reduced and samples over larger unit cell materials can be generated. Figures on left, center, and right are adapted from [23], [152], and [153], respectively.
number and most of the generated structures could not be stably optimized with a DFT calculation [22]. Alternate approaches could obtain more convincing results, but over a confined region of material space [154]. iMatgen (**Figure 5a**) invertibly mapped all unit cells into a cube with Gaussian-smeared atomic density and trained a VAE coupled with a surrogate energy prediction. The model was able to rediscover stable structures but was constrained over the space of vanadium oxides [23]. A similar approach constructed a separate voxel representation for each element and employed a GAN trained alongside an energy constraint to explore the phases of Bi-Se [155]. In order to resolve some of Hoffman et al.'s limitations, Court et al. [24] reduced segmentation errors by augmenting the representation with a matrix describing the occupation (0,1) of each voxel and a matrix recording the atomic number of occupied voxels. Their model was able to propose new materials that exhibited chemical diversity and could be further optimized with DFT but restricted analysis to cubic systems. Likewise, compositions of halide perovskites with optimized band gaps could be proposed using a voxelized representation of a fixed perovskite prototype [25].
Voxel representations can be relaxed to continuous coordinates in order to develop methods that are more comprehensively applicable over crystal space. Kim et al. represented materials using a record of the unit cell as well as a point cloud of fractional coordinates of each element. The approach proposed lower energy structures than iMatgen for V-O binaries and was also applicable over more diverse chemical spaces (Mg-Mn-O ternaries) [156]. Another representation including atomic positions along with elemental properties could be leveraged for inverse design over spaces that vary in both composition and lattice structure. In a test case, the model successfully generated new materials with negative formation energy and promising thermoelectric power factor [154]. While these models have demonstrated improvements in performance, they lack the translational, rotational, and scale invariances of real materials and are restricted to sampling particular materials classes [156, 152].
Recently, alternatives that account for these symmetries have been proposed. Fung et al. proposed a generative model for rotationally and translationally invariant atom-centered symmetry functions (ACSF) from which target structures could be reconstructed [157]. Crystal Diffusion VAEs (**Figure 5b**) leveraged periodic graphs and SE(3) equivariant message-passing layers to encode and decode their representation in a translationally and rotationally invariant way [152]. They also proposed a two step generation process during which they first predicted the crystal lattice from a latent vector and subsequently sampled the composition and atomic positions through Langevin dynamics. Furthermore, they established well-defined benchmark tasks and demonstrated that for inverse design their method was more flexible than voxel models with respect to crystal system and more accurate than point cloud representations at identifying crystals with low formation energy.
Scaling solid-state generative modeling techniques to unit cells with \(O(10^{4})\) atoms would enable inverse design of porous materials that are impossible to explore exhaustively but demonstrate exceptional technological relevance. Currently, due to the high number of degrees of freedom, sampling from these spaces requires imposing physical constraints in the modeling process. Such restrictions can be implemented as post-processing steps or integrated into the model representation. ZeoGAN [158] generated positions of oxygen and silicon atoms in a 32x32x32 grid to propose new zeolites. While some of the atomic positions proposed directly from their model violated conventional geometric rules, they could obtain feasible structures by filtering out divergent compositions and repairing bond connectivity through the insertion or deletion of atoms. Alternatively, Yao et al. designed geometric constraints directly into the generative model by representing Metal Organic Frameworks (MOFs) by their edges, metal/organic vertices, and distinct topologies (RFcodes) (**Figure 5c**) [153]. Because this representation is invertible, all RFcodes correspond to structurally possible MOFs. By training a VAE to encode and decode this RFcode representation, they demonstrated the ability to interpolate between structures and optimize properties. In general, future research should balance more stable structure generation against the possible discovery of new motifs and topologies.
Discussion
In this review, we have introduced strategies for designing representations for machine learning in the context of challenges encountered by materials scientists. We discussed local and global structural features as well as representations learned from atomic-scale data in large repositories. We noted additional research that extends beyond idealized crystals to include the effects of defects, surfaces, and microstructure. Furthermore, we acknowledged that in practice the availability of data both in quality and quantity can be limited. We described methods to mitigate this including developing models based on compositional descriptors alone or leveraging information from representations built for related tasks through transfer learning. Finally, we analyzed how generative models have improved by incorporating symmetries and domain knowledge. As data-based methods have become increasingly essential for materials design, optimal machine learning techniques will play a crucial role in the success of research programs. The previous sections demonstrate that the choice of representation will be among these pivotal factors and that novel approaches can open the door to new modes of discovery. Motivated by these observations, we conclude by summarizing open problems with the potential to have high impact on the field of materials design.
### Trade-offs of Local and Global Structural Descriptors
Local structural descriptors including SOAP [12] have become reliable metrics to compare environments with a specific cutoff radius, and when properties can be defined through short-range interactions, have demonstrated strong predictive performance. Characterizing systems based on local environments allows models to extrapolate to cases where global representations may vary substantially (e.g. an extended supercell of a crystal structure) [14] and enables highly-scalable methods of computation that can extend the practical limit of simulations to much larger systems [159]. However, Unke et al. note that the required complexity of the representation can grow quickly when modeling systems with many distinct elements and the quality of ML predictions will be sensitive to the selected hyperparameters, such as the characteristic distances and angles in atom-centered symmetry functions [160]. Furthermore, it is unclear if these high quality results extend to materials characteristics that depend strongly on long-range physics or periodicity of the crystal. On the other hand, recent global descriptors [40] can more explicitly model these phenomena, but have not exhibited the same generality across space groups and system sizes. Strategies exploring appropriate combinations of local and long-range features [161] have the potential to break through these trade-offs to provide more universal models for material property prediction.
### Prediction from Unrelaxed Crystal Prototypes
If relaxed structures are required to form representations, the space over which candidates can be screened is limited to those materials for which optimized geometries are known. Impressively, recent work [162, 163] has shown that ML force-fields, even simple models with relatively high errors, can be used to optimize structures and obtain converged results that are lower in energy than those obtained using VASP [164]. Their benchmarking on the OC20 [6] dataset and lower accuracy requirements suggest that the approach could be generalizable across a wide class of material systems and thus significantly expand the availability of structural descriptors. Similarly, Chen et al. demonstrated that a variant of MEGNet could perform high fidelity relaxations of unseen materials with diverse chemistries and that leveraging the resulting structures could improve downstream ML predictions of energy when compared with unrelaxed inputs [165]. The strong performance of these approaches and their potential to significantly increase the scale and effectiveness of computational screening motivates high-value research questions concerning the scale of data sets required for training, the generalizability over material classes, and the applicability to prediction tasks beyond stability.
### Applicability of Compositional Descriptors
Compositional descriptors are typically readily available as tabulated values, but even state-of-the-art models do not perform as well as the best structural approaches. However, there is some evidence that the scale of improvement when including structural information is property dependent. System energies can be conceptualized as a sum of site energies that are highly dependent on the local environment, and graph neural networks provide significantly more robust predictions of materials stability [84]. On the other hand, for properties dependent on global features such as phonons (vibrations) or electronic band structure (band gap) the relative improvement may not be as large [99, 166, 167]. Identifying common trends connecting tasks for which this difference is the least significant would provide more intuition on which scenarios compositional models are most appropriate. Furthermore, in some modeling situations, structural information is available but only over a small fraction of the dataset. To maximize the value of this data, more general strategies involving transfer learning [141] or combining separate composition and structural models [85] should be developed.
### Extensions of Generative Models
Additional symmetry considerations and the implementation of diffusion-based architectures led to generative models that improved significantly over previous voxel approaches. While this strategy is a promising direction for small unit cells, efforts pertaining to other parameters critical to material performance including microstructure [168], dimensionality [169] and surfaces [170] should also be pursued. In addition, research groups have side-stepped some of the challenges of materials generation by designing approaches that only sample material stoichiometry [171]. While this strategy limits the full characterization of new materials through a purely computational pipeline, there may be cases where they are sufficient to propose promising regions for experimental analysis.
## Disclosure Statement
The authors are not aware of any affiliations, memberships, funding, or financial holdings that might be perceived as affecting the objectivity of this review.
## Acknowledgments
JD was involved in the writing of all sections. AT and MX collaborated on the writing and designed the figure for Atomistic Structure section, JK collaborated on the writing and designed the figure for the Periodic Graph section, JL collaborated on the writing and designed the figure for the Defects, Surfaces, and Grain Boundaries section. JP provided valuable insights for the organization and content of the article. RGB selected the topic and focus of the review, contributed to the central themes and context, and supervised the project. All authors participated in discussions and the reviewing of the final article. The authors would like to thank Anna Bloom for editorial contributions.
The authors acknowledge financial support from the Advanced Research Projects Agency-Energy (ARPA-E), US Department of Energy under award number DE-AR0001220. JD, MX and ART thank the National Defense Science and Engineering Graduate Fellowship, the National Science Scholarship from Agency for Science, Technology and Research, and Asahi Glass Company, respectively, for financial support. RGB thanks the Jeffrey Cheah Chair in Engineering. |
2303.17192 | Sublinear Convergence Rates of Extragradient-Type Methods: A Survey on
Classical and Recent Developments | The extragradient (EG), introduced by G. M. Korpelevich in 1976, is a
well-known method to approximate solutions of saddle-point problems and their
extensions such as variational inequalities and monotone inclusions. Over the
years, numerous variants of EG have been proposed and studied in the
literature. Recently, these methods have gained popularity due to new
applications in machine learning and robust optimization. In this work, we
survey the latest developments in the EG method and its variants for
approximating solutions of nonlinear equations and inclusions, with a focus on
the monotonicity and co-hypomonotonicity settings. We provide a unified
convergence analysis for different classes of algorithms, with an emphasis on
sublinear best-iterate and last-iterate convergence rates. We also discuss
recent accelerated variants of EG based on both Halpern fixed-point iteration
and Nesterov's accelerated techniques. Our approach uses simple arguments and
basic mathematical tools to make the proofs as elementary as possible, while
maintaining generality to cover a broad range of problems. | Quoc Tran-Dinh | 2023-03-30T07:04:22Z | http://arxiv.org/abs/2303.17192v1 | Sublinear Convergence Rates of Extragradient-Type Methods: A Survey on Classical and Recent Developments
###### Abstract
The extragradient (EG), introduced by G. M. Korpelevich in 1976, is a well-known method to approximate solutions of saddle-point problems and their extensions such as variational inequalities and monotone inclusions. Over the years, numerous variants of EG have been proposed and studied in the literature. Recently, these methods have gained popularity due to new applications in machine learning and robust optimization. In this work, we survey the latest developments in the EG method and its variants for approximating solutions of nonlinear equations and inclusions, with a focus on the monotonicity and co-hypomonotonicity settings. We provide a unified convergence analysis for different classes of algorithms, with an emphasis on sublinear best-iterate and last-iterate convergence rates. We also discuss recent accelerated variants of EG based on both Halpern fixed-point iteration and Nesterov's accelerated techniques. Our approach uses simple arguments and basic mathematical tools to make the proofs as elementary as possible, while maintaining generality to cover a broad range of problems.
## 1 Introduction
The _generalized equation_ (also called the _[non]linear inclusion_) provides a unified template to model various problems in computational mathematics and related fields such as the optimality condition of optimization problems (in both unconstrained and constrained settings), minimax optimization, variational inequality, complementarity, two-person game, and fixed-point problems, see, e.g., [11, 24, 50, 112, 116, 118, 120]. Theory and numerical methods for this equation and its special cases have been extensively studied for many decades, see, e.g., the following monographs and the references quoted therein [11, 50, 94, 119]. At the same time, several applications of this mathematical tool in operations research, economics, uncertainty quantification, and transportations have been investigated [14, 52, 61, 50, 72]. In the last few years, there has been a surge of research in minimax problems due to new applications in machine learning and robust optimization, especially in generative adversarial networks (GANs), adversarial training, and distributionally robust optimization, see, e.g., [4, 14, 55, 76, 84, 114] as a few examples. Minimax problems have also found new applications in online learning and reinforcement learning, among many others, see, e.g., [4, 9, 15, 55, 67, 76, 78, 84, 114, 139]. Such prominent applications have motivated the research in minimax optimization and variational inequality problems (VIPs). On the one hand, classical algorithms such as gradient descent-ascent, extragradient, and primal-dual methods have been revisited, improved, and extended. On the other hand, new variants such as accelerated extragradient and accelerated operator splitting schemes have also been developed and equipped with rigorous convergence guarantees and practical performance evaluation. This new development motivates us to write this survey paper, with the focus on sublinear convergence rate analysis.
**Problem statements.** Since there is a vast amount of literature on the generalized equation, we will only present the recent developments on sublinear convergence rates of the **extragradient (EG)** method and its
variants for approximating the solutions of the following _generalized equation_ (also known as a [composite] _nonlinear inclusion_) and its specific cases:
\[\text{Find }x^{\star}\in\text{dom}(\Phi)\text{ such that:}\quad 0\in\Phi x^{\star} \equiv Fx^{\star}+Tx^{\star},\] (NI)
where \(F:\mathbb{R}^{p}\to\mathbb{R}^{p}\) is a single-valued operator, \(T:\mathbb{R}^{p}\rightrightarrows 2^{\mathbb{R}^{p}}\) is a set-valued (or multivalued) mapping from \(\mathbb{R}^{p}\) to \(2^{\mathbb{R}^{p}}\) (the set of all subsets of \(\mathbb{R}^{p}\)), \(\Phi:=F+T\), and \(\text{dom}(\Phi):=\text{dom}(F)\cap\text{dom}(T)\) is the domain of \(\Phi\), which is the intersection of the domains of \(F\) and \(T\). In this paper, we focus on the finite-dimensional Euclidean spaces \(\mathbb{R}^{p}\) and \(\mathbb{R}^{n}\) for ease of presentation. However, it is worth noting that most of the results presented in this paper can be extended to Hilbert spaces, as demonstrated in the existing literature.
**Special cases.** If \(F=0\), then (NI) reduces to a _generalized equation_ or a _[non]linear inclusion_\(0\in Tx^{\star}\). Alternatively, if \(T=0\), then (NI) reduces to a _[non]linear equation_:
\[\text{Find }x^{\star}\in\text{dom}(F)\text{ such that:}\quad Fx^{\star}=0.\] (NE)
If \(T:=\partial g\), the subdifferential of a proper, closed, and convex function \(g:\mathbb{R}^{p}\to\mathbb{R}\cup\{+\infty\}\), then (NI) reduces a _mixed variational inequality problem_ (MVIP):
\[\text{Find }x^{\star}\in\text{dom}(\Phi)\text{ such that:}\quad\langle Fx^{ \star},x-x^{\star}\rangle+g(x)-g(x^{\star})\geq 0,\text{ for all }x\in\text{dom}(\Phi).\] (MVIP)
In particular, if \(T=\mathcal{N}_{\mathcal{X}}\), the normal cone of a nonempty, closed, and convex set \(\mathcal{X}\) in \(\mathbb{R}^{p}\) (i.e. \(g=\delta_{\mathcal{X}}\), the indicator of \(\mathcal{X}\)), then (MVIP) reduces the classical (Stampacchia) _variational inequality problem_ (VIP):
\[\text{Find }x^{\star}\in\mathcal{X}\text{ such that:}\quad\langle Fx^{\star},x-x^{ \star}\rangle\geq 0,\text{ for all }x\in\mathcal{X}.\] (VIP)
While (VIP) can be viewed as a primal VIP (or a strong VIP), its dual (or weak) form can be written as
\[\text{Find }x^{\star}\in\mathcal{X}\text{ such that:}\quad\langle Fx,x-x^{ \star}\rangle\geq 0,\text{ for all }x\in\mathcal{X},\] (DVIP)
which is known as Minty's variational inequality problem. If \(F\) is monotone (see the definition in Section 2) then both problems (VIP) and (DVIP) are equivalent, i.e. their solution sets are identical [50, 72]. One important special case of (NI) or (VIP) is the optimality condition of minimax problems of the form:
\[\min_{u\in\mathbb{R}^{m}}\max_{v\in\mathbb{R}^{n}}\Big{\{}\mathcal{L}(u,v):= \varphi(u)+\mathcal{H}(u,v)-\psi(v)\Big{\}}, \tag{1}\]
where \(\varphi:\mathbb{R}^{m}\to\mathbb{R}\cup\{+\infty\}\) and \(\psi:\mathbb{R}^{n}\to\mathbb{R}\cup\{+\infty\}\) are often proper, closed, and convex functions, and \(\mathcal{H}:\mathbb{R}^{m}\times\mathbb{R}^{n}\to\mathbb{R}\) is a bifunction, often assumed to be differentiable, but not necessarily convex-concave. If we denote \(x:=[u,v]\) as the concatenation of \(u\) and \(v\), and define \(T:=[\partial\varphi,\partial\psi]\) and \(F:=[\nabla_{u}\mathcal{H}(u,v),-\nabla_{v}\mathcal{H}(u,v)]\), then the optimality condition of (1) is exactly captured by (NI).
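As a simple illustration of this reduction (a standard example added here for concreteness rather than taken from a specific reference above), consider (1) with \(\varphi=\psi=0\) and the bilinear coupling \(\mathcal{H}(u,v):=\langle Au,v\rangle\) for some matrix \(A\in\mathbb{R}^{n\times m}\). Then \(T=0\) and

\[Fx=[\nabla_{u}\mathcal{H}(u,v),-\nabla_{v}\mathcal{H}(u,v)]=[A^{\top}v,-Au],\]

so (NI) reduces to the linear instance of (NE). Since \(\langle Fx-F\hat{x},x-\hat{x}\rangle=0\) for all \(x,\hat{x}\), this \(F\) is monotone (but not strongly monotone) and Lipschitz continuous with constant \(\|A\|\). This bilinear setting is the prototypical example in which the plain gradient descent-ascent iteration fails to converge, while the extragradient-type methods surveyed in this paper do.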
**Related work.** Extensive research has been conducted in the literature to investigate the existence of solutions and theoretical properties of (NI) and its special cases. This research has been conducted under various assumptions of monotonicity and extensions, including quasi-monotone, pseudo-monotone, and weakly monotone notions. Relevant literature references on this topic include [11, 50, 72, 99, 144]. Moreover, solution methods for (NI) and its special cases have been well-developed, particularly in the context of monotonicity and related extensions such as quasi-monotone, pseudo-monotone, or star-monotone notion. In addition, nonmonotone instances of (NI) have also received extensive attention in the literature, with many theoretical results and algorithms focusing on local properties. Additional information can be found in references such as [12, 18, 17, 35, 44, 109, 115, 119].
Existing solution methods for (NI) and its special cases often rely on a fundamental assumption: _maximal monotonicity_ of \(F\) and \(T\), or of \(\Phi\) to guarantee global convergence. These methods generally generalize existing optimization algorithms such as gradient, proximal-point, Newton, and interior-point schemes to (NI) and its special cases [35, 50, 51, 92, 107, 118, 133, 137], while leveraging the splitting structure of (NI) to use individual operators defined on \(F\) and \(T\). This approach leads to a class of splitting algorithms for solving (NI) such as forward-backward splitting (FBS) and Douglas-Rachford (DRS) splitting schemes, as
seen in [11, 36, 41, 46, 80, 81]. Alternatively, other approaches rely on primal-dual, dual averaging, and mirror descent techniques, with notable works including [31, 100, 104]. These methods have also been further studied in many recent works such as [32, 33, 37, 39, 49, 63, 105, 131, 134, 145].
When it comes to convergence analysis for gradient-based/forward methods, there is a fundamental challenge for the generalized equation (NI) because an objective function, which plays a central role in guaranteeing convergence for optimization problems, does not exist. This creates a significant challenge, particularly in nonmonotone settings. Additionally, unlike convex functions where strong properties such as coerciveness and cyclic monotonicity hold for their [sub]gradients beyond monotonicity, this is not the case for general monotone and Lipschitz continuous operators. This lack of a strong property results in gradient-based (or forward) methods being non-convergent, which limits their practicality, see, e.g., [50]. To address this issue, the extragradient (EG) method was introduced by G. M. Korpelevich in 1976 [74] and also by A. S. Antipin in [3]. This method performs two sequential gradient steps at each iteration, making it twice as expensive as the standard gradient method, but converges under only the monotonicity and the Lipschitz continuity of \(F\). Since then, this method has been extended and modified in different directions to reduce its per-iteration complexity, including in certain nonmonotone settings, see, e.g., [1, 28, 29, 66, 69, 87, 89, 90, 97, 98, 113, 122, 123, 124, 135, 136]. Among these variants of EG, the past-extragradient scheme in [113] and Tseng's forward-backward-forward splitting method in [136] are the most notable ones. However, the results discussed here are only applicable to the monotone setting of (NI) and its special cases. Additionally, most of the convergence results discussed are asymptotic, leading to sublinear "best-iterate" convergence rates of the residual norm associated with (NI). Under stronger assumptions such as "strong monotonicity", linear convergence rates can be achieved. Such types of convergence guarantees have been widely studied in the literature and are beyond the scope of this paper, see, e.g., [11, 50, 72].
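To make the two-step EG update concrete, the following minimal sketch (added for illustration; the test operator, step size, and iteration budget are arbitrary choices, not taken from the references above) applies the scheme to a small monotone, Lipschitz continuous operator arising from a bilinear saddle-point problem. The step size \(\eta=0.5/L\) is one choice respecting the classical restriction \(\eta<1/L\).

```python
import numpy as np

def extragradient(F, x0, eta, iters):
    """Extragradient for Fx = 0: a trial (forward) step followed by an update
    that evaluates F at the trial point."""
    x = x0.copy()
    for _ in range(iters):
        y = x - eta * F(x)        # first (extrapolation) step
        x = x - eta * F(y)        # second step, using F evaluated at y
    return x

# monotone (skew-symmetric) test operator from a bilinear saddle problem:
# F([u, v]) = [A^T v, -A u], whose unique zero is the origin here
A = np.array([[2.0, 1.0], [0.0, 1.5]])
F = lambda x: np.concatenate([A.T @ x[2:], -A @ x[:2]])
L = np.linalg.norm(A, 2)          # Lipschitz constant of F (spectral norm of A)

x = extragradient(F, x0=np.ones(4), eta=0.5 / L, iters=2000)
print(np.linalg.norm(F(x)))       # residual norm ||Fx_k|| decays toward zero
```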
Motivated by recent applications in machine learning and robust optimization, such as Generative Adversarial Networks (GANs), adversarial training, distributionally robust optimization, reinforcement learning, and online learning, several methods for solving minimax problems have become critically important and attractive. This is particularly true in nonconvex-nonconcave, large-scale, and stochastic settings, as evidenced in works such as [4, 9, 14, 15, 55, 67, 76, 78, 84, 114]. Several researchers have proposed and revisited EG and its variants, including [16, 38, 43, 111]. A notable work is due to [43], where the authors proposed an EG-plus (EG+) variant of EG, capable of handling nonmonotone instances of (NE), known as weak-Minty solutions. In [111], this method was further extended to (NI), while [16, 83] modified EG+ for Popov's methods, as well as optimistic gradient variants.
In contrast to classical methods, there has been a significant focus on developing accelerated methods for solving (NI) and its special cases under both monotone and co-hypomonotone structures. Early works in this area relied on dual averaging and mirror descent techniques such as those proposed in [37, 100, 104], which require the monotonicity or specific assumptions. Attouch et al [8] proposed accelerated proximal-point methods for solving (NI) under the maximal monotonicity of \(\Phi\). Since then, numerous works have followed up and explored Nesterov's acceleration-type methods guided by dynamical systems, utilizing momentum and correction terms for solving (NI) under monotone assumptions, as demonstrated in works such as [5, 21, 22, 70, 86, 85]. Accelerated methods based on Halpern's fixed-point iteration [60] have also gained popularity. Although initially developed to approximate a fixed-point of a nonexpansive operator, this method can be applied to solve (NE), (VIP), and (NI) under monotonicity. In [77], it was shown that Halpern's fixed-point iteration can achieve \(\mathcal{O}\big{(}1/k\big{)}\) last-iterate convergence rates using a specific choice of parameters, where \(k\) is the iteration counter. The authors in [42] further exploited this approach to solve monotone VIPs of the form (VIP). Yoon and Ryu extended Halpern's fixed-point iteration idea to EG methods to solve (NE) without the co-coerciveness assumption on \(F\) in their pioneering work [143]. Lee and Kim [75] proposed a similar algorithm for solving (NE) under the co-hypomonotonicity, further advancing [143] without sacrificing the \(\mathcal{O}\big{(}1/k\big{)}\)-convergence rates. In [132], the authors proposed a Halpern-type variant for the past-extragradient method in [113] by adopting the technique from [143]. Recently, [25, 27] extended [143] and [132] to (VIP) and (NI) under either monotonicity or co-hypomonotonicity assumptions. New convergence analysis for these schemes can also be found in [130]. Note that both Halpern's fixed-point iteration and Nesterov's accelerated schemes for solving (NE) and (NI) are related to each other, as shown in [129] for different methods, including EG. Nesterov's accelerated variants of EG can also be found in [130].
**What does this paper survey?** Our main objective is to provide a comprehensive survey of both classical and recent sublinear convergence rate results for EG and its variants for solving (NI) and its special cases, as summarized in Table 1. Specifically, we survey the following results.
* First, we present both the \(\mathcal{O}\big{(}1/\sqrt{k}\big{)}\)-best-iterate and last-iterate sublinear convergence rate results of EG and its variants for solving (NE). The best-iterate rate is classical for the monotone case, but has recently been obtained under a weak Minty solution condition, see [43, 111] for EG and [16, 83] for past-EG in the non-composite case, i.e., for solving (NE). The last-iterate convergence rates for EG and past-EG have been recently proven in [54, 56] for the monotone equation (NE) and in [83] for the co-hypomonotone case (see also [57]). In this paper, we provide a new and unified proof that covers the results in [54, 56, 83]. Our results are stated in a single theorem.
* Second, we review the \(\mathcal{O}\big{(}1/\sqrt{k}\big{)}\)-sublinear best-iterate convergence rates for EG and past-EG (also known as Popov's method) to solve (NI) under the monotonicity of \(\Phi\). We unify the proof of both methods in a single theorem and extend it to cover monotone inclusions of the form (NI) instead of VIP or MVIP as in the literature. We also prove \(\mathcal{O}\big{(}1/\sqrt{k}\big{)}\) last-iterate convergence for the class of EG-type schemes for solving (NI) under the monotonicity of \(F\) and the 3-cyclical monotonicity of \(T\) (in particular, for solving (MVIP)), which covers the results in [26] as special cases. Next, we discuss the \(\mathcal{O}\big{(}1/\sqrt{k}\big{)}\)-sublinear best-iterate convergence rates of the FBFS scheme and its variant: optimistic gradient under the weak-Minty solution notion, which was obtained in [83]. We again unify the proof in a single theorem, and our analysis is also different from [83].
* Third, we provide a new convergence analysis for both best-iterate and last-iterate rates of the reflected forward-backward splitting (RFBS) methods for solving (NI) under the monotonicity of \(F\) and \(T\). RFBS was proposed in [87] to solve (VIP) and was extended to solve (NI) in [30]. The best-iterate rates were proven in these works, and the last-iterate rate of RFBS for solving (VIP) has recently been proven in [27]. Our result here is more general and covers these works as special cases. In addition, we also review the best-iterate convergence rate of the golden ratio method in [88], but extend it to the case where \(T\) is 3-cyclically monotone, and extend the range of the golden-ratio parameter \(\omega\) to \(1<\omega<1+\sqrt{3}\) instead of fixing it at \(\omega:=\frac{1+\sqrt{5}}{2}\) as in [88].
* Fourth, we present a new analysis for the extra-anchored gradient (EAG) method to solve (NI), which covers the results in [25, 143] as special cases. Our result extends to 3-cyclically monotone operator \(T\).
* Fifth, we summarize the convergence results of EAG and past EAG (also called fast extragradient method [70]) for solving (NI) under both monotonicity and co-hypomonotonicity of \(\Phi\) from [130]. Note
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Methods & Assumptions & Add. Assumptions & Convergence Rates & Citations \\ \hline \multicolumn{5}{|c|}{For solving (NE)} \\ \hline
**EG/EG+/FBFS** & **wMs** & \(F\) is **chm** & \(\mathcal{O}\big{(}1/\sqrt{k}\big{)}\) best and last & [43, 54, 56, 83] \\ \hline
**PEG/OG/FRBS/RFBS/GR** & **wMs** & \(F\) is **chm** & \(\mathcal{O}\big{(}1/\sqrt{k}\big{)}\) best and last & [16, 83] \\ \hline
**EAG/FEG/AEG** & \(F\) is **chm** & None & \(\mathcal{O}\big{(}1/k\big{)}\) last-iterate & [143, 70, 129] \\ \hline
**PEAG/APEG** & \(F\) is **chm** & None & \(\mathcal{O}\big{(}1/k\big{)}\) last-iterate & [132, 129] \\ \hline \multicolumn{5}{|c|}{For solving (NI), (MVIP), and (VIP)} \\ \hline
**EG/EG+** & \(\Phi\) is **wMs** & \(F\) is **mono**, \(T\) is **3-cm** & \(\mathcal{O}\big{(}1/\sqrt{k}\big{)}\) best and last & [26] \\ \hline
**FBFS** & \(\Phi\) is **wMs** & None & \(\mathcal{O}\big{(}1/\sqrt{k}\big{)}\) best-iterate & [111, 83] \\ \hline
**OG/FRBS** & \(\Phi\) is **wMs** & None & \(\mathcal{O}\big{(}1/\sqrt{k}\big{)}\) best-iterate & [83] \\ \hline
**RFBS** & \(F\) is **mono** & \(T\) is **mono** & \(\mathcal{O}\big{(}1/\sqrt{k}\big{)}\) best and last & [30, 87] \\ \hline
**GR** & \(F\) is **mono** & \(T\) is 3-**cm** & \(\mathcal{O}\big{(}1/\sqrt{k}\big{)}\) best-iterate & [88] \\ \hline
**EAG/FEG/AEG** & \(\Phi\) is **chm** & None & \(\mathcal{O}\big{(}1/k\big{)}\) last-iterate & [25, 130] \\ \hline
**PEAG/APEG** & \(\Phi\) is **chm** & None & \(\mathcal{O}\big{(}1/k\big{)}\) last-iterate & [27, 130] \\ \hline \end{tabular}
\end{table}
Table 1: Summary of the results surveyed in this paper and the most related references. Abbreviations: **wMs** = weak-Minty solution, **chm** = co-hypomonotone, **mono** = monotone, **3-cm** = 3-cyclically monotone; "best" and "last" refer to best-iterate and last-iterate rates, respectively.
that EAG and past-EAG were first proposed in [143] and [132], respectively, to solve monotone (NE). EAG was extended to the co-hypomonotone case of (NE) in [70]. Recently, [25, 27] extended EAG and past-EAG to solve (NI) under the co-hypomonotonicity of \(\Phi\).
* Finally, we review two Nesterov's accelerated extragradient methods presented in [129, 130] for solving (NI) under the co-hypomonotonicity of \(\Phi\), which achieve the same last-iterate convergence rates as EAG. Note that Nesterov's accelerated extragradient methods have recently been studied in [21] for solving (NE) via a dynamical system point of view.
**What is not covered in this paper?** The literature on EG and its variants is extensive, and it is not feasible for us to cover it in detail in this paper. First, there are various classical and recent variants of EG and past-EG, such as those discussed in [28, 29, 66, 69, 89, 90, 123, 124, 122, 138], that are not included in this paper. These methods are essentially rooted in EG and aim at improving its per-iteration complexity, theoretical aspects, or practical performance. Second, we do not review results from methods such as gradient/forward, forward-backward splitting, proximal-point and its variants, inertial, dual averaging, mirror descent, and projective methods. The majority of these methods are not immediately derived from EG, including recent developments such as those in [19, 20, 23, 35, 34, 33, 47, 48, 79]. Third, we do not cover stochastic and randomized methods, including recent works such as [2, 40, 53, 64, 68, 65, 110, 108, 128]. Fourth, we do not present adaptive stepsize/parameter and linesearch variants of EG-type methods. Fifth, we do not discuss the continuous-time view of EG-type methods via dynamical systems or ordinary differential equations (ODEs), which is an emerging research topic in recent years. Finally, we also do not cover specific applications to minimax problems and other concrete applications.
**Paper outline.** This paper is organized as follows. Section 2 reviews basic concepts and related results used in this paper. Section 3 covers the convergence rate results of EG and its variants for solving (NE). Section 4 discusses the convergence rate results of EG and past-EG for solving (NI). Section 5 provides a new convergence rate analysis of FBFS and OG for solving (NI). Section 6 presents a new analysis for both the reflected-forward-backward splitting and golden ratio methods for solving (NI). Section 7 focuses on the extra-anchored gradient method and its variants for solving (NI). Finally, Section 8 presents Nesterov's accelerated variants of EG for solving (NI). We conclude this paper with some final remarks.
## 2 Background and Preliminary Results
To prepare for our survey, we will briefly review certain basic concepts and properties of monotone operators and their extensions, as well as resolvents and other related mathematical tools. These concepts and properties are well-known and can be found in several monographs, including [11, 24, 50, 112, 116, 117, 118, 120].
### Basic concepts, monotonicity, and Lipschitz continuity
We work with finite dimensional Euclidean spaces \(\mathbb{R}^{p}\) and \(\mathbb{R}^{n}\) equipped with the standard inner product \(\langle\cdot,\cdot\rangle\) and Euclidean norm \(\|\cdot\|\). For a set-valued or multivalued mapping \(T:\mathbb{R}^{p}\rightrightarrows2^{\mathbb{R}^{p}}\), \(\operatorname{dom}(T)=\{x\in\mathbb{R}^{p}:Tx\neq\emptyset\}\) denotes its domain, \(\operatorname{ran}(T):=\bigcup_{x\in\operatorname{dom}(T)}Tx\) is its range, and \(\operatorname{gra}(T)=\{(x,y)\in\mathbb{R}^{p}\times\mathbb{R}^{p}:y\in Tx\}\) stands for its graph, where \(2^{\mathbb{R}^{p}}\) is the set of all subsets of \(\mathbb{R}^{p}\). The inverse of \(T\) is defined as \(T^{-1}y:=\{x\in\mathbb{R}^{p}:y\in Tx\}\). For a proper, closed, and convex function \(f:\mathbb{R}^{p}\rightarrow\mathbb{R}\cup\{+\infty\}\), \(\operatorname{dom}(f):=\{x\in\mathbb{R}^{p}:f(x)<+\infty\}\) denotes the domain of \(f\), \(\partial f\) denotes the subdifferential of \(f\), and \(\nabla f\) stands for the gradient of \(f\). For any function \(f\), which can be nonconvex, we call \(f^{*}(y):=\sup_{x\in\mathbb{R}^{p}}\{\langle x,y\rangle-f(x)\}\) the Fenchel conjugate of \(f\).
**(a) Monotonicity.** For a single-valued or multivalued mapping \(T:\mathbb{R}^{p}\rightrightarrows2^{\mathbb{R}^{p}}\) and \(\mu\in\mathbb{R}\), we say that \(T\) is \(\mu\)-monotone if \(\langle u-v,x-y\rangle\geq\mu\|x-y\|^{2}\) for all \((x,u),(y,v)\in\operatorname{gra}(T)\). If \(T\) is single-valued, then this condition reduces to \(\langle Tx-Ty,x-y\rangle\geq\mu\|x-y\|^{2}\) for all \(x,y\in\operatorname{dom}(T)\). If \(\mu=0\), then we say that \(T\) is monotone. If \(\mu>0\), then \(T\) is \(\mu\)-strongly monotone (or sometimes called coercive), where \(\mu>0\) is called a strong monotonicity parameter. If \(\mu<0\), then we say that \(T\) is weakly monotone. It is also called \(-\mu\)-hypomonotone, see [12]. If \(T=\partial g\), the subdifferential of a proper and convex function, then \(T\) is also monotone. If \(g\) is \(\mu\)-strongly convex with \(\mu>0\), then \(T=\partial g\) is also \(\mu\)-strongly monotone.
Alternatively, if there exists \(\rho\in\mathbb{R}\) such that \(\langle u-v,x-y\rangle\geq\rho\|u-v\|^{2}\) for all \((x,u),(y,v)\in\operatorname{gra}(T)\), then we say that \(T\) is \(\rho\)-comonotone. If \(\rho=0\), then this condition reduces to the monotonicity of \(T\). If \(\rho>0\), then
\(T\) is called \(\rho\)-cocoercive. In particular, if \(\rho=1\), then \(T\) is firmly nonexpansive. If \(\rho<0\), then \(T\) is called \(-\rho\)-cohypomonotone, see, e.g., [12, 35]. For a mapping \(T\), we say that \(T\) is pseudo-monotone if \(\langle u,y-x\rangle\geq 0\) implies \(\langle v,y-x\rangle\geq 0\) for all \((x,u),(y,v)\in\operatorname{gra}(T)\). Clearly, if \(T\) is monotone, then it is also pseudo-monotone, but the converse is not true in general.
We say that \(T\) is maximally \(\mu\)-monotone if \(\operatorname{gra}(T)\) is not properly contained in the graph of any other \(\mu\)-monotone operator. If \(\mu=0\), then we say that \(T\) is maximally monotone. Note that if \(T\) is maximally monotone, then \(\eta T\) is also maximally monotone for any \(\eta>0\), and if \(T\) and \(U\) are maximally monotone and \(\operatorname{dom}(T)\cap\operatorname{int}\left(\operatorname{dom}(U)\right)\neq\emptyset\), then \(T+U\) is maximally monotone. For a proper, closed, and convex function \(f:\mathbb{R}^{p}\to\mathbb{R}\cup\{+\infty\}\), the subdifferential \(\partial f\) of \(f\) is maximally monotone.
For a given mapping \(T\) such that \(\operatorname{zer}(T):=\{x\in\operatorname{dom}(T):0\in Tx\}\neq\emptyset\), we say that \(T\) is star-monotone (respectively, \(\mu\)-star-monotone or \(\rho\)-star-comonotone (see [82])) if for some \(x^{\star}\in\operatorname{zer}(T)\), we have \(\langle u,x-x^{\star}\rangle\geq 0\) (respectively, \(\langle u,x-x^{\star}\rangle\geq\mu\|x-x^{\star}\|^{2}\) or \(\langle u,x-x^{\star}\rangle\geq\rho\|u\|^{2}\)) for all \((x,u)\in\operatorname{gra}(T)\). Clearly, if \(T\) is monotone (respectively, \(\mu\)-monotone or \(\rho\)-co-monotone), then it is also star-monotone (respectively, \(\mu\)-star-monotone or \(\rho\)-star-comonotone). However, the reverse statement does not hold in general.
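For a linear operator \(Fx=Ax\), these notions can be checked directly from the symmetric part of \(A\). The following minimal Python/NumPy sketch (our own illustration; the function names are ours and not from any library) verifies monotonicity and \(\rho\)-comonotonicity numerically for the rotation operator, which is monotone but only co-hypomonotone, not co-coercive:

```python
import numpy as np

def monotonicity_modulus(A):
    """For linear Fx = Ax: the largest mu with <Ax, x> >= mu*||x||^2,
    i.e. the smallest eigenvalue of the symmetric part of A."""
    return np.linalg.eigvalsh(0.5 * (A + A.T)).min()

def is_comonotone(A, rho):
    """Check rho-comonotonicity of Fx = Ax: <A(x-y), x-y> >= rho*||A(x-y)||^2,
    which is equivalent to sym(A) - rho * A^T A being positive semidefinite."""
    S = 0.5 * (A + A.T)
    return np.linalg.eigvalsh(S - rho * (A.T @ A)).min() >= -1e-12

A = np.array([[0.0, 1.0], [-1.0, 0.0]])                # rotation operator
print(monotonicity_modulus(A))                         # 0.0: monotone, not strongly monotone
print(is_comonotone(A, 0.1), is_comonotone(A, -0.1))   # False, True (0.1-co-hypomonotone)
```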
**(b) Cyclic monotonicity.** We also say that a mapping \(T\) is \(m\)-cyclically monotone (\(m\geq 2\)) if \(\sum_{i=1}^{m}\langle u^{i},x^{i}-x^{i+1}\rangle\geq 0\) for all \((x^{i},u^{i})\in\operatorname{gra}(T)\) with \(x^{1}=x^{m+1}\) (see [11]). We say that \(T\) is cyclically monotone if it is \(m\)-cyclically monotone for every \(m\geq 2\). If \(T\) is \(m\)-cyclically monotone, then it is also \(\hat{m}\)-cyclically monotone for any \(2\leq\hat{m}\leq m\). Since a \(2\)-cyclically monotone operator \(T\) is monotone, any \(m\)-cyclically monotone operator \(T\) is \(2\)-cyclically monotone, and thus is also monotone. An \(m\)-cyclically monotone operator \(T\) is called maximally \(m\)-cyclically monotone if \(\operatorname{gra}(T)\) is not properly contained in the graph of any other \(m\)-cyclically monotone operator. As proven in [11, Theorem 22.18], \(T\) is maximally cyclically monotone iff \(T=\partial f\), the subdifferential of a proper, closed, and convex function \(f\). However, there exist maximally \(m\)-cyclically monotone operators (e.g., rotation linear operators) that are not the subdifferential \(\partial f\) of a proper, closed, and convex function \(f\), see, e.g., [11]. Furthermore, as indicated in [10, Example 2.16], there exist maximally \(3\)-cyclically monotone operators that are not maximally monotone.
**(c) Lipschitz continuity and contraction.** A single-valued or multivalued mapping \(T\) is said to be \(L\)-Lipschitz continuous if \(\sup\left\{\|u-v\|:u\in Tx,\ v\in Ty\right\}\leq L\|x-y\|\) for all \(x,y\in\operatorname{dom}(T)\), where \(L\geq 0\) is a Lipschitz constant. If \(T\) is single-valued, then this condition becomes \(\|Tx-Ty\|\leq L\|x-y\|\) for all \(x,y\in\operatorname{dom}(T)\). If \(L=1\), then we say that \(T\) is nonexpansive, while if \(L\in[0,1)\), then we say that \(T\) is \(L\)-contractive, and \(L\) is its contraction factor. If \(T\) is \(\rho\)-co-coercive with \(\rho>0\), then \(T\) is also \(L\)-Lipschitz continuous with the Lipschitz constant \(L:=\frac{1}{\rho}\). However, the reverse statement is not true in general. For a continuously differentiable function \(f:\mathbb{R}^{p}\to\mathbb{R}\), we say that \(f\) is \(L\)-smooth if its gradient \(\nabla f\) is \(L\)-Lipschitz continuous on \(\operatorname{dom}(f)\). If \(f\) is convex and \(L\)-smooth, then \(\nabla f\) is \(\frac{1}{L}\)-co-coercive and vice versa, see, e.g., [102].
**(d) Normal cone.** Given a nonempty, closed, and convex set \(\mathcal{X}\) in \(\mathbb{R}^{p}\), the normal cone of \(\mathcal{X}\) is defined as \(\mathcal{N}_{\mathcal{X}}(x):=\{w\in\mathbb{R}^{p}:\langle w,x-y\rangle\geq 0,\ \forall y\in \mathcal{X}\}\) if \(x\in\mathcal{X}\) and \(\mathcal{N}_{\mathcal{X}}(x)=\emptyset\), otherwise. The dual cone \(\mathcal{N}_{\mathcal{X}}^{*}(x):=\{w\in\mathbb{R}^{p}:\langle w,u\rangle\geq 0, \ \forall u\in\mathcal{N}_{\mathcal{X}}(x)\}\) of \(\mathcal{N}_{\mathcal{X}}(x)\) at \(x\) is \(\mathcal{T}_{\mathcal{X}}(x)\), the tangent cone of \(\mathcal{X}\) at \(x\). If \(f:=\delta_{\mathcal{X}}\), the indicator of \(\mathcal{X}\), and \(f^{*}\) is its Fenchel conjugate, then \(\partial f=\mathcal{N}_{\mathcal{X}}\), and \(\partial f^{*}=\mathcal{T}_{\mathcal{X}}\).
**(e) Resolvent and proximal operators.** The operator \(J_{T}x:=\{y\in\mathbb{R}^{p}:x\in y+Ty\}\) is called the resolvent of \(T\), denoted by \(J_{T}x=(\mathbb{I}+T)^{-1}x\), where \(\mathbb{I}\) is the identity mapping. If \(T\) is \(\rho\)-monotone with \(\rho>-1\), then evaluating \(J_{T}\) requires solving a strongly monotone inclusion \(0\in y-x+Ty\). If \(T=\partial f\), the subdifferential of a proper, closed, and convex function \(f\), then \(J_{T}x\) reduces to the proximal operator of \(f\), denoted by \(\operatorname{prox}_{f}\), which can be computed as \(\operatorname{prox}_{f}(x):=\operatorname{arg}\min_{y}\{f(y)+(1/2)\|y-x\|^{2}\}\). In particular, if \(T=\mathcal{N}_{\mathcal{X}}\), the normal cone of a closed and convex set \(\mathcal{X}\), then \(J_{T}\) is the projection onto \(\mathcal{X}\), denoted by \(\operatorname{proj}_{\mathcal{X}}\). If \(T\) is maximally monotone, then \(\operatorname{ran}(\mathbb{I}+T)=\mathbb{R}^{p}\) (by Minty's theorem) and \(J_{T}\) is firmly nonexpansive (and thus nonexpansive).
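As a concrete illustration, the short Python/NumPy sketch below (our own, purely illustrative) evaluates two resolvents that appear repeatedly later: the proximal operator of \(\lambda\|\cdot\|_{1}\) (soft-thresholding) and the resolvent of the normal cone of a box, which is simply the projection onto that box:

```python
import numpy as np

def prox_l1(x, lam):
    # prox_{lam*||.||_1}(x) = J_{lam*T}(x) with T = subdifferential of ||.||_1 (soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def proj_box(x, lo, hi):
    # J_T(x) with T = normal cone of the box X = [lo, hi]^p, i.e. the projection onto X.
    return np.clip(x, lo, hi)

x = np.array([1.5, -0.2, 0.7])
print(prox_l1(x, 0.5))          # [ 1.  -0.   0.2]
print(proj_box(x, 0.0, 1.0))    # [1.  0.  0.7]
```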
### Best-iterate and last-iterate convergence rates
The results presented in this paper are related to two types of sublinear convergence rates: the best-iterate and the last-iterate convergence rates. To elaborate on these concepts, we assume that \(D\) is a given metric (e.g., \(\|Fx^{k}\|^{2}\) or \(e(x^{k})^{2}\) defined by (2) below) defined on an iterate sequence \(\{x^{k}\}\) generated by the underlying
algorithm for solving (NI) or its special cases. For any \(k\geq 0\) and a given order \(\alpha>0\), if
\[\min_{0\leq l\leq k}D(x^{l})\leq\frac{1}{k+1}\sum_{l=0}^{k}D(x^{l})=\mathcal{O} \left(\frac{1}{k^{\alpha}}\right),\]
then we say that \(\left\{x^{k}\right\}\) has a \(\mathcal{O}\left(1/k^{\alpha}\right)\) best-iterate convergence rate. In this case, we can take \(\hat{x}^{k}:=x^{k_{\min}}\) with \(k_{\min}:=\arg\min_{0\leq l\leq k}D(x^{l})\) as the "best" output of our algorithm. If we instead have \(D(x^{k})=\mathcal{O}\left(\frac{1}{k^{\alpha}}\right)\) with \(x^{k}\) being the \(k\)-th iterate, then we say that \(\left\{x^{k}\right\}\) has a \(\mathcal{O}\left(1/k^{\alpha}\right)\) last-iterate convergence rate. We emphasize that convergence in the metric \(D\) of \(\left\{x^{k}\right\}\) does not generally imply the convergence of \(\left\{x^{k}\right\}\) itself, especially when the rate of convergence is characterized in different metrics.
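In practice, the two notions correspond to two different ways of reporting the output of a run. A minimal sketch (our own helper; the update map `step` and the metric `D` are assumed to be supplied by the user) that records both quantities:

```python
def run_with_rates(step, D, x0, iters):
    """Iterate x^{k+1} = step(x^k) and return (last iterate, best iterate),
    where "best" minimizes the metric D (e.g., D(x) = ||Fx||**2) over the run."""
    x, best_x, best_val = x0, x0, D(x0)
    for _ in range(iters):
        x = step(x)
        val = D(x)
        if val < best_val:            # keep x^{k_min} with k_min = argmin_{0<=l<=k} D(x^l)
            best_x, best_val = x, val
    return x, best_x
```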
### Exact solutions and approximate solutions
There are different metrics to characterize exact and approximate solutions of (NI). The most obvious one is the residual norm of \(\Phi\), which is defined as
\[e(x):=\min_{\xi\in Tx}\|Fx+\xi\|,\quad x\in\mathrm{dom}(\Phi). \tag{2}\]
Clearly, if \(e(x^{\star})=0\) for some \(x^{\star}\in\mathrm{dom}(\Phi)\), then \(x^{\star}\in\mathrm{zer}(\Phi)\), a solution of (NI). If \(T=0\), then \(e(x)=\|Fx\|\). Moreover, if \(e(\hat{x})\leq\epsilon\) for a given tolerance \(\epsilon>0\), then \(\hat{x}\) can be considered as an \(\epsilon\)-approximate solution of (NI). The algorithms presented in this paper use this metric as one means to characterize approximate solutions.
Other metrics often used for monotone (VIP), a special case of (NI), are gap functions and restricted gap functions [50, 72, 104], which are respectively defined as
\[\mathcal{G}(x):=\max_{y\in\mathcal{X}}\langle Fy,y-x\rangle\quad\text{and} \quad\mathcal{G}_{\mathbb{B}}(x):=\max_{y\in\mathcal{X}\cap\mathbb{B}}\langle F y,y-x\rangle, \tag{3}\]
where \(\mathbb{B}\) is a given nonempty, closed, and bounded convex set. Note that \(\mathcal{G}(x)\geq 0\) for all \(x\in\mathcal{X}\), and \(\mathcal{G}(x^{\star})=0\) iff \(x^{\star}\) is a solution of (VIP). Therefore, to characterize an \(\epsilon\)-approximate solution \(\tilde{x}\) of (VIP), we can impose \(\mathcal{G}(\tilde{x})\leq\epsilon\). For the restricted gap function \(\mathcal{G}_{\mathbb{B}}\), if \(x^{\star}\) is a solution of (VIP) and \(x^{\star}\in\mathbb{B}\), then \(\mathcal{G}_{\mathbb{B}}(x^{\star})=0\). Conversely, if \(\mathcal{G}_{\mathbb{B}}(x^{\star})=0\) and \(x^{\star}\in\mathrm{int}\left(\mathbb{B}\right)\), then \(x^{\star}\) is a solution of (VIP) in \(\mathbb{B}\) (see [104, Lemma 1]). For (DVIP), we can also define similar dual gap functions and restricted dual gap functions [104]. Gap functions have been widely used in the literature to characterize approximate solutions generated by many numerical methods for solving (VIP) or (DVIP), see, e.g., [33, 37, 50, 72, 100, 104].
If \(J_{\eta T}\) is well-defined and single-valued for some \(\eta>0\), and \(F\) is single-valued, then we can use the following forward-backward splitting residual operator:
\[G_{\eta\Phi}x:=\tfrac{1}{\eta}\left(x-J_{\eta T}(x-\eta Fx)\right), \tag{4}\]
to characterize solutions of (NI), where \(F\) is single-valued and \(J_{\eta T}\) is the resolvent of \(\eta T\) for any \(\eta>0\). It is clear that \(G_{\eta\Phi}x^{\star}=0\) iff \(x^{\star}\in\mathrm{zer}(\Phi)\). In addition, if \(J_{\eta T}\) is firmly nonexpansive, then we also have
\[\|G_{\eta\Phi}x\|\leq\|Fx+\xi\|,\quad(x,\xi)\in\mathrm{gra}(T). \tag{5}\]
Hence, for a given tolerance \(\epsilon>0\), if \(\|G_{\eta\Phi}\tilde{x}\|\leq\epsilon\), then we can say that \(\tilde{x}\) is an \(\epsilon\)-approximate solution of (NI). If \(T:=\mathcal{N}_{\mathcal{X}}\), i.e., (NI) reduces to (VIP), then, with \(\eta=1\), \(G_{\Phi}x\) reduces to the classical natural map \(\Pi_{F,\mathcal{X}}x=x-\mathrm{proj}_{\mathcal{X}}(x-Fx)\) of (VIP), and \(r_{n}(x):=\|G_{\Phi}x\|=\|\Pi_{F,\mathcal{X}}x\|\) is the corresponding natural residual at \(x\). From (5), we have \(r_{n}(x)\leq\|Fx+\xi\|\) for any \(\xi\in\mathcal{N}_{\mathcal{X}}(x)\).
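The forward-backward residual is straightforward to evaluate whenever the resolvent is available in closed form. Below is a minimal Python/NumPy sketch (our own; the toy operator and the box constraint are illustrative assumptions) for the (VIP) case, where \(G_{\eta\Phi}\) reduces to the natural map residual when \(\eta=1\):

```python
import numpy as np

def fb_residual(F, resolvent, x, eta):
    # G_{eta*Phi}(x) = (1/eta) * (x - J_{eta*T}(x - eta * F(x))), cf. (4).
    return (x - resolvent(x - eta * F(x))) / eta

# Toy monotone VIP: F(x) = A x + b with skew-symmetric A, X = [0, 1]^2.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
b = np.array([0.1, 0.1])
F = lambda z: A @ z + b
proj = lambda z: np.clip(z, 0.0, 1.0)   # J_{eta*T} = proj_X for T = normal cone of X

x = np.array([0.5, 0.5])
print(np.linalg.norm(fb_residual(F, proj, x, eta=1.0)))  # natural residual r_n(x)
```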
### Gradient/forward-type methods
Let us briefly recall the gradient/forward scheme for solving (NE) as follows. Starting from \(x^{0}\in\mathrm{dom}(F)\), at each iteration \(k\geq 0\), we update
\[x^{k+1}:=x^{k}-\eta Fx^{k},\] (FW)
where \(\eta>0\) is a given constant stepsize. If \(F\) is \(\rho\)-co-coercive and \(0<\eta<\rho\), then \(\{x^{k}\}\) converges to \(x^{\star}\in\mathrm{zer}(F)\) (see, e.g., [50]). Otherwise, if \(F\) is only monotone and \(L\)-Lipschitz continuous, then there exist examples (e.g., \(Fx=[x_{2},-x_{1}]\)) showing that (FW) is divergent for any choice of constant stepsize \(\eta\).
To solve (NI), we can instead apply the forward-backward splitting method as follows. Starting from \(x^{0}\in\mathrm{dom}(F)\), at each iteration \(k\geq 0\), we update
\[x^{k+1}:=J_{\eta T}(x^{k}-\eta Fx^{k}),\] (FBS)
where \(\eta>0\) is a given constant stepsize. Similar to (FW), if \(F\) is \(\rho\)-co-coercive and \(T\) is maximally monotone, then with \(\eta\in(0,\rho)\), \(\{x^{k}\}\) generated by (FBS) converges to \(x^{\star}\in\mathrm{zer}(\Phi)\). If \(F=\nabla f\), the gradient of a convex and \(L\)-smooth function \(f\), then \(F\) is co-coercive. However, imposing the co-coerciveness for a general mapping \(F\) is often restrictive. Hence, both (FW) and (FBS) are less practical.
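The divergence of (FW) on the rotation example, and the benefit of the extragradient idea surveyed in the next section, are easy to observe numerically. A minimal sketch (our own toy experiment, not taken from any of the cited references):

```python
import numpy as np

F = lambda x: np.array([x[1], -x[0]])   # monotone, 1-Lipschitz, not co-coercive
eta = 0.1
x_fw = x_eg = np.array([1.0, 0.0])

for _ in range(200):
    # forward/gradient step: the norm grows like (1 + eta^2)^{k/2}, hence divergence
    x_fw = x_fw - eta * F(x_fw)
    # extragradient step: extrapolate, then update; converges for eta < 1/L
    y = x_eg - eta * F(x_eg)
    x_eg = x_eg - eta * F(y)

print(np.linalg.norm(x_fw), np.linalg.norm(x_eg))  # FW norm > 1, EG norm < 1
```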
## 3 Extragradient-Type Methods For Nonlinear Equations
As we have discussed before, the extragradient method was originally proposed by G. M. Korpelevich in 1976 [73] and by A. S. Antipin around the same time [3] to tackle saddle-point problems. Since then, this method has been extensively studied in the literature, with numerous variants proposed (see, e.g., [50, 62, 71, 72, 91, 124, 126, 127, 142]). In recent years, the popularity of this method has increased further due to its effectiveness in solving minimax problems, including those in convex-concave and nonconvex-nonconcave settings, which are common in machine learning and robust optimization. In this section, we briefly survey both classical and recent works [43, 54, 56, 83] on the extragradient method, as well as some closely related variants with minor modifications. We unify the convergence analysis in one single theorem.
### The class of extragradient methods for nonlinear equations
The class of extragradient methods for solving (NE) we discuss in this section is presented as follows. Starting from an initial point \(x^{0}\in\mathrm{dom}(F)\), at each iteration \(k\geq 0\), we update
\[\left\{\begin{aligned} & y^{k}&:=x^{k}-\frac{\eta}{ \beta}u^{k},\\ & x^{k+1}&:=x^{k}-\eta Fy^{k},\end{aligned}\right.\] (EG)
where \(\eta>0\) is a given constant stepsize, \(\beta\in(0,1]\) is a scaling factor, and \(u^{k}\) has two options as follows.
* **Option 1.** If we set \(u^{k}:=Fx^{k}\), then we obtain the **extragradient** scheme [73] for (NE).
* **Option 2.** If we choose \(u^{k}:=Fy^{k-1}\), then we obtain the **past-extragradient** method, also called **Popov's method**[113], to solve (NE). This scheme is also known as an **optimistic gradient** method in the literature, see also [38, 93, 96].
For **Option 1** with \(u^{k}:=Fx^{k}\), if \(\beta=1\), then we obtain exactly the _classical extragradient method_[73] for solving (NE). If \(\beta<1\), then we recover the **extragradient-plus (EG+)** scheme from [43] for solving (NE). If we compute \(x^{k}=y^{k}+\frac{\eta}{\beta}Fx^{k}\) from the first line of (EG) and substitute it into the second line of (EG), then we get \(x^{k+1}=y^{k}-\eta(Fy^{k}-\frac{1}{\beta}Fx^{k})\). In this case, we obtain from (EG) a **forward-backward-forward splitting** variant of Tseng's method in [136] as follows:
\[\left\{\begin{aligned} & y^{k}&:=x^{k}-\frac{\eta}{ \beta}Fx^{k},\\ & x^{k+1}&:=y^{k}-\eta(Fy^{k}-\frac{1}{\beta}Fx^{k}). \end{aligned}\right.\] (FBFS)
Clearly, if \(\beta=1\), then we recover exactly **Tseng's method** for solving (NE).
For **Option 2** with \(u^{k}:=Fy^{k-1}\), we can show that it is equivalent to the following variants. First, we can rewrite (EG) as
\[\left\{\begin{aligned} & x^{k+1}&:=x^{k}-\eta Fy^{k}\\ & y^{k+1}&:=x^{k+1}-\frac{\eta}{\beta}Fy^{k}.\end{aligned}\right.\] (PEG)
This form shows us that (PEG) saves one evaluation \(Fx^{k}\) of \(F\) at each iteration compared to **Option 1**. If \(\beta=1\), then we obtain exactly the **Popov's method** in [113]. If we rotate the second line up and use \(\beta=1\) as \(y^{k}=x^{k}-\eta Fy^{k-1}\), then we get the **past-extragradient method**.
Now, under this choice of \(u^{k}\), from the first line of (EG), we have \(x^{k}=y^{k}+\frac{\eta}{\beta}u^{k}=y^{k}+\frac{\eta}{\beta}Fy^{k-1}\). Substituting this expression into the first line of (PEG), we get \(x^{k+1}=y^{k}-\eta Fy^{k}+\frac{\eta}{\beta}Fy^{k-1}\). Substituting this relation into the second line of (PEG), we can eliminate \(x^{k+1}\) to get the following variant:
\[y^{k+1}:=y^{k}-\frac{\eta}{\beta}\big{(}(1+\beta)Fy^{k}-Fy^{k-1}\big{)}.\] (FRBS)
This scheme can be considered as a simplified variant of the **forward-reflected-backward splitting** scheme in [89] for solving (NE) when we set \(\beta:=1\) as \(y^{k+1}:=y^{k}-\eta(2Fy^{k}-Fy^{k-1})\).
Alternatively, from (PEG), we have \(x^{k-1}-x^{k}=\eta Fy^{k-1}\) and \(\beta(x^{k}-y^{k})=\eta Fy^{k-1}\), leading to \(x^{k-1}-x^{k}=\beta(x^{k}-y^{k})\). Therefore, we get \(y^{k}=\frac{1}{\beta}((1+\beta)x^{k}-x^{k-1})\). Substituting this expression into the first line of (PEG), we can show that
\[x^{k+1}:=x^{k}-\eta F\big{(}\tfrac{1}{\beta}((1+\beta)x^{k}-x^{k-1})\big{)}.\] (RFB)
In particular, if \(\beta=1\), then we obtain \(x^{k+1}:=x^{k}-\eta F(2x^{k}-x^{k-1})\), which turns out to be the **reflected gradient** method in [87] or the **reflected forward-backward splitting** scheme in [30] for solving (NE).
Using the relation \(x^{k-1}-x^{k}=\beta(x^{k}-y^{k})\) above, we can compute that \(x^{k}=\frac{\beta}{1+\beta}y^{k}+\frac{1}{1+\beta}x^{k-1}=\frac{(\omega-1)}{ \omega}y^{k}+\frac{1}{\omega}x^{k-1}\), where \(\omega:=1+\beta\). Combining the two lines of (EG), we get \(y^{k+1}:=x^{k+1}-\frac{\eta}{\beta}Fy^{k}=x^{k}-\frac{\eta(1+\beta)}{\beta}Fy ^{k}\). Putting both expressions together, we get
\[\left\{\begin{aligned} x^{k}&:=\,\frac{(\omega-1)}{ \omega}y^{k}+\frac{1}{\omega}x^{k-1},\\ y^{k+1}&:=\,x^{k}-\frac{\eta(1+\beta)}{\beta}Fy^{k}. \end{aligned}\right.\] (GR)
This method is a simplified variant of the **golden-ratio** method in [88] for solving (NE). Overall, the template (EG) covers a class of EG algorithms with many common instances as discussed.
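To make the template concrete, the following Python/NumPy sketch (our own illustration of the update rules above, not a reference implementation) implements (EG) with both options; \(\beta=1\) recovers the classical EG and Popov's methods:

```python
import numpy as np

def eg_template(F, x0, eta, beta=1.0, option=1, iters=1000):
    """Unified (EG) template: option=1 gives EG/EG+, option=2 gives past-EG (Popov)."""
    x = np.asarray(x0, dtype=float)
    u = F(x)                              # u^0 := Fx^0 (equivalently Fy^{-1} with y^{-1} := x^0)
    for _ in range(iters):
        y = x - (eta / beta) * u          # extrapolation step
        Fy = F(y)
        x = x - eta * Fy                  # main step
        u = F(x) if option == 1 else Fy   # next u^k: Fx^k (Option 1) or Fy^{k-1} (Option 2)
    return x

# Example: monotone rotation field with zer(F) = {0}.
F = lambda z: np.array([z[1], -z[0]])
print(np.linalg.norm(F(eg_template(F, [1.0, 0.0], eta=0.2, option=1))))
print(np.linalg.norm(F(eg_template(F, [1.0, 0.0], eta=0.2, option=2))))
```

For this monotone example both runs drive \(\|Fx^{k}\|\) toward zero, consistent with the rates surveyed below.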
### Convergence analysis
The results presented in this section were obtained in [83], but here we provide a different proof and unify several methods in one. To analyze the convergence of (EG), we first prove the following lemmas.
**Lemma 3.1**.: _If \(\big{\{}(x^{k},y^{k})\big{\}}\) is generated by (EG), then for any \(\gamma>0\) and any \(\hat{x}\in\mathrm{dom}(F)\), we have_
\[\|x^{k+1}-\hat{x}\|^{2} \leq \|x^{k}-\hat{x}\|^{2}-\beta\|y^{k}-x^{k}\|^{2}+\tfrac{\eta^{2}}{ \gamma}\|Fy^{k}-u^{k}\|^{2}-2\eta\langle Fy^{k},y^{k}-\hat{x}\rangle \tag{6}\] \[-\,(\beta-\gamma)\|x^{k+1}-y^{k}\|^{2}-(1-\beta)\|x^{k+1}-x^{k}\| ^{2}.\]
Proof.: First, for any \(\hat{x}\in\mathrm{dom}(F)\), using \(x^{k+1}-x^{k}=-\eta Fy^{k}\) from the second line of (EG), we have
\[\|x^{k+1}-\hat{x}\|^{2} = \|x^{k}-\hat{x}\|^{2}+2\langle x^{k+1}-x^{k},x^{k+1}-\hat{x} \rangle-\|x^{k+1}-x^{k}\|^{2}\] \[= \|x^{k}-\hat{x}\|^{2}-2\eta\langle Fy^{k},x^{k+1}-\hat{x} \rangle-\|x^{k+1}-x^{k}\|^{2}.\]
Next, using \(\eta u^{k}=\beta(x^{k}-y^{k})\) from the first line of (EG), the Cauchy-Schwarz inequality, the identity \(2\langle x^{k+1}-y^{k},x^{k}-y^{k}\rangle=\|x^{k}-y^{k}\|^{2}+\|x^{k+1}-y^{k} \|^{2}-\|x^{k+1}-x^{k}\|^{2}\), and an elementary inequality \(2wz\leq\gamma w^{2}+\frac{z^{2}}{\gamma}\) for any \(\gamma>0\) and \(w,z\geq 0\), we can derive that
\[2\eta\langle Fy^{k},x^{k+1}-\hat{x}\rangle = 2\eta\langle Fy^{k},y^{k}-\hat{x}\rangle+2\eta\langle Fy^{k}-u^{k },x^{k+1}-y^{k}\rangle+2\eta\langle u^{k},x^{k+1}-y^{k}\rangle\] \[\geq 2\eta\langle Fy^{k},y^{k}-\hat{x}\rangle-2\eta\|Fy^{k}-u^{k}\| \|x^{k+1}-y^{k}\|+2\beta\langle x^{k+1}-y^{k},x^{k}-y^{k}\rangle\] \[\geq 2\eta\langle Fy^{k},y^{k}-\hat{x}\rangle-\tfrac{\eta^{2}}{\gamma }\|Fy^{k}-u^{k}\|^{2}-\gamma\|x^{k+1}-y^{k}\|^{2}\] \[+\,\beta\big{[}\|x^{k}-y^{k}\|^{2}+\|x^{k+1}-y^{k}\|^{2}-\|x^{k +1}-x^{k}\|^{2}\big{]}\] \[= 2\eta\langle Fy^{k},y^{k}-\hat{x}\rangle+\beta\|y^{k}-x^{k}\|^{ 2}-\tfrac{\eta^{2}}{\gamma}\|Fy^{k}-u^{k}\|^{2}\] \[+\,(\beta-\gamma)\|x^{k+1}-y^{k}\|^{2}-\beta\|x^{k+1}-x^{k}\|^{2}.\]
Finally, combining the last two expressions, we obtain (6).
**Lemma 3.2**.: _Let \(F\) be \(\rho\)-co-hypomonotone, i.e. there exists \(\rho\geq 0\) such that \(\langle Fx-Fy,x-y\rangle\geq-\rho\|Fx-Fy\|^{2}\) for all \(x,y\in\mathrm{dom}(F)\) and \(L\)-Lipschitz continuous. Let \(\big{\{}(x^{k},y^{k})\big{\}}\) be generated by (EG). Then, for any \(c>0\) and \(\omega>0\), we have_
\[\|Fx^{k+1}\|^{2} \leq \|Fx^{k}\|^{2}-\tfrac{[c\eta-2(1+c)\rho]}{c\eta}\|Fy^{k}-Fx^{k}\|^{2}+\tfrac{[\eta\omega+2(1+c)\rho]L^{2}\eta}{\beta^{2}}\|\beta Fy^{k}-u^{k}\|^{2} \tag{7}\] \[-\ (\omega-1)\|Fx^{k+1}-Fy^{k}\|^{2}.\]
Proof.: Since \(F\) is \(\rho\)-cohypomonotone, we have \(\langle Fx^{k+1}-Fx^{k},x^{k+1}-x^{k}\rangle+\rho\|Fx^{k+1}-Fx^{k}\|^{2}\geq 0\). Substituting \(x^{k+1}-x^{k}=-\eta Fy^{k}\) from the second line of (EG) into this inequality, we can show that
\[0 \leq 2\langle Fx^{k},Fy^{k}\rangle-2\langle Fx^{k+1},Fy^{k}\rangle+ \tfrac{2\rho}{\eta}\|Fx^{k+1}-Fx^{k}\|^{2}\] \[\leq \|Fx^{k}\|^{2}-\|Fy^{k}-Fx^{k}\|^{2}-\|Fx^{k+1}\|^{2}+\|Fx^{k+1}- Fy^{k}\|^{2}+\tfrac{2\rho}{\eta}\|Fx^{k+1}-Fx^{k}\|^{2}.\]
Now, by utilizing Young's inequality, the \(L\)-Lipschitz continuity of \(F\), and \(x^{k+1}-y^{k}=-\eta(Fy^{k}-\tfrac{1}{\beta}u^{k})\) from (EG), for any \(c>0\) and \(\omega\geq 1\), the last estimate leads to
\[\|Fx^{k+1}\|^{2} \leq \|Fx^{k}\|^{2}-\|Fy^{k}-Fx^{k}\|^{2}+\omega\|Fx^{k+1}-Fy^{k}\|^{2 }+\tfrac{2\rho}{\eta}\|Fx^{k+1}-Fx^{k}\|^{2}-(\omega-1)\|Fx^{k+1}-Fy^{k}\|^{2}\] \[\leq \|Fx^{k}\|^{2}-\tfrac{c\eta-2(1+c)\rho}{c\eta}\|Fy^{k}-Fx^{k}\|^ {2}+\tfrac{[\eta\omega+2(1+c)\rho]L^{2}}{\eta}\|x^{k+1}-y^{k}\|^{2}-(\omega-1 )\|Fx^{k+1}-Fy^{k}\|^{2}\] \[\leq \|Fx^{k}\|^{2}-\tfrac{c\eta-2(1+c)\rho}{c\eta}\|Fy^{k}-Fx^{k}\|^ {2}+[\eta\omega+2(1+c)\rho]\,L^{2}\eta\|Fy^{k}-\tfrac{1}{\beta}u^{k}\|^{2}\] \[-\ (\omega-1)\|Fx^{k+1}-Fy^{k}\|^{2},\]
which exactly proves (7).
Now, we are ready to establish both the best-iterate and the last-iterate convergence rates of (EG).
**Theorem 3.1**.: _Suppose that \(F\) in (NE) is \(L\)-Lipschitz continuous and \(\mathrm{zer}(F)\neq\emptyset\). Let \(\big{\{}(x^{k},y^{k})\big{\}}\) be generated by (EG) for solving (NE). Then, we have the following statements._
* _(**Extragradient method**) Let us choose_ \(u^{k}:=Fx^{k}\) _and assume that there exists_ \(\rho\geq 0\) _such that_ \(\langle Fx,x-x^{\star}\rangle\geq-\rho\|Fx\|^{2}\) _for all_ \(x\in\mathrm{dom}(F)\) _and a given_ \(x^{\star}\in\mathrm{zer}(F)\) _(this condition holds if, in particular,_ \(F\) _is_ \(\rho\)_-co-hypomonotone on_ \(\mathrm{dom}(F)\)_). Then, if_ \(L\rho\leq\frac{3\sqrt{2}-2}{12}\approx 0.1869\)_,_ \(\beta\in(0,1]\)_, and_ \(\eta\) _is chosen such that_ \[0\leq\tfrac{\beta[1-\sqrt{1-24L\rho(3L\rho+1)}]}{2L(3L\rho+1)}<\eta<\tfrac{ \beta[1+\sqrt{1-24L\rho(3L\rho+1)}]}{2L(3L\rho+1)}\leq\tfrac{\beta}{L},\] (8) _then we have_ \[\min_{0\leq l\leq k}\|Fx^{l}\|^{2}\leq\frac{1}{k+1}\sum_{l=0}^{k}\|Fx^{l}\|^{2 }\leq\frac{C_{\rho}\|x^{0}-x^{\star}\|^{2}}{k+1},\] (9) _where_ \(C_{\rho}:=\tfrac{\beta^{2}}{\eta[\eta\beta-6\beta^{2}\rho-(3L\rho+1)L\eta^{2 }]}>0\)_. Consequently,_ \(\{\|x^{k}-x^{\star}\|\}\) _is nonincreasing and_ \(\lim_{k\to\infty}\|x^{k}-y^{k}\|=\lim_{k\to\infty}\|Fx^{k}\|=\lim_{k\to\infty} \|Fy^{k}\|=0\)_. Moreover, we have_ \(\min_{0\leq l\leq k}\|Fx^{l}\|=\mathcal{O}\big{(}1/\sqrt{k}\big{)}\) _showing the_ \(\mathcal{O}\big{(}1/\sqrt{k}\big{)}\) _best-iterate convergence rate of_ \(\{x^{k}\}\)_._ _In particular, if_ \(\beta:=1\) _and_ \(F\) _is_ \(\rho\)_-co-hypomonotone on_ \(\mathrm{dom}(F)\) _such that_ \(L\rho\leq\frac{3\sqrt{2}-2}{12}\)_, then_ \[\|Fx^{k+1}\|^{2}\leq\|Fx^{k}\|^{2}-\psi\cdot\|Fy^{k}-Fx^{k}\|^{2}\quad\text{and} \quad\|Fx^{k}\|\leq\frac{\sqrt{C_{\rho}}\|x^{0}-x^{\star}\|}{\sqrt{k+1}},\] (10) _where_ \(\psi:=1-\tfrac{4\rho}{\eta}-L^{2}\eta(\eta+4\rho)>0\)_. Hence, we have_ \(\|Fx^{k}\|=\mathcal{O}\big{(}1/\sqrt{k}\big{)}\) _on the last-iterate_ \(x^{k}\)_._
* (**Past-extragradient method**) Let us choose_ \(u^{k}:=Fy^{k-1}\) _and_ \(y^{-1}:=x^{0}\) _and assume that there exists_ \(\rho\geq 0\) _such that_ \(\langle Fx,x-x^{\star}\rangle\geq-\rho\|Fx\|^{2}\) _for all_ \(x\in\mathrm{dom}(F)\) _and a given_ \(x^{\star}\in\mathrm{zer}(F)\) _(in particular, if_ \(F\) _is_ \(\rho\)_-co-hypomonotone on_ \(\mathrm{dom}(F)\)_). Then, for fixed_ \(\beta\in(0,1]\)_, if_ \(L\rho\leq\frac{\beta^{2}}{12}\) _and_ \(\eta\) _is chosen such that_ \[0\leq\tfrac{\beta-\sqrt{\beta^{2}-12L\rho}}{6L}<\eta<\tfrac{\beta+\sqrt{\beta^{2} -12L\rho}}{6L}\leq\tfrac{\beta}{3L},\] (11)
_then we have_
\[\min_{0\leq l\leq k}\|Fx^{l}\|^{2}\leq\frac{1}{k+1}\sum_{l=0}^{k}\left[\|Fx^{l} \|^{2}+\kappa\|x^{l}-y^{l-1}\|^{2}\right]\leq\frac{\hat{C}_{\rho}\|x^{0}-x^{ \star}\|^{2}}{k+1}, \tag{12}\]
_where \(\kappa:=\frac{(2L^{2}\eta^{2}+3)L}{(\beta\eta-3L\eta^{2}-4\rho)}>0\) and \(\hat{C}_{\rho}:=\frac{2L^{2}\eta^{2}+3}{(\beta\eta-3L\eta^{2}-4\rho)\eta}>0\). Consequently, we also have \(\lim_{k\to\infty}\|x^{k}-y^{k}\|=\lim_{k\to\infty}\|x^{k+1}-y^{k}\|=\lim_{k\to \infty}\|Fx^{k}\|=\lim_{k\to\infty}\|Fy^{k}\|=0\), and \(\min_{0\leq l\leq k}\|Fx^{l}\|=\mathcal{O}\big{(}1/\sqrt{k}\big{)}\) showing the \(\mathcal{O}\big{(}1/\sqrt{k}\big{)}\) best-iterate convergence rate of \(\{x^{k}\}\)._
_In particular, if_ \(\beta:=1\) _and_ \(F\) _is_ \(\rho\)_-co-hypomonotone on_ \(\mathrm{dom}(F)\) _such that_ \(12L\rho\leq 1\)_, then_
\[\|Fx^{k+1}\|^{2}+\hat{\kappa}\|Fx^{k+1}-Fy^{k}\|^{2}\leq\|Fx^{k}\|^{2}+\hat{ \kappa}\|Fx^{k}-Fy^{k-1}\|^{2}\quad\text{and}\quad\|Fx^{k}\|\leq\frac{\sqrt{M _{\rho}}\|x^{0}-x^{\star}\|}{\sqrt{k+1}}, \tag{13}\]
_where_ \(\hat{\kappa}:=\frac{2(\eta+4\rho)L^{2}\eta}{1-2L^{2}\eta^{2}}\) _and_ \(M_{\rho}:=\hat{C}_{\rho}\cdot\max\left\{\frac{L^{2}\hat{\kappa}}{\kappa},1\right\}\)_. Moreover, we have the last-iterate convergence rate as_ \(\|Fx^{k}\|=\mathcal{O}\big{(}1/\sqrt{k}\big{)}\)_._
Before proving Theorem 3.1, we give the following remarks.
**Remark 3.1**.: If \(\rho=0\) in Theorem 3.1, i.e. \(F\) is star-monotone (and in particular, monotone), then our condition on the stepsize \(\eta\) reduces to \(0<\eta<\frac{\beta}{L}\) for the extragradient method and \(0<\eta<\frac{\beta}{3L}\) for the past-extragradient method. These choices are standard and often seen in both methods. However, we have not yet optimized the choice of these parameters for (EG) stated in Theorem 3.1. Here, the stepsize \(\eta\) can be carefully chosen so that we can possibly enlarge the range of \(L\rho\) (see [57] as an example).
**Remark 3.2**.: The results in Theorem 3.1 were proven in [83], and then were revised in [57]. The last-iterate convergence rates were proven in previous works such as [54] for the monotone case but with an additional assumption. Note that the best-iterate rates for the monotone or the star-monotone case are classical, which can be found, e.g., in [50, 73]. The last-iterate convergence for the monotone case can be found in recent works such as [54, 56]. The best-iterate rates for the co-hypomonotone or the star-co-hypomonotone case can be found in [43], while the last-iterate convergence rates were recently proven in [83]. Nevertheless, in this survey, we provide a unified analysis for all of these variants of EG, which covers both the monotone and co-hypomonotone cases altogether. Our analysis is also different from [83].
**Remark 3.3**.: One can easily modify the proof of Theorem 3.1 to handle the star-strongly-monotone case of \(F\). Indeed, if \(F\) is \(\mu\)-star-strongly monotone, then we have \(\mu\|x^{k}-x^{\star}\|^{2}\leq\langle Fx^{k},x^{k}-x^{\star}\rangle\leq\|Fx^{ k}\|\|x^{k}-x^{\star}\|\). This leads to \(\|Fx^{k}\|^{2}\geq\mu^{2}\|x^{k}-x^{\star}\|^{2}\). Using this inequality, \(\hat{x}:=x^{\star}\), and \(u^{k}:=Fx^{k}\) into (6), we obtain \(\|x^{k+1}-x^{\star}\|^{2}\leq(1-\eta^{2}(1-L^{2}\eta^{2})\mu^{2})\|x^{k}-x^{ \star}\|^{2}\). Clearly, if we choose \(\eta\in\big{(}0,\frac{1}{L}\big{)}\), then \(\varphi:=1-\eta^{2}(1-L^{2}\eta^{2})\mu^{2}\in(0,1)\), and we obtain a linear convergence rate of \(\{\|x^{k}-x^{\star}\|^{2}\}\) with a contraction factor \(\varphi\). Note that since \(\mu\leq L\), we have \(\mu^{4}-4L^{2}\mu^{2}<0\). Hence, \(L^{2}\mu^{2}\eta^{4}-\mu^{2}\eta^{2}+1\geq 0\) always holds to guarantee that \(\varphi\in(0,1)\). Another proof for the monotone case can be found, e.g., in [50].
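As a quick numerical sanity check of Remark 3.3 (our own toy experiment, assuming a strongly monotone affine operator; not taken from the cited works), one can verify that EG contracts at least as fast as the factor \(\varphi\) derived above:

```python
import numpy as np

A = np.array([[1.0, 1.0], [-1.0, 1.0]])   # F x = A x is 1-strongly monotone (sym(A) = I)
L = np.linalg.norm(A, 2)                   # Lipschitz constant of F
mu, eta = 1.0, 0.5 / L                     # eta in (0, 1/L)
phi = 1.0 - eta**2 * (1.0 - L**2 * eta**2) * mu**2

x = np.array([1.0, 1.0])                   # x* = 0, so ||x^0 - x*||^2 = 2
for k in range(50):
    y = x - eta * (A @ x)                  # EG with u^k = Fx^k, beta = 1
    x = x - eta * (A @ y)
    assert np.dot(x, x) <= phi**(k + 1) * 2.0 + 1e-12   # linear rate from Remark 3.3
print("contraction factor phi =", phi, " final ||x||^2 =", np.dot(x, x))
```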
Proof of Theorem 3.1.: (a) **Extragradient method.** Since \(u^{k}:=Fx^{k}\), by the \(L\)-Lipschitz continuity of \(F\), we have \(\|Fy^{k}-u^{k}\|=\|Fy^{k}-Fx^{k}\|\leq L\|x^{k}-y^{k}\|\). Using this inequality, \(\hat{x}:=x^{\star}\in\mathrm{zer}(F)\), and \(\langle Fy^{k},y^{k}-x^{\star}\rangle\geq-\rho\|Fy^{k}\|^{2}\) into (6), we have
\[\|x^{k+1}-x^{\star}\|^{2} \leq \|x^{k}-x^{\star}\|^{2}-\left(\beta-\frac{L^{2}\eta^{2}}{\gamma} \right)\|y^{k}-x^{k}\|^{2}+2\eta\rho\|Fy^{k}\|^{2} \tag{14}\] \[- (\beta-\gamma)\|x^{k+1}-y^{k}\|^{2}-(1-\beta)\|x^{k+1}-x^{k}\|^{2}.\]
Now, using the first line of (EG) with \(u^{k}:=Fx^{k}\) as \(\beta(x^{k}-y^{k})=\eta Fx^{k}\), the \(L\)-Lipschitz continuity of \(F\), and Young's inequality, we have
\[\|Fy^{k}\|^{2} \leq \tfrac{3}{2}\|Fy^{k}-Fx^{k}\|^{2}+3\|Fx^{k}\|^{2}\leq\tfrac{3(2 \beta^{2}+L^{2}\eta^{2})}{2\eta^{2}}\|x^{k}-y^{k}\|^{2}. \tag{15}\]
Substituting this inequality, \(\gamma:=L\eta\), and \(\beta^{2}\|y^{k}-x^{k}\|^{2}=\eta^{2}\|Fx^{k}\|^{2}\) into (14), we obtain
\[\|x^{k+1}-x^{\star}\|^{2} \leq \|x^{k}-x^{\star}\|^{2}-(\beta-L\eta)\|x^{k+1}-y^{k}\|^{2}-(1-\beta)\|x^{k+1}-x^{k}\|^{2} \tag{16}\] \[- \frac{\eta[\eta\beta-6\beta^{2}\rho-(3L\rho+1)L\eta^{2}]}{\beta^{2}}\|Fx^{k}\|^{2}.\]
Let us choose \(\eta>0\) such that \(\eta\beta-6\beta^{2}\rho-(3L\rho+1)L\eta^{2}>0\), which holds if \(\eta\) satisfies
\[0\leq\frac{\beta\left[1-\sqrt{1-24L\rho(3L\rho+1)}\right]}{2L(3L\rho+1)}<\eta< \frac{\beta\left[1+\sqrt{1-24L\rho(3L\rho+1)}\right]}{2L(3L\rho+1)}\leq\frac{ \beta}{L},\]
provided that \(L\rho\leq\frac{3\sqrt{2}-2}{12}\approx 0.1869\). This condition is exactly (8). In this case, we also have \(\beta-L\eta\geq 0\), and (16) implies (9). The next statement is a consequence of (16) combined with (15) using standard arguments.
Next, if \(F\) is \(\rho\)-co-hypomonotone, then with \(\beta:=1\), using \(\|\beta Fy^{k}-u^{k}\|=\|Fy^{k}-Fx^{k}\|\) and \(c=\omega=1\) into (7), we obtain
\[\|Fx^{k+1}\|^{2}\leq\|Fx^{k}\|^{2}-\frac{\left[\eta-4\rho-L^{2}\eta^{2}(\eta+ 4\rho)\right]}{\eta}\|Fy^{k}-Fx^{k}\|^{2}.\]
However, by the choice of \(\eta\) as in (8), one has \(\eta-6\rho-(3L\rho+1)L\eta^{2}>0\). It is straightforward to check that \(\eta\psi=\eta-4\rho-L^{2}\eta^{2}(\eta+4\rho)\geq\eta-6\rho-(3L\rho+1)L\eta^{2}>0\), where \(\psi\) is defined in (10). This condition leads to the first part of (10). The second part of (10) is a consequence of the first part and (9).
(b) **Past-extragradient method.** If we choose \(u^{k}:=Fy^{k-1}\), then by Young's inequality and the \(L\)-Lipschitz continuity of \(F\), we have
\[\|Fy^{k}-u^{k}\|^{2} = \|Fy^{k}-Fy^{k-1}\|^{2}\leq 3\|Fy^{k}-Fx^{k}\|^{2}+\tfrac{3}{2}\|Fx ^{k}-Fy^{k-1}\|^{2} \tag{17}\] \[\leq 3L^{2}\|x^{k}-y^{k}\|^{2}+\tfrac{3L^{2}}{2}\|x^{k}-y^{k-1}\|^{2}.\]
Moreover, since \(\langle Fx,x-x^{\star}\rangle\geq-\rho\|Fx\|^{2}\) for all \(x\in\mathrm{dom}(F)\), using this condition, \(\eta Fy^{k}=x^{k+1}-x^{k}\) from (EG), and Young's inequality, we can lower bound that
\[2\eta\langle Fy^{k},y^{k}-x^{\star}\rangle \geq -2\rho\eta\|Fy^{k}\|^{2}=-\tfrac{2\rho}{\eta}\|x^{k+1}-x^{k}\|^{2 }\geq-\tfrac{4\rho}{\eta}\big{[}\|x^{k+1}-y^{k}\|^{2}+\|y^{k}-x^{k}\|^{2}\big{]}. \tag{18}\]
Substituting (17), (18), and \(\hat{x}:=x^{\star}\in\mathrm{zer}(F)\) into (6), we obtain
\[\|x^{k+1}-x^{\star}\|^{2} + \left(\tfrac{3L^{2}\eta^{2}}{\gamma}-\gamma\right)\|x^{k+1}-y^{k }\|^{2}\leq\|x^{k}-x^{\star}\|^{2}+\left(\tfrac{3L^{2}\eta^{2}}{\gamma}- \gamma\right)\|x^{k}-y^{k-1}\|^{2} \tag{19}\] \[- \left(\beta-\tfrac{3L^{2}\eta^{2}}{\gamma}-\tfrac{4\rho}{\eta} \right)\big{[}\|x^{k+1}-y^{k}\|^{2}+\|y^{k}-x^{k}\|^{2}\big{]}-\left(\tfrac{3L ^{2}\eta^{2}}{2\gamma}-\gamma\right)\|x^{k}-y^{k-1}\|^{2}\] \[- (1-\beta)\|x^{k+1}-x^{k}\|^{2}.\]
Next, by the second line of (EG) and Young's inequality, we can easily show that \(\|Fy^{k}\|^{2}=\frac{1}{\eta^{2}}\|x^{k+1}-x^{k}\|^{2}\leq\frac{3}{2\eta^{2}} \|x^{k}-y^{k}\|^{2}+\frac{3}{\eta^{2}}\|x^{k+1}-y^{k}\|^{2}\). Alternatively, by Young's inequality and the \(L\)-Lipschitz continuity of \(F\), we also have \(\|Fx^{k}\|^{2}\leq\left(L^{2}+\frac{3}{2\eta^{2}}\right)\|x^{k}-y^{k}\|^{2}+ \left(1+\frac{2L^{2}\eta^{2}}{3}\right)\|Fy^{k}\|^{2}\). Combining both inequalities, we get
\[\|Fx^{k}\|^{2}\leq\tfrac{2L^{2}\eta^{2}+3}{\eta^{2}}\left[\|x^{k}-y^{k}\|^{2}+ \|x^{k+1}-y^{k}\|^{2}\right].\]
Now, let us choose \(\gamma:=L\eta\) and using the last inequality into (19), we can show that
\[\|x^{k+1}-x^{\star}\|^{2}+2L\eta\|x^{k+1}-y^{k}\|^{2} \leq \|x^{k}-x^{\star}\|^{2}+2L\eta\|x^{k}-y^{k-1}\|^{2}-(1-\beta)\|x^{k+1}-x^{k}\|^{2} \tag{20}\] \[- \frac{\eta(\beta\eta-3L\eta^{2}-4\rho)}{2L^{2}\eta^{2}+3}\|Fx^{k}\|^{2}-\tfrac{L\eta}{2}\|x^{k}-y^{k-1}\|^{2}.\]
If we choose \(\eta>0\) such that \(\beta\eta-3L\eta^{2}-4\rho>0\), then (20) implies (12). The next statement of Part (b) follows from (12) and the fact that \(\|Fy^{k-1}\|^{2}\leq\frac{L^{2}+\kappa}{\kappa}\big{[}\|Fx^{k}\|^{2}+\kappa\|x^{k }-y^{k-1}\|^{2}\big{]}\). Note that, the condition \(\beta\eta-3L\eta^{2}-4\rho>0\) holds if \(0\leq\frac{\beta-\sqrt{\beta^{2}-12L\rho}}{6L}<\eta<\frac{\beta+\sqrt{\beta^{2}- 12L\rho}}{6L}\leq\frac{\beta}{3L}\), provided that \(12L\rho\leq\beta^{2}\) as stated in Theorem 3.1.
To prove the last-iterate rate, let us choose \(\omega:=\frac{1+8L^{2}\rho\eta}{1-2L^{2}\eta^{2}}>1\), provided that \(\sqrt{2}L\eta<1\). Moreover, for \(\beta=1\) and \(u^{k}:=Fy^{k-1}\), we have \(\|\beta Fy^{k}-u^{k}\|^{2}=\|Fy^{k}-Fy^{k-1}\|^{2}\leq 2\|Fy^{k}-Fx^{k}\|^{2}+2\|Fx^{k}-Fy^{k-1}\|^{2}\). Substituting this inequality, \(c:=1\), and \(\omega:=\frac{1+8L^{2}\rho\eta}{1-2L^{2}\eta^{2}}\) into (7), we obtain
\[\|Fx^{k+1}\|^{2}+(\omega-1)\|Fx^{k+1}-Fy^{k}\|^{2}\,\leq \,\|Fx^{k}\|^{2}+(\omega-1)\|Fx^{k}-Fy^{k-1}\|^{2} \tag{21}\] \[-\,\left[1-\frac{4\rho}{\eta}-\frac{2L^{2}\eta(\eta+4\rho)}{1-2L ^{2}\eta^{2}}\right]\|Fy^{k}-Fx^{k}\|^{2}.\]
It is obvious to show that the conditions \(1-\frac{4\rho}{\eta}-3L\eta>0\) and \(\sqrt{2}L\eta<1\) guarantee that \(1-\frac{4\rho}{\eta}-\frac{2L^{2}\eta(\eta+4\rho)}{1-2L^{2}\eta^{2}}\geq 0\). Hence, if we define \(\hat{\kappa}:=\omega-1=\frac{2(\eta+4\rho)L^{2}\eta}{1-2L^{2}\eta^{2}}\), then (21) reduces to
\[\|Fx^{k+1}\|^{2}+\hat{\kappa}\|Fx^{k+1}-Fy^{k}\|^{2}\,\leq\,\|Fx^{k}\|^{2}+\hat {\kappa}\|Fx^{k}-Fy^{k-1}\|^{2}. \tag{22}\]
For \(C_{0}:=\max\left\{\frac{L^{2}\hat{\kappa}}{\kappa},1\right\}\), we have \(\|Fx^{k}\|^{2}+\hat{\kappa}\|Fx^{k}-Fy^{k-1}\|^{2}\,\leq\,C_{0}\left[\|Fx^{k} \|^{2}+\frac{\kappa}{L^{2}}\|Fx^{k}-Fy^{k-1}\|^{2}\right]\,\leq\,C_{0}\left[ \|Fx^{k}\|^{2}+\kappa\|x^{k}-y^{k-1}\|^{2}\right]\). Combining this inequality and (12), we get
\[\frac{1}{k+1}\sum_{l=0}^{k}\left[\|Fx^{l}\|^{2}+\hat{\kappa}\|Fx^{l}-Fy^{l-1} \|^{2}\right]\leq\frac{C_{0}}{k+1}\sum_{l=0}^{k}\left[\|Fx^{l}\|^{2}+\kappa\| x^{l}-y^{l-1}\|^{2}\right]\leq\frac{C_{0}\hat{C}_{\rho}\|x^{0}-x^{\star}\|^{2}}{k+1},\]
Combining (22) with the last bound, we obtain (13). Note that the condition \(3L\eta<1\) guarantees that \(\sqrt{2}L\eta<1\). The remaining statement of (b) in Theorem 3.1 is a direct consequence of (13) and (18).
## 4 Extragradient-Type Methods for Monotone Inclusions
In this section, we go beyond (NE) to survey recent results on both best-iterate and last-iterate convergence rates of the EG method and its variants for solving (NI). Again, we provide a unified analysis that covers a wide class of EG variants of the monotone instances of (NI) as can be seen below.
### The class of extragradient methods
The class of EG methods for solving (NI) we consider in this section can be described as follows. Starting from an initial point \(x^{0}\in\mathrm{dom}(\Phi)\), at each iteration \(k\geq 0\), we update
\[\left\{\begin{array}{ll}y^{k}&:=\,J_{\frac{\eta}{\beta}T}(x^{k}-\frac{\eta}{\beta}u^{k}),\\ x^{k+1}&:=\,J_{\eta T}(x^{k}-\eta Fy^{k}),\end{array}\right.\] (EG2)
where \(J_{\eta T}\) is the resolvent of \(\eta T\), \(\eta>0\) is a given stepsize, and \(\beta>0\) is a scaling factor. Here, we consider two different choices of \(u^{k}\) as follows:
* **Option 1.** If \(u^{k}:=Fx^{k}\), then we obtain the **extragradient method** (or **EG+** in [43] if \(\beta\in(0,1)\)) for solving (NI).
* **Option 2.** If \(u^{k}:=Fy^{k-1}\), then we obtain \(y^{k}:=J_{\frac{\eta}{\beta}T}(x^{k}-\frac{\eta}{\beta}Fy^{k-1})\), leading to the **past-extragradient method** (or equivalently, **Popov's method [113]**) for solving (NI).
Clearly, when \(T=\mathcal{N}_{\mathcal{X}}\), the normal cone of a nonempty, closed, and convex set \(\mathcal{X}\), then \(J_{\gamma T}=\mathrm{proj}_{\mathcal{X}}\), the projection onto \(\mathcal{X}\), and hence (EG2) reduces to the extragradient variant for solving (VIP) widely studied in the literature [50, 72]. In terms of computational complexity, (EG2) requires two evaluations of \(F\) at \(x^{k}\) and \(y^{k}\), and two evaluations of the resolvent \(J_{\eta T}\) at each iteration. It costs twice as much as one iteration of the forward-backward splitting method (FBS). However, it does not require the co-coerciveness of \(F\) to guarantee convergence. Again, we use a scaling factor \(\beta\) as in (EG), which covers EG+ in [43] as a special case.
Now, for given \(\zeta^{k}\in Ty^{k}\) and \(\xi^{k+1}\in Tx^{k+1}\), we denote \(\tilde{w}^{k}:=Fx^{k}+\zeta^{k}\), and \(\hat{w}^{k+1}:=Fy^{k}+\xi^{k+1}\). Then, we can rewrite (EG2) equivalently to
\[\left\{\begin{aligned} & y^{k}\quad:=\,x^{k}-\frac{\eta}{\beta}(u^{k}+ \zeta^{k})&=\,x^{k}-\frac{\eta}{\beta}(\tilde{w}^{k}+u^{k}-Fx^{k }),&\quad\zeta^{k}\,\,\,\,\,\in Ty^{k},\\ & x^{k+1}:=\,x^{k}-\eta(Fy^{k}+\xi^{k+1})&=\,x^{k}- \eta\hat{w}^{k+1},&\quad\xi^{k+1}\,\,\,\,\in Tx^{k+1}.\end{aligned}\right. \tag{23}\]
This representation makes (EG2) look like (EG), and it is a key step for our convergence analysis.
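To illustrate (EG2) on a simple special case, the following Python/NumPy sketch (our own; the affine operator and the box are illustrative assumptions) solves a monotone (VIP), where \(T=\mathcal{N}_{\mathcal{X}}\) for a box \(\mathcal{X}\), so that every resolvent reduces to the projection \(\mathrm{proj}_{\mathcal{X}}\) regardless of the stepsize:

```python
import numpy as np

def eg2(F, resolvent, x0, eta, beta=1.0, option=1, iters=2000):
    """(EG2): option=1 uses u^k = Fx^k (EG/EG+), option=2 uses u^k = Fy^{k-1} (past-EG).
    `resolvent` plays the role of both J_{(eta/beta)T} and J_{eta T}; for a normal cone
    the resolvent is the projection for any stepsize, so one routine serves both steps."""
    x = np.asarray(x0, dtype=float)
    u = F(x)                                  # u^0 = Fx^0 (i.e., y^{-1} := x^0 for option 2)
    for _ in range(iters):
        y = resolvent(x - (eta / beta) * u)   # extrapolation step
        Fy = F(y)
        x = resolvent(x - eta * Fy)           # main step
        u = F(x) if option == 1 else Fy
    return x

# Monotone VIP: F(z) = [z_2 + 0.5, -z_1 + 0.5], X = [-1, 1]^2; solution x* = (0.5, -0.5).
F = lambda z: np.array([z[1] + 0.5, -z[0] + 0.5])
proj = lambda z: np.clip(z, -1.0, 1.0)
x_hat = eg2(F, proj, [0.9, -0.9], eta=0.3)
print(x_hat, np.linalg.norm(x_hat - proj(x_hat - F(x_hat))))  # iterate and natural residual
```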
### One-iteration analysis
We establish both the best-iterate and last-iterate convergence rates of (EG2) under the assumption that \(F\) is monotone and \(T\) is maximally \(3\)-cyclically monotone. Note that if \(T\) is maximally cyclically monotone, then \(T=\partial g\), the subdifferential of a proper, closed, and convex function due to [11, Theorem 22.18]. In this case, (NI) reduces to (MVIP). However, we do not require \(T\) to be maximally cyclically monotone, but only maximally \(3\)-cyclically monotone, which may not necessarily be identical to \(\partial g\). Therefore, our result below is more general than existing variants in the recent literature, including [26].
To analyze the convergence of (EG2), we also define
\[w^{k}:=Fx^{k}+\xi^{k}\quad\text{for some}\quad\xi^{k}\in Tx^{k}. \tag{24}\]
The following lemma provides key estimates to establish convergence of (EG2).
**Lemma 4.1**.: _Suppose that \(\{(x^{k},y^{k})\}\) is generated by (EG2), \(w^{k}\) is defined by (24) and \(T\) is maximally \(3\)-cyclically monotone. Then, for any \(\gamma>0\), any \(x^{\star}\in\operatorname{zer}(\Phi)\), we have_
\[\|x^{k+1}-x^{\star}\|^{2} \leq\,\|x^{k}-x^{\star}\|^{2}-(1-\beta)\|x^{k+1}-x^{k}\|^{2}-( \beta-\gamma)\|x^{k+1}-y^{k}\|^{2} \tag{25}\] \[\quad-\,\beta\|x^{k}-y^{k}\|^{2}+\tfrac{\eta^{2}}{\gamma}\|Fy^{k }-u^{k}\|^{2}-2\eta\langle Fy^{k}-Fx^{\star},y^{k}-x^{\star}\rangle.\]
_If, in addition, \(F\) is monotone and \(\beta:=1\), then for \(\omega\geq 1\), \(\gamma>0\), and \(t>0\), we have_
\[\|w^{k+1}\|^{2}+(\omega-1)\|w^{k+1}-\hat{w}^{k+1}\|^{2} \leq\,\|w^{k}\|^{2}-(1-\gamma)\|w^{k}-\tilde{w}^{k}\|^{2}+\left[ \tfrac{1}{\gamma}+\tfrac{\omega(1+t)L^{2}\eta^{2}}{t}\right]\|Fx^{k}-u^{k}\|^ {2} \tag{26}\] \[\quad-\,\left[1-\omega(1+t)L^{2}\eta^{2}\right]\|\hat{w}^{k+1}- \tilde{w}^{k}\|^{2}.\]
Proof.: Firstly, since \(\xi^{k+1}\in Tx^{k+1}\), \(\zeta^{k}\in Ty^{k}\), and \(\xi^{\star}=-Fx^{\star}\in Tx^{\star}\), by the maximally \(3\)-cyclic monotonicity of \(T\), we have \(\langle\xi^{k+1},x^{k+1}-x^{\star}\rangle+\langle\xi^{\star},x^{\star}-y^{k}\rangle+\langle\zeta^{k},y^{k}-x^{k+1}\rangle\geq 0\), leading to \(\langle\xi^{k+1}-\zeta^{k},x^{k+1}-x^{\star}\rangle\geq\langle\zeta^{k}-\xi^{\star},x^{\star}-y^{k}\rangle=-\langle Fx^{\star}+\zeta^{k},y^{k}-x^{\star}\rangle\). Utilizing this inequality and the second line \(x^{k}-x^{k+1}=\eta(Fy^{k}+\xi^{k+1})\) of (23), for any \(x^{\star}\in\operatorname{zer}(\Phi)\), we can derive that
\[\|x^{k+1}-x^{\star}\|^{2} =\,\|x^{k}-x^{\star}\|^{2}-2\langle x^{k}-x^{k+1},x^{k+1}-x^{ \star}\rangle-\|x^{k+1}-x^{k}\|^{2}\] \[=\,\|x^{k}-x^{\star}\|^{2}-2\eta\langle Fy^{k}+\xi^{k+1},x^{k+1}- x^{\star}\rangle-\|x^{k+1}-x^{k}\|^{2}\] \[=\,\|x^{k}-x^{\star}\|^{2}-2\eta\langle Fy^{k}+\zeta^{k},x^{k+1}- x^{\star}\rangle-\|x^{k+1}-x^{k}\|^{2}-2\eta\langle\xi^{k+1}-\zeta^{k},x^{k+1}-x^{ \star}\rangle\] \[\leq\,\|x^{k}-x^{\star}\|^{2}-\|x^{k+1}-x^{k}\|^{2}-2\eta\langle F y ^{k}+\zeta^{k},x^{k+1}-y^{k}\rangle-2\eta\langle Fy^{k}-Fx^{\star},y^{k}-x^{ \star}\rangle.\]
Next, from the first line of (23), we have \(\eta(Fy^{k}+\zeta^{k})=\beta(x^{k}-y^{k})+\eta(Fy^{k}-u^{k})\). Therefore, by the Cauchy-Schwarz inequality and Young's inequality, for any \(\gamma>0\), we can derive that
\[2\eta\langle Fy^{k}+\zeta^{k},x^{k+1}-y^{k}\rangle =\,2\beta\langle x^{k}-y^{k},x^{k+1}-y^{k}\rangle+2\eta\langle Fy ^{k}-u^{k},x^{k+1}-y^{k}\rangle\] \[\geq\,\beta\,\left[\|x^{k}-y^{k}\|^{2}+\|x^{k+1}-y^{k}\|^{2}-\|x^ {k+1}-x^{k}\|^{2}\right]-2\eta\|Fy^{k}-u^{k}\|\|x^{k+1}-y^{k}\|\] \[\geq\,\beta\|x^{k}-y^{k}\|^{2}+(\beta-\gamma)\|x^{k+1}-y^{k}\|^{2 }-\beta\|x^{k+1}-x^{k}\|^{2}-\tfrac{\eta^{2}}{\gamma}\|Fy^{k}-u^{k}\|^{2}.\]
Finally, substituting this inequality into the above estimate, we obtain (25).
To prove (26), we proceed as follows. Using again the \(3\)-cyclic monotonicity of \(T\) but with \(\xi^{k}\in Tx^{k}\), we have \(\langle\xi^{k+1},x^{k+1}-x^{k}\rangle+\langle\xi^{k},x^{k}-y^{k}\rangle+\langle \zeta^{k},y^{k}-x^{k+1}\rangle\geq 0\). By the monotonicity of \(F\), we get \(\langle Fx^{k+1}-Fx^{k},x^{k+1}-x^{k}\rangle\geq 0\). Summing up these inequalities and using \(w^{k}=Fx^{k}+\xi^{k}\) and \(\tilde{w}^{k}:=Fx^{k}+\zeta^{k}\), we have
\[\langle w^{k+1}-\tilde{w}^{k},x^{k+1}-x^{k}\rangle+\langle w^{k}-\tilde{w}^{k},x^{k}-y^{k}\rangle\geq 0. \tag{27}\]
From the second line of (EG2), we have \(x^{k+1}-x^{k}=-\eta(Fy^{k}+\xi^{k+1})=-\eta\tilde{w}^{k+1}\). From the first line of (EG2) and \(\beta=1\), we also have \(x^{k}-y^{k}=\frac{\eta}{\beta}(\tilde{w}^{k}+u^{k}-Fx^{k})=\eta\tilde{w}^{k}+ \eta(u^{k}-Fx^{k})\). Substituting these expressions into (27), and using an elementary inequality \(2\langle z,s\rangle\leq\gamma\|s\|^{2}+\frac{\|z\|^{2}}{\gamma}\) for any \(\gamma>0\), we have
\[0 \leq 2\langle\tilde{w}^{k},\hat{w}^{k+1}\rangle-2\langle w^{k+1}, \hat{w}^{k+1}\rangle+2\langle w^{k},\tilde{w}^{k}\rangle-2\|\tilde{w}^{k}\|^{ 2}+2\langle w^{k}-\tilde{w}^{k},u^{k}-Fx^{k}\rangle\] \[= \|w^{k}\|^{2}-\|w^{k+1}\|^{2}+\|w^{k+1}-\hat{w}^{k+1}\|^{2}-\| \hat{w}^{k+1}-\tilde{w}^{k}\|^{2}-\|w^{k}-\tilde{w}^{k}\|^{2}+2\langle w^{k}- \tilde{w}^{k},Fx^{k}-u^{k}\rangle\] \[\leq \|w^{k}\|^{2}-\|w^{k+1}\|^{2}+\|w^{k+1}-\hat{w}^{k+1}\|^{2}-\| \hat{w}^{k+1}-\tilde{w}^{k}\|^{2}-(1-\gamma)\|w^{k}-\tilde{w}^{k}\|^{2}+\frac {1}{\gamma}\|Fx^{k}-u^{k}\|^{2}.\]
This inequality leads to
\[\|w^{k+1}\|^{2}\leq\|w^{k}\|^{2}+\|w^{k+1}-\hat{w}^{k+1}\|^{2}+\frac{1}{\gamma }\|Fx^{k}-u^{k}\|^{2}-(1-\gamma)\|w^{k}-\tilde{w}^{k}\|^{2}-\|\hat{w}^{k+1}- \tilde{w}^{k}\|^{2}.\]
Now, by the \(L\)-Lipschitz continuity of \(F\), (EG2), and Young's inequality, for any \(t>0\), we have
\[\|w^{k+1}-\hat{w}^{k+1}\|^{2} = \|Fx^{k+1}-Fy^{k}\|^{2}\leq L^{2}\|x^{k+1}-y^{k}\|^{2}=L^{2}\eta^ {2}\|\hat{w}^{k+1}-\tilde{w}^{k}+Fx^{k}-u^{k}\|^{2}\] \[\leq (1+t)L^{2}\eta^{2}\|\hat{w}^{k+1}-\tilde{w}^{k}\|^{2}+\frac{(1+t )L^{2}\eta^{2}}{t}\|Fx^{k}-u^{k}\|^{2}.\]
Multiplying this inequality by \(\omega\geq 1\) and adding the last inequality, we obtain (30).
### Unified convergence analysis
The following theorem proves the best-iterate and the last-iterate convergence of (EG2).
**Theorem 4.1**.: _Suppose that \(\mathrm{zer}(\Phi)\neq\emptyset\), \(F\) in (NI) is \(L\)-Lipschitz continuous and satisfies \(\langle Fx-Fx^{\star},x-x^{\star}\rangle\geq 0\) for all \(x\in\mathrm{dom}(F)\) and some \(x^{\star}\in\mathrm{zer}(\Phi)\), and \(T\) is maximally \(3\)-cyclically monotone. Let \(\{(x^{k},y^{k})\}\) be generated by (EG2). Then, the following statements hold._
* \((\)_EG method_\()\) _If we choose_ \(u^{k}:=Fx^{k}\) _and_ \(0<\eta<\frac{\beta}{L}\)_, then we have_ \[\min_{1\leq l\leq k+1}\|Fx^{l}+\xi^{l}\|^{2}\leq\frac{1}{k+1}\sum_{l=1}^{k+1} \|Fx^{l}+\xi^{l}\|^{2}\leq\frac{C_{0}\|x^{0}-x^{\star}\|^{2}}{k+1},\quad\xi^{l }\in Tx^{l},\] (28) _where_ \(C_{0}:=\frac{3+2L^{2}}{\eta^{2}(\beta-L\eta)}>0\)_. As a consequence,_ \(\{\|x^{k}-x^{\star}\|\}\) _is nonincreasing and_ \[\lim_{k\to\infty}\|x^{k}-y^{k}\|=\lim_{k\to\infty}\|Fy^{k}+\zeta^{k}\|=\lim_{k \to\infty}\|Fx^{k}+\xi^{k}\|=0.\] (29) _Moreover,_ \(\{x^{k}\}\) _converges to_ \(x^{\star}\)_, a solution of (NI)._ \((\)_Last-iterate convergence rate of EG_\()\) _If, in addition,_ \(F\) _is monotone, then we have_ \[\|Fx^{k+1}+\xi^{k+1}\|^{2}\leq\|Fx^{k}+\xi^{k}\|^{2}\quad\text{and}\quad\|Fx^{k} +\xi^{k}\|\leq\frac{\sqrt{C_{0}}\|x^{0}-x^{\star}\|}{\sqrt{k}}.\] (30) _Hence, we have a last-iterate convergence rate_ \(\|Fx^{k}+\xi^{k}\|=\mathcal{O}\big{(}1/\sqrt{k}\big{)}\) _of_ \(\|Fx^{k}+\xi^{k}\|\)_, where_ \(\xi^{k}\in Tx^{k}\)_._
* \((\)_Past-EG method_\()\) _If we choose_ \(u^{k}:=Fy^{k-1}\) _with_ \(y^{-1}:=x^{0}\) _and_ \(0<\eta<\frac{\beta}{3L}\)_, then_ \[\min_{1\leq l\leq k+1}\|Fx^{l}+\xi^{l}\|^{2}\leq\frac{1}{k+1}\sum_{l=1}^{k+1} \big{[}\|Fx^{l}+\xi^{l}\|^{2}+\psi\cdot\|x^{l}-y^{l-1}\|^{2}\big{]}\leq\frac{ \hat{C}_{0}\|x^{0}-x^{\star}\|^{2}}{k+1},\] (31) _where_ \(\hat{C}_{0}:=\frac{3+2L^{2}}{\eta^{2}(\beta-3L\eta)}>0\) _and_ \(\psi:=\frac{L(3+2L^{2})}{2\eta(\beta-3L\eta)}\)_. Moreover,_ \(\{x^{k}\}\) _converges to_ \(x^{\star}\) _and (_29_) still holds._
(_Last-iterate convergence rate of Past-EG_) _If, in addition,_ \(F\) _is monotone, then we have_
\[\begin{split}&\|Fx^{k+1}+\xi^{k+1}\|^{2}+\kappa\|Fx^{k+1}-Fy^{k}\|^ {2}\leq\|Fx^{k}+\xi^{k}\|^{2}+\kappa\|Fx^{k}-Fy^{k-1}\|^{2}\\ &\text{and}\ \ \|Fx^{k}+\xi^{k}\|^{2}\leq\|Fx^{k}+\xi^{k}\|^{2}+ \kappa\|Fx^{k}-Fy^{k-1}\|^{2}\leq\frac{\hat{C}_{0}\|x^{0}-x^{\star}\|^{2}}{m_{0 }k},\end{split} \tag{32}\]
_where_ \(\kappa:=\frac{1+9L^{2}\eta^{2}}{1-9L^{2}\eta^{2}}>0\) _and_ \(m_{0}:=\max\left\{\frac{\kappa}{\psi},1\right\}\)_. Therefore, we have_ \(\|Fx^{k}+\xi^{k}\|=\mathcal{O}\big{(}1/\sqrt{k}\big{)}\)_._
_In both cases_ (a) _and_ (b)_, if_ \(T\) _is maximally monotone, then we also have_
\[\min_{1\leq l\leq k}\|G_{\eta\Phi}x^{l}\|=\mathcal{O}\left(\frac{1}{\sqrt{k}} \right)\quad\text{and}\quad\min_{1\leq l\leq k}\|G_{\eta\Phi}y^{l}\|=\mathcal{ O}\left(\frac{1}{\sqrt{k}}\right), \tag{33}\]
_where_ \(G_{\eta\Phi}x:=\frac{1}{\eta}(x-J_{\eta T}(x-\eta Fx))\) _is given by (_4_). Moreover,_ \(\lim_{k\to\infty}\|G_{\eta\Phi}y^{k}\|=\lim_{k\to\infty}\|G_{\eta\Phi}x^{k}\|=0\)_._
**Remark 4.1**.: If \(\beta=1\), then the condition \(0<\eta<\frac{1}{L}\) in Part (a) is the same as in classical EG methods [50]. Similarly, when \(\beta=1\), the condition \(0<\eta\leq\frac{1}{3L}\) in Part (b) is the same as the one in [113]. It remains open to establish both best-iterate and last-iterate convergence rates of (EG2) under weak-Minty solution assumption or the co-hypomonotonicity of \(\Phi\).
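To make the abstract scheme concrete, the following Python sketch implements (EG2) for the special case \(T=\partial g\) with a proximally tractable \(g\) (so that \(J_{\eta T}=\mathrm{prox}_{\eta g}\)) and \(\beta=1\), covering both choices of \(u^{k}\) analyzed in Theorem 4.1. The test instance, function names, and stepsizes below are illustrative assumptions only, not part of the analysis.

```python
import numpy as np

def prox_l1(v, t):
    # Soft-thresholding: the proximal operator of t*||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def eg2(F, prox, x0, eta, beta=1.0, past=False, iters=500):
    # Sketch of (EG2): y^k = J_{(eta/beta)T}(x^k - (eta/beta) u^k),
    # x^{k+1} = J_{eta T}(x^k - eta F(y^k)),
    # with u^k = F(x^k) (EG) or u^k = F(y^{k-1}) (Past-EG, y^{-1} := x^0).
    x = x0.copy()
    u = F(x0)
    for _ in range(iters):
        if not past:
            u = F(x)
        y = prox(x - (eta / beta) * u, eta / beta)
        Fy = F(y)
        x = prox(x - eta * Fy, eta)
        if past:
            u = Fy
    return x

# Illustrative monotone instance: F(x) = A x with A skew-symmetric (L = 1),
# T = ∂(lam*||.||_1); the unique zero of F + T is x* = 0.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
lam = 0.1
F = lambda x: A @ x
prox = lambda v, t: prox_l1(v, t * lam)   # resolvent J_{tT}
x0 = np.array([1.0, -2.0])

print(eg2(F, prox, x0, eta=0.5))              # EG:      eta < beta/L
print(eg2(F, prox, x0, eta=0.3, past=True))   # Past-EG: eta < beta/(3L)
```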
Proof of Theorem 4.1.: (a) **(EG method)** First, since \(u^{k}:=Fx^{k}\), by the \(L\)-Lipschitz continuity of \(F\), we have \(\|Fy^{k}-u^{k}\|=\|Fy^{k}-Fx^{k}\|\leq L\|x^{k}-y^{k}\|\). Using this inequality and \(\langle Fy^{k}-Fx^{\star},y^{k}-x^{\star}\rangle\geq 0\) from our assumption into (25), we have
\[\begin{split}\|x^{k+1}-x^{\star}\|^{2}\,\leq&\|x^{k }-x^{\star}\|^{2}-(1-\beta)\|x^{k+1}-x^{k}\|^{2}-(\beta-\gamma)\|x^{k+1}-y^{k} \|^{2}-\left(\beta-\frac{L^{2}\eta^{2}}{\gamma}\right)\|x^{k}-y^{k}\|^{2}. \end{split} \tag{34}\]
Next, using the second line of (23), Young's inequality, and the \(L\)-Lipschitz continuity of \(F\), we have
\[\begin{split}\eta^{2}\|Fx^{k+1}+\xi^{k+1}\|^{2}&=\|x^{k+1}-x^{k}+\eta(Fx^{k+1}-Fy^{k})\|^{2}\\ &\leq\ (1+\frac{2L^{2}}{3})\|x^{k+1}-x^{k}\|^{2}+(1+\frac{3}{2L^{2}})\|Fx^{k+1}-Fy^{k}\|^{2}\\ &\leq\ \frac{3}{2}(1+\frac{2L^{2}}{3})\|x^{k+1}-y^{k}\|^{2}+(1+\frac{3}{2L^{2}})L^{2}\|x^{k+1}-y^{k}\|^{2}+3(1+\frac{2L^{2}}{3})\|x^{k}-y^{k}\|^{2}\\ &=\ (3+2L^{2})[\|x^{k+1}-y^{k}\|^{2}+\|x^{k}-y^{k}\|^{2}].\end{split} \tag{35}\]
Substituting (35) into (34) with \(\gamma:=L\eta\), and assuming that \(L\eta<\beta\), we get
\[\begin{split}\|x^{k+1}-x^{\star}\|^{2}&\,\leq\,\|x^ {k}-x^{\star}\|^{2}-(\beta-L\eta)\big{[}\|x^{k+1}-y^{k}\|^{2}+\|x^{k}-y^{k}\|^ {2}\big{]}\\ &\leq\,\|x^{k}-x^{\star}\|^{2}-\frac{(\beta-L\eta)\eta^{2}}{3+2L ^{2}}\|Fx^{k+1}+\xi^{k+1}\|^{2}.\end{split}\]
Now, using this estimate, we can easily prove (28) in Theorem 4.1. Note that, by (5), Young's inequality, the \(L\)-Lipschitz continuity of \(F\), and \(x^{k}-y^{k}=\frac{\eta}{\beta}(u^{k}+\zeta^{k})=\frac{\eta}{\beta}(Fx^{k}+ \zeta^{k})\) from the first line of (23), we have
\[\|Fy^{k}+\zeta^{k}\|^{2}\,\leq\,\tfrac{3}{2}\|Fy^{k}-Fx^{k}\|^{2}+3\|Fx^{k}+\zeta^{k}\|^{2}\leq\tfrac{3(L^{2}\eta^{2}+2\beta^{2})}{2\eta^{2}}\|x^{k}-y^{k}\|^{2}. \tag{36}\]
Therefore, the remaining statements are direct consequences of (28), (36), and (35) using standard arguments.
Finally, since \(\beta=1\), using \(u^{k}:=Fx^{k}\), \(\omega:=1\), and \(t=\gamma\to 0^{+}\) into (30), and noting that \(L\eta\leq 1\), we get
\[\|w^{k+1}\|^{2}\,\leq\,\|w^{k}\|^{2}-(1-L^{2}\eta^{2})\|\hat{w}^{k+1}-\tilde{w} ^{k}\|^{2}-\|w^{k}-\tilde{w}^{k}\|^{2}\leq\|w^{k}\|^{2}.\]
This shows that \(\big{\{}\|w^{k}\|\big{\}}\) is monotonically nonincreasing. Combining this property and (28), we obtain (30).
(b) **(Past-extragradient method)** If we choose \(u^{k}:=Fy^{k-1}\), then by Young's inequality and the \(L\)-Lipschitz continuity of \(F\), similar to the proof of (17), we have
\[\|Fy^{k}-u^{k}\|^{2}=\|Fy^{k}-Fy^{k-1}\|^{2}\leq 3L^{2}\|x^{k}-y^{k}\|^{2}+ \tfrac{3L^{2}}{2}\|x^{k}-y^{k-1}\|^{2}.\]
Substituting this expression into (25), using \(\langle Fy^{k}-Fx^{\star},y^{k}-x^{\star}\rangle\geq 0\), and choosing \(\gamma:=L\eta\), we obtain
\[\begin{split}\|x^{k+1}-x^{\star}\|^{2}+\tfrac{3L\eta}{2}\|x^{k+1}- y^{k}\|^{2}\,\leq&\,\|x^{k}-x^{\star}\|^{2}+\tfrac{3L\eta}{2}\|x^{k}-y^{k-1}\|^{2}-(1- \beta)\|x^{k+1}-x^{k}\|^{2}\\ &-\tfrac{L\eta}{2}\|x^{k+1}-y^{k}\|^{2}-(\beta-3L\eta)\big{[}\|x ^{k+1}-y^{k}\|^{2}+\|x^{k}-y^{k}\|^{2}\big{]}.\end{split} \tag{37}\]
Assume that \(3L\eta<\beta\). Then, combining (37) and (35), we can easily prove (31) in Theorem 4.1. Moreover, (37) also implies \(\lim_{k\to\infty}\|x^{k}-y^{k}\|=\lim_{k\to\infty}\|x^{k+1}-y^{k}\|=\lim_{k\to\infty}\|Fx^{k}+\xi^{k}\|=0\). By the second line of (23), we have \(\eta\|Fy^{k}+\zeta^{k}\|\leq L\eta\|x^{k}-y^{k}\|+L\eta\|x^{k}-y^{k-1}\|+\eta\|Fy^{k-1}+\zeta^{k}\|\leq(L\eta+1)\|x^{k}-y^{k}\|+L\eta\|x^{k}-y^{k-1}\|\). Using this relation and \(\lim_{k\to\infty}\|x^{k}-y^{k}\|=\lim_{k\to\infty}\|x^{k+1}-y^{k}\|=0\), we obtain \(\lim_{k\to\infty}\|Fy^{k}+\zeta^{k}\|=0\). The convergence of \(\{x^{k}\}\) to \(x^{\star}\) follows from standard arguments.
Next, assume that \(3L\eta<1\) and \(u^{k}:=Fy^{k-1}\). Then, substituting \(\gamma:=1\), \(t:=\frac{1}{8}\), \(\omega:=\frac{2}{1-9L^{2}\eta^{2}}>1\) into (30), and using \(\|Fx^{k}-u^{k}\|=\|Fx^{k}-Fy^{k-1}\|=\|w^{k}-\hat{w}^{k}\|\), we obtain
\[\|w^{k+1}\|^{2}+\kappa\|w^{k+1}-\hat{w}^{k+1}\|^{2}\,\leq\,\|w^{k}\|^{2}+ \kappa\|w^{k}-\hat{w}^{k}\|^{2}-\tfrac{4-27L^{2}\eta^{2}}{4(1-9L^{2}\eta^{2}) }\|\hat{w}^{k+1}-\tilde{w}^{k}\|^{2}.\]
This is exactly the first line of (32). Since \(3L\eta<1\), we have \(\kappa:=\frac{1+9L^{2}\eta^{2}}{1-9L^{2}\eta^{2}}>0\) and \(\frac{4-27L^{2}\eta^{2}}{4(1-9L^{2}\eta^{2})}>0\). Let \(m_{0}:=\max\{\frac{\kappa}{\psi},1\}\), where \(\psi\) is given in (31). Then, we have
\[\|Fx^{k}+\xi^{k}\|^{2}+\kappa\|Fx^{k}-Fy^{k-1}\|^{2}\,\leq\,\|Fx^{k}+\xi^{k}\|^ {2}+\kappa L^{2}\|x^{k}-y^{k-1}\|^{2}\leq m_{0}\big{[}\|Fx^{k}+\xi^{k}\|^{2}+ \psi\cdot\|x^{k}-y^{k-1}\|^{2}\big{]}.\]
Combining this inequality and (31), we obtain \(\frac{1}{k+1}\sum_{l=1}^{k+1}\big{[}\|Fx^{l}+\xi^{l}\|^{2}+\kappa\|Fx^{l}-Fy^{l-1}\|^{2}\big{]}\,\leq\,\frac{\hat{C}_{0}\|x^{0}-x^{\star}\|^{2}}{m_{0}(k+1)}\). This bound, together with the first line of (32), implies the second line of (32).
Finally, since \(T\) is maximally monotone, \(J_{\eta T}\) is single-valued and nonexpansive. By using \(\|G_{\Phi}x^{k}\|\leq\|Fx^{k}+\xi^{k}\|\) from (5) and either (28) or (31), we obtain (33). Using again (5) and the limits (29), we obtain \(\lim_{k\to\infty}\|G_{\Phi}x^{k}\|\leq\lim_{k\to\infty}\|Fx^{k}+\xi^{k}\|=0\) and \(\lim_{k\to\infty}\|G_{\Phi}y^{k}\|\leq\lim_{k\to\infty}\|Fy^{k}+\zeta^{k}\|=0\).
## 5 Forward-Backward-Forward Splitting-Type Methods for (NI)
As an alternative to (EG2), we now survey recent results on the best-iterate convergence rates of the FBFS method and its variants for solving (NI). As before, we provide a unified analysis that covers a wide class of FBFS variants, which can also solve (NI) under a weak-Minty solution assumption and, in particular, under the co-hypomonotonicity of \(\Phi\).
### The class of forward-backward-forward splitting methods
The forward-backward-forward splitting (FBFS) method was proposed by P. Tseng in [136] for solving (NI), where it was originally called a _modified forward-backward splitting_ method. This method was developed to solve (NI) with additional constraints. Instead of presenting the original scheme in [136], we modify it using the idea of EG+ in [43] and combine two variants into one. Starting from an initial point \(x^{0}\in\text{dom}(\Phi)\), at each iteration \(k\geq 0\), we update
\[\left\{\begin{aligned} & y^{k}&\quad\in\;J_{\frac{\eta}{\beta}T}(x^{k}-\frac{\eta}{\beta}u^{k}),\\ & x^{k+1}&:=\,\beta y^{k}+(1-\beta)x^{k}-\eta(Fy^{k}-u^{k}),\end{aligned}\right.\] (FBFS2)
where \(\eta>0\) is a given stepsize, \(\beta\in(0,1]\) is a scaling factor, and \(u^{k}\) is one of the following choices:
* **Option 1.** If we choose \(u^{k}:=Fx^{k}\), then we obtain a variant of **Tseng's FBFS method**. In particular, if \(\beta=1\), then we get exactly **Tseng's FBFS method** in [136] for solving (NI). Note that one can extend (FBFS2) to cover the case \(\text{zer}(\Phi)\cap\mathcal{C}\neq\emptyset\) for some subset \(\mathcal{C}\) of \(\mathbb{R}^{p}\) as presented in [136]. Nevertheless, for simplicity, we assume that \(\mathcal{X}=\mathbb{R}^{p}\). As shown in (FBFS), if \(T=0\), then (FBFS2) reduces to the classical extragradient method (EG). However, if \(T\neq 0\), then (FBFS2) is different from (EG2).
* **Option 2.** If we choose \(u^{k}:=Fy^{k-1}\), where \(y^{-1}:=x^{0}\), then we obtain a **past-FBFS variant**. This variant can also be referred to as a generalized variant of the **optimistic gradient (OG) method**,
see, e.g., [38, 95, 96]. If \(\beta=1\), then \(y^{k+1}\in J_{\eta T}(x^{k+1}-\eta Fy^{k})\) and \(x^{k+1}=y^{k}-\eta(Fy^{k}-Fy^{k-1})\). Combining these two expressions, (FBFS2) reduces to
\[y^{k+1}\,\in\,J_{\eta T}\left(y^{k}-\eta(2Fy^{k}-Fy^{k-1})\right).\] (FRBS2)
This is exactly the **forward-reflected-backward splitting (FRBS) method** in [89].
Compared to (EG2), we do not require \(T\) to be monotone in (FBFS2). However, to guarantee the well-definedness of \(\{(x^{k},y^{k})\}\), we need \(y^{k}\in\operatorname{ran}(J_{\eta T})\) and \(y^{k}\in\operatorname{dom}(F)\). Hence, we can assume that \(\operatorname{ran}(J_{\eta T})\subseteq\operatorname{dom}(F)=\mathbb{R}^{p}\) and \(\operatorname{dom}(J_{\eta T})=\mathbb{R}^{p}\). This requirement makes (FBFS2) cover a broader class of problems than (EG2), and it obviously holds if \(T\) is maximally monotone and \(F\) is monotone and Lipschitz continuous as in (EG2). In addition, (FBFS2) only requires one evaluation of \(J_{\eta T}\) instead of two as in (EG2), reducing the per-iteration complexity when \(J_{\eta T}\) is expensive to evaluate.
Similar to (23), we can rewrite (FBFS2) equivalently to
\[\left\{\begin{aligned} & y^{k}&:=\,x^{k}-\frac{\eta}{ \beta}(u^{k}+\zeta^{k}),\quad\zeta^{k}\in Ty^{k},\\ & x^{k+1}&:=\,x^{k}+\beta(y^{k}-x^{k})-\eta(Fy^{k}-u ^{k}).\end{aligned}\right. \tag{38}\]
This representation is an important step for our convergence analysis below.
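As an illustration of (FBFS2) in the form (38), the following Python sketch runs both Option 1 (\(u^{k}:=Fx^{k}\)) and Option 2 (\(u^{k}:=Fy^{k-1}\)) on a small composite instance with \(T=\partial g\) and a cheap resolvent. The instance and all identifiers are hypothetical; the sketch only highlights that a single resolvent evaluation per iteration is needed, followed by an explicit correction step.

```python
import numpy as np

def soft(v, t, lam):
    # Resolvent J_{tT} for T = ∂(lam*||.||_1): soft-thresholding at level t*lam.
    return np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)

def fbfs2(F, resolvent, x0, eta, beta=1.0, past=False, iters=500):
    # Sketch of (FBFS2) written as (38):
    #   y^k     = J_{(eta/beta)T}(x^k - (eta/beta) u^k)
    #   x^{k+1} = x^k + beta (y^k - x^k) - eta (F(y^k) - u^k)   (explicit correction, no 2nd resolvent)
    # with u^k = F(x^k) (Option 1) or u^k = F(y^{k-1}) (Option 2, y^{-1} := x^0).
    x = x0.copy()
    u = F(x0)
    for _ in range(iters):
        if not past:
            u = F(x)
        y = resolvent(x - (eta / beta) * u, eta / beta)
        Fy = F(y)
        x = x + beta * (y - x) - eta * (Fy - u)
        if past:
            u = Fy
    return x

# Hypothetical instance: monotone F(x) = A x with A skew-symmetric (L = 2), T = ∂(lam*||.||_1).
A = np.array([[0.0, 2.0], [-2.0, 0.0]])
lam = 0.05
F = lambda x: A @ x
resolvent = lambda v, t: soft(v, t, lam)
x0 = np.array([3.0, 1.0])

print(fbfs2(F, resolvent, x0, eta=0.4))              # Option 1: eta < beta/L = 0.5
print(fbfs2(F, resolvent, x0, eta=0.15, past=True))  # Option 2: eta < beta/(3L)
```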
### One-iteration analysis
The following lemma provides a key estimate to establish convergence of (FBFS2).
**Lemma 5.1**.: _Suppose that \(\{(x^{k},y^{k})\}\) is generated by (FBFS2) and \(T\) is not necessarily monotone, but \(\operatorname{ran}(J_{\eta T})\subseteq\operatorname{dom}(F)=\mathbb{R}^{p}\) and \(\operatorname{dom}(J_{\eta T})=\mathbb{R}^{p}\). Then, for any \(\gamma>0\) and any \(x^{\star}\in\operatorname{zer}(\Phi)\), we have_
\[\begin{split}\|x^{k+1}-x^{\star}\|^{2}&\leq\,\|x^{k}-x^{\star}\|^{2}-(1-\beta)\|x^{k+1}-x^{k}\|^{2}-(\beta-\gamma)\|x^{k+1}-y^{k}\|^{2}\\ &\quad-\,\beta\|x^{k}-y^{k}\|^{2}+\frac{\eta^{2}}{\gamma}\|Fy^{k}-u^{k}\|^{2}-2\eta\langle Fy^{k}+\zeta^{k},y^{k}-x^{\star}\rangle.\end{split} \tag{39}\]
Proof.: First, combining the first and second lines of (38), we obtain \(x^{k+1}=x^{k}-\beta(x^{k}-y^{k})+\eta(u^{k}-Fy^{k})=x^{k}-\eta(Fy^{k}+\zeta^{k})\). Using this relation, we have
\[\|x^{k+1}-x^{\star}\|^{2}\,=\,\|x^{k}-x^{\star}\|^{2}-2\eta\langle Fy^{k}+ \zeta^{k},x^{k+1}-y^{k}\rangle-\|x^{k+1}-x^{k}\|^{2}-2\eta\langle Fy^{k}+ \zeta^{k},y^{k}-x^{\star}\rangle.\]
Next, from the first line of (38), we have \(\eta(Fy^{k}+\zeta^{k})=\beta(x^{k}-y^{k})+\eta(Fy^{k}-u^{k})\). Therefore, by the Cauchy-Schwarz inequality and Young's inequality, we can derive that
\[\begin{split}2\eta\langle Fy^{k}+\zeta^{k},x^{k+1}-y^{k}\rangle&=\,2\beta\langle x^{k}-y^{k},x^{k+1}-y^{k}\rangle+2\eta\langle Fy^{k}-u^{k},x^{k+1}-y^{k}\rangle\\ &\geq\,\beta\left[\|x^{k}-y^{k}\|^{2}+\|x^{k+1}-y^{k}\|^{2}-\|x^{k+1}-x^{k}\|^{2}\right]-2\eta\|Fy^{k}-u^{k}\|\|x^{k+1}-y^{k}\|\\ &\geq\,\beta\|x^{k}-y^{k}\|^{2}+(\beta-\gamma)\|x^{k+1}-y^{k}\|^{2}-\beta\|x^{k+1}-x^{k}\|^{2}-\frac{\eta^{2}}{\gamma}\|Fy^{k}-u^{k}\|^{2}.\end{split}\]
Finally, substituting this inequality into the above estimate, we obtain (39).
### Unified convergence analysis
The following theorem establishes the best-iterate convergence rates of (FBFS2) under star-co-hypomonotonicity.
**Theorem 5.1**.: _Suppose that \(F\) in (NI) is \(L\)-Lipschitz continuous, and \(\operatorname{zer}(\Phi)\neq\emptyset\). Suppose additionally that \(\Phi\) is \(\rho\)-star co-hypomonotone \((\)i.e., \(\langle u,x-x^{\star}\rangle\geq-\rho\|u\|^{2}\) for all \((x,u)\in\operatorname{gra}(\Phi)\) and for any \(x^{\star}\in\operatorname{zer}(\Phi)\), where \(\rho\geq 0)\), \(T\) is not necessarily monotone, but \(\operatorname{ran}(J_{\eta T})\subseteq\operatorname{dom}(F)=\mathbb{R}^{p}\) and \(\operatorname{dom}(J_{\eta T})=\mathbb{R}^{p}\). Let \(\{(x^{k},y^{k})\}\) be generated by (FBFS2). Then, the following statements hold._
1. (_FBFS method_) _Let us choose_ \(u^{k}:=Fx^{k}\) _and assume that_ \(L\rho\leq\frac{\sqrt{6}-2}{12}\approx 0.0375\)_. Then, for any_ \(\beta\in(0,1]\)_, if_ \(\eta\) _is chosen such that_ \[0\leq\frac{\beta\left[1-\sqrt{1-24L\rho(3L\rho+1)}\right]}{2L(3L\rho+1)}<\eta<\frac{\beta\left[1+\sqrt{1-24L\rho(3L\rho+1)}\right]}{2L(3L\rho+1)}\leq\frac{\beta}{L},\] (40) _then we have_ \[\min_{1\leq l\leq k+1}\|Fx^{l}+\xi^{l}\|^{2}\leq\frac{1}{k+1}\sum_{l=1}^{k+1}\|Fx^{l}+\xi^{l}\|^{2}\leq\frac{C_{\rho}\|x^{0}-x^{\star}\|^{2}}{k+1},\quad\xi^{l}\in Tx^{l},\] (41) _where_ \(C_{\rho}:=\frac{3+2L^{2}}{\eta(\beta\eta-6\beta^{2}\rho-(3L\rho+1)L\eta^{2})}>0\)_. As a consequence,_ \(\{\|x^{k}-x^{\star}\|\}\) _is nonincreasing and_ \[\lim_{k\to\infty}\|x^{k}-y^{k}\|=\lim_{k\to\infty}\|Fy^{k}+\zeta^{k}\|=\lim_{k\to\infty}\|Fx^{k}+\xi^{k}\|=0.\] (42) _Moreover,_ \(\{x^{k}\}\) _converges to_ \(x^{\star}\in\mathrm{zer}(\Phi)\)_, a solution of (NI)._
2. (_Past-FBFS/OG method_) _Let us choose_ \(u^{k}:=Fy^{k-1}\) _with_ \(y^{-1}:=x^{0}\) _and assume that_ \(L\rho\leq\frac{2\sqrt{3}-3}{24}\approx 0.01934\)_. Then, for any_ \(\beta\in(0,1]\)_, if_ \(\eta\) _is chosen such that_ \[0\leq\frac{\beta\left[1-\sqrt{1-48L\rho(4L\rho+1)}\right]}{6L(4L\rho+1)}<\eta< \frac{\beta\left[1+\sqrt{1-48L\rho(4L\rho+1)}\right]}{6L(4L\rho+1)}\leq\frac{ \beta}{3L},\] (43) _then we have_ \[\min_{1\leq l\leq k+1}\|Fx^{l}+\xi^{l}\|^{2}\leq\frac{1}{k+1}\sum_{l=1}^{k+1} \left[\|Fx^{l}+\xi^{l}\|^{2}+\frac{L\eta^{2}+8\beta^{2}\rho}{2\eta}\|x^{l}-y ^{l-1}\|^{2}\right]\leq\frac{\hat{C}_{\rho}\|x^{0}-x^{\star}\|^{2}}{k+1},\] (44) _where_ \(\hat{C}_{\rho}:=\frac{3+2L^{2}}{\eta(\beta\eta-4\beta^{2}\rho-3(4L\rho+1)L \eta^{2})}>0\)_. Moreover,_ \(\{x^{k}\}\) _converges to_ \(x^{\star}\in\mathrm{zer}(\Phi)\) _and (_29_) still holds._
_In both cases (a) and (b), if \(J_{\eta T}\) is single-valued and nonexpansive, then_
\[\min_{1\leq l\leq k}\|G_{\eta\Phi}x^{l}\|=\mathcal{O}\left(\frac{1}{\sqrt{k}} \right)\quad\text{and}\quad\min_{1\leq l\leq k}\|G_{\eta\Phi}y^{l}\|=\mathcal{ O}\left(\frac{1}{\sqrt{k}}\right), \tag{45}\]
_where \(G_{\eta\Phi}x:=\frac{1}{\eta}(x-J_{\eta T}(x-\eta Fx))\) is given by (4). Moreover, \(\lim_{k\to\infty}\|G_{\eta\Phi}y^{k}\|=\lim_{k\to\infty}\|G_{\eta\Phi}x^{k}\|=0\)._
Proof.: (a) **(FBFS method)** For \(u^{k}:=Fx^{k}\), by the \(L\)-Lipschitz continuity of \(F\), we have \(\|Fy^{k}-u^{k}\|=\|Fy^{k}-Fx^{k}\|\leq L\|y^{k}-x^{k}\|\). Using this inequality into (39), we have
\[\begin{split}\|x^{k+1}-x^{\star}\|^{2}&\leq\,\|x^{k}-x^{\star}\|^{2}-(1-\beta)\|x^{k+1}-x^{k}\|^{2}-(\beta-\gamma)\|x^{k+1}-y^{k}\|^{2}\\ &\quad-\,\left(\beta-\frac{L^{2}\eta^{2}}{\gamma}\right)\|x^{k}-y^{k}\|^{2}-2\eta\langle Fy^{k}+\zeta^{k},y^{k}-x^{\star}\rangle.\end{split} \tag{46}\]
Now, by Young's inequality, the \(L\)-Lipschitz continuity of \(F\), and \(x^{k}-y^{k}=\frac{\eta}{\beta}(u^{k}+\zeta^{k})=\frac{\eta}{\beta}(Fx^{k}+ \zeta^{k})\) from the first line of (38), we have
\[\|Fy^{k}+\zeta^{k}\|^{2} \leq \frac{3}{2}\|Fy^{k}-Fx^{k}\|^{2}+3\|Fx^{k}+\zeta^{k}\|^{2}\leq \frac{3(L^{2}\eta^{2}+2\beta^{2})}{2\eta^{2}}\|x^{k}-y^{k}\|^{2}. \tag{47}\]
Similarly, using the second line of (38), Young's inequality, and the \(L\)-Lipschitz continuity of \(F\), we have
\[\eta^{2}\|Fx^{k+1}+\xi^{k+1}\|^{2} = \|x^{k+1}-x^{k}+\eta(Fx^{k+1}-Fy^{k})\|^{2} \tag{48}\] \[\leq (1+\frac{2L^{2}}{3})\|x^{k+1}-x^{k}\|^{2}+(1+\frac{3}{2L^{2}})\| Fx^{k+1}-Fy^{k}\|^{2}\] \[\leq \frac{3}{2}(1+\frac{2L^{2}}{3})\|x^{k+1}-y^{k}\|^{2}+(1+\frac{3}{ 2L^{2}})L^{2}\|x^{k+1}-y^{k}\|^{2}+3(1+\frac{2L^{2}}{3})\|x^{k}-y^{k}\|^{2}\] \[= (3+2L^{2})[\|x^{k+1}-y^{k}\|^{2}+\|x^{k}-y^{k}\|^{2}].\]
Next, since \(\langle u,x-x^{\star}\rangle\geq-\rho\|u\|^{2}\) for any \((x,u)\in\operatorname{gra}(\Phi)\), we have \(\langle Fy^{k}+\zeta^{k},y^{k}-x^{\star}\rangle\geq-\rho\|Fy^{k}+\zeta^{k}\|^{2}\) since \((y^{k},\zeta^{k})\in\operatorname{gra}(T)\). Using this relation and (47) into (46) with \(\gamma:=L\eta\), we get
\[\|x^{k+1}-x^{\star}\|^{2} \leq\,\|x^{k}-x^{\star}\|^{2}-(\beta-L\eta)\|x^{k}-y^{k}\|^{2}-( \beta-L\eta)\|x^{k+1}-y^{k}\|^{2}+2\eta\rho\|Fy^{k}+\zeta^{k}\|^{2} \tag{49}\] \[\leq\,\|x^{k}-x^{\star}\|^{2}-(\beta-L\eta)\|x^{k+1}-y^{k}\|^{2}- \frac{\beta\eta-6\beta^{2}\rho-(3L\rho+1)L\eta^{2}}{\eta}\|x^{k}-y^{k}\|^{2}.\]
Let us impose \(\beta\eta-6\beta^{2}\rho-(3L\rho+1)L\eta^{2}>0\), which holds if
\[0\leq\frac{\beta\left[1-\sqrt{1-24L\rho(3L\rho+1)}\right]}{2L(3L\rho+1)}<\eta <\frac{\beta\left[1+\sqrt{1-24L\rho(3L\rho+1)}\right]}{2L(3L\rho+1)}\leq\frac {\beta}{L},\]
provided that \(L\rho\leq\frac{\sqrt{6}-2}{12}\approx 0.0375\). This choice of \(\eta\) is exactly (40). Let \(\psi:=\frac{\beta\eta-6\beta^{2}\rho-(3L\rho+1)L\eta^{2}}{\eta}=\beta-L\eta-\frac{3\rho}{\eta}(2\beta^{2}+L^{2}\eta^{2})>0\). Then, we have \(\beta-L\eta-\psi\geq 0\). In this case, (49) becomes
\[\|x^{k+1}-x^{\star}\|^{2} \leq\,\|x^{k}-x^{\star}\|^{2}-\psi[\|x^{k}-y^{k}\|^{2}+\|x^{k+1}- y^{k}\|^{2}]-(\beta-L\eta-\psi)\|x^{k+1}-y^{k}\|^{2}\] \[\leq\,\|x^{k}-x^{\star}\|^{2}-\psi[\|x^{k}-y^{k}\|^{2}+\|x^{k+1}- y^{k}\|^{2}].\]
Finally, using this estimate and (48), we can easily prove (41) in Theorem 5.1. The remaining statements are direct consequences of (41), (47), and (48) using standard arguments.
(b) **(Past-FBFS/OG method)** For \(u^{k}:=Fy^{k-1}\), using Young's inequality and the \(L\)-Lipschitz continuity of \(F\), we can derive
\[\|Fy^{k}-u^{k}\|^{2}=\|Fy^{k}-Fy^{k-1}\|^{2}\leq 3L^{2}\|x^{k}-y^{k}\|^{2}+ \frac{3L^{2}}{2}\|x^{k}-y^{k-1}\|^{2}.\]
Next, since \(\langle u,x-x^{\star}\rangle\geq-\rho\|u\|^{2}\) for all \((x,u)\in\operatorname{gra}(\Phi)\), we have \(\langle Fy^{k}+\zeta^{k},y^{k}-x^{\star}\rangle\geq-\rho\|Fy^{k}+\zeta^{k}\|^{2}\). Using this inequality and the first line of (38) written as \(\eta(Fy^{k}+\zeta^{k})=\beta(x^{k}-y^{k})+\eta(Fy^{k}-Fy^{k-1})\), we can derive that
\[\langle Fy^{k}+\zeta^{k},y^{k}-x^{\star}\rangle \geq\,-\rho\|Fy^{k}+\zeta^{k}\|^{2}=-\frac{\rho}{\eta^{2}}\|\beta( x^{k}-y^{k})+\eta(Fy^{k}-Fy^{k-1})\|^{2}\] \[\geq\,-\frac{2\rho\beta^{2}}{\eta^{2}}\|x^{k}-y^{k}\|^{2}-4\rho \|Fy^{k}-Fx^{k}\|^{2}-4\rho\|Fx^{k}-Fy^{k-1}\|^{2}\] \[\geq\,-\frac{2\rho(\beta^{2}+2L^{2}\eta^{2})}{\eta^{2}}\|x^{k}-y^ {k}\|^{2}-4\rho L^{2}\|x^{k}-y^{k-1}\|^{2}.\]
Substituting the last two expressions into (39) and choosing \(\gamma:=L\eta\), we obtain
\[\begin{split}\mathcal{V}_{k+1}&\,\leq\,\mathcal{V}_{k }-(1-\beta)\|x^{k+1}-x^{k}\|^{2}-\left(\beta-\frac{5L\eta}{2}-8\rho L^{2}\eta \right)\|x^{k+1}-y^{k}\|^{2}\\ &\quad-\,\left[\beta-3L\eta-\frac{4\rho(\beta^{2}+2L^{2}\eta^{2} )}{\eta}\right]\|x^{k}-y^{k}\|^{2},\end{split} \tag{50}\]
where \(\mathcal{V}_{k}:=\|x^{k}-x^{\star}\|^{2}+L\eta\left(\frac{3}{2}+8\rho L\right) \|x^{k}-y^{k-1}\|^{2}\).
Let us define \(\hat{\psi}:=\beta-3L\eta-\frac{4\rho(\beta^{2}+2L^{2}\eta^{2})}{\eta}\) and \(\hat{\varphi}:=\beta-\frac{5L\eta}{2}-8\rho L^{2}\eta\) and impose the condition that \(\hat{\psi}>0\). Then, it is clear that \(\hat{\varphi}-\hat{\psi}=\frac{L\eta}{2}+\frac{4\beta^{2}\rho}{\eta}\geq 0\). Moreover, (50) becomes \(\mathcal{V}_{k+1}\leq\mathcal{V}_{k}-\hat{\psi}\big{[}\|x^{k+1}-y^{k}\|^{2}+\|x^{k}-y^{k}\|^{2}\big{]}-(\hat{\varphi}-\hat{\psi})\|x^{k+1}-y^{k}\|^{2}\). Combining this estimate and (48), we can easily prove (44) in Theorem 5.1. Note that the condition \(\hat{\psi}>0\) holds if \(\eta\) is chosen as in (43) provided that \(L\rho\leq\frac{2\sqrt{3}-3}{24}\approx 0.01934\). Using (44), we can easily prove that \(\{x^{k}\}\) converges to \(x^{\star}\). Moreover, (44) also implies \(\lim_{k\to\infty}\|x^{k}-y^{k}\|=\lim_{k\to\infty}\|x^{k+1}-y^{k}\|=\lim_{k\to\infty}\|Fx^{k}+\xi^{k}\|=0\). By the second line of (38), we have
\[\eta\|Fy^{k}+\zeta^{k}\|\leq L\eta\|x^{k}-y^{k}\|+L\eta\|x^{k}-y^{k-1}\|+\eta\|Fy^{k-1}+\zeta^{k}\|\leq(L\eta+1)\|x^{k}-y^{k}\|+L\eta\|x^{k}-y^{k-1}\|.\]
Combining this inequality and \(\lim_{k\to\infty}\|x^{k}-y^{k}\|=\lim_{k\to\infty}\|x^{k+1}-y^{k}\|=0\), we obtain \(\lim_{k\to\infty}\eta\|Fy^{k}+\zeta^{k}\|=0\).
Finally, if \(J_{\eta T}\) is single-valued and nonexpansive, then by using \(\|G_{\Phi}x^{k}\|\leq\|Fx^{k}+\xi^{k}\|\) from (5) and either (41) or (44), we obtain (45). Using again (5) and the limits (42), we obtain \(\lim_{k\to\infty}\|G_{\Phi}x^{k}\|\leq\lim_{k\to\infty}\|Fx^{k}+\xi^{k}\|=0\) and \(\lim_{k\to\infty}\|G_{\Phi}y^{k}\|\leq\lim_{k\to\infty}\|Fy^{k}+\zeta^{k}\|=0\).
**Remark 5.1**.: The \(\rho\)-star co-hypomonotone condition in Theorem 5.1(b) trivially holds if \(\Phi:=F+T\) is \(\rho\)-co-hypomonotone. Hence, the star co-hypomonotone condition is generally weaker than the \(\rho\)-co-hypomonotonicity. Note that we have not tried to optimize the parameters in Theorem 5.1, and generally in the whole paper. By carefully tightening the bounds and selecting the parameters in our analysis (e.g., where Young's inequality is used), we can improve the range of parameters. Note that the convergence analysis for the monotone case is very classical, which can be found, e.g., in [50, 73]. However, the convergence rates for \(\beta<1\), and for the star co-hypomonotone case are recent results. The best-iterate convergence rate of (FBFS2) was proven in [83] for the star co-hypomonotone case, but using a potential function. Here, we provide a different proof using classical results in [50] combined with the star co-hypomonotonicity of \(\Phi\).
## 6 Two Other Variants of the Extragradient Method
In this section, we review two additional methods: the reflected forward-backward splitting (RFBS) algorithm [30, 87] and the golden ratio (GR) scheme [88]. The last-iterate analysis for the RFBS scheme was recently given in [27], but only for (VIP). Here, we provide a new analysis for both the best-iterate and the last-iterate rates for RFBS to solve (NI), which is more general than (VIP). The best-iterate convergence analysis for GR modifies the proof from [88] to expand the range of parameters. Nevertheless, the last-iterate convergence rate analysis of GR is still open.
### Reflected forward-backward splitting method
The reflected forward-backward splitting method was proposed by Malitsky in [87] to solve (VIP) and it is called the **projected reflected gradient** method. It was generalized to solve monotone (NI) in [30], which is called the **reflected forward-backward splitting (RFBS)** scheme. The last iterate convergence rate of the **projected reflected gradient** method for (VIP) was recently proven in [27]. In this subsection, we survey this method for solving (NI). We provide a new best-iterate convergence rate analysis compared to [30]. We also present an elementary proof for the last-iterate convergence rate of RFBS for solving monotone (NI).
The reflected forward-backward splitting (RFBS) method to approximate a solution of (NI) is described as follows. Starting from \(x^{0}\in\operatorname{dom}(\Phi)\), we choose \(x^{-1}:=x^{0}\) and at each iteration \(k\geq 0\), we update
\[\left\{\begin{aligned} & y^{k}\quad:=\,2x^{k}-x^{k-1},\\ & x^{k+1}\,:=\,J_{\eta T}(x^{k}-\eta Fy^{k}),\end{aligned}\right.\] (RFBS2)
where \(\eta>0\) is a given step-size, determined later. Clearly, if we eliminate \(y^{k}\), then (RFBS2) can be written as
\[x^{k+1}:=J_{\eta T}(x^{k}-\eta F(2x^{k}-x^{k-1})).\]
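For concreteness, here is a minimal Python sketch of (RFBS2) in this eliminated form, assuming \(T=\partial g\) with an inexpensive resolvent; the instance and stepsize are illustrative and chosen to satisfy the condition \(\eta<\frac{\sqrt{2}-1}{L}\) of Theorem 6.1 below.

```python
import numpy as np

def rfbs2(F, resolvent, x0, eta, iters=500):
    # Sketch of (RFBS2): x^{k+1} = J_{eta T}(x^k - eta F(2 x^k - x^{k-1})), with x^{-1} := x^0.
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(iters):
        y = 2.0 * x - x_prev                  # reflected point y^k
        x_prev, x = x, resolvent(x - eta * F(y), eta)
    return x

# Illustrative monotone instance: skew-symmetric F (L = 1) and T = ∂(lam*||.||_1), so x* = 0.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
lam = 0.1
F = lambda x: A @ x
resolvent = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)

print(rfbs2(F, resolvent, np.array([1.0, 1.0]), eta=0.35))   # eta < (sqrt(2)-1)/L ≈ 0.414
```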
From the second line of (RFBS2), we have \(\xi^{k+1}:=\frac{1}{\eta}(x^{k}-\eta Fy^{k}-x^{k+1})\in Tx^{k+1}\). As before, if we denote
\[w^{k}:=Fx^{k}+\xi^{k}\quad\text{and}\quad\hat{w}^{k}:=Fy^{k-1}+\xi^{k}, \tag{51}\]
then we can rewrite (RFBS2) equivalently to
\[\left\{\begin{aligned} & y^{k}\quad:=\,x^{k}+x^{k}-x^{k-1}\,=\,x^{k}- \eta\hat{w}^{k},\\ & x^{k+1}\,:=\,x^{k}-\eta\hat{w}^{k+1}.\end{aligned}\right. \tag{52}\]
This expression leads to \(x^{k+1}-y^{k}=-\eta(\hat{w}^{k+1}-\hat{w}^{k})\). Next, we prove the following lemmas for our analysis.
**Lemma 6.1**.: _Assume that \(T\) in (NI) is maximally monotone and \(F\) in (NI) is \(L\)-Lipschitz continuous and satisfies \(\langle Fx-Fx^{*},x-x^{*}\rangle\geq 0\) for all \(x\in\operatorname{dom}(\Phi)\) and some \(x^{*}\in\operatorname{zer}(\Phi)\). Let \(\left\{(x^{k},y^{k})\right\}\) be generated by (RFBS2) using \(\eta>0\), and let \(\mathcal{V}_{k}\) be defined as_
\[\mathcal{V}_{k}\,:=\,\|x^{k}-x^{*}\|^{2}+2\|x^{k}-x^{k-1}\|^{2}+\big{(}1-\sqrt {2}L\eta\big{)}\|x^{k}-y^{k-1}\|^{2}+2\eta\langle Fy^{k-1}-Fx^{*},x^{k}-x^{k-1 }\rangle. \tag{53}\]
_Then, we have_
\[\begin{split}\mathcal{V}_{k}&\,\geq\,\mathcal{V}_{k+1}+ \left[1-(1+\sqrt{2})L\eta\right]\left[\|y^{k}-x^{k}\|^{2}+\|x^{k}-y^{k-1}\|^{2} \right],\\ \mathcal{V}_{k}&\,\geq\,(1-L\eta)\|x^{k}-x^{\star} \|^{2}+(1-(1+\sqrt{2})L\eta)\|x^{k}-y^{k-1}\|^{2}+2(1-L\eta)\|x^{k}-x^{k-1}\| ^{2}.\end{split} \tag{54}\]
Proof.: First, since (RFBS2) is equivalent to (52), we have \(x^{k+1}-x^{k}=-\eta\hat{w}^{k+1}\) from the second line of (52). Using this expression, for any \(x^{\star}\in\mathrm{zer}(\Phi)\), we have
\[\|x^{k+1}-x^{\star}\|^{2}\,=\,\|x^{k}-x^{\star}\|^{2}-2\eta\langle\hat{w}^{k+ 1},x^{k+1}-x^{\star}\rangle-\|x^{k+1}-x^{k}\|^{2}. \tag{55}\]
Next, since \(Fx^{\star}+\xi^{\star}=0\) from the fact that \(x^{\star}\in\mathrm{zer}(\Phi)\) and \(T\) is monotone, we have \(\langle\xi^{k+1},x^{k+1}-x^{\star}\rangle\geq\langle\xi^{\star},x^{k+1}-x^{ \star}\rangle=-\langle Fx^{\star},x^{k+1}-x^{\star}\rangle\), where \(\xi^{\star}\in Tx^{\star}\). Using this relation, we can prove that
\[\begin{split}\langle\hat{w}^{k+1},x^{k+1}-x^{\star}\rangle& \,=\,\langle Fy^{k},x^{k+1}-x^{\star}\rangle+\langle\xi^{k+1},x^{ k+1}-x^{\star}\rangle\\ &\,\geq\,\langle Fy^{k}-Fx^{\star},x^{k+1}-y^{k}\rangle+\langle F y ^{k}-Fx^{\star},y^{k}-x^{\star}\rangle.\end{split} \tag{56}\]
Utilizing \(y^{k}-x^{k}=x^{k}-x^{k-1}\) from the first line of (RFBS2), we can further expand

\[\begin{split}\langle Fy^{k}-Fx^{\star},y^{k}-x^{k+1}\rangle&\,=\,\langle Fy^{k-1}-Fx^{\star},y^{k}-x^{k}\rangle-\langle Fy^{k}-Fx^{\star},x^{k+1}-x^{k}\rangle+\langle Fy^{k}-Fy^{k-1},y^{k}-x^{k}\rangle\\ &\,=\,\langle Fy^{k-1}-Fx^{\star},x^{k}-x^{k-1}\rangle-\langle Fy^{k}-Fx^{\star},x^{k+1}-x^{k}\rangle+\langle Fy^{k}-Fy^{k-1},x^{k}-x^{k-1}\rangle.\end{split} \tag{57}\]

Substituting (56) and (57) into (55), using \(\langle Fy^{k}-Fx^{\star},y^{k}-x^{\star}\rangle\geq 0\), bounding the remaining cross terms by Young's inequality and the \(L\)-Lipschitz continuity of \(F\), and rearranging the result with the definition of \(\mathcal{V}_{k}\) in (53), we obtain \(\mathcal{V}_{k}\geq\mathcal{V}_{k+1}+\big{[}1-(1+\sqrt{2})L\eta\big{]}\big{[}\|y^{k}-x^{k}\|^{2}+\|x^{k}-y^{k-1}\|^{2}\big{]}\),
which proves the first inequality of (54).
Next, using Young's inequality twice, we can show that
\[2\eta\langle Fy^{k-1}-Fx^{\star},x^{k}-x^{k-1}\rangle \geq\,-\tfrac{L\eta}{2}\|y^{k-1}-x^{\star}\|^{2}-2L\eta\|x^{k}-x^{k- 1}\|^{2}\] \[\geq\,-L\eta\|x^{k}-x^{\star}\|^{2}-L\eta\|x^{k}-y^{k-1}\|^{2}-2L \eta\|x^{k}-x^{k-1}\|^{2}.\]
Substituting this estimate into (53), we get
\[\mathcal{V}_{k} :=\,\|x^{k}-x^{\star}\|^{2}+2\|x^{k}-x^{k-1}\|^{2}+\big{(}1-\sqrt {2}L\eta\big{)}\,\|x^{k}-y^{k-1}\|^{2}+2\eta\langle Fy^{k-1}-Fx^{\star},x^{k}- x^{k-1}\rangle\] \[\geq\,(1-L\eta)\|x^{k}-x^{\star}\|^{2}+(1-(1+\sqrt{2})L\eta)\|x^ {k}-y^{k-1}\|^{2}+2(1-L\eta)\|x^{k}-x^{k-1}\|^{2},\]
which proves the second line of (54).
**Lemma 6.2**.: _Assume that \(F+T\) in (NI) is maximally monotone and \(F\) in (NI) is \(L\)-Lipschitz continuous. Let \(\big{\{}(x^{k},y^{k})\big{\}}\) be generated by (RFBS2) using \(\eta>0\). Then, we have_
\[\|Fx^{k}+\xi^{k}\|^{2}\leq\tfrac{5L^{2}\eta^{2}+3}{3\eta^{2}}\|x^{k}-y^{k}\|^{ 2}+\tfrac{5L^{2}\eta^{2}+3}{5\eta^{2}}\|x^{k}-y^{k-1}\|^{2}. \tag{58}\]
_Moreover, if \(\sqrt{2}L\eta<1\), then with \(\kappa:=\tfrac{2L^{2}\eta^{2}}{1-2L^{2}\eta^{2}}>0\), we also have_
\[\|Fx^{k+1}+\xi^{k+1}\|^{2}+\kappa\|Fx^{k+1}-Fy^{k}\|^{2} \leq\,\|Fx^{k}+\xi^{k}\|^{2}+\kappa\|Fx^{k}-Fy^{k-1}\|^{2} \tag{59}\] \[\quad-\,\left(\tfrac{1-4L^{2}\eta^{2}}{1-2L^{2}\eta^{2}}\right) \|Fy^{k}-Fx^{k}+\xi^{k+1}-\xi^{k}\|^{2}.\]
Proof.: First, by Young's inequality, \(x^{k}-y^{k}=\eta(Fy^{k-1}+\xi^{k})\) from (52), and the \(L\)-Lipschitz continuity of \(F\), we have
\[\|Fx^{k}+\xi^{k}\|^{2} \leq\,\big{(}1+\tfrac{5L^{2}\eta^{2}}{3}\big{)}\|Fy^{k-1}+\xi^{k} \|^{2}+\left(1+\tfrac{3}{5L^{2}\eta^{2}}\right)\|Fx^{k}-Fy^{k-1}\|^{2}\] \[\leq\,\tfrac{5L^{2}\eta^{2}+3}{3\eta^{2}}\|x^{k}-y^{k}\|^{2}+ \tfrac{5L^{2}\eta^{2}+3}{5\eta^{2}}\|x^{k}-y^{k-1}\|^{2}.\]
This estimate is exactly (58).
Next, by the monotonicity of \(F+T\), we have \(\langle w^{k+1}-w^{k},x^{k+1}-x^{k}\rangle\geq 0\). Substituting \(x^{k+1}-x^{k}=-\eta\hat{w}^{k+1}\) into this inequality, we get
\[0\,\leq\,2\langle w^{k},\hat{w}^{k+1}\rangle-2\langle w^{k+1},\hat{w}^{k+1}\rangle=\|w^{k}\|^{2}-\|w^{k+1}\|^{2}+\|w^{k+1}-\hat{w}^{k+1}\|^{2}-\|\hat{w}^{k+1}-w^{k}\|^{2}.\]
This inequality implies that
\[\|w^{k+1}\|^{2}\,\leq\,\|w^{k}\|^{2}+\|w^{k+1}-\hat{w}^{k+1}\|^{2}-\|\hat{w}^{ k+1}-w^{k}\|^{2}.\]
On the other hand, by the Lipschitz continuity of \(F\) and \(x^{k+1}-y^{k}=-\eta(\hat{w}^{k+1}-\hat{w}^{k})\), we have \(\|w^{k+1}-\hat{w}^{k+1}\|^{2}=\|Fx^{k+1}-Fy^{k}\|^{2}\leq L^{2}\|x^{k+1}-y^{k}\|^{2}=L^{2}\eta^{2}\|\hat{w}^{k+1}-\hat{w}^{k}\|^{2}\leq 2L^{2}\eta^{2}\|\hat{w}^{k+1}-w^{k}\|^{2}+2L^{2}\eta^{2}\|w^{k}-\hat{w}^{k}\|^{2}\). Multiplying this inequality by \(\omega\geq 1\), and adding the result to the last inequality, we get
\[\|w^{k+1}\|^{2}+(\omega-1)\|w^{k+1}-\hat{w}^{k+1}\|^{2}\,\leq\,\|w^{k}\|^{2}+2 \omega L^{2}\eta^{2}\|w^{k}-\hat{w}^{k}\|^{2}-(1-2\omega L^{2}\eta^{2})\|\hat{ w}^{k+1}-w^{k}\|^{2}.\]
Finally, let us choose \(\omega\geq 1\) such that \(\omega-1=2\omega L^{2}\eta^{2}\). If \(2L^{2}\eta^{2}<1\), then \(\omega:=\tfrac{1}{1-2L^{2}\eta^{2}}\) satisfies \(\omega-1=2\omega L^{2}\eta^{2}\). Consequently, the last estimate leads to (59).
Now, we are ready to establish the convergence of (RFBS2) in the following theorem.
**Theorem 6.1**.: _Assume that \(\operatorname{zer}(\Phi)\neq\emptyset\), \(T\) in (NI) is maximally monotone, and \(F\) in (NI) is \(L\)-Lipschitz continuous and satisfies \(\langle Fx-Fx^{\star},x-x^{\star}\rangle\geq 0\) for all \(x\in\operatorname{dom}(\Phi)\) and some \(x^{\star}\in\operatorname{zer}(\Phi)\). Let \(\big{\{}(x^{k},y^{k})\big{\}}\) be generated by (RFBS2) using \(\eta\in\Big{(}0,\tfrac{\sqrt{2}-1}{L}\Big{)}\). Then, we have the following statements._
* _The_ \(\mathcal{O}\big{(}1/\sqrt{k}\big{)}\) _best-iterate convergence rate._ _The following bound holds:_ \[\frac{1}{k+1}\sum_{l=0}^{k}\|Fx^{l}+\xi^{l}\|^{2}\leq\frac{1}{k+1}\sum_{l=0}^{k}\big{[}\|Fx^{l}+\xi^{l}\|^{2}+\kappa\|Fx^{l}-Fy^{l-1}\|^{2}\big{]}\leq\frac{C_{0}\|x^{0}-x^{\star}\|^{2}}{k+1},\] (60) _where_ \(\kappa:=\frac{2L^{2}\eta^{2}}{1-2L^{2}\eta^{2}}>0\) _and_ \(C_{0}:=\frac{5L^{2}\eta^{2}+3}{3\eta^{2}\big{[}1-(1+\sqrt{2})L\eta\big{]}}>0\)_._
* _The_ \(\mathcal{O}\big{(}1/\sqrt{k}\big{)}\) _last-iterate convergence rate._ _If_ \(\Phi\) _is additionally monotone, then we also have_ \[\|Fx^{k}+\xi^{k}\|^{2}\leq\|Fx^{k}+\xi^{k}\|^{2}+\kappa\|Fx^{k}-Fy^{k-1}\|^{2 }\leq\frac{C_{0}\|x^{0}-x^{\star}\|^{2}}{k+1}.\] (61) _As a consequence, we have the last-iterate convergence rate_ \(\mathcal{O}\left(\frac{1}{\sqrt{k}}\right)\) _of the residual norm_ \(\|Fx^{k}+\xi^{k}\|\)_._
Proof.: First, since \(0<\eta<\frac{\sqrt{2}-1}{L}\), we have \(1-(1+\sqrt{2})L\eta>0\) and \(\kappa:=\frac{2L^{2}\eta^{2}}{1-2L^{2}\eta^{2}}<\frac{2}{3}\). From (58), we have
\[\|Fx^{k}+\xi^{k}\|^{2}+\kappa\|Fx^{k}-Fy^{k-1}\|^{2} \leq \frac{5L^{2}\eta^{2}+3}{3\eta^{2}}\|x^{k}-y^{k}\|^{2}+\left( \frac{5L^{2}\eta^{2}+3}{5\eta^{2}}+\frac{2L^{2}}{3}\right)\|x^{k}-y^{k-1}\|^{2}\] \[\leq \frac{5L^{2}\eta^{2}+3}{3\eta^{2}}\left[\|x^{k}-y^{k}\|^{2}+\|x^ {k}-y^{k-1}\|^{2}\right].\]
Combining this estimate and (54), we get
\[\frac{3\eta^{2}\big{[}1-(1+\sqrt{2})L\eta\big{]}}{5L^{2}\eta^{2} +3}\left[\|Fx^{k}+\xi^{k}\|^{2}+\kappa\|Fx^{k}-Fy^{k-1}\|^{2}\right] \leq (1-(1+\sqrt{2})L\eta)\left[\|x^{k}-y^{k}\|^{2}+\|x^{k}-y^{k-1}\|^ {2}\right]\] \[\leq \mathcal{V}_{k}-\mathcal{V}_{k+1}.\]
Summing up this inequality from \(l=0\) to \(l=k\), and using \(y^{-1}:=x^{0}\), we get
\[\frac{3\eta^{2}\big{[}1-(1+\sqrt{2})L\eta\big{]}}{5L^{2}\eta^{2} +3}\sum_{l=0}^{k}\left[\|Fx^{l}+\xi^{l}\|^{2}+\kappa\|Fx^{l}-Fy^{l-1}\|^{2} \right]\leq\,\mathcal{V}_{0}-\mathcal{V}_{k+1}\leq\mathcal{V}_{0}=\|x^{0}-x^{ \star}\|^{2}.\]
This inequality implies (60). Finally, combining (60) and (59), we obtain (61).
**Remark 6.1**.: We can modify (RFBS2) to capture adaptive parameters as \(y^{k}:=x^{k}+\beta_{k}(x^{k}-x^{k-1})\) and \(x^{k+1}:=J_{\eta_{k}T}(x^{k}-\eta_{k}Fy^{k})\), where \(\eta_{k}:=\beta_{k}\eta_{k-1}\) for some \(\beta_{k}>0\). Then, by imposing appropriate bounds on \(\eta_{k}\), we can still prove the convergence of this variant by modifying the proof of Theorem 6.1. We also note that our best-iterate convergence analysis of (RFBS2) in this paper is relatively different from [30], while the last-iterate convergence rate analysis is new and very simple.
### The golden ratio method for (NI)
The golden ratio (GR) method for solving (NI) is presented as follows. Starting from \(x^{0}\in\mathrm{dom}(\Phi)\), we set \(y^{-1}:=x^{0}\), and at each iteration \(k\geq 0\), we update
\[\left\{\begin{array}{lcl}y^{k}&:=&\frac{\omega-1}{\omega}x^{k}+\frac{1}{ \omega}y^{k-1},\\ x^{k+1}&:=&J_{\eta T}(y^{k}-\eta Fx^{k}),\end{array}\right.\] (GR2)
where \(J_{\eta T}\) is the resolvent of \(\eta T\), \(\omega>1\) is given, and \(\eta\in(0,\frac{\omega}{2L})\).
This method was proposed by Malitsky in [88] to solve monotone (MVIP), where \(\omega\) is chosen as \(\omega:=\frac{\sqrt{5}+1}{2}\), leading to the name: _golden ratio_. We now extend it to solve (NI) for the case where \(F\) is monotone and \(L\)-Lipschitz continuous, and \(T\) is maximally 3-cyclically monotone. Moreover, we extend our analysis to any \(\omega\in(1,1+\sqrt{3})\) instead of fixing \(\omega:=\frac{\sqrt{5}+1}{2}\). We call this extension the GR2+ scheme.
Let us denote \(\tilde{w}^{k}:=Fx^{k-1}+\xi^{k}\) for \(\xi^{k}\in Tx^{k}\). Then, we can rewrite the second line of (GR2) as \(x^{k+1}:=y^{k}-\eta(Fx^{k}+\xi^{k+1})=y^{k}-\eta\tilde{w}^{k+1}\) for \(\xi^{k+1}\in Tx^{k+1}\). In this case, we have \(x^{k}=y^{k-1}-\eta(Fx^{k-1}+\xi^{k})\)
leading to \(y^{k-1}=x^{k}+\eta\tilde{w}^{k}\). Combining this expression and the first line of (GR2), we have \(y^{k}=\frac{w-1}{\omega}x^{k}+\frac{1}{\omega}(x^{k}+\eta\tilde{w}^{k})=x^{k}+ \frac{\eta}{\omega}\tilde{w}^{k}\). Consequently, we can rewrite (GR2) equivalently as follows:
\[\left\{\begin{aligned} y^{k}&:=\,x^{k}+\frac{\eta}{ \omega}\tilde{w}^{k},\\ x^{k+1}&:=\,y^{k}-\eta(Fx^{k}+\xi^{k+1})\,=\,y^{k}- \eta\tilde{w}^{k+1}.\end{aligned}\right. \tag{62}\]
If we eliminate \(y^{k}\), then we obtain
\[\left\{\begin{aligned} x^{k+1}&:=\,J_{\eta T} \left(x^{k}-\eta\left(Fx^{k}-\frac{1}{\omega}(Fx^{k-1}+\xi^{k})\right)\right)= x^{k}-\eta\tilde{w}^{k+1}+\frac{\eta}{\omega}\tilde{w}^{k},\\ \xi^{k+1}&:=\,\frac{1}{\eta}(x^{k}-x^{k+1})-\left( Fx^{k}-\frac{1}{\omega}(Fx^{k-1}+\xi^{k})\right).\end{aligned}\right. \tag{63}\]
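A minimal Python sketch of (GR2) is given below, assuming \(T=\partial g\) with an easy resolvent; the golden-ratio choice \(\omega=\frac{1+\sqrt{5}}{2}\), the instance, and the stepsize \(\eta<\frac{\omega}{2L}\) are illustrative.

```python
import numpy as np

def gr2(F, resolvent, x0, eta, omega=(1 + 5 ** 0.5) / 2, iters=500):
    # Sketch of (GR2): y^k = ((omega - 1) x^k + y^{k-1}) / omega,
    #                  x^{k+1} = J_{eta T}(y^k - eta F(x^k)),  with y^{-1} := x^0.
    x = x0.copy()
    y = x0.copy()
    for _ in range(iters):
        y = ((omega - 1.0) * x + y) / omega
        x = resolvent(y - eta * F(x), eta)
    return x

# Illustrative monotone instance: skew-symmetric F (L = 1) and T = ∂(lam*||.||_1), so x* = 0.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
lam = 0.1
F = lambda x: A @ x
resolvent = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)

print(gr2(F, resolvent, np.array([2.0, -1.0]), eta=0.6))   # eta < omega/(2L) ≈ 0.809
```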
The convergence of (GR2) is established based on the following key lemma.
**Lemma 6.3**.: _Suppose that \(\mathrm{zer}(\Phi)\neq\emptyset\), \(T\) in (NI) is maximally \(3\)-cyclically monotone, and \(F\) is \(L\)-Lipschitz continuous. Let \(\left\{(x^{k},y^{k})\right\}\) be generated by (GR2) with \(\omega>1\). Then, for any \(x^{\star}\in\mathrm{zer}(\Phi)\), we have_
\[\begin{split}\omega\|y^{k+1}-x^{\star}\|^{2}+(\omega-1)(\omega-\gamma)\|x^{k+1}-x^{k}\|^{2}&\,\leq\,\omega\|y^{k}-x^{\star}\|^{2}+\frac{(\omega-1)L^{2}\eta^{2}}{\gamma}\|x^{k}-x^{k-1}\|^{2}\\ &\,-\,\omega(\omega-1)\|x^{k}-y^{k}\|^{2}-\frac{(\omega-1)(1-\omega^{2}+\omega)}{\omega}\|x^{k+1}-y^{k}\|^{2}\\ &\,-\,2\eta(\omega-1)\langle Fx^{k}-Fx^{\star},x^{k}-x^{\star}\rangle.\end{split} \tag{64}\]
Proof.: Since \(T\) is \(3\)-cyclically monotone, for \(\xi^{k+1}\in Tx^{k+1}\), \(\xi^{k}\in Tx^{k}\), and \(\xi^{\star}\in Tx^{\star}\), we have
\[\langle\xi^{k+1},x^{k+1}-x^{\star}\rangle+\langle\xi^{\star},x^{\star}-x^{k} \rangle+\langle\xi^{k},x^{k}-x^{k+1}\rangle\geq 0. \tag{65}\]
From (62), we have \(\eta\xi^{k+1}=y^{k}-x^{k+1}-\eta Fx^{k}\) and \(\eta\xi^{k}=y^{k-1}-x^{k}-\eta Fx^{k-1}\). Moreover, since \(x^{\star}\in\mathrm{zer}(\Phi)\), we have \(Fx^{\star}+\xi^{\star}=0\), leading to \(\eta\xi^{\star}=-\eta Fx^{\star}\). Substituting these expressions into (65), we have
\[\langle y^{k}-x^{k+1}-\eta Fx^{k},x^{k+1}-x^{\star}\rangle+\langle y^{k-1}-x^{k}-\eta Fx^{k-1},x^{k}-x^{k+1}\rangle-\eta\langle Fx^{\star},x^{\star}-x^{k}\rangle\geq 0.\]
However, since \(y^{k}:=\frac{\omega-1}{\omega}x^{k}+\frac{1}{\omega}y^{k-1}\) from the first line of (GR2), we get \(y^{k-1}-x^{k}=\omega(y^{k}-x^{k})\). Substituting this relation into the last inequality, rearranging the result, we obtain
\[\langle y^{k}-x^{k+1},x^{k+1}-x^{\star}\rangle+\omega\langle x^{k}-y^{k},x^{k +1}-x^{k}\rangle+\eta\langle Fx^{k-1}-Fx^{k},x^{k+1}-x^{k}\rangle-\eta\langle Fx ^{k}-Fx^{\star},x^{k}-x^{\star}\rangle\geq 0. \tag{66}\]
By Young's inequality and the \(L\)-Lipschitz continuity of \(F\), for any \(\gamma>0\), we get the first line of the following:
\[\begin{split} 2\eta\langle Fx^{k-1}-Fx^{k},x^{k+1}-x^{k}\rangle& \,\leq\,\frac{L^{2}\eta^{2}}{\gamma}\|x^{k}-x^{k-1}\|^{2}+\gamma\|x^{k+1}-x^{k }\|^{2},\\ 2\langle y^{k}-x^{k+1},x^{k+1}-x^{\star}\rangle& \,=\,\|y^{k}-x^{\star}\|^{2}-\|x^{k+1}-x^{\star}\|^{2}-\|x^{k+1}-y^{k}\|^{2}, \\ 2\langle x^{k}-y^{k},x^{k+1}-x^{k}\rangle&\,=\,\|x^{k+1 }-y^{k}\|^{2}-\|x^{k}-y^{k}\|^{2}-\|x^{k+1}-x^{k}\|^{2}.\end{split}\]
Substituting these expressions into (66), and rearranging the result, we can show that
\[\begin{split}\|x^{k+1}-x^{\star}\|^{2}&\,\leq\,\|y^{k}- x^{\star}\|^{2}+(\omega-1)\|x^{k+1}-y^{k}\|^{2}-\omega\|x^{k}-y^{k}\|^{2}-( \omega-\gamma)\|x^{k+1}-x^{k}\|^{2}\\ &\,+\,\frac{L^{2}\eta^{2}}{\gamma}\|x^{k}-x^{k-1}\|^{2}-2\eta \langle Fx^{k}-Fx^{\star},x^{k}-x^{\star}\rangle.\end{split} \tag{67}\]
Now, using \((\omega-1)x^{k+1}=\omega y^{k+1}-y^{k}\) and \(\omega(y^{k+1}-y^{k})=(\omega-1)(x^{k+1}-y^{k})\) from the first line of (GR2), we can derive that
\[\begin{split}(\omega-1)^{2}\|x^{k+1}-x^{\star}\|^{2}& \,=\,\omega(\omega-1)\|y^{k+1}-x^{\star}\|^{2}-(\omega-1)\|y^{k}-x^{\star}\|^{2}+ \omega\|y^{k+1}-y^{k}\|^{2}\\ &\,=\,\omega(\omega-1)\|y^{k+1}-x^{\star}\|^{2}-(\omega-1)\|y^{k}- x^{\star}\|^{2}+\frac{(\omega-1)^{2}}{\omega}\|x^{k+1}-y^{k}\|^{2}.\end{split}\]
Simplifying this expression, we get \((\omega-1)\|x^{k+1}-x^{\star}\|^{2}=\omega\|y^{k+1}-x^{\star}\|^{2}-\|y^{k}-x^{\star}\|^{2}+\frac{(\omega-1)}{\omega}\|x^{k+1}-y^{k}\|^{2}\). Combining it with (67) and rearranging the result, we obtain (64).
Now, we are ready to state the convergence of (GR2) in the following theorem.
**Theorem 6.2**.: _Assume that \(\mathrm{zer}(\Phi)\neq\emptyset\), \(T\) in (NI) is maximally \(3\)-cyclically monotone, and \(F\) in (NI) is \(L\)-Lipschitz continuous and satisfies \(\langle Fx-Fx^{\star},x-x^{\star}\rangle\geq 0\) for all \(x\in\mathrm{dom}(\Phi)\) and some \(x^{\star}\in\mathrm{zer}(\Phi)\). Let \(\left\{(x^{k},y^{k})\right\}\) be generated by (GR2). Then, the following statements hold._
* _The best-iterate rate of GR2._ _If_ \(1<\omega\leq\frac{1+\sqrt{5}}{2}\) _and_ \(\eta\in\left(0,\frac{\omega}{2L}\right)\)_, then_ \[\frac{1}{k+1}\sum_{l=0}^{k}\|Fx^{l}+\xi^{l}\|^{2}\leq\frac{1}{k+1}\sum_{l=0}^ {k}(\omega-1)\left[\omega\|x^{l}-y^{l}\|^{2}+\varphi\cdot\|x^{l}-x^{l-1}\|^{2 }\right]\leq\frac{C_{0}\|x^{0}-x^{\star}\|^{2}}{k+1},\] (68) _where_ \(\varphi:=\frac{\omega^{2}-4L^{2}\eta^{2}}{2\omega}>0\) _and_ \(C_{0}:=\frac{(\omega^{2}-2L^{2}\eta^{2})\omega^{2}}{(\omega^{2}-4L^{2}\eta^{2 })\eta^{2}(\omega-1)}>0\)_._
* _The best-iterate rate for GR2+._ _If_ \(\frac{1+\sqrt{5}}{2}<\omega<1+\sqrt{3}\) _and_ \(0<\eta<\frac{\psi}{2L}\)_, then_ \[\frac{1}{k+1}\sum_{l=0}^{k}\|Fx^{l}+\xi^{l}\|^{2}\leq\frac{1}{k+1}\sum_{l=0}^{ k}(\omega-1)\left[\psi\cdot\|x^{l}-y^{l}\|^{2}+\kappa\cdot\|x^{l}-y^{l-1}\|^{2} \right]\leq\frac{\hat{C}_{0}\|x^{0}-x^{\star}\|^{2}}{k+1},\] (69) _where_ \(\psi:=\frac{2\omega+2-\omega^{2}}{\omega}>0\)_,_ \(\kappa:=\frac{\psi^{2}-4L^{2}\eta^{2}}{2\psi}\)_, and_ \(\hat{C}_{0}:=\frac{[\psi^{2}-2L^{2}\eta^{2}(2\omega^{2}-\psi^{2})]\omega}{( \omega-1)(\psi^{2}-4L^{2}\eta^{2})\eta^{2}\psi}>0\)_._
Proof.: First, to guarantee that \(1+\omega-\omega^{2}\geq 0\) and \(\omega>1\), we need to choose \(1<\omega\leq\frac{\sqrt{5}+1}{2}\). If \(0<\eta<\frac{\omega}{2L}\), then by choosing \(\gamma:=\frac{\omega}{2}\), we have \(\psi:=\frac{(\omega-1)(\omega\gamma-\gamma^{2}-L^{2}\eta^{2})}{\gamma}=\frac{ (\omega-1)(\omega^{2}-4L^{2}\eta^{2})}{2\omega}>0\). Using this relation and \(\langle Fx^{k}-Fx^{\star},x^{k}-x^{\star}\rangle\geq 0\), if we define \(\mathcal{V}_{k}:=\omega\|y^{k}-x^{\star}\|^{2}+\frac{\omega(\omega-1)}{2}\|x ^{k}-x^{k-1}\|^{2}\geq 0\), then we can deduce from (64) that
\[\mathcal{V}_{k+1}\,\leq\,\mathcal{V}_{k}-\psi\cdot\|x^{k}-x^{k-1}\|^{2}- \omega(\omega-1)\|x^{k}-y^{k}\|^{2}. \tag{70}\]
Next, using \(y^{k}-x^{k}=\frac{\eta}{\omega}\tilde{w}^{k}\) and \(\tilde{w}^{k}=Fx^{k-1}+\xi^{k}\), by Young's inequality, we have
\[\begin{split}\|w^{k}\|^{2}&=\,\|Fx^{k}+\xi^{k}\|^{2}\leq\left(1+\frac{\psi\omega}{L^{2}\eta^{2}(\omega-1)}\right)\|Fx^{k}-Fx^{k-1}\|^{2}+\left(1+\frac{L^{2}\eta^{2}(\omega-1)}{\psi\omega}\right)\|\tilde{w}^{k}\|^{2}\\ &\leq\,\left(1+\frac{\psi\omega}{L^{2}\eta^{2}(\omega-1)}\right)L^{2}\|x^{k}-x^{k-1}\|^{2}+\left(1+\frac{L^{2}\eta^{2}(\omega-1)}{\psi\omega}\right)\frac{\omega^{2}}{\eta^{2}}\|x^{k}-y^{k}\|^{2}\\ &=\,\frac{L^{2}\eta^{2}(\omega-1)+\psi\omega}{\psi\eta^{2}(\omega-1)}\left[\psi\cdot\|x^{k}-x^{k-1}\|^{2}+\omega(\omega-1)\|x^{k}-y^{k}\|^{2}\right].\end{split} \tag{71}\]
Combining this estimate and (70), and noting that \(\mathcal{V}_{k}\geq 0\), we can show that
\[\begin{split}\sum_{l=0}^{k}\|w^{l}\|^{2}&\leq\, \frac{L^{2}\eta^{2}(\omega-1)+\psi\omega}{\psi\eta^{2}(\omega-1)}\sum_{l=0}^{ k}\left[\psi\cdot\|x^{l}-x^{l-1}\|^{2}+\omega(\omega-1)\|x^{l}-y^{l}\|^{2}\right]\\ &\leq\,\frac{L^{2}\eta^{2}(\omega-1)+\psi\omega}{\psi\eta^{2}( \omega-1)}\left[\mathcal{V}_{0}-\mathcal{V}_{k+1}\right]\leq\frac{L^{2}\eta^{2 }(\omega-1)+\psi\omega}{\psi\eta^{2}(\omega-1)}\cdot\mathcal{V}_{0}\\ &=\,\frac{(\omega^{2}-2L^{2}\eta^{2})\omega^{2}}{(\omega^{2}-4L^ {2}\eta^{2})\eta^{2}(\omega-1)}\cdot\|x^{0}-x^{\star}\|^{2},\end{split}\]
which is exactly (68), where we have used \(\mathcal{V}_{0}:=\omega\|y^{0}-x^{\star}\|^{2}+\frac{\omega(\omega-1)}{2}\|x^{0 }-x^{-1}\|^{2}=\omega\|x^{0}-x^{\star}\|^{2}\) due to \(x^{-1}=y^{0}=x^{0}\).
Next, if \(1.6180\approx\frac{1+\sqrt{5}}{2}<\omega<1+\sqrt{3}\approx 2.7321\), then we have \(\omega^{2}-\omega-1>0\) and \(\psi:=\omega-\frac{2(\omega^{2}-\omega-1)}{\omega}>0\). In this case, using \(\|x^{k+1}-y^{k}\|^{2}\leq 2\|x^{k+1}-x^{k}\|^{2}+2\|y^{k}-x^{k}\|^{2}\) and \(\langle Fx^{k}-Fx^{\star},x^{k}-x^{\star}\rangle\geq 0\) into (64), rearranging the result, and using \(\gamma:=\frac{\psi}{2}\), we get
\[\begin{split}\omega\|y^{k+1}-x^{\star}\|^{2}+\frac{\psi(\omega-1)}{2} \|x^{k+1}-x^{k}\|^{2}&\leq\,\omega\|y^{k}-x^{\star}\|^{2}+\frac{ \psi(\omega-1)}{2}\|x^{k}-x^{k-1}\|^{2}-\psi(\omega-1)\|x^{k}-y^{k}\|^{2}\\ &\,-\,\frac{(\omega-1)(\psi^{2}-4L^{2}\eta^{2})}{2\psi}\|x^{k}-x^ {k-1}\|^{2}.\end{split} \tag{72}\]
Similar to the proof of (71), we have \(\|w^{k}\|^{2}\leq\frac{\psi^{2}-2L^{2}\eta^{2}(2\omega^{2}-\psi^{2})}{(\psi^{2}-4L^{2}\eta^{2})\eta^{2}\psi}\big{[}\frac{\psi^{2}-4L^{2}\eta^{2}}{2\psi}\|x^{k}-x^{k-1}\|^{2}+\psi\|x^{k}-y^{k}\|^{2}\big{]}\). Combining this inequality and (72), with the same argument as in the proof of (68), we obtain (69).
## 7 Accelerated Extragradient Methods for Nonlinear Inclusions
**Introduction.** The convergence rate on the residual norm \(\|Fx^{k}+\xi^{k}\|\) of the EG method and its variants using constant stepsize discussed so far is \(\mathcal{O}\big{(}1/\sqrt{k}\big{)}\), which is unimprovable for standard and constant stepsize EG-type methods as shown in [54]. In this section, we survey recent development on accelerated methods that can theoretically achieve a \(\mathcal{O}\left(1/k\right)\) last-iterate convergence rate on \(\|Fx^{k}+\xi^{k}\|\) using variable stepsizes. We will present two different approaches to develop accelerated methods for solving (NE) and (NI) without using averaging sequences. The first one relies on Halpern's fixed-point iteration [60], and the second approach leverages Nesterov's accelerated techniques. Halpern's fixed-point iteration is a classical method to approximate a fixed-point of a nonexpansive operator, or equivalently, to find a root of a co-coercive operator. This method has been intensively studied in fixed-point theory, but the first work showing a \(\mathcal{O}\left(1/k\right)\) last-iterate convergence rate is due to F. Lieder in [77]. This method was then extended and intensively studied in [42] for root-finding problems and VIPs. In a pioneering work [143], Yoon and Ryu extended Halpern's fixed-point iteration to the EG method for solving (NE), which is called _extra-anchored gradient_ (EAG) method. This new method still achieves \(\mathcal{O}\left(1/k\right)\) last-iterate convergence but only requires the monotonicity and Lipschitz continuity of \(F\). Lee and Kim in [75] further advanced [143] to the co-hypomonotone setting of (NE) and still achieved the same rates. The authors in [132] exploited the technique in [143] and applied it to the past-extragradient method in [113] and obtained a past extra-anchored gradient (PEAG) method that has the same \(\mathcal{O}\left(1/k\right)\)-rates (up to a constant factor). Recently, [25] and [27] expanded the results in [75, 132, 143] to develop methods for solving (VIP) and (NI), and preserved the same \(\mathcal{O}\left(1/k\right)\) last-iterate convergence rates on \(\|Fx^{k}+\xi^{k}\|\).
In this section, we summarize the above-mentioned results and provide a convergence analysis, obtained from a recent work [130], that covers all the results from [25, 27, 75, 132, 143] in a unified fashion.
### The extra-anchored gradient method for (NI)
**The algorithm.** The extra-anchored gradient (EAG) method for solving (NI) we discuss here is presented as follows. Starting from \(x^{0}\in\text{dom}(\Phi)\), at each iteration \(k\geq 0\), we update
\[\left\{\begin{aligned} y^{k}&:=J_{\hat{\eta}_{k}T} \left(\tau_{k}x^{0}+(1-\tau_{k})x^{k}-\hat{\eta}_{k}Fx^{k}\right),\\ x^{k+1}&:=J_{\eta_{k}T}\left(\tau_{k}x^{0}+(1-\tau_{k})x ^{k}-\eta_{k}Fy^{k}\right),\end{aligned}\right.\] (EAG2)
where \(\tau_{k}\in(0,1)\), \(\hat{\eta}_{k}>0\), and \(\eta_{k}>0\) are given parameters, which will be determined later. Here, we assume that \(T\) is maximally 3-cyclically monotone, which covers the special cases \(T=\mathcal{N}_{\mathcal{X}}\), the normal cone of \(\mathcal{X}\), and \(T=\partial g\), the subdifferential of a convex function \(g\). In fact, [25] considers the special case (VIP) of (NI) when \(T:=\mathcal{N}_{\mathcal{X}}\) is the normal cone of a nonempty, closed, and convex set \(\mathcal{X}\), and \(\hat{\eta}_{k}:=\eta_{k}\). As we can see, the scheme (EAG2) directly extends the extra-anchored gradient (EAG) scheme from [143] for (NE) to (NI), when \(F\) is monotone and Lipschitz continuous, and \(T\) is maximally 3-cyclically monotone.
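The following Python sketch illustrates (EAG2) with the parameter choices used in Theorem 7.1 below (\(\tau_{k}=\frac{1}{k+2}\), \(\eta_{k}=\eta\), and \(\hat{\eta}_{k}=(1-\tau_{k})\eta\)), again assuming \(T=\partial g\) with a cheap resolvent; the instance data are assumptions made only for this example.

```python
import numpy as np

def eag2(F, resolvent, x0, eta, iters=500):
    # Sketch of (EAG2) with tau_k = 1/(k+2), eta_k = eta, hat_eta_k = (1 - tau_k) * eta:
    #   y^k     = J_{hat_eta_k T}(tau_k x^0 + (1 - tau_k) x^k - hat_eta_k F(x^k))
    #   x^{k+1} = J_{eta T}      (tau_k x^0 + (1 - tau_k) x^k - eta       F(y^k))
    x = x0.copy()
    for k in range(iters):
        tau = 1.0 / (k + 2)
        anchor = tau * x0 + (1.0 - tau) * x
        hat_eta = (1.0 - tau) * eta
        y = resolvent(anchor - hat_eta * F(x), hat_eta)
        x = resolvent(anchor - eta * F(y), eta)
    return x

# Illustrative monotone instance: skew-symmetric F (L = 1) and T = ∂(lam*||.||_1), so x* = 0.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
lam = 0.1
F = lambda x: A @ x
resolvent = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)

print(eag2(F, resolvent, np.array([1.5, 0.5]), eta=0.9))   # eta in (0, 1/L]
```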
**Convergence analysis.** For simplicity of analysis, we recall the following quantities defined earlier:
\[w^{k}:=Fx^{k}+\xi^{k},\quad\hat{w}^{k}:=Fy^{k-1}+\xi^{k},\quad\text{and}\quad \tilde{w}^{k}:=Fx^{k}+\zeta^{k}, \tag{73}\]
where \(\xi^{k}\in Tx^{k}\) and \(\zeta^{k}\in Ty^{k}\). Then, we can equivalently rewrite (EAG2) as follows:
\[\left\{\begin{aligned} y^{k}&:=\tau_{k}x^{0}+(1-\tau_{k })x^{k}-\hat{\eta}_{k}\tilde{w}^{k},\\ x^{k+1}&:=\tau_{k}x^{0}+(1-\tau_{k})x^{k}-\eta_{k} \hat{w}^{k+1}.\end{aligned}\right. \tag{74}\]
To establish the convergence of (EAG2), we use the following potential function as in [25, 75, 130, 143]:
\[\mathcal{V}_{k}:=a_{k}\|w^{k}\|^{2}+b_{k}\langle w^{k},x^{k}-x^{0}\rangle, \tag{75}\]
where \(a_{k}>0\) and \(b_{k}>0\) are given parameters. Let us prove the convergence of (EAG2).
**Theorem 7.1**.: _For (NI), assume that \(\mathrm{zer}(\Phi)\neq\emptyset\), \(F\) is \(L\)-Lipschitz continuous and monotone, and \(T\) is maximally \(3\)-cyclically monotone. Let \(\{(x^{k},y^{k})\}\) be generated by (EAG2) using_
\[\tau_{k}:=\frac{1}{k+2},\quad\eta_{k}:=\eta\in\left(0,\frac{1}{L}\right],\quad \text{and}\quad\hat{\eta}_{k}:=(1-\tau_{k})\eta. \tag{76}\]
_Then, for all \(k\geq 0\) and any \(x^{\star}\in\mathrm{zer}(\Phi)\), the following result holds:_
\[\|Fx^{k}+\xi^{k}\|^{2}\leq\frac{4\|x^{0}-x^{\star}\|^{2}+2\eta^{2}\|Fx^{0}+\xi ^{0}\|^{2}}{\eta^{2}(k+1)^{2}},\quad\text{where}\quad\xi^{k}\in Tx^{k}. \tag{77}\]
_Consequently, we have the last-iterate convergence rate \(\mathcal{O}\left(1/k\right)\) of the residual norm \(\|Fx^{k}+\xi^{k}\|\)._
Proof.: Since \(T\) is maximally \(3\)-cyclically monotone, \(\xi^{k+1}\in Tx^{k+1}\), \(\xi^{k}\in Tx^{k}\), and \(\zeta^{k}\in Ty^{k}\), we have
\[\langle\xi^{k+1},x^{k+1}-x^{k}\rangle+\langle\xi^{k},x^{k}-y^{k}\rangle+ \langle\zeta^{k},y^{k}-x^{k+1}\rangle\geq 0.\]
By the monotonicity of \(F\), we also have \(\langle Fx^{k+1}-Fx^{k},x^{k+1}-x^{k}\rangle\geq 0\). Summing up this inequality and the last one, and then using the fact that \(w^{k+1}=Fx^{k+1}+\xi^{k+1}\), \(w^{k}:=Fx^{k}+\xi^{k}\), and \(\tilde{w}^{k}:=Fx^{k}+\zeta^{k}\), we obtain
\[\langle w^{k+1},x^{k+1}-x^{k}\rangle-\langle\tilde{w}^{k},x^{k+1}-x^{k} \rangle+\langle w^{k}-\tilde{w}^{k},x^{k}-y^{k}\rangle\geq 0. \tag{78}\]
Now, from (74), we have
\[x^{k+1}-x^{k} = -\tfrac{\tau_{k}}{1-\tau_{k}}(x^{k+1}-x^{0})-\tfrac{\eta_{k}}{1- \tau_{k}}\hat{w}^{k+1},\] \[x^{k+1}-x^{k} = -\tau_{k}(x^{k}-x^{0})-\eta_{k}\hat{w}^{k+1},\] \[x^{k}-y^{k} = \tau_{k}(x^{k}-x^{0})+\hat{\eta}_{k}\tilde{w}^{k}.\]
Substituting these relations into (78), and rearranging terms, we arrive at
\[\tau_{k}\langle w^{k},x^{k}-x^{0}\rangle-\tfrac{\tau_{k}}{1-\tau_{k}}\langle w ^{k+1},x^{k+1}-x^{0}\rangle\,\geq\,\tfrac{\eta_{k}}{1-\tau_{k}}\langle w^{k+1},\hat{w}^{k+1}\rangle-\eta_{k}\langle\tilde{w}^{k},\hat{w}^{k+1}\rangle-\hat{ \eta}_{k}\langle w^{k},\tilde{w}^{k}\rangle+\hat{\eta}_{k}\|\tilde{w}^{k}\|^ {2}.\]
Multiplying this inequality by \(\tfrac{b_{k}}{\tau_{k}}\) and assuming that \(b_{k+1}=\tfrac{b_{k}}{1-\tau_{k}}\), and then using (75), we can show that
\[\mathcal{V}_{k}-\mathcal{V}_{k+1} = a_{k}\|w^{k}\|^{2}-a_{k+1}\|w^{k+1}\|^{2}+b_{k}\langle w^{k},x^ {k}-x^{0}\rangle-\tfrac{b_{k}}{1-\tau_{k}}\langle w^{k+1},x^{k+1}-x^{0}\rangle \tag{79}\] \[\geq \tfrac{b_{k+1}\eta_{k}}{\tau_{k}}\langle w^{k+1}-\tilde{w}^{k}, \hat{w}^{k+1}\rangle+b_{k+1}\eta_{k}\langle\tilde{w}^{k},\hat{w}^{k+1}\rangle- \tfrac{b_{k}\hat{\eta}_{k}}{\tau_{k}}\langle w^{k},\tilde{w}^{k}\rangle+\tfrac {b_{k}\hat{\eta}_{k}}{\tau_{k}}\|\tilde{w}^{k}\|^{2}\] \[+ a_{k}\|w^{k}\|^{2}-a_{k+1}\|w^{k+1}\|^{2}.\]
Next, from (74), we have \(x^{k+1}-y^{k}=-\eta_{k}\hat{w}^{k+1}+\hat{\eta}_{k}\tilde{w}^{k}\). Using this expression and the \(L\)-Lipschitz continuity of \(F\), we have \(\|w^{k+1}-\hat{w}^{k+1}\|^{2}=\|Fx^{k+1}-Fy^{k}\|^{2}\leq L^{2}\|x^{k+1}-y^{k} \|^{2}=L^{2}\|\eta_{k}\hat{w}^{k+1}-\hat{\eta}_{k}\tilde{w}^{k}\|^{2}\), leading to
\[\|w^{k+1}\|^{2}+(1-L^{2}\eta_{k}^{2})\|\hat{w}^{k+1}\|^{2}-2\langle w^{k+1}- \tilde{w}^{k},\hat{w}^{k+1}\rangle-2(1-L^{2}\eta_{k}\hat{\eta}_{k})\langle\hat {w}^{k+1},\tilde{w}^{k}\rangle-L^{2}\hat{\eta}_{k}^{2}\|\tilde{w}^{k}\|^{2} \leq 0.\]
Multiplying this inequality by \(\tfrac{b_{k+1}\eta_{k}}{2\tau_{k}}\), adding the result to (79), and using \(\hat{\eta}_{k}=(1-\tau_{k})\eta_{k}\), we obtain
\[\mathcal{V}_{k}-\mathcal{V}_{k+1} \geq \left(\tfrac{b_{k+1}\eta_{k}}{2\tau_{k}}-a_{k+1}\right)\|w^{k+1} \|^{2}+\tfrac{b_{k+1}\eta_{k}(1-L^{2}\eta_{k}^{2})}{2\tau_{k}}\|\hat{w}^{k+1}\| ^{2}+a_{k}\|w^{k}\|^{2}+\tfrac{b_{k+1}\eta_{k}(1-\tau_{k})^{2}(2-L^{2}\eta_{k}^ {2})}{2\tau_{k}}\|\tilde{w}^{k}\|^{2}\] \[-\tfrac{b_{k+1}\eta_{k}(1-L^{2}\eta_{k}^{2})(1-\tau_{k})}{\tau_{k}} \langle\tilde{w}^{k},\hat{w}^{k+1}\rangle-\tfrac{b_{k+1}\eta_{k}(1-\tau_{k})^{2 }}{\tau_{k}}\langle w^{k},\tilde{w}^{k}\rangle\] \[= \tfrac{b_{k+1}\eta_{k}(1-L^{2}\eta_{k}^{2})}{2\tau_{k}}\|\hat{w}^{ k+1}-(1-\tau_{k})\tilde{w}^{k}\|^{2}+\tfrac{b_{k+1}\eta_{k}(1-\tau_{k})^{2}}{2 \tau_{k}}\|w^{k}-\tilde{w}^{k}\|^{2}\] \[+\ \left(\tfrac{b_{k+1}\eta_{k}}{2\tau_{k}}-a_{k+1}\right)\|w^{k+1} \|^{2}+\left(a_{k}-\tfrac{b_{k}\eta_{k}(1-\tau_{k})}{2\tau_{k}}\right)\|w^{k}\|^ {2}.\]
Let us choose \(\eta_{k}:=\eta\in\left(0,\frac{1}{L}\right]\) as in (76), \(\tau_{k}:=\frac{1}{k+2}\), and \(a_{k}:=\frac{b_{k}\eta(1-\tau_{k})}{2\tau_{k}}=\frac{\eta b_{k}(k+1)}{2}\). Then, we have \(a_{k+1}=\frac{\eta b_{k+1}(k+2)}{2}=\frac{\eta b_{k+1}}{2\tau_{k}}\). Moreover, since \(b_{k+1}=\frac{b_{k}}{1-\tau_{k}}=\frac{b_{k}(k+2)}{k+1}\), by induction we obtain \(b_{k}=b_{0}(k+1)\) for some \(b_{0}>0\). Using these parameters into the last estimate, we obtain
\[\mathcal{V}_{k}-\mathcal{V}_{k+1} \geq \tfrac{b_{k+1}\eta(1-L^{2}\eta^{2})}{2\tau_{k}}\|\hat{w}^{k+1}-(1- \tau_{k})\tilde{w}^{k}\|^{2}+\tfrac{b_{k+1}\eta(1-\tau_{k})^{2}}{2\tau_{k}}\|w^{k }-\tilde{w}^{k}\|^{2}\geq 0.\]
Finally, using \(\langle w^{k},x^{k}-x^{\star}\rangle\geq 0\) for \(x^{\star}\in\operatorname{zer}(\Phi)\) and \(b_{k}=b_{0}(k+1)\), we can lower bound \(\mathcal{V}_{k}\) as
\[\mathcal{V}_{k} =\,a_{k}\|w^{k}\|^{2}+b_{k}\langle w^{k},x^{\star}-x^{0}\rangle+b_ {k}\langle w^{k},x^{k}-x^{\star}\rangle\geq a_{k}\|w^{k}\|^{2}-b_{k}\|w^{k}\| \|x^{0}-x^{\star}\|\] \[\geq\,\left(a_{k}-\tfrac{\eta b_{k}^{2}}{4b_{0}}\right)\|w^{k}\|^ {2}-\tfrac{b_{0}}{\eta}\|x^{0}-x^{\star}\|^{2}=\tfrac{b_{0}\eta(k+1)^{2}}{4} \|w^{k}\|^{2}-\tfrac{b_{0}}{\eta}\|x^{0}-x^{\star}\|^{2}.\]
Combining the last two estimates, we can easily show that \(\tfrac{b_{0}\eta(k+1)^{2}}{4}\|w^{k}\|^{2}-\tfrac{b_{0}}{\eta}\|x^{0}-x^{\star }\|^{2}\leq\mathcal{V}_{k}\leq\mathcal{V}_{0}=a_{0}\|w^{0}\|^{2}=\tfrac{\eta b _{0}}{2}\|w^{0}\|^{2}\), leading to (77).
**Remark 7.1**.: Our analysis in Theorem 7.1 essentially relies on the proof technique in [143], and it is different from that of [25]. We believe that our proof is rather elementary and uses only simple arguments. We note that our analysis can also be extended to prove the convergence of the past-extra-anchored gradient method (i.e., replacing \(Fx^{k}\) in (FEG2) by \(Fy^{k-1}\)) by using similar arguments as in [132]. We omit the details here.
### The fast extragradient method for (Ni)
**The algorithm.** The fast extragradient method (FEG) for solving (NI) developed in [27, 75, 130, 143] can be written in a unified form as follows. Starting from \(x^{0}\in\operatorname{dom}(\Phi)\), at each iteration \(k\geq 0\), we update
\[\left\{\begin{aligned} y^{k}&:=x^{k}+\tau_{k}(x^{0}-x^ {k})-(\hat{\eta}_{k}-\beta_{k})(Fx^{k}+\xi^{k}),\\ x^{k+1}&:=x^{k}+\tau_{k}(x^{0}-x^{k})-\eta_{k}(Fy^{k} +\xi^{k+1})+\beta_{k}(Fx^{k}+\xi^{k}),\end{aligned}\right.\] (FEG2)
where \(\xi^{k}\in Tx^{k}\), \(\tau_{k}\in(0,1)\), \(\beta_{k}\geq 0\), \(\eta_{k}>0\), and \(\hat{\eta}_{k}>0\) are given, determined later.
* If \(T=0\), \(\beta_{k}:=0\), and \(\eta_{k}=\hat{\eta}_{k}\), then (FEG2) reduces to the extra-anchored gradient (EAG) scheme for solving (NE) in [143] under the monotonicity of \(F\) as \[y^{k}:=x^{k}+\tau_{k}(x^{0}-x^{k})-\eta_{k}Fx^{k}\quad\text{and}\quad x^{k+1}:=x^{k}+\tau_{k}(x^{0}-x^{k})-\eta_{k}Fy^{k}.\] (EAG)
* If \(T=0\), \(\beta_{k}:=2\rho(1-\tau_{k})\), and \(\eta_{k}:=\eta>0\), then (FEG2) reduces to the fast extragradient variant for solving (NE) in [75], but under the co-hypomonotonicity of \(F\).
* If \(T\) is a maximally monotone operator (e.g., \(T:=\mathcal{N}_{\mathcal{X}}\), the normal cone of a nonempty, closed, and convex set \(\mathcal{X}\)), \(\beta_{k}:=2\rho(1-\tau_{k})\) and \(\eta_{k}:=\eta>0\), then (FEG2) is exactly the variant studied in [25].
In fact, (FEG2) is rooted in Tseng's forward-backward-forward splitting method (FBFS2) instead of (EG2) because it only requires one resolvent evaluation \(J_{\eta T}\) per iteration. Recently, [130] provides an elementary convergence analysis for (FEG2), which relies on the technique in [143]. We survey this method here and present the convergence analysis from [143].
Let \(w^{k}\) and \(\hat{w}^{k}\) be defined as (73). Then, we can equivalently rewrite (FEG2) as follows:
\[\left\{\begin{aligned} y^{k}&:=x^{k}+\tau_{k}(x^{0}-x^ {k})-(\hat{\eta}_{k}-\beta_{k})w^{k},\\ x^{k+1}&:=x^{k}+\tau_{k}(x^{0}-x^{k})-\eta\hat{w}^{k+1}+ \beta_{k}w^{k}.\end{aligned}\right. \tag{80}\]
Clearly, (80) has the same form as the fast extragradient scheme in [75] for solving (NE), where \(w^{k}\) and \(\hat{w}^{k+1}\) reduce to \(Fx^{k}\) and \(Fy^{k}\), respectively. Since \(x^{k+1}\) appears on both sides of the second line of (80), we can rewrite (FEG2) as
\[\left\{\begin{aligned} y^{k}&:=x^{k}+\tau_{k}(x^{0}-x^ {k})-(\hat{\eta}_{k}-\beta_{k})(Fx^{k}+\xi^{k}),\\ x^{k+1}&\in\ J_{\eta T}\left(y^{k}-\eta Fy^{k}+\hat{ \eta}_{k}(Fx^{k}+\xi^{k})\right),\\ \xi^{k+1}&:=\tfrac{1}{\eta}\left(y^{k}-\eta Fy^{k}+ \hat{\eta}_{k}(Fx^{k}+\xi^{k})-x^{k+1}\right),\end{aligned}\right. \tag{81}\]
where \(\xi^{0}\in Tx^{0}\) is arbitrary, and \(J_{\eta T}\) is the resolvent of \(\eta T\), which may not be single-valued in our case. However, for our iterates to be well-defined, we will assume that \(\operatorname{ran}(J_{\eta T})\subseteq\operatorname{dom}(F)=\mathbb{R}^{p}\), and \(\operatorname{dom}(J_{\eta T})=\mathbb{R}^{p}\).
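To make the resolvent bookkeeping in (81) concrete, we include a minimal numerical sketch below (our illustration, not taken from the cited works). It instantiates (81) on a toy example of (NI) in which \(F\) is an affine monotone operator, so that \(\rho=0\) is admissible, and \(T\) is the normal cone of a box, whose resolvent \(J_{\eta T}\) is the coordinate-wise projection; the operator, the box, the starting point, and the parameter choices (those of Theorem 7.2 below with \(\rho=0\)) are illustrative assumptions only.

```python
import numpy as np

# Toy instance of (NI): F(x) = A x + b with A skew-symmetric (monotone and
# 1-Lipschitz) and T the normal cone of the box [-1, 1]^2, so that the
# resolvent J_{eta T} is the coordinate-wise projection onto the box.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
b = np.array([0.5, -0.25])
F = lambda x: A @ x + b
J = lambda s: np.clip(s, -1.0, 1.0)      # resolvent of eta*T

L, rho = 1.0, 0.0        # Lipschitz constant; rho = 0 since Phi is monotone
eta = 1.0 / L            # stepsize eta in (2*rho, 1/L]
x0 = np.zeros(2)         # interior starting point, so xi^0 = 0 lies in T x^0
x, xi = x0.copy(), np.zeros(2)

for k in range(1000):
    tau = 1.0 / (k + 2)
    beta = 2.0 * rho * (1.0 - tau)
    eta_hat = (1.0 - tau) * eta
    w = F(x) + xi                        # w^k = F x^k + xi^k
    y = x + tau * (x0 - x) - (eta_hat - beta) * w
    s = y - eta * F(y) + eta_hat * w     # argument of the resolvent in (81)
    x_next = J(s)
    xi = (s - x_next) / eta              # xi^{k+1} in T x^{k+1}
    x = x_next

print(np.linalg.norm(F(x) + xi))         # residual ||F x^k + xi^k||
```

The printed quantity is exactly the residual \(\|Fx^{k}+\xi^{k}\|\) whose decay is quantified by Theorem 7.2 below.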
**Convergence analysis.** Using the same potential function \(\mathcal{V}_{k}\) as in (75), we can prove the convergence of (FEG2) in the following theorem.
**Theorem 7.2**.: _Assume that \(\Phi\) in (NI) is \(\rho\)-co-hypomonotone, \(F\) is \(L\)-Lipschitz continuous such that \(2L\rho<1\), \(\mathrm{zer}(\Phi)\neq\emptyset\), \(\mathrm{ran}(J_{\eta T})\subseteq\mathrm{dom}(F)=\mathbb{R}^{p}\), and \(\mathrm{dom}(J_{\eta T})=\mathbb{R}^{p}\). Let \(\{(x^{k},y^{k})\}\) be generated by (FEG2) using_
\[\tau_{k}:=\frac{1}{k+2},\quad\beta_{k}:=2\rho(1-\tau_{k}),\quad\eta_{k}:=\eta \in\left(2\rho,\frac{1}{L}\right],\quad\text{and}\quad\hat{\eta}_{k}:=(1-\tau_ {k})\eta. \tag{82}\]
_Then, for all \(k\geq 0\) and any \(x^{\star}\in\mathrm{zer}(\Phi)\), we have_
\[\|Fx^{k}+\xi^{k}\|^{2}\leq\frac{4\|x^{0}-x^{\star}\|^{2}+2\eta(\eta-2\rho)\| Fx^{0}+\xi^{0}\|^{2}}{(\eta-2\rho)^{2}(k+1)^{2}},\quad\text{where}\quad\xi^{k}\in Tx ^{k}. \tag{83}\]
_Consequently, we have the last-iterate convergence rate \(\mathcal{O}\left(1/k\right)\) of the residual norm \(\|Fx^{k}+\xi^{k}\|\)._
Proof.: Since (FEG2) is equivalent to (80), from the second line of (FEG2), we can easily show that
\[\left\{\begin{aligned} x^{k+1}-x^{k}&=\,-\tau_{k}(x^ {k}-x^{0})-\eta\hat{w}^{k+1}+\beta_{k}w^{k}\\ x^{k+1}-x^{k}&=\,-\frac{\tau_{k}}{1-\tau_{k}}(x^{k+ 1}-x^{0})-\frac{\eta}{1-\tau_{k}}\hat{w}^{k+1}+\frac{\beta_{k}}{1-\tau_{k}}w^ {k}.\end{aligned}\right. \tag{84}\]
Next, since \(\Phi\) is \(\rho\)-co-hypomonotone and \(w^{k}\in\Phi x^{k}=Fx^{k}+Tx^{k}\), we have \(\langle w^{k+1}-w^{k},x^{k+1}-x^{k}\rangle+\rho\|w^{k+1}-w^{k}\|^{2}\geq 0\). This relation together with \(\eta_{k}:=\eta\in(2\rho,\frac{1}{L}]\) and \(\beta_{k}:=2\rho(1-\tau_{k})\) leads to
\[0 \leq \langle w^{k+1},x^{k+1}-x^{k}\rangle-\langle w^{k},x^{k+1}-x^{k} \rangle+\rho\|w^{k+1}-w^{k}\|^{2}\] \[\stackrel{{\eqref{eq:feg2}}}{{=}} \tau_{k}\langle w^{k},x^{k}-x^{0}\rangle-\frac{\tau_{k}}{1-\tau_ {k}}\langle w^{k+1},x^{k+1}-x^{0}\rangle+\eta\langle w^{k},\hat{w}^{k+1} \rangle-2\rho(1-\tau_{k})\|w^{k}\|^{2}\] \[-\,\frac{\eta}{1-\tau_{k}}\langle w^{k+1},\hat{w}^{k+1}\rangle+2 \rho\langle w^{k+1},w^{k}\rangle+\rho\|w^{k+1}-w^{k}\|^{2}.\]
Multiplying this expression by \(\frac{b_{k}}{\tau_{k}}\), rearranging the result, and using \(b_{k+1}=\frac{b_{k}}{1-\tau_{k}}\), we obtain
\[\mathcal{T}_{[1]} :=\,b_{k}\langle w^{k},x^{k}-x^{0}\rangle-b_{k+1}\langle w^{k+1},x^{k+1}-x^{0}\rangle\] \[\geq\,\frac{\eta b_{k+1}}{\tau_{k}}\langle w^{k+1}-w^{k},\hat{w} ^{k+1}\rangle+\eta b_{k+1}\langle\hat{w}^{k+1},w^{k}\rangle-\frac{\rho b_{k}}{ \tau_{k}}\|w^{k+1}\|^{2}+\frac{\rho b_{k}(1-2\tau_{k})}{\tau_{k}}\|w^{k}\|^{2}.\]
Adding \(a_{k}\|w^{k}\|^{2}-a_{k+1}\|w^{k+1}\|^{2}\) to \(\mathcal{T}_{[1]}\) and using \(\mathcal{V}_{k}\) from (75), we can show that
\[\mathcal{V}_{k}-\mathcal{V}_{k+1} =\,a_{k}\|w^{k}\|^{2}-a_{k+1}\|w^{k+1}\|^{2}+b_{k}\langle w^{k},x^ {k}-x^{0}\rangle-b_{k+1}\langle w^{k+1},x^{k+1}-x^{0}\rangle\] \[\geq\,\big{(}a_{k}+\frac{\rho b_{k}(1-2\tau_{k})}{\tau_{k}}\big{)} \|w^{k}\|^{2}-\big{(}a_{k+1}+\frac{\rho b_{k}}{\tau_{k}}\big{)}\|w^{k+1}\|^{2} \tag{85}\] \[\quad+\,\frac{\eta b_{k+1}}{\tau_{k}}\langle w^{k+1}-w^{k},\hat{w }^{k+1}\rangle+\eta b_{k+1}\langle\hat{w}^{k+1},w^{k}\rangle.\]
Now, from (80), we have \(x^{k+1}-y^{k}=-\eta\hat{w}^{k+1}+\hat{\eta}_{k}w^{k}\). By the \(L\)-Lipschitz continuity of \(F\), we have \(\|w^{k+1}-\hat{w}^{k+1}\|^{2}=\|Fx^{k+1}-Fy^{k}\|^{2}\leq L^{2}\|x^{k+1}-y^{k}\| ^{2}=L^{2}\|\eta\hat{w}^{k+1}-\hat{\eta}_{k}w^{k}\|^{2}\). Expanding this inequality, and rearranging the result, we obtain
\[0 \geq\,\|w^{k+1}\|^{2}+(1-L^{2}\eta^{2})\|\hat{w}^{k+1}\|^{2}-2\langle w^{k+1} -w^{k},\hat{w}^{k+1}\rangle-2\big{(}1-L^{2}\eta\hat{\eta}_{k}\big{)}\langle\hat{ w}^{k+1},w^{k}\rangle-L^{2}\hat{\eta}_{k}^{2}\|w^{k}\|^{2}.\]
Multiplying this estimate by \(\frac{\eta b_{k+1}}{2\tau_{k}}\) and adding the result to (85), we eventually arrive at
\[\mathcal{V}_{k}-\mathcal{V}_{k+1} \geq\,\Big{(}a_{k}-\frac{L^{2}\eta\hat{\eta}_{k}^{2}b_{k+1}-2\rho b _{k}(1-2\tau_{k})}{2\tau_{k}}\Big{)}\,\|w^{k}\|^{2}+\left(\frac{\eta b_{k+1}- 2\rho b_{k}}{2\tau_{k}}-a_{k+1}\right)\|w^{k+1}\|^{2} \tag{86}\] \[\quad+\,\frac{\eta(1-L^{2}\eta^{2})b_{k+1}}{2\tau_{k}}\|\hat{w}^{k+ 1}\|^{2}-\frac{\eta(1-\tau_{k}-L^{2}\eta\hat{\eta}_{k})b_{k+1}}{\tau_{k}} \langle\hat{w}^{k+1},w^{k}\rangle.\]
Let us choose \(\tau_{k}:=\frac{1}{k+2}\) and \(\hat{\eta}_{k}:=(1-\tau_{k})\eta\) as in (82), and \(a_{k+1}:=\frac{b_{k+1}[\eta-2\rho(1-\tau_{k})]}{2\tau_{k}}=\frac{[(\eta-2\rho)(k+ 2)+2\rho]b_{k+1}}{2}\). Since \(b_{k+1}=\frac{b_{k}}{1-\tau_{k}}\), we have \(b_{k}=b_{0}(k+1)\) and hence \(a_{k}=\frac{b_{0}[(\eta-2\rho)(k+1)+2\rho](k+1)}{2}\).
Using the above choice of parameters and noting that \(L\eta\leq 1\), we can simplify (86) as
\[\mathcal{V}_{k}-\mathcal{V}_{k+1} \geq\,\tfrac{\eta(1-L^{2}\eta^{2})b_{k+1}}{2\tau_{k}}\|\hat{w}^{k+ 1}-(1-\tau_{k})w^{k}\|^{2}\geq 0. \tag{87}\]
Finally, since \(x^{\star}\in\mathrm{zer}(\Phi)\), we have \(\langle w^{k},x^{k}-x^{\star}\rangle\geq-\rho\|w^{k}\|^{2}\). Using this bound and (75), we can show that
\[\mathcal{V}_{k} =\,a_{k}\|w^{k}\|^{2}+b_{k}\langle w^{k},x^{\star}-x^{0}\rangle+b_{ k}\langle w^{k},x^{k}-x^{\star}\rangle\quad\geq\quad(a_{k}-\rho b_{k})\|w^{k}\|^{2}-b_ {k}\|w^{k}\|\|x^{0}-x^{\star}\|\] \[\geq\,\left(a_{k}-\rho b_{k}-\tfrac{(\eta-2\rho)b_{k}^{2}}{4b_{0} }\right)\|w^{k}\|^{2}-\tfrac{b_{0}}{\eta-2\rho}\|x^{0}-x^{\star}\|^{2}\,=\, \tfrac{b_{0}(\eta-2\rho)(k+1)^{2}}{4}\|w^{k}\|^{2}-\tfrac{b_{0}}{\eta-2\rho}\| x^{0}-x^{\star}\|^{2}.\]
Combining this inequality and (87), we get \(\tfrac{b_{0}(\eta-2\rho)(k+1)^{2}}{4}\|w^{k}\|^{2}-\tfrac{b_{0}}{\eta-2\rho}\| x^{0}-x^{\star}\|^{2}\leq\mathcal{V}_{k}\leq\mathcal{V}_{0}=a_{0}\|w^{0}\|^{2}+b_{0} \langle w^{0},x^{0}-x^{0}\rangle=\tfrac{b_{0}\eta}{2}\|w^{0}\|^{2}\). This bound leads to (83).
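As a quick numerical sanity check of the bound (83) (our illustration with an arbitrary toy operator, not part of the cited analysis), one can run the unconstrained case \(T=0\), \(\rho=0\) of (FEG2) on the monotone and 1-Lipschitz operator \(F(x)=Ax\) with \(A\) skew-symmetric, and monitor \((k+1)\|Fx^{k}\|\), which should remain bounded:

```python
import numpy as np

# Unconstrained special case of (FEG2): T = 0 and rho = 0, so beta_k = 0 and
# the iterates only involve F.  F(x) = A x with A skew-symmetric is monotone
# and 1-Lipschitz, and its unique zero is the origin.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = lambda x: A @ x

eta = 1.0                                # eta = 1/L with L = 1
x0 = np.array([2.0, -1.0])
x = x0.copy()

N = 2000
for k in range(N):
    tau = 1.0 / (k + 2)
    eta_hat = (1.0 - tau) * eta
    y = x + tau * (x0 - x) - eta_hat * F(x)
    x = x + tau * (x0 - x) - eta * F(y)

print((N + 1) * np.linalg.norm(F(x)))    # stays bounded, consistent with (83)
```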
### The past extra-anchored gradient method for (Ni)
Both (EAG2) and (FEG2) require two evaluations of \(F\) per iteration. In addition, (EAG2) needs two evaluations of \(J_{\eta T}\). To reduce this computational cost, we can apply the Halpern fixed-point iteration to the past extragradient method [113], as done in [132] for (NE). Our recent work [143] has extended [132] to solve (NI) and relaxed the assumption from the monotonicity to the co-hypomonotonicity of \(F\). We now survey the results from [27, 130] in this subsection.
**The algorithm.** Starting from \(x^{0}\in\mathrm{dom}(\Phi)\), we set \(y^{-1}:=x^{0}\), and at each iteration \(k\geq 0\), we update
\[\left\{\begin{aligned} & y^{k}&:=\,x^{k}+\tau_{k}(x^{0}-x^{k}) -(\hat{\eta}_{k}-\beta_{k})(Fy^{k-1}+\xi^{k}),\\ & x^{k+1}&:=\,x^{k}+\tau_{k}(x^{0}-x^{k})-\eta(Fy^{k} +\xi^{k+1})+\beta_{k}(Fy^{k-1}+\xi^{k}),\end{aligned}\right.\] (PEAG2)
where \(\xi^{k}\in Tx^{k}\), \(\tau_{k}\in(0,1)\), \(\eta>0\), \(\hat{\eta}_{k}>0\), and \(\beta_{k}>0\) are given parameters, determined later. Clearly, if \(T=0\), then (PEAG2) reduces to the past extra-anchored gradient scheme in [132]. This scheme can be considered as a modification of (FEG2) by replacing \(Fx^{k}\) by \(Fy^{k-1}\) using Popov's trick in [113].
Again, we reuse \(w^{k}:=Fx^{k}+\xi^{k}\in Fx^{k}+Tx^{k}\) and \(\hat{w}^{k}:=Fy^{k-1}+\xi^{k}\) as in (FEG2). Then, (PEAG2) can be rewritten equivalently as
\[\left\{\begin{aligned} & y^{k}&:=\,x^{k}+\tau_{k}(x^{0}-x^{k}) -(\hat{\eta}_{k}-\beta_{k})\hat{w}^{k},\\ & x^{k+1}&:=\,x^{k}+\tau_{k}(x^{0}-x^{k})-\eta\hat{w }^{k+1}+\beta_{k}\hat{w}^{k}.\end{aligned}\right. \tag{88}\]
Obviously, (88) has the same form as the past extra-anchored gradient scheme in [132], but with different parameters. Since \(\xi^{k+1}\in Tx^{k+1}\), we can rewrite (PEAG2) as follows:
\[\left\{\begin{aligned} & y^{k}&:=\,x^{k}+\tau_{k}(x^{0}-x^{k}) -(\hat{\eta}_{k}-\beta_{k})\hat{w}^{k},\\ & x^{k+1}&\in\,J_{\eta T}\left(y^{k}-\eta Fy^{k}+ \hat{\eta}_{k}\hat{w}^{k}\right),\\ &\hat{w}^{k+1}&:=\,\tfrac{1}{\eta}\left(y^{k}+\hat{ \eta}_{k}\hat{w}^{k}-x^{k+1}\right),\end{aligned}\right. \tag{89}\]
where \(x^{0}\in\mathrm{dom}(\Phi)\) is given, \(y^{-1}:=x^{0}\), and \(\hat{w}^{0}\in Fy^{-1}+Tx^{0}\) is arbitrary.
**Remark 7.2**.: Notice that the first accelerated variant of the past extragradient method [113] was proposed in [132] to solve (NE) under the monotonicity of \(F\), which achieves an \(\mathcal{O}\left(1/k\right)\) rate on \(\|Fx^{k}\|\). This method was then extended to (NI) in [27] under the \(\rho\)-co-hypomonotonicity of \(\Phi\). However, [130] provided an alternative proof for (PEAG2) using a different choice of parameters and a more relaxed condition, \(2\sqrt{34}L\rho<1\), than that of [27]. In addition, [130] does not require the maximal monotonicity of \(T\) as [27] does, and the form (89) appears to be different from that of [27], though they have the same per-iteration complexity with one evaluation of \(F\) and one evaluation of \(J_{\eta T}\) per iteration.
**Convergence analysis.** To establish convergence of (PEAG2), we use the following potential function:
\[\hat{\mathcal{V}}_{k}:=a_{k}\|Fx^{k}+\xi^{k}\|^{2}+b_{k}\langle Fx^{k}+\xi^{k}, x^{k}-x^{0}\rangle+c_{k}\|Fx^{k}-Fy^{k-1}\|^{2}, \tag{90}\]
where \(\xi^{k}\in Tx^{k}\), \(a_{k}>0\), \(b_{k}>0\), and \(c_{k}>0\) are given parameters. Using \(w^{k}:=Fx^{k}+\xi^{k}\) and \(\hat{w}^{k}:=Fy^{k-1}+\xi^{k}\), we can rewrite \(\hat{\mathcal{V}}_{k}=a_{k}\|w^{k}\|^{2}+b_{k}\langle w^{k},x^{k}-x^{0}\rangle+c _{k}\|w^{k}-\hat{w}^{k}\|^{2}\). Now, we are ready to state the convergence of (PEAG2) in the following theorem.
**Theorem 7.3**.: _Assume that \(\Phi\) in (NI) is \(\rho\)-co-hypomonotone, \(F\) is \(L\)-Lipschitz continuous such that \(2\sqrt{34}L\rho<1\), \(\mathrm{zer}(\Phi)\neq\emptyset\), \(\mathrm{dom}(J_{\eta T})=\mathbb{R}^{p}\), and \(\mathrm{ran}(J_{\eta T})\subseteq\mathrm{dom}(F)=\mathbb{R}^{p}\). Let \(\eta:=\frac{\sqrt{2}}{\sqrt{17}L}\) be a given stepsize, and \(\{(x^{k},y^{k})\}\) be generated by (PEAG2) using the following parameters:_
\[\tau_{k}:=\frac{1}{k+2},\quad\beta_{k}:=\frac{4\rho(1-\tau_{k})}{1+\tau_{k}}, \quad\text{and}\quad\hat{\eta}_{k}:=(1-\tau_{k})\eta. \tag{91}\]
_Then, for all \(k\geq 0\) and any \(x^{\star}\in\mathrm{zer}(\Phi)\), we have_
\[\|Fx^{k}+\xi^{k}\|^{2}\leq\frac{1}{(k+1)^{2}}\left[\frac{4}{3(\eta-4\rho)^{2} }\|x^{0}-x^{\star}\|^{2}+\frac{2(3\eta-2\rho)}{9(\eta-4\rho)}\|Fx^{0}+\xi^{0} \|^{2}\right],\quad\xi^{k}\in Tx^{k}. \tag{92}\]
_Consequently, we have the last-iterate convergence rate \(\mathcal{O}\left(1/k\right)\) of the residual norm \(\|Fx^{k}+\xi^{k}\|\)._
Proof.: First, using the equivalent form (88) of (PEAG2), we can show that
\[\left\{\begin{aligned} x^{k+1}-x^{k}&=\,-\tau_{k}(x^{ k}-x^{0})-\eta\hat{w}^{k+1}+\beta_{k}w^{k}+\beta_{k}(\hat{w}^{k}-w^{k})\\ x^{k+1}-x^{k}&=\,-\frac{\tau_{k}}{1-\tau_{k}}(x^{k+1 }-x^{0})-\frac{\eta}{1-\tau_{k}}\hat{w}^{k+1}+\frac{\beta_{k}}{1-\tau_{k}}w^{k }+\frac{\beta_{k}}{1-\tau_{k}}(\hat{w}^{k}-w^{k}).\end{aligned}\right.\]
Second, since \(\Phi\) is \(\rho\)-co-hypomonotone, we have \(\langle w^{k+1}-w^{k},x^{k+1}-x^{k}\rangle+\rho\|w^{k+1}-w^{k}\|^{2}\geq 0\). Combining these expressions together, we can derive that
\[\mathcal{T}_{[1]} :=\,\tau_{k}\langle w^{k},x^{k}-x^{0}\rangle-\frac{\tau_{k}}{1- \tau_{k}}\langle w^{k+1},x^{k+1}-x^{0}\rangle\] \[\geq\,\frac{\eta}{1-\tau_{k}}\langle w^{k+1},\hat{w}^{k+1} \rangle-\eta\langle w^{k},\hat{w}^{k+1}\rangle-\rho\|w^{k+1}-w^{k}\|^{2}+ \beta_{k}\|w^{k}\|^{2}\] \[\quad-\,\frac{\beta_{k}}{1-\tau_{k}}\langle w^{k+1},w^{k}\rangle -\frac{\beta_{k}}{1-\tau_{k}}\langle w^{k+1}-(1-\tau_{k})w^{k},\hat{w}^{k}-w^ {k}\rangle.\]
Next, by Young's inequality and assuming that \(\beta_{k}:=\frac{4\rho(1-\tau_{k})}{1+\tau_{k}}\), we can further expand \(\mathcal{T}_{[1]}\) as
\[\mathcal{T}_{[1]} :=\,\tau_{k}\langle w^{k},x^{k}-x^{0}\rangle-\frac{\tau_{k}}{1- \tau_{k}}\langle w^{k+1},x^{k+1}-x^{0}\rangle\] \[\geq\,\frac{\eta}{1-\tau_{k}}\langle w^{k+1},\hat{w}^{k+1} \rangle-\eta\langle w^{k},\hat{w}^{k+1}\rangle-\rho\|w^{k+1}-w^{k}\|^{2}+ \beta_{k}\|w^{k}\|^{2}-\frac{\beta_{k}}{1-\tau_{k}}\langle w^{k+1},w^{k}\rangle\] \[\quad-\,\frac{\beta_{k}}{4(1-\tau_{k})}\|w^{k+1}-(1-\tau_{k})w^{ k}\|^{2}-\frac{\beta_{k}}{1-\tau_{k}}\|\hat{w}^{k}-w^{k}\|^{2}\] \[=\,\frac{\eta}{1-\tau_{k}}\langle w^{k+1},\hat{w}^{k+1} \rangle-\eta\langle w^{k},\hat{w}^{k+1}\rangle-\frac{\rho(2+\tau_{k})}{1+\tau _{k}}\|w^{k+1}\|^{2}+\frac{\rho(2-3\tau_{k}-\tau_{k}^{2})}{1+\tau_{k}}\|w^{k }\|^{2}-\frac{4\rho}{1+\tau_{k}}\|\hat{w}^{k}-w^{k}\|^{2}.\]
Multiplying \(\mathcal{T}_{[1]}\) by \(\frac{b_{k}}{\tau_{k}}\) and assuming \(b_{k+1}=\frac{b_{k}}{1-\tau_{k}}\), then utilizing \(\hat{\mathcal{V}}_{k}\) from (90), we can show that
\[\hat{\mathcal{V}}_{k}-\hat{\mathcal{V}}_{k+1} =\,a_{k}\|w^{k}\|^{2}-a_{k+1}\|w^{k+1}\|^{2}+b_{k}\langle w^{k},x ^{k}-x^{0}\rangle-b_{k+1}\langle w^{k+1},x^{k+1}-x^{0}\rangle\] \[\quad+\,c_{k}\|w^{k}-\hat{w}^{k}\|^{2}-c_{k+1}\|w^{k+1}-\hat{w}^{ k+1}\|^{2}\] \[\geq\,\left[a_{k}+\frac{\rho b_{k}(2-3\tau_{k}-\tau_{k}^{2})}{ \tau_{k}(1+\tau_{k})}\right]\|w^{k}\|^{2}-\left[a_{k+1}+\frac{\rho b_{k}(2+\tau_{ k})}{\tau_{k}(1+\tau_{k})}\right]\|w^{k+1}\|^{2} \tag{93}\] \[\quad+\left[c_{k}-\frac{4\rho b_{k}}{\tau_{k}(1+\tau_{k})}\right] \|w^{k}-\hat{w}^{k}\|^{2}-c_{k+1}\|w^{k+1}-\hat{w}^{k+1}\|^{2}\] \[\quad+\,\frac{\eta b_{k+1}}{\tau_{k}}\langle w^{k+1}-w^{k},\hat{w }^{k+1}\rangle+\eta b_{k+1}\langle\hat{w}^{k+1},w^{k}\rangle.\]
Now, since \(x^{k+1}-y^{k}=-\eta\hat{w}^{k+1}+\hat{\eta}_{k}\hat{w}^{k}\) from (88), utilizing the \(L\)-Lipschitz continuity of \(F\) and Young's inequality, we can prove that
\[\|w^{k+1}-\hat{w}^{k+1}\|^{2} =\,\|Fx^{k+1}-Fy^{k}\|^{2}\leq L^{2}\|x^{k+1}-y^{k}\|^{2}=L^{2}\|\eta\hat{w}^{k+1}-\hat{\eta}_{k}\hat{w}^{k}\|^{2}\] \[\leq\,2L^{2}\|\eta\hat{w}^{k+1}-\hat{\eta}_{k}w^{k}\|^{2}+2L^{2}\hat{\eta}_{k}^{2}\|w^{k}-\hat{w}^{k}\|^{2}.\]
Multiplying this expression by \((1+\omega)\) for \(\omega>0\), using \(M:=2(1+\omega)L^{2}\), and expanding the result, we get
\[0 \geq\,\omega\|w^{k+1}-\hat{w}^{k+1}\|^{2}+\|w^{k+1}\|^{2}+(1-M \eta^{2})\|\hat{w}^{k+1}\|^{2}-2\langle w^{k+1}-w^{k},\hat{w}^{k+1}\rangle\] \[\quad-\,2(1-M\eta\hat{\eta}_{k})\langle\hat{w}^{k+1},w^{k} \rangle-M\hat{\eta}_{k}^{2}\|w^{k}\|^{2}-M\hat{\eta}_{k}^{2}\|w^{k}-\hat{w}^{k} \|^{2}.\]
Further multiplying this inequality by \(\frac{\eta b_{k+1}}{2\tau_{k}}\) and adding the result to (93), we obtain
\[\begin{split}\hat{\mathcal{V}}_{k}-\hat{\mathcal{V}}_{k+1}&\geq\,\left[c_{k}-\frac{4\rho b_{k}}{\tau_{k}(1+\tau_{k})}-\frac{M\eta\hat{\eta}_{k}^{2}b_{k+1}}{2\tau_{k}}\right]\|w^{k}-\hat{w}^{k}\|^{2}+\left(\frac{\omega\eta b_{k+1}}{2\tau_{k}}-c_{k+1}\right)\|w^{k+1}-\hat{w}^{k+1}\|^{2}\\ &\quad+\,\left[a_{k}+\frac{\rho b_{k}(2-3\tau_{k}-\tau_{k}^{2})}{\tau_{k}(1+\tau_{k})}-\frac{M\eta\hat{\eta}_{k}^{2}b_{k+1}}{2\tau_{k}}\right]\|w^{k}\|^{2}+\left[\frac{\eta b_{k+1}}{2\tau_{k}}-\frac{\rho b_{k}(2+\tau_{k})}{\tau_{k}(1+\tau_{k})}-a_{k+1}\right]\|w^{k+1}\|^{2}\\ &\quad+\,\frac{\eta(1-M\eta^{2})b_{k+1}}{2\tau_{k}}\|\hat{w}^{k+1}\|^{2}-\frac{\eta(1-\tau_{k}-M\eta\hat{\eta}_{k})b_{k+1}}{\tau_{k}}\langle\hat{w}^{k+1},w^{k}\rangle.\end{split} \tag{94}\]
Since \(\tau_{k}:=\frac{1}{k+2}\) and \(\hat{\eta}_{k}:=(1-\tau_{k})\eta\) due to (91), if we choose \(a_{k}:=\frac{b_{k}}{2}\left(\eta(k+1)-4\rho k+\frac{2\rho(k-1)}{k+3}\right)\), \(b_{k}\) such that \(b_{k+1}(1-\tau_{k})=b_{k}\), and \(c_{k}:=\frac{b_{k}}{2}\left(M\eta^{3}(k+1)+\frac{8\rho(k+2)^{2}}{k+3}\right)\), then (94) leads to
\[\begin{split}\hat{\mathcal{V}}_{k}-\hat{\mathcal{V}}_{k+1}& \geq\,\frac{\eta(1-M\eta^{2})b_{k+1}}{2\tau_{k}}\|\hat{w}^{k+1}-(1- \tau_{k})w^{k}\|^{2}+\frac{2\rho b_{k+1}}{2(k+4)}\|w^{k+1}\|^{2}\\ &\quad+\,\frac{b_{k+1}(k+2)}{2}\left(\omega\eta-M\eta^{3}-\frac{ 8\rho(k+3)^{2}}{(k+2)(k+4)}\right)\|w^{k+1}-\hat{w}^{k+1}\|^{2}.\end{split} \tag{95}\]
If \(\omega>0\) satisfies \(\omega\eta\geq M\eta^{3}+\frac{8\rho(k+3)^{2}}{(k+2)(k+4)}\) and \(1-M\eta^{2}\geq 0\), then (95) implies that \(\hat{\mathcal{V}}_{k+1}\leq\hat{\mathcal{V}}_{k}\) for all \(k\geq 0\). The first condition holds if \(\omega\eta\geq M\eta^{3}+9\rho\). However, to assure that \(a_{k}>0\), we require \(\eta>4\rho\). Overall, if
\[2(1+\omega)L^{2}\eta^{2}\leq 1,\quad\omega\eta\geq 2(1+\omega)L^{2}\eta^{3}+9 \rho,\quad\text{and}\quad\eta>4\rho, \tag{96}\]
then (95) leads to \(\hat{\mathcal{V}}_{k+1}\leq\hat{\mathcal{V}}_{k}\) for all \(k\geq 0\).
For simplicity, we choose \(\omega:=\frac{13}{4}\) and \(\eta:=\frac{1}{L\sqrt{2(1+\omega)}}=\frac{\sqrt{2}}{\sqrt{17}L}\). Then, the last two conditions of (96) hold if \(L\rho\leq\frac{\omega-1}{9\sqrt{2(1+\omega)}}=\frac{1}{2\sqrt{34}}\) and \(L\rho<\frac{1}{4\sqrt{2(1+\omega)}}=\frac{1}{2\sqrt{34}}\), respectively. Thus if \(2\sqrt{34}L\rho<1\), then (96) holds.
Since \(\tau_{k}:=\frac{1}{k+2}\), we get \(b_{k}=b_{0}(k+1)\) for some \(b_{0}>0\). Moreover, since \(x^{\star}\in\text{zer}(\Phi)\), by the \(\rho\)-co-hypomonotonicity of \(\Phi\), we have \(\langle w^{k},x^{k}-x^{\star}\rangle\geq-\rho\|w^{k}\|^{2}\). Using this expression, (90), Young's inequality, \(b_{k}=b_{0}(k+1)\), and the choice of \(a_{k}\), we can prove that
\[\hat{\mathcal{V}}_{k}\,\geq\,\frac{3b_{0}(\eta-4\rho)(k+1)^{2}}{4}\|w^{k}\|^{2 }-\tfrac{b_{0}}{\eta-4\rho}\|x^{0}-x^{\star}\|^{2}.\]
Finally, since \(y^{-1}=x^{0}\), we have \(\hat{\mathcal{V}}_{0}=a_{0}\|w^{0}\|^{2}=b_{0}\left(\frac{\eta}{2}-\frac{ \rho}{3}\right)\|Fx^{0}+\xi^{0}\|^{2}\), leading to \(\hat{\mathcal{V}}_{k}\leq\hat{\mathcal{V}}_{0}=b_{0}\left(\frac{\eta}{2}-\frac{ \rho}{3}\right)\|Fx^{0}+\xi^{0}\|^{2}\) from (95). Combining this expression and the lower bound of \(\hat{\mathcal{V}}_{k}\) above, we get
\[\|w^{k}\|^{2}\,\leq\,\tfrac{1}{(k+1)^{2}}\left[\tfrac{4}{3(\eta-4\rho)^{2}}\|x ^{0}-x^{\star}\|^{2}+\tfrac{2(3\eta-2\rho)}{9(\eta-4\rho)}\|Fx^{0}+\xi^{0}\|^{ 2}\right],\]
which proves (92) by virtue of \(w^{k}:=Fx^{k}+\xi^{k}\in Fx^{k}+Tx^{k}\).
**Remark 7.3**.: Theorem 7.3 only provides one possibility for the stepsize \(\eta\), which is \(\eta:=\frac{\sqrt{2}}{\sqrt{17}L}\). However, one can revise our proof to provide a range of \(\eta\) as in (FEG2) or (EAG2).
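To illustrate the single \(F\)-evaluation structure of (PEAG2), the following sketch (ours, with an arbitrary toy operator) runs the unconstrained case \(T=0\) with the parameters (91) and \(\rho=0\); in this case the resolvent disappears, \(\beta_{k}=0\), and \(\hat{w}^{k}=Fy^{k-1}\) is simply carried over from the previous iteration.

```python
import numpy as np

# (PEAG2) with T = 0 and rho = 0: the resolvent disappears, beta_k = 0, and
# hat{w}^k = F y^{k-1} is reused, so each iteration evaluates F only once.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = lambda x: A @ x

L = 1.0
eta = np.sqrt(2.0 / 17.0) / L            # stepsize of Theorem 7.3
x0 = np.array([1.0, 3.0])
x, w_hat = x0.copy(), F(x0)              # hat{w}^0 = F y^{-1} with y^{-1} = x^0

for k in range(2000):
    tau = 1.0 / (k + 2)
    eta_hat = (1.0 - tau) * eta
    y = x + tau * (x0 - x) - eta_hat * w_hat
    Fy = F(y)                            # the single new F evaluation
    x = x + tau * (x0 - x) - eta * Fy
    w_hat = Fy                           # becomes hat{w}^{k+1}

print(np.linalg.norm(F(x)))              # residual ||F x^k||, O(1/k) by (92)
```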
## 8 Nesterov's Accelerated Extragradient-Type Methods
**Introduction.** Nesterov's accelerated method [101] is an outstanding achievement in convex optimization over the past few decades. While the method was invented in 1983, its popularity began with two pioneering works [103] and [13]. Since then, such a technique has been widely studied and applied to many problems in various fields, including proximal-point, coordinate gradient, stochastic gradient, operator splitting, conditional gradient, Newton-type, and high-order methods. Although the majority of literature on accelerated methods pertains to convex optimization, extensions to monotone equations and inclusions have recently become an interesting research topic. Early works in this direction were conducted by, e.g., [5, 8, 21, 22, 70, 85, 86]. Unlike convex optimization, developing accelerated methods for monotone inclusions of the form (NI) requires a fundamental change in forming an appropriate potential function. Existing works often rely on proximal-point methods, which were extended to accelerated schemes in [58]. Another approach,
such as in [59, 70], is to apply the "performance estimation problem" technique developed in [45]. Nesterov's accelerated methods for different problem classes have been proven to be "optimal", meaning their upper bounds of convergence rates or complexity match the respective lower bounds in a certain sense, see, e.g., [102, 141].
Note that Nesterov's accelerated method can also be viewed as a discretization of an appropriate dynamical system [125], as often seen in classical gradient methods. Exploring this perspective, several new variants and extensions have been extensively studied in the literature, see, e.g., [7, 21, 22, 121, 140]. In addition, connections between Nesterov's and other methods have also been discussed. For example, [6] shows that Nesterov's accelerated methods are equivalent to Ravine's methods, which were proposed in 1961. Recently, [106, 129] have shown the relations between Nesterov's accelerated schemes and Halpern fixed-point iterations in fixed-point theory [60]. These methods are indeed equivalent in certain settings. Exploiting this perspective, [129, 130] have developed several Nesterov's accelerated variants to solve (NE) and (NI), including extragradient methods. In this section, we will survey recent results from these works.
### Nesterov's accelerated extragradient method for (Ni)
**The algorithm.** The first Nesterov's accelerated extragradient method proposed in [129, 130] can be written as follows. Starting from \(y^{0}\in\mathrm{dom}(\Phi)\), set \(z^{0}:=y^{0}\) and \(w^{-1}:=0\), and at each iteration \(k\geq 0\), we update
\[\left\{\begin{array}{lcl}x^{k}&\in&J_{\eta T}\left(y^{k}-\eta Fy^{k}+\hat {\eta}_{k}w^{k-1}\right),\\ w^{k}&:=&\frac{1}{\eta}(y^{k}-x^{k}+\hat{\eta}_{k}w^{k-1})-(Fy^{k}-Fx^{k}),\\ z^{k+1}&:=&x^{k}-\gamma w^{k},\\ y^{k+1}&:=&z^{k+1}+\theta_{k}(z^{k+1}-z^{k})+\nu_{k}(y^{k}-z^{k+1}),\end{array}\right.\] (AEG)
where \(\eta\), \(\hat{\eta}_{k}\), \(\gamma\), \(\theta_{k}\) and \(\nu_{k}\) are given parameters, which will be determined later, and \(J_{\eta T}\) is the resolvent of \(\eta T\).
Now, for \(\xi^{k}\in Tx^{k}\), we can easily show that \(w^{k}=Fx^{k}+\xi^{k}\). If we additionally denote \(\hat{w}^{k}:=Fy^{k}+\xi^{k}\) for \(\xi^{k}\in Tx^{k}\), then by starting from \(x^{0}=z^{0}=y^{0}\), we can rewrite (AEG) equivalently as
\[\left\{\begin{array}{lcl}z^{k+1}&:=&x^{k}-\gamma w^{k},\\ y^{k+1}&:=&z^{k+1}+\theta_{k}(z^{k+1}-z^{k})+\nu_{k}(y^{k}-z^{k+1}),\\ x^{k+1}&:=&y^{k+1}-\eta\hat{w}^{k+1}+\hat{\eta}_{k+1}w^{k}.\end{array}\right. \tag{97}\]
Clearly, if \(T=0\), then (97) exactly reduces to the one in [129]. However, since we have not yet seen an obvious connection between (AEG) and (EG2), we can eliminate \(z^{k}\) to obtain
\[\left\{\begin{array}{lcl}x^{k}&\in&J_{\eta T}\left(y^{k}-\eta Fy^{k}\,+\, \hat{\eta}_{k}w^{k-1}\right),\\ y^{k+1}&:=&x^{k}-\beta_{k}(Fx^{k}-Fy^{k})\,+\,\theta_{k}(x^{k}-x^{k-1})\,+\, \hat{\beta}_{k}(y^{k}-x^{k})+\tilde{\beta}_{k}w^{k-1},\\ w^{k}&:=&\frac{1}{\eta}(y^{k}-x^{k}+\hat{\eta}_{k}w^{k-1})-(Fy^{k}-Fx^{k}), \end{array}\right. \tag{98}\]
where \(\beta_{k}:=\gamma(1+\theta_{k}-\nu_{k})\), \(\hat{\beta}_{k}:=\nu_{k}-\frac{\beta_{k}}{\eta}\), and \(\tilde{\beta}_{k}:=\gamma\theta_{k}-\frac{\beta_{k}\hat{\eta}_{k}}{\eta}\). In this case (98) can be viewed as an accelerated variant of (EG2) with correction terms (see [85]).
**Convergence analysis.** The main tool to establish convergence of (AEG) is the following potential function:
\[\mathcal{P}_{k}:=a_{k}\|w^{k-1}\|^{2}+b_{k}\langle w^{k-1},z^{k}-y^{k}\rangle+ \|z^{k}+t_{k}(y^{k}-z^{k})-x^{\star}\|^{2}, \tag{99}\]
where \(w^{k}:=Fx^{k}+\xi^{k}\) for \(\xi^{k}\in Tx^{k}\), \(a_{k}>0\), \(b_{k}>0\), and \(t_{k}>0\) are given parameters.
Now, we can establish a convergence rate of (AEG) in the following theorem.
**Theorem 8.1**.: _Suppose that \(\Phi\) in (NI) is \(\rho\)-co-hypomonotone, \(F\) is \(L\)-Lipschitz continuous such that \(2L\rho<1\), \(x^{\star}\in\mathrm{zer}(\Phi)\), \(\mathrm{dom}(J_{\eta T})=\mathbb{R}^{p}\), and \(\mathrm{ran}(J_{\eta T})\subseteq\mathrm{dom}(F)=\mathbb{R}^{p}\). Let \(\gamma>0\) be such that \(L(2\rho+\gamma)\leq 1\)\((\)e.g., \(\gamma:=\frac{1}{L}-2\rho>0)\) and \(\{(x^{k},y^{k},z^{k})\}\) be generated by (AEG) using_
\[t_{k}:=k+2,\quad\eta:=\gamma+2\rho,\quad\hat{\eta}_{k}=\frac{(t_{k}-1)\eta}{t_ {k}},\quad\theta_{k}:=\frac{t_{k}-1}{t_{k+1}},\quad\text{and}\quad\nu_{k}:= \frac{t_{k}}{t_{k+1}}. \tag{100}\]
_Then, the following bound holds:_
\[\|Fx^{k}+\xi^{k}\|\leq\frac{2\|y^{0}-x^{\star}\|}{\gamma(k+2)},\quad\text{where} \quad\xi^{k}\in Tx^{k}. \tag{101}\]
_Consequently, we have \(\|Fx^{k}+\xi^{k}\|=\mathcal{O}\left(\frac{1}{k}\right)\) showing \(\mathcal{O}\left(\frac{1}{k}\right)\) convergence rate of (AEG)._
Proof.: First, by adding and subtracting \((t_{k}-1)z^{k+1}\), it is straightforward to show that
\[\|z^{k}+t_{k}(y^{k}-z^{k})-x^{\star}\|^{2} = \|z^{k+1}-x^{\star}\|^{2}+(t_{k}-1)^{2}\|z^{k+1}-z^{k}\|^{2}+t_{k }^{2}\|y^{k}-z^{k+1}\|^{2}\] \[+\ 2(t_{k}-1)\langle z^{k+1}-z^{k},z^{k+1}-x^{\star}\rangle+2t_{k }\langle y^{k}-z^{k+1},z^{k+1}-x^{\star}\rangle\] \[+\ 2(t_{k}-1)t_{k}\langle y^{k}-z^{k+1},z^{k+1}-z^{k}\rangle.\]
Next, using \(y^{k+1}-z^{k+1}=\theta_{k}(z^{k+1}-z^{k})+\nu_{k}(y^{k}-z^{k+1})\) from (AEG), we have
\[\|z^{k+1}+t_{k+1}(y^{k+1}-z^{k+1})-x^{\star}\|^{2} = \|z^{k+1}-x^{\star}\|^{2}+t_{k+1}^{2}\theta_{k}^{2}\|z^{k+1}-z^{k }\|^{2}+t_{k+1}^{2}\nu_{k}^{2}\|y^{k}-z^{k+1}\|^{2}\] \[+\ 2t_{k+1}\theta_{k}\langle z^{k+1}-z^{k},z^{k+1}-x^{\star} \rangle+2t_{k+1}\nu_{k}\langle y^{k}-z^{k+1},z^{k+1}-x^{\star}\rangle\] \[+\ 2t_{k+1}^{2}\nu_{k}\theta_{k}\langle y^{k}-z^{k+1},z^{k+1}-z^{k}\rangle.\]
Combining the last two expressions, we can show that
\[\mathcal{T}_{[1]} := \|z^{k}+t_{k}(y^{k}-z^{k})-x^{\star}\|^{2}-\|z^{k+1}+t_{k+1}(y^{ k+1}-z^{k+1})-x^{\star}\|^{2}\] \[= \left[(t_{k}-1)^{2}-t_{k+1}^{2}\theta_{k}^{2}\right]\|z^{k+1}-z^{ k}\|^{2}+(t_{k}^{2}-\nu_{k}^{2}t_{k+1}^{2})\|y^{k}-z^{k+1}\|^{2}\] \[+\ 2(t_{k}-1-t_{k+1}\theta_{k})\langle z^{k+1}-z^{k},z^{k+1}-x^{ \star}\rangle\] \[+\ 2(t_{k}-t_{k+1}\nu_{k})\langle y^{k}-z^{k+1},z^{k+1}-x^{\star}\rangle\] \[+\ 2\left[t_{k}(t_{k}-1)-t_{k+1}^{2}\theta_{k}\nu_{k}\right] \langle y^{k}-z^{k+1},z^{k+1}-z^{k}\rangle.\]
From \(\mathcal{T}_{[1]}\), (99), and \(z^{k+1}-y^{k+1}=-\theta_{k}(z^{k+1}-z^{k})-\nu_{k}(y^{k}-z^{k+1})\) in (AEG), we can further derive
\[\mathcal{P}_{k}-\mathcal{P}_{k+1} = a_{k}\|w^{k-1}\|^{2}-a_{k+1}\|w^{k}\|^{2}+\left[(t_{k}-1)^{2}-t_{ k+1}^{2}\theta_{k}^{2}\right]\|z^{k+1}-z^{k}\|^{2}+(t_{k}^{2}-\nu_{k}^{2}t_{k+1}^{2}) \|y^{k}-z^{k+1}\|^{2}\] \[+\ b_{k}\langle w^{k}-w^{k-1},z^{k+1}-z^{k}\rangle+b_{k+1}\langle \nu_{k}w^{k}-\theta_{k}w^{k-1},y^{k}-z^{k+1}\rangle\] \[+\ (b_{k+1}\theta_{k}-b_{k})\left[\langle w^{k},z^{k+1}-z^{k}\rangle +\langle w^{k-1},y^{k}-z^{k+1}\rangle\right]\] \[+\ 2(t_{k}-1-t_{k+1}\theta_{k})\langle z^{k+1}-z^{k},z^{k+1}-x^{ \star}\rangle+2(t_{k}-t_{k+1}\nu_{k})\langle y^{k}-z^{k+1},z^{k+1}-x^{\star}\rangle\] \[+\ 2\left[t_{k}(t_{k}-1)-t_{k+1}^{2}\theta_{k}\nu_{k}\right] \langle y^{k}-z^{k+1},z^{k+1}-z^{k}\rangle.\]
We first choose \(t_{k}\), \(\nu_{k}\), \(\theta_{k}\), and \(b_{k}\) such that
\[t_{k}-t_{k+1}\nu_{k} = 0,\qquad\ t_{k}(t_{k}-1)-\nu_{k}\theta_{k}t_{k+1}^{2} = 0, \tag{103}\] \[t_{k}-1-t_{k+1}\theta_{k} = 0,\quad\text{and}\ \ b_{k+1}\theta_{k}-b_{k} = 0.\]
These conditions lead to \(\theta_{k}=\frac{t_{k}-1}{t_{k+1}}\) and \(\nu_{k}=\frac{t_{k}}{t_{k+1}}\) as in (100), and \(b_{k+1}:=\frac{b_{k}}{\theta_{k}}=\frac{b_{k}t_{k+1}}{t_{k}-1}\).
Now, using (103), (102) reduces to
\[\mathcal{P}_{k}-\mathcal{P}_{k+1} = a_{k}\|w^{k-1}\|^{2}-a_{k+1}\|w^{k}\|^{2}+b_{k}\langle w^{k}-w^{ k-1},z^{k+1}-z^{k}\rangle \tag{104}\] \[+\ b_{k+1}\langle\nu_{k}w^{k}-\theta_{k}w^{k-1},y^{k}-z^{k+1}\rangle.\]
By the \(\rho\)-co-hypomonotonicity of \(\Phi\) and \(z^{k+1}=x^{k}-\gamma w^{k}\) from (AEG), we have \(\langle w^{k}-w^{k-1},z^{k+1}-z^{k}\rangle=\langle w^{k}-w^{k-1},x^{k}-x^{k-1} \rangle-\gamma\|w^{k}-w^{k-1}\|^{2}\geq-(\rho+\gamma)\|w^{k}-w^{k-1}\|^{2}\). Therefore, we obtain
\[\langle w^{k}-w^{k-1},z^{k+1}-z^{k}\rangle \geq -(\gamma+\rho)\left[\|w^{k}\|^{2}+\|w^{k-1}\|^{2}-2\langle w^{k}, w^{k-1}\rangle\right]. \tag{105}\]
Since \(\hat{w}^{k}:=Fy^{k}+\xi^{k}\), we have \(Fy^{k}-Fx^{k}=\hat{w}^{k}-w^{k}\). Using this relation and (AEG), we get \(\frac{\hat{\eta}_{k}}{\eta}w^{k-1}+\frac{1}{\eta}(y^{k}-x^{k})-\hat{w}^{k}=0\), leading to \(x^{k}-y^{k}=\hat{\eta}_{k}w^{k-1}-\eta\hat{w}^{k}\). Combining this expression and the third line of (AEG), we have \(y^{k}-z^{k+1}=\gamma w^{k}+\eta\hat{w}^{k}-\hat{\eta}_{k}w^{k-1}\), leading to
\[\begin{split}\langle\nu_{k}w^{k}-\theta_{k}w^{k-1},y^{k}-z^{k+1} \rangle&=\,\langle\nu_{k}w^{k}-\theta_{k}w^{k-1},\gamma w^{k}+ \eta\hat{w}^{k}-\hat{\eta}_{k}w^{k-1}\rangle\\ &=\,\nu_{k}\gamma\|w^{k}\|^{2}+\theta_{k}\hat{\eta}_{k}\|w^{k-1} \|^{2}+\nu_{k}\eta\langle w^{k},\hat{w}^{k}\rangle\\ &\quad-\,\eta\theta_{k}\langle w^{k-1},\hat{w}^{k}\rangle-(\nu_{ k}\hat{\eta}_{k}+\gamma\theta_{k})\langle w^{k-1},w^{k}\rangle.\end{split} \tag{106}\]
Substituting (105) and (106) into (104), and noting that \(b_{k}=b_{k+1}\theta_{k}\), we can prove that
\[\begin{split}\mathcal{P}_{k}-\mathcal{P}_{k+1}\,\geq&\,\left[a_{k}+b_{k}\left(\hat{\eta}_{k}-\gamma-\rho\right)\right]\|w^{k-1}\|^{2}+\left[b_{k}\left(\frac{\gamma\nu_{k}}{\theta_{k}}-\gamma-\rho\right)-a_{k+1}\right]\|w^{k}\|^{2}\\ &+\,\eta b_{k}\left(\frac{\nu_{k}}{\theta_{k}}-1\right)\langle w^{k},\hat{w}^{k}\rangle+\eta b_{k}\langle w^{k}-w^{k-1},\hat{w}^{k}\rangle-b_{k}\left(\frac{\hat{\eta}_{k}\nu_{k}}{\theta_{k}}-\gamma-2\rho\right)\langle w^{k-1},w^{k}\rangle.\end{split} \tag{107}\]
Now, by the Lipschitz continuity of \(F\), we have \(\|\hat{w}^{k}-w^{k}\|^{2}=\|Fy^{k}-Fx^{k}\|^{2}\leq L^{2}\|x^{k}-y^{k}\|^{2}= L^{2}\|\eta\hat{w}^{k}-\hat{\eta}_{k}w^{k-1}\|^{2}\). Expanding this expression and rearranging terms, we can deduce that
\[0\,\geq\,\|w^{k}\|^{2}+(1-L^{2}\eta^{2})\|\hat{w}^{k}\|^{2}-2\left(1-L^{2}\eta \hat{\eta}_{k}\right)\langle w^{k},\hat{w}^{k}\rangle-2L^{2}\eta\hat{\eta}_{ k}\langle w^{k}-w^{k-1},\hat{w}^{k}\rangle-L^{2}\hat{\eta}_{k}^{2}\|w^{k-1}\|^{2}.\]
Multiplying this expression by \(\frac{b_{k}}{2L^{2}\hat{\eta}_{k}}\) and adding the result to (107), we get
\[\begin{split}\mathcal{P}_{k}-\mathcal{P}_{k+1}\,\geq& \,\left[a_{k}+\frac{b_{k}}{2}\left(\hat{\eta}_{k}-2\gamma-2\rho \right)\right]\|w^{k-1}\|^{2}+\frac{b_{k}(1-L^{2}\eta^{2})}{2L^{2}\hat{\eta}_{ k}}\|\hat{w}^{k}\|^{2}\\ &+\,\left[b_{k}\left(\frac{\gamma\nu_{k}}{\theta_{k}}+\frac{1}{2L^ {2}\hat{\eta}_{k}}-\gamma-\rho\right)-a_{k+1}\right]\|w^{k}\|^{2}\\ &+\,b_{k}\left(\frac{\eta\nu_{k}}{\theta_{k}}-\frac{1}{L^{2}\hat{ \eta}_{k}}\right)\langle w^{k},\hat{w}^{k}\rangle-b_{k}(\frac{\hat{\eta}_{k}\nu _{k}}{\theta_{k}}-2\rho-\gamma)\langle w^{k-1},w^{k}\rangle.\end{split} \tag{108}\]
Now we choose \(\eta:=\gamma+2\rho\) and \(\hat{\eta}_{k}:=\frac{\eta\theta_{k}}{\nu_{k}}=\frac{\eta(t_{k}-1)}{t_{k}}\) as in (100). Then, using \(\nu_{k}=\frac{t_{k}}{t_{k+1}}\), \(\theta_{k}=\frac{t_{k}-1}{t_{k+1}}\), \(b_{k+1}=\frac{b_{k}t_{k+1}}{t_{k}-1}\), and \(\eta:=\gamma+2\rho\), we can further bound (108) as
\[\mathcal{P}_{k}-\mathcal{P}_{k+1}\,\geq\,\left(a_{k}-\frac{b_{k}(\gamma t_{k}+ \eta)}{2t_{k}}\right)\|w^{k-1}\|^{2}+\left[\frac{b_{k}(\gamma t_{k}+\gamma+\eta )}{2(t_{k}-1)}-a_{k+1}\right]\|w^{k}\|^{2}+\frac{b_{k}(1-L^{2}\eta^{2})t_{k}}{2 L^{2}\eta(t_{k}-1)}\|w^{k}-\hat{w}^{k}\|^{2}. \tag{109}\]
Next, if we assume that
\[1-L^{2}\eta^{2}\geq 0,\quad a_{k}-\frac{b_{k}(\gamma t_{k}+\eta)}{2t_{k}}\geq 0, \quad\text{and}\quad\frac{b_{k}\left(\gamma t_{k}+\gamma+\eta\right)}{2(t_{k}-1)} -a_{k+1}\geq 0, \tag{110}\]
then (109) reduces to \(\mathcal{P}_{k}\geq\mathcal{P}_{k+1}\) for all \(k\geq 0\).
To guarantee (110), we note that the first condition of (110) is equivalent to \(L\eta=L(\gamma+2\rho)\leq 1\). If \(2L\rho<1\), then we can always choose \(0<\gamma\leq\frac{1}{L}-2\rho\) such that \(1-L^{2}\eta^{2}\geq 0\). This is the first condition in Theorem 8.1. If we choose \(a_{k}:=\frac{b_{k}(\gamma t_{k}+\eta)}{2t_{k}}\), then the second condition of (110) automatically holds. The third condition of (110) becomes \(a_{k+1}\leq\frac{b_{k+1}(\gamma t_{k+1}+\eta)}{2t_{k+1}}\) due to \(b_{k+1}=\frac{b_{k}t_{k+1}}{t_{k}-1}\) and \(t_{k}=k+2\). By the choice of \(a_{k}\), this condition holds with equality. Moreover, for \(t_{k}:=k+2\), we get \(b_{k}:=\frac{b_{0}(k+1)(k+2)}{2}\) for any \(b_{0}>0\), and thus \(a_{k}=\frac{b_{k}(\gamma t_{k}+\eta)}{2t_{k}}=\frac{b_{0}(k+1)(\gamma k+3\gamma+2 \rho)}{4}\).
Utilizing \(z^{k}=x^{k-1}-\gamma w^{k-1}\) from (AEG) and \(\langle w^{k-1},x^{k-1}-x^{\star}\rangle\geq-\rho\|w^{k-1}\|^{2}\) due to the \(\rho\)-co-hypomonotonicity of \(\Phi\), we can derive that
\[\begin{split}\mathcal{P}_{k}\,=&\,\|z^{k}+t_{k}(y^{k}-z^{k} )-x^{\star}-\frac{b_{k}}{2t_{k}}w^{k-1}\|^{2}+\left(a_{k}-\frac{b_{k}^{2}}{4t_{k} ^{2}}-\frac{\gamma b_{k}}{t_{k}}\right)\|w^{k-1}\|^{2}+\frac{b_{k}}{t_{k}} \langle w^{k-1},x^{k-1}-x^{\star}\rangle\\ \geq&\,\|z^{k}+t_{k}(y^{k}-z^{k})-\frac{b_{k}}{2t_{k} }w^{k-1}-x^{\star}\|^{2}+\left(a_{k}-\frac{b_{k}^{2}}{4t_{k}^{2}}-\frac{( \gamma+\rho)b_{k}}{t_{k}}\right)\|w^{k-1}\|^{2}.\end{split} \tag{111}\]
Finally, since \(\mathcal{P}_{k+1}\leq\mathcal{P}_{k}\) as shown above, by induction we have \(\mathcal{P}_{k}\leq\mathcal{P}_{0}=a_{0}\|w^{-1}\|^{2}+b_{0}\langle w^{-1},z^{0}-y^{0}\rangle+\|z^{0}+t_{0}(y^{0}-z^{0})-x^{\star}\|^{2}=\|y^{0}-x^{\star}\|^{2}\) due to \(z^{0}=y^{0}\) and \(w^{-1}=0\). Combining this expression and (111), and then using the explicit forms of \(a_{k}\), \(b_{k}\), and \(t_{k}=k+2\) above, we obtain \(\|w^{k}\|^{2}\,\leq\,\tfrac{16\|y^{0}-x^{\star}\|^{2}}{b_{0}(4\gamma-b_{0})(k+2)^{2}}\).
If we set \(b_{0}:=2\gamma\), then this estimate reduces to \(\|w^{k}\|^{2}\leq\frac{4\|y^{0}-x^{\star}\|^{2}}{\gamma^{2}(k+2)^{2}}\), which is exactly (101).
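The following minimal sketch (our illustration, not from [129, 130]) runs (AEG) with \(T=0\) and the parameters (100) on a toy monotone operator, for which \(\rho=0\) and \(\gamma=\frac{1}{L}\) are admissible; with \(T=0\) the resolvent is the identity and \(w^{k}=Fx^{k}\). The operator, starting point, and horizon are arbitrary choices.

```python
import numpy as np

# (AEG) with T = 0: the resolvent is the identity and w^k = F x^k.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = lambda x: A @ x

L, rho = 1.0, 0.0
gamma = 1.0 / L - 2.0 * rho              # gamma as suggested in Theorem 8.1
eta = gamma + 2.0 * rho
y = np.array([1.0, -2.0])                # y^0
z = y.copy()                             # z^0 = y^0
w_prev = np.zeros(2)                     # w^{-1} = 0

for k in range(2000):
    t, t_next = k + 2, k + 3             # t_k and t_{k+1}
    eta_hat = (t - 1) * eta / t
    theta, nu = (t - 1) / t_next, t / t_next
    x = y - eta * F(y) + eta_hat * w_prev    # resolvent step with T = 0
    w = F(x)                                 # w^k = F x^k when T = 0
    z_next = x - gamma * w
    y = z_next + theta * (z_next - z) + nu * (y - z_next)
    z, w_prev = z_next, w

print(np.linalg.norm(F(x)))              # ||F x^k|| <= 2||y^0 - x*||/(gamma*(k+2))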
### Nesterov's accelerated past-extragradient method for (NI)
**The algorithm.** As an alternative to Nesterov's accelerated extragradient method (AEG), [129, 130] also developed Nesterov's accelerated past-extragradient methods to solve (NI). We now present this method as follows. Starting from \(y^{0}\in\mathrm{dom}(\Phi)\), we set \(\hat{w}^{-1}:=0\) and \(z^{0}:=y^{0}\), and at each iteration \(k\geq 0\), we update
\[\left\{\begin{aligned} x^{k}&\quad\in\;J_{\eta T} \left(y^{k}-\eta Fy^{k}+\hat{\eta}_{k}\hat{w}^{k-1}\right),\\ \hat{w}^{k}&:=\tfrac{1}{\eta}(y^{k}-x^{k}+\hat{ \eta}_{k}\hat{w}^{k-1}),\\ z^{k+1}&:=x^{k}-\gamma\hat{w}^{k},\\ y^{k+1}&:=z^{k+1}+\theta_{k}(z^{k+1}-z^{k})+\nu_{k} (y^{k}-z^{k+1}),\end{aligned}\right.\] (APEG)
where \(\theta_{k}\), \(\nu_{k}\), \(\eta\), and \(\gamma\) are given parameters, determined later. Compared to (AEG), we have replaced \(Fx^{k}\) by \(Fy^{k-1}\) in (APEG) to save one evaluation of \(F\). Now, if we eliminate \(z^{k}\) from (APEG), then we obtain
\[\left\{\begin{aligned} x^{k}&\quad\in\;J_{\eta T} \left(y^{k}-\eta Fy^{k}+\hat{\eta}_{k}\hat{w}^{k-1}\right),\\ y^{k+1}&:=x^{k}+\theta_{k}(x^{k}-x^{k-1})+\hat{ \beta}_{k}(y^{k}-x^{k})+\tilde{\beta}_{k}\hat{w}^{k-1},\\ \hat{w}^{k}&:=\tfrac{1}{\eta}(y^{k}-x^{k}+\hat{\eta} _{k}\hat{w}^{k-1}),\end{aligned}\right. \tag{112}\]
where \(\hat{\beta}_{k}:=\nu_{k}-\tfrac{\gamma}{\eta}(1+\theta_{k}-\nu_{k})\), and \(\tilde{\beta}_{k}:=\gamma\theta_{k}-\tfrac{\gamma\hat{\eta}_{k}}{\eta}(1+\theta_{k}-\nu_{k})\). Clearly, if \(\hat{\eta}_{k}=\hat{\beta}_{k}=\tilde{\beta}_{k}=0\), and \(\theta_{k}=1\), then we obtain the reflected-forward-backward splitting method (RFBS2) from [30, 87]. Hence, we can view (112) as an accelerated variant of (RFBS2).
**Convergence analysis.** To analyze the convergence of (APEG), we use the following potential function:
\[\hat{\mathcal{P}}_{k}:=\,a_{k}\|w^{k-1}\|^{2}+b_{k}\langle w^{k-1},z^{k}-y^{k }\rangle+\|z^{k}+t_{k}(y^{k}-z^{k})-x^{\star}\|^{2}+c_{k}\|w^{k-1}-\hat{w}^{k-1 }\|^{2}. \tag{113}\]
where \(w^{k}:=Fx^{k}+\xi^{k}\), \(\hat{w}^{k}:=Fy^{k-1}+\xi^{k}\) for \(\xi^{k}\in Tx^{k}\), \(a_{k}>0\), \(b_{k}>0\), \(c_{k}>0\), and \(t_{k}>0\) are given parameters, determined later. Now, we can state the convergence of (APEG) in the following theorem.
**Theorem 8.2**.: _Suppose that \(\Phi\) in (NI) is \(\rho\)-co-hypomonotone, \(F\) is \(L\)-Lipschitz continuous such that \(8\sqrt{3}L\rho<1\), \(x^{\star}\in\mathrm{zer}(\Phi)\), \(\mathrm{dom}(J_{\eta T})=\mathbb{R}^{p}\), and \(\mathrm{ran}(J_{\eta T})\subseteq\mathrm{dom}(F)=\mathbb{R}^{p}\). Let \(\gamma>0\) be such that \(16L^{2}\left[3(3\gamma+2\rho)^{2}+\gamma(2\gamma+\rho)\right]\leq 1\), which always exists, and \(\{(x^{k},y^{k},z^{k})\}\) be generated by (APEG) using_
\[t_{k}:=k+2,\ \ \eta:=2(3\gamma+2\rho),\ \ \hat{\eta}_{k}=\frac{(t_{k}-1)\eta}{t_{k }},\ \ \theta_{k}:=\frac{t_{k}-1}{t_{k+1}},\text{ and }\ \nu_{k}:=\frac{t_{k}}{t_{k+1}}. \tag{114}\]
_Then, for \(k\geq 0\), the following bound holds:_
\[\|Fx^{k}+\xi^{k}\|^{2}\leq\frac{4\|y^{0}-x^{\star}\|^{2}}{\gamma^{2}(k+2)(k+4 )},\ \ \text{where}\ \ \xi^{k}\in Tx^{k}. \tag{115}\]
_Consequently, we have the last-iterate convergence rate as \(\|Fx^{k}+\xi^{k}\|=\mathcal{O}\left(\frac{1}{k}\right)\)._
Proof.: Similar to the proof of (104) from Theorem 8.1, but using (APEG), (113), and (114), we get
\[\begin{split}\hat{\mathcal{P}}_{k}-\hat{\mathcal{P}}_{k+1}& =\,a_{k}\|w^{k-1}\|^{2}-a_{k+1}\|w^{k}\|^{2}+c_{k}\|w^{k-1}-\hat{w}^ {k-1}\|^{2}-c_{k+1}\|w^{k}-\hat{w}^{k}\|^{2}\\ &\quad+\,b_{k}\langle w^{k}-w^{k-1},z^{k+1}-z^{k}\rangle+b_{k+1} \langle\nu_{k}w^{k}-\theta_{k}w^{k-1},y^{k}-z^{k+1}\rangle.\end{split} \tag{116}\]
Since \(\langle w^{k}-w^{k-1},x^{k}-x^{k-1}\rangle\geq-\rho\|w^{k}-w^{k-1}\|^{2}\) due to the \(\rho\)-co-hypomonotonicity of \(\Phi\), \(z^{k+1}=x^{k}-\gamma\hat{w}^{k}=x^{k}-\gamma w^{k}+\gamma(w^{k}-\hat{w}^{k})\) from (APEG), and both the Cauchy-Schwarz and Young inequalities, we can derive
\[\begin{split}\langle w^{k}-w^{k-1},z^{k+1}-z^{k}\rangle& =\,\langle w^{k}-w^{k-1},x^{k}-x^{k-1}\rangle-\gamma\langle w^{k}-w^{k-1},\hat {w}^{k}-\hat{w}^{k-1}\rangle\\ &\geq\,\gamma\langle w^{k}-w^{k-1},(w^{k}-\hat{w}^{k})-(w^{k-1}- \hat{w}^{k-1})\rangle-(\gamma+\rho)\|w^{k}-w^{k-1}\|^{2}\\ &\geq\,-\left(2\gamma+\rho\right)\|w^{k}-w^{k-1}\|^{2}-\tfrac{ \gamma}{2}\|w^{k}-\hat{w}^{k}\|^{2}-\tfrac{\gamma}{2}\|w^{k-1}-\hat{w}^{k-1}\|^{2}. \end{split} \tag{117}\]
Now, combining \(x^{k}=y^{k}-\eta\hat{w}^{k}+\hat{\eta}_{k}\hat{w}^{k-1}\) from (APEG) and its third line, we obtain \(y^{k}-z^{k+1}=(\gamma+\eta)\hat{w}^{k}-\hat{\eta}_{k}\hat{w}^{k-1}=\gamma w^{k}+ \eta\hat{w}^{k}-\hat{\eta}_{k}w^{k-1}+\gamma(\hat{w}^{k}-w^{k})-\hat{\eta}_{k}( \hat{w}^{k-1}-w^{k-1})\). Using this expression, and both the Cauchy-Schwarz and Young inequalities again, for any \(\beta>0\), we can prove that
\[\begin{split}\mathcal{T}_{[3]}&:=\,\langle\nu_{k}w ^{k}-\theta_{k}w^{k-1},y^{k}-z^{k+1}\rangle\\ &=\,\langle\nu_{k}w^{k}-\theta_{k}w^{k-1},\gamma w^{k}+\eta\hat{ w}^{k}-\hat{\eta}_{k}w^{k-1}\rangle+\langle\nu_{k}w^{k}-\theta_{k}w^{k-1}, \gamma(\hat{w}^{k}-w^{k})-\hat{\eta}_{k}(\hat{w}^{k-1}-w^{k-1})\rangle\\ &\geq\,\,\gamma\nu_{k}\|w^{k}\|^{2}+\hat{\eta}_{k}\theta_{k}\|w^ {k-1}\|^{2}-(\hat{\eta}_{k}\nu_{k}+\gamma\theta_{k})\langle w^{k},w^{k-1} \rangle-\eta\theta_{k}\langle\hat{w}^{k},w^{k-1}\rangle+\eta\nu_{k}\langle w^ {k},\hat{w}^{k}\rangle\\ &\quad-\,\tfrac{\beta}{2\nu_{k}}\|\nu_{k}w^{k}-\theta_{k}w^{k-1} \|^{2}-\tfrac{\gamma^{2}\nu_{k}}{\beta}\|w^{k}-\hat{w}^{k}\|^{2}-\tfrac{\hat{ \eta}_{k}^{2}\nu_{k}}{\beta}\|w^{k-1}-\hat{w}^{k-1}\|^{2}.\end{split} \tag{118}\]
Expanding (117) and (118), and then substituting their results into (116) with \(b_{k+1}\theta_{k}=b_{k}\) from (103), we can derive that
\[\begin{split}\hat{\mathcal{P}}_{k}-\hat{\mathcal{P}}_{k+1}\geq&\,\left[c_{k}-b_{k}\left(\tfrac{\gamma}{2}+\tfrac{\hat{\eta}_{k}^{2}\nu_{k}}{\beta\theta_{k}}\right)\right]\|w^{k-1}-\hat{w}^{k-1}\|^{2}-\left[c_{k+1}+b_{k}\left(\tfrac{\gamma}{2}+\tfrac{\gamma^{2}\nu_{k}}{\beta\theta_{k}}\right)\right]\|w^{k}-\hat{w}^{k}\|^{2}\\ &+\,\left[a_{k}-b_{k}\left(2\gamma+\rho+\tfrac{\beta\theta_{k}}{2\nu_{k}}-\hat{\eta}_{k}\right)\right]\|w^{k-1}\|^{2}+\left[b_{k}\left(\tfrac{(2\gamma-\beta)\nu_{k}}{2\theta_{k}}-2\gamma-\rho\right)-a_{k+1}\right]\|w^{k}\|^{2}\\ &+\,b_{k}\left(3\gamma+2\rho+\beta-\tfrac{\nu_{k}\hat{\eta}_{k}}{\theta_{k}}\right)\langle w^{k},w^{k-1}\rangle+\eta b_{k}\langle\hat{w}^{k},w^{k}-w^{k-1}\rangle+\eta b_{k}\left(\tfrac{\nu_{k}}{\theta_{k}}-1\right)\langle w^{k},\hat{w}^{k}\rangle.\end{split} \tag{119}\]
Next, by the Lipschitz continuity of \(F\), we can easily show that \(\|w^{k}-\hat{w}^{k}\|^{2}=\|Fx^{k}-Fy^{k}\|^{2}\leq L^{2}\|x^{k}-y^{k}\|^{2}=L ^{2}\|\eta\hat{w}^{k}-\hat{\eta}_{k}\hat{w}^{k-1}\|^{2}\). Hence, for any \(\omega>0\), by Young's inequality, this expression leads to \(0\geq\omega\|w^{k}-\hat{w}^{k}\|^{2}+\|w^{k}-\hat{w}^{k}\|^{2}-2(1+\omega)L^{2 }\|\eta\hat{w}^{k}-\hat{\eta}_{k}w^{k-1}\|^{2}-2(1+\omega)L^{2}\hat{\eta}_{k} ^{2}\|w^{k-1}-\hat{w}^{k-1}\|^{2}\). If we set \(M:=2(1+\omega)L^{2}\), then by expanding the last inequality, we get
\[\begin{split} 0&\geq\,\omega\|w^{k}-\hat{w}^{k}\|^{2}+\|w^{k} \|^{2}+(1-M\eta^{2})\|\hat{w}^{k}\|^{2}-2(1-M\eta\hat{\eta}_{k})\langle w^{k}, \hat{w}^{k}\rangle\\ &\quad-\,2M\eta\hat{\eta}_{k}\langle\hat{w}^{k},w^{k}-w^{k-1} \rangle-M\hat{\eta}_{k}^{2}\|w^{k-1}\|^{2}-M\hat{\eta}_{k}^{2}\|w^{k-1}-\hat{w} ^{k-1}\|^{2}.\end{split}\]
Multiplying this expression by \(\tfrac{b_{k}}{2M\hat{\eta}_{k}}\) and adding the result to (119), we arrive at
\[\begin{split}\hat{\mathcal{P}}_{k}-\hat{\mathcal{P}}_{k+1}\geq&\,\left[c_{k}-b_{k}\left(\tfrac{\gamma}{2}+\tfrac{\hat{\eta}_{k}^{2}\nu_{k}}{\beta\theta_{k}}+\tfrac{\hat{\eta}_{k}}{2}\right)\right]\|w^{k-1}-\hat{w}^{k-1}\|^{2}+\left[b_{k}\left(\tfrac{\omega}{2M\hat{\eta}_{k}}-\tfrac{\gamma}{2}-\tfrac{\gamma^{2}\nu_{k}}{\beta\theta_{k}}\right)-c_{k+1}\right]\|w^{k}-\hat{w}^{k}\|^{2}\\ &+\,\left[a_{k}-b_{k}\left(2\gamma+\rho+\tfrac{\beta\theta_{k}}{2\nu_{k}}-\tfrac{\hat{\eta}_{k}}{2}\right)\right]\|w^{k-1}\|^{2}+\tfrac{(1-M\eta^{2})b_{k}}{2M\hat{\eta}_{k}}\|\hat{w}^{k}\|^{2}\\ &+\,\left[b_{k}\left(\tfrac{1}{2M\hat{\eta}_{k}}+\tfrac{(2\gamma-\beta)\nu_{k}}{2\theta_{k}}-2\gamma-\rho\right)-a_{k+1}\right]\|w^{k}\|^{2}-b_{k}\left(\tfrac{1}{M\hat{\eta}_{k}}-\tfrac{\eta\nu_{k}}{\theta_{k}}\right)\langle w^{k},\hat{w}^{k}\rangle\\ &+\,b_{k}\left(3\gamma+2\rho+\beta-\tfrac{\nu_{k}\hat{\eta}_{k}}{\theta_{k}}\right)\langle w^{k},w^{k-1}\rangle.\end{split}\]
If we choose \(\beta:=3\gamma+2\rho\), and \(\eta:=2(3\gamma+2\rho)=2\beta\) and \(\hat{\eta}_{k}:=\tfrac{\eta\theta_{k}}{\nu_{k}}\), then \(3\gamma+2\rho+\beta-\tfrac{\nu_{k}\hat{\eta}_{k}}{\theta_{k}}=0\). Moreover, we can simplify the last expression as
\[\hat{\mathcal{P}}_{k}-\hat{\mathcal{P}}_{k+1}\geq \,\left[b_{k}\left(\tfrac{(\eta+4\gamma)\nu_{k}}{4\theta_{k}}-2 \gamma-\rho\right)-a_{k+1}\right]\|w^{k}\|^{2}+\left[a_{k}-b_{k}\left(2\gamma+ \rho-\tfrac{\eta\theta_{k}}{4\nu_{k}}\right)\right]\|w^{k-1}\|^{2}\] \[+\,\,\left[\tfrac{b_{k}}{2\theta_{k}}\left(\tfrac{(1-2L^{2}\eta^{2} )\nu_{k}}{2L^{2}\eta}-\gamma\theta_{k}-\tfrac{4\gamma^{2}\nu_{k}}{\eta} \right)-c_{k+1}\right]\|w^{k}-\hat{w}^{k}\|^{2} \tag{120}\] \[+\,\,\left[c_{k}-\tfrac{b_{k}}{2}\left(\gamma+\tfrac{5\eta\theta_{k }}{\nu_{k}}\right)\right]\|w^{k-1}-\hat{w}^{k-1}\|^{2}.\]
Now, if we assume that \(1-M\eta^{2}\geq 0\), and
\[\begin{split} a_{k}-b_{k}\left(2\gamma+\rho-\tfrac{\eta\theta_{k}}{4\nu_{k}}\right)&\geq\,0,\qquad\qquad b_{k}\left(\tfrac{(\eta+4\gamma)\nu_{k}}{4\theta_{k}}-2\gamma-\rho\right)-a_{k+1}\geq\,0,\\ c_{k}-\tfrac{b_{k}}{2}\left(\gamma+\tfrac{5\eta\theta_{k}}{\nu_{k}}\right)&\geq\,0,\quad\text{and}\quad\tfrac{b_{k}}{2\theta_{k}}\left(\tfrac{(1-2L^{2}\eta^{2})\nu_{k}}{2L^{2}\eta}-\gamma\theta_{k}-\tfrac{4\gamma^{2}\nu_{k}}{\eta}\right)-c_{k+1}\geq\,0\end{split} \tag{121}\]
hold. The two last conditions of (121) hold if \(\frac{1-2L^{2}\eta^{2}}{2L^{2}\eta}\geq 5\eta+2\gamma+\frac{4\gamma^{2}}{\eta}\). If we choose \(\omega:=5\), then using \(M=2(1+\omega)L^{2}\), this condition is equivalent to \(16L^{2}\left[3(3\gamma+2\rho)^{2}+\gamma(2\gamma+\rho)\right]\leq 1\). Clearly, if \(8\sqrt{3}L\rho<1\), then we can always find \(\gamma>0\) such that the last condition is satisfied. In addition, the condition \(1-M\eta^{2}\geq 0\) is equivalent to \(M\eta^{2}=48L^{2}(3\gamma+2\rho)^{2}\leq 1\), which automatically holds.
Using \(z^{k}=x^{k-1}-\gamma\hat{w}^{k-1}=x^{k-1}-\gamma w^{k-1}+\gamma(w^{k-1}-\hat{w }^{k-1})\) from (APEG), \(\langle w^{k-1},x^{k-1}-x^{\star}\rangle\geq-\rho\|w^{k-1}\|^{2}\), and \(-\langle w^{k-1},\hat{w}^{k-1}\rangle=-\langle w^{k-1},\hat{w}^{k-1}-w^{k-1} \rangle-\|w^{k-1}\|^{2}\geq-\frac{3}{2}\|w^{k-1}\|^{2}-\frac{1}{2}\|w^{k-1}- \hat{w}^{k-1}\|^{2}\), we have
\[\hat{\mathcal{P}}_{k} = \|z^{k}+t_{k}(y^{k}-z^{k})-x^{\star}-\tfrac{b_{k}}{2t_{k}}w^{k-1} \|^{2}+c_{k}\|w^{k-1}-\hat{w}^{k-1}\|^{2} \tag{122}\] \[+ \left(a_{k}-\tfrac{b_{k}^{2}}{4t_{k}^{2}}\right)\|w^{k-1}\|^{2}+ \tfrac{b_{k}}{t_{k}}\langle w^{k-1},x^{k-1}-x^{\star}\rangle-\tfrac{\gamma b_ {k}}{t_{k}}\langle w^{k-1},\hat{w}^{k-1}\rangle\] \[\geq \left(a_{k}-\tfrac{b_{k}^{2}}{4t_{k}^{2}}-\tfrac{(3\gamma+2\rho) b_{k}}{2t_{k}}\right)\|w^{k-1}\|^{2}+\left(c_{k}-\tfrac{\gamma b_{k}}{2t_{k}} \right)\|w^{k-1}-\hat{w}^{k-1}\|^{2}.\]
Since \(b_{k}=\frac{b_{0}(k+1)(k+2)}{2}\) and \(a_{k}=\frac{b_{k}(2\gamma t_{k}+\eta)}{4t_{k}}=\frac{b_0(k+1)(\gamma k+5\gamma+2\rho)}{4}\), by choosing \(c_{k}:=\frac{b_{k}(\gamma\nu_{k}+5\eta\theta_{k})}{2\nu_{k}}=\frac{b_0(k+1)\left[(31\gamma+20\rho)(k+1)+\gamma\right]}{4}\), we can show from the last inequality that
\[\hat{\mathcal{P}}_{k} \geq \left(a_{k}-\tfrac{b_{k}^{2}}{4t_{k}^{2}}-\tfrac{(3\gamma+2\rho)b_{k}}{2t_{k}}\right)\|w^{k-1}\|^{2}+\left(c_{k}-\tfrac{\gamma b_{k}}{t_{k}}\right)\|w^{k-1}-\hat{w}^{k-1}\|^{2}\] \[= \tfrac{b_{0}(k+1)\left[(4\gamma-b_{0})(k+2)+b_{0}\right]}{16}\|w^{k-1}\|^{2}+\tfrac{b_{0}(k+1)\left[(31\gamma+20\rho)(k+1)-\gamma\right]}{4}\|w^{k-1}-\hat{w}^{k-1}\|^{2}.\]
Finally, if we choose \(b_{0}:=2\gamma\), then \(\hat{\mathcal{P}}_{k}\geq\frac{\gamma^{2}(k+1)(k+3)}{4}\|w^{k-1}\|^{2}\). Since \(\hat{\mathcal{P}}_{k}\leq\hat{\mathcal{P}}_{0}=a_{0}\|w^{-1}\|^{2}+b_{0} \langle w^{-1},z^{0}-y^{0}\rangle+\|z^{0}+t_{0}(y^{0}-z^{0})-x^{\star}\|^{2}+c _{0}\|w^{-1}-\hat{w}^{-1}\|^{2}\), and \(y^{0}=z^{0}\) and \(w^{-1}=\hat{w}^{-1}=0\), we get \(\hat{\mathcal{P}}_{k}\leq\hat{\mathcal{P}}_{0}=\|y^{0}-x^{\star}\|^{2}\). Putting these steps together, we can conclude that \(\|w^{k-1}\|^{2}\leq\frac{4\|y^{0}-x^{\star}\|^{2}}{\gamma^{2}(k+1)(k+3)}\), which is exactly (115).
**Remark 8.1**.: The condition \(16L^{2}\left[3(3\gamma+2\rho)^{2}+\gamma(2\gamma+\rho)\right]\leq 1\) in Theorem 8.2 covers a range of \(\gamma\), obtained by solving a quadratic inequality in \(\gamma\). Comparing (APEG) to the accelerated extragradient methods in [21] for (NE), we can see that [21] uses variable stepsizes throughout, while (APEG) allows \(\gamma\) and \(\eta\) to be constant. However, (APEG) only achieves big-O convergence rates instead of the small-o convergence rates obtained in [21].
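Analogously to (AEG), a minimal sketch of (APEG) with \(T=0\) is given below (our illustration only; the toy operator and the particular \(\gamma\) saturating the condition of Theorem 8.2 with \(\rho=0\) are arbitrary choices). Note that only one evaluation of \(F\) is performed per iteration.

```python
import numpy as np

# (APEG) with T = 0: compared to (AEG), hat{w}^k = F y^k replaces F x^k, so
# only one evaluation of F is needed per iteration.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = lambda x: A @ x

L, rho = 1.0, 0.0
gamma = 1.0 / (4.0 * np.sqrt(29.0) * L)  # saturates the condition of Theorem 8.2 for rho = 0
eta = 2.0 * (3.0 * gamma + 2.0 * rho)
y = np.array([1.0, -2.0])                # y^0
z = y.copy()                             # z^0 = y^0
w_hat_prev = np.zeros(2)                 # hat{w}^{-1} = 0

for k in range(5000):
    t, t_next = k + 2, k + 3
    eta_hat = (t - 1) * eta / t
    theta, nu = (t - 1) / t_next, t / t_next
    Fy = F(y)                                # the single F evaluation
    x = y - eta * Fy + eta_hat * w_hat_prev  # resolvent step with T = 0
    z_next = x - gamma * Fy                  # hat{w}^k = F y^k when T = 0
    y = z_next + theta * (z_next - z) + nu * (y - z_next)
    z, w_hat_prev = z_next, Fy

print(np.linalg.norm(F(x)))              # residual at x^k, O(1/k) by (115)
```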
## 9 Conclusion and Further Remarks
In this paper, we have provided a survey of classical and recent results on the sublinear convergence rates of the extragradient (EG) method and its variants. We present the full proofs of all the results discussed in the paper, and many of these proofs are new in certain aspects. Classical convergence results of EG-type algorithms typically rely on monotonicity assumptions, while recent developments extend EG-type methods to weak-Minty solutions and co-hypomonotone settings. In addition, last-iterate convergence rates have been investigated for several EG variants, though this line of research remains incomplete. Various extensions to stochastic and randomized models have also been studied. EG-type methods have been widely applied in machine learning, particularly in GANs, online learning, reinforcement learning, and robust optimization. These algorithms have shown their efficiency in practice, especially for constant and adaptive stepsize variants.
Accelerated variants of EG have also attracted significant attention, including methods relying on Halpern's fixed-point iteration and Nesterov's accelerated techniques. While several works have focused on theoretical aspects of EGs such as iteration-complexity and last-iterate convergence rates, the practical performance of accelerated EG variants remains largely unexplored and requires further investigation. It is still unclear whether accelerated variants of EG can outperform their classical counterparts, which opens up a new research question for our future work. In addition, establishing tighter convergence rates (e.g., small-o rates) as well as convergence of sequences remains largely open for several variants discussed in this paper.
EGs have been extensively studied for several decades, with numerous researchers making remarkable contributions to the field. The theory, algorithms, and applications of EGs have been expanded to various fields, including economics and machine learning. However, given the breadth of the literature on EGs, this paper can only survey a small proportion of recent works on sublinear convergence rates for both non-accelerated and accelerated variants in deterministic settings, and we could not fully cover many other works, including both classical and recent developments. We hope that this paper will provide a useful starting point for further exploration of the recent literature on minimax problems and their extensions. We also wish to survey prominent applications of minimax problems and nonlinear inclusions in different fields in future work.
**Data availability.** The author confirms that this paper does not contain any data.
**Acknowledgements.** This work is partially supported by the National Science Foundation (NSF), grant no. NSF-RTG DMS-2134107 and the Office of Naval Research (ONR), grant No. N00014-20-1-2088.
|
2301.01189 | On the long-term archiving of research data | Accessing research data at any time is what FAIR (Findable Accessible
Interoperable Reusable) data sharing aims to achieve at scale. Yet, we argue
that it is not sustainable to keep accumulating and maintaining all datasets
for rapid access, considering the monetary and ecological cost of maintaining
repositories. Here, we address the issue of cold data storage: when to dispose
of data for offline storage, how can this be done while maintaining FAIR
principles and who should be responsible for cold archiving and long-term
preservation. | Cyril Pernet, Claus Svarer, Ross Blair, John D. Van Horn, Russell A. Poldrack | 2023-01-03T16:42:27Z | http://arxiv.org/abs/2301.01189v1 | # On the long-term archiving of research data
###### Abstract
Accessing research data at any time is what FAIR (Findable Accessible Interoperable Reusable) data sharing aims to achieve at scale. Yet, we argue that it is not sustainable to keep accumulating and maintaining all datasets for rapid access, considering the monetary and ecological cost of maintaining repositories. Here, we address the issue of cold data storage: when to dispose of data for offline storage, how can this be done while maintaining FAIR principles and who should be responsible for cold archiving and long-term preservation.
1 Neurobiology Research Unit, Rigshospitalet, Copenhagen, Denmark
2 Department of Psychology & Stanford Center for Reproducible Neuroscience, Stanford University, California, USA
3 Department of Psychology & School of Data Science, University of Virginia, Virginia, USA
* corresponding author: [email protected]
data sharing, FAIR, cost, long-term archiving
**Statements and Declarations**: The authors declare no conflict of interest
**Data and code availability**: the raw OpenNeuro dataset count and code used are openly accessible @[https://github.com/CPernet/OpenNeuroCount](https://github.com/CPernet/OpenNeuroCount)
## Introduction
One of the goals of data curation is to ensure that data are findable and accessible to both designated users and reusers, on a day-to-day basis. The frequency of access to data is what defines their temperature, with 'hot' data being accessed constantly, 'warm' data needing regular access, and 'cold' data seeing little usage. While efforts have been made to provide large-scale warm data repositories, there is little discussion about what to do with data as they become colder over time, which is also related to general recommendations on data retention policies and long-term organisational sustainability (NSTC, 2022).
Digital Data Curation includes the preservation, storage and disposal of data (Higgins, 2008). By disposal, it is usually assumed the data have not been selected for long-term curation and preservation, and that data are either transferred to a separate archive or destroyed. Here we propose to add to the 'disposal' category the possibility of 'cold' archival storage (data moved offline but never destroyed). Because data in cold storage are unlikely to serve users on a day-to-day basis, cold archiving naturally sits within this part of the data curation life cycle. Unlike other 'disposed' data that are destroyed or not selected for curation, 'cold data' are fully curated data. Cold archival storage is necessary to ensure long-term preservation and retention, while also reducing the unnecessary cost of maintaining them as warm data.
#### When to dispose of research data?
Usage frequency is what primarily defines data temperatures. Like molecules, the more agitation there is, the hotter the temperature. Data disposal in this context becomes about defining a frequency threshold at which one should dispose of the data, and such a threshold can be understood as a trade-off between the monetary and ecological cost of warm storage vs. the expected utility of the data. As an example, we analysed the average download counts (download byte size/dataset size) for 167 unique datasets on the OpenNeuro platform (Markiewicz, et al., 2021) deposited since January 2020. Counts were realigned from the time of deposit, with \(\sim\)74% of datasets having been deposited for at least 24 months, and the smallest duration being 14 months (figure 1). From the total download counts, we can observe outliers accounting for more than 16% of the downloads and representing the top \(\sim\)6% of datasets. When plotting those data over time, we can observe that for the highly accessed datasets the download count does not slow down, while for datasets with an average or low total download count we observe a downward trend.
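As an illustration of how such a usage-frequency threshold could be operationalized in practice, the short sketch below flags candidate datasets for cold archiving from a per-dataset download log; the file name, column names, and the 24-month/18-download thresholds are hypothetical choices mirroring figure 1, not part of the OpenNeuroCount repository.

```python
import pandas as pd

# Hypothetical per-dataset download log with columns: dataset_id, month
# (months since deposit), downloaded_bytes, dataset_bytes.
log = pd.read_csv("downloads.csv")

# Download count per record, as in figure 1: bytes served / dataset size.
log["downloads"] = log["downloaded_bytes"] / log["dataset_bytes"]
totals = log.groupby("dataset_id")["downloads"].sum()
age = log.groupby("dataset_id")["month"].max()

# Flag candidates for cold archiving: deposited for at least 24 months and
# fewer than 18 total downloads (the low-access group of figure 1).
cold_candidates = totals[(age >= 24) & (totals < 18)].index.tolist()
print(cold_candidates)
```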
When it comes to decisions about which data to preserve, what matters most is their utility. Data can be downloaded often because their quality, richness and complexity allow them to be usefully further analysed or repurposed to address new scientific questions. A good example of this is given by INDI datasets (1000 Functional Connectomes Project, ADHD-200, Autism Brain Imaging Data Exchange (ABIDE) and Consortium for Reliability and Reproducibility (CoRR) data) which have been re-analyzed many times leading to high-quality publications (913 as of March 2017), with an estimated savings of over one million dollars (Milham et al., 2018). Datasets can also be downloaded due to their simplicity, making them useful for, e.g., method development or teaching, cases for which metrics are lacking. We believe those scenarios to be likely because the most downloaded datasets on OpenNeuro are not just the largest; some are also small datasets (as computed in figure 1, the most downloaded datasets are on average \(\sim\)54 GB [min 5.6 MB, max 847 GB] vs \(\sim\)34 GB [min 311 MB, max 171 GB] for the least downloaded ones, one-sided t-test t(26.35)\(=\)0.9, p\(=\)0.18).
Figure 1: Average dataset download count over time from January 2020 on [https://openneuro.org/](https://openneuro.org/). The left side shows the download counts for the high-access datasets (\(>\)75 total downloads, in red), those with an average access count (between 24 and 35 total downloads, in green) and those with a low access count (\(<\)18 total downloads, in blue), with thick lines representing a 2nd-order polynomial fit. At the bottom, bar plots show the total number of datasets included each month. The right side shows the total download counts (circles and crosses represent individual datasets, crosses are outliers based on the median absolute deviation, the grey bars represent the mean, 31, with its 95% Highest Density Confidence Interval [24 35], and the blue area is the non-parametric kernel density estimate of the total data count).
Usage frequency is, however, only a proxy of utility and it is self-evident that some data can be extremely valuable in very specific contexts while being rarely accessed (e.g. data on the sequence of a little-studied virus might become very important if that virus goes on to later result in an epidemic). While usage frequency can be used to decide when to dispose of data, utility is thus the concept that enables decisions about when to dispose of data and also separates data destruction from cold archiving; it cannot be approximated by usage frequency alone. Since data utility is highly field-specific and context-specific, we cannot provide general recommendations on what constitutes useful data. Given that the data discussed here have already been curated, and given the cost of data collection and curation, we can, however, guard against destruction and suggest cold archiving by default. Going back to the OpenNeuro dataset examples, we could consider moving the lower end of the datasets (~38%) to cold archiving after 24 to 36 months. The question that follows is whether cold data can still remain Findable, Accessible, Interoperable and Reusable (Wilkinson et al., 2016).
### Cold FAIR data
In the context of web-based repositories, findability is highly dependent on globally unique identifiers (GUIDs), also called unique persistent identifiers (PIDs), and datasets are typically assigned such identifiers, often in the form of a Digital Object Identifier (DOI). As one moves data to cold archiving, it is essential to maintain metadata information, now indicating the data's temperature. As FAIR also focuses on machine actionability and cold archiving has more often than not been a human enterprise, a clear mechanism must be in place to request such data if needed, indicating how to make the request and what delays to expect. For cloud-serviced repositories, this can be as simple as changing the data's cloud storage tier to cold, with the consequence of more expensive and slower retrieval if access is required, but an overall lower ecological and financial cost since those data are rarely accessed.
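As a purely hypothetical illustration of such a tiering change on one provider (Amazon S3, via the boto3 client): the bucket name, prefix and 24-month threshold below are invented for the example, and other cloud providers expose analogous archive tiers.

```python
# Hypothetical illustration of moving a rarely accessed dataset prefix to a
# cold tier on Amazon S3 (other clouds offer analogous archive tiers).
# Bucket name, prefix and the 24-month threshold are assumptions for the example.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-neuro-repository",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "cold-archive-low-use-datasets",
                "Status": "Enabled",
                "Filter": {"Prefix": "datasets/low-usage/"},
                # after roughly 24 months, transition objects to the deep archive tier
                "Transitions": [{"Days": 730, "StorageClass": "DEEP_ARCHIVE"}],
            }
        ]
    },
)
```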
As one thinks about long-term data preservation, we have to keep in mind that, in general, neither digital media nor institutions are reliably durable and it is thus necessary to plan for data longevity, particularly with regard to how the data will be preserved if the repository were to cease operations. This is exemplified by the fMRI Data Center project (Van Horn & Gazzaniga, 2013) which curated and archived data from over 100 complete fMRI studies between 2000 and 2006 but had to cease its activities due to the sunsetting of the NIH Human Brain Project (HBP) initiative and the subsequent lack of new funding opportunities to support continued operations. Current repositories must learn from this and implement solutions. Compared to 2006, there are two major differences. First, there exist multiple international repositories for data sharing within the neuroimaging field, while fMRIDC was the only repository of its kind. Second, most neuroimaging repositories, if not all, now rely on versions of the same dataset organisation, BIDS (Brain Imaging Data Structure; Gorgolewski et al., 2016), alleviating the need for repository-centric curation models. These two elements make it easier to facilitate the transfer of datasets between repositories with little management cost, provided licensing or data usage agreements are compliant with the repository policy. It would, thus, be wise to have some coordination between repositories allowing datasets to be moved between them. Just as important as being able to transfer data, new tools must be developed to translate metadata schemas between repositories, allowing all datasets to be findable on any repository while pointing to where they are accessible, creating a redundant and federated network of datasets' metadata. As a repository is closing, not all data would need to be moved, only the hottest ones, while others could be cold archived. Cold data can be returned to the data creators, or stored somewhere else depending on agreements that must here be in place (i.e. repositories should have that information in their data policies), with updated metadata available through partner repositories.
Interoperability and Reusability depend primarily on the data format used and data integrity. For data formats, for hot and warm data alike, we can only recommend open and well-documented formats allowing data to be retrieved and read by anyone at any time. Reusability also depends on data integrity, which is an important aspect of any data storage system. Four sources of risk have been proposed and operationalised: storage hardware; physical environment; curating institution; global environment (Altman & Landau, 2020), and all those factors must be considered. Hot and warm data integrity is often ensured by simultaneously using multiple copies of the data, for instance having regular backup copies on physically separated servers, themselves using redundant arrays of independent disks, thus protecting data from drive failure. Cold archiving, however, often relies on a single copy
and thus bit-level information integrity is of greater concern. A typical recommendation for cold storage is to use magnetic tape, in particular Linear Tape-Open (LTO), as such tapes can hold 10-14 TB of data for about $50, with a bit-wise error rate of 1 in 10\({}^{19}\) and up to a 30-year shelf-life. These should be stored in a secured, flood-resistant and temperature-controlled environment ensuring long-term preservation. Planning for such activity in case of repository shutdown is thus necessary, with future data access planned to be managed by institutions like university libraries, possibly in combination with services allowing cloud retrieval (e.g. Amazon Glacier Deep Archive, Google or Azure archive storage), thus allowing independent storage and retrieval when needed.
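A minimal, assumption-laden sketch of how the bit-level integrity of a cold-archived dataset could be tracked with a SHA-256 manifest, written before archiving and re-checked after retrieval (paths are hypothetical):

```python
# Minimal sketch: write a SHA-256 manifest for a curated dataset before cold
# archiving, and re-verify it after retrieval. Paths are hypothetical.
import hashlib
from pathlib import Path

def sha256(path: Path, chunk: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def write_manifest(dataset_dir: Path, manifest: Path) -> None:
    lines = [f"{sha256(p)}  {p.relative_to(dataset_dir)}"
             for p in sorted(dataset_dir.rglob("*")) if p.is_file()]
    manifest.write_text("\n".join(lines) + "\n")

def verify_manifest(dataset_dir: Path, manifest: Path) -> bool:
    ok = True
    for line in manifest.read_text().splitlines():
        digest, rel = line.split("  ", 1)
        ok &= sha256(dataset_dir / rel) == digest
    return ok

# example usage (hypothetical dataset directory):
# write_manifest(Path("ds000001"), Path("ds000001.sha256"))
# assert verify_manifest(Path("ds000001"), Path("ds000001.sha256"))
```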
## Conclusion
Researchers happily take responsibility for data creation and increasingly they are taking up the challenging but necessary task of data sharing. As data sharing becomes the norm, it is essential to clarify responsibilities about who ensures access to the data and in what form. Repositories should clearly state for how long data are guaranteed or just likely to be shared, state who decides when data should move to cold storage (using which criteria) and who is responsible for cold storage (Currie and Kilbride, 2021). Once those have been decided, simple steps can be taken to keep data FAIR: (i) keep metadata alive by making them available through multiple sources with the PID, (ii) have mechanisms in place to retrieve cold data, and (iii) have procedures ensuring the physical integrity of cold data.
## Acknowledgements
C.R.P. is supported by the Novo Nordisk Fonden NNF200OC0063277.
|
2307.05061 | Maximizing Social Welfare in Score-Based Social Distance Games | Social distance games have been extensively studied as a coalition formation
model where the utilities of agents in each coalition were captured using a
utility function u that took into account distances in a given social network.
In this paper, we consider a non-normalized score-based definition of social
distance games where the utility function u_v depends on a generic scoring
vector v, which may be customized to match the specifics of each individual
application scenario.
As our main technical contribution, we establish the tractability of
computing a welfare-maximizing partitioning of the agents into coalitions on
tree-like networks, for every score-based function u_v. We provide more
efficient algorithms when dealing with specific choices of u_v or simpler
networks, and also extend all of these results to computing coalitions that are
Nash stable or individually rational. We view these results as a further strong
indication of the usefulness of the proposed score-based utility function: even
on very simple networks, the problem of computing a welfare-maximizing
partitioning into coalitions remains open for the originally considered
canonical function u. | Robert Ganian, Thekla Hamm, Dušan Knop, Sanjukta Roy, Šimon Schierreich, Ondřej Suchý | 2023-07-11T07:10:19Z | http://arxiv.org/abs/2307.05061v1 | # Maximizing Social Welfare in Score-Based Social Distance Games
###### Abstract
Social distance games have been extensively studied as a coalition formation model where the utilities of agents in each coalition were captured using a utility function \(\mathsf{u}\) that took into account distances in a given social network. In this paper, we consider a non-normalized score-based definition of social distance games where the utility function \(u^{\vec{s}}\) depends on a generic scoring vector \(\overline{\mathsf{s}}\), which may be customized to match the specifics of each individual application scenario.
As our main technical contribution, we establish the tractability of computing a welfare-maximizing partitioning of the agents into coalitions on tree-like networks, for every score-based function \(u^{\vec{s}}\). We provide more efficient algorithms when dealing with specific choices of \(u^{\vec{s}}\) or simpler networks, and also extend all of these results to computing coalitions that are Nash stable or individually rational. We view these results as a further strong indication of the usefulness of the proposed score-based utility function: even on very simple networks, the problem of computing a welfare-maximizing partitioning into coalitions remains open for the originally considered canonical function \(\mathsf{u}\).
## 1 Introduction
Coalition formation is a central research direction within the fields of algorithmic game theory and computational social choice. While there are many different scenarios where agents aggregate into coalitions, a pervasive property of such coalitions is that the participating agents exhibit _homophily_, meaning that they prefer to be in coalitions with other agents which are similar to them. It was this observation that motivated Branzei and Larson to introduce the notion of _social distance games_ (SDG) as a basic model capturing the homophilic behavior of agents in a social network [15].
Branzei and Larson's SDG model consisted of a graph \(G=(V,E)\) representing the social network, with \(V\) being the agents and \(E\) representing direct relationships or connections between the agents. To capture the utility of an agent \(v\) in a coalition \(C\subseteq V\), the model considered a single function: \(u(v,C)=\frac{1}{|C|}\cdot\sum_{w\in C\setminus\{v\}}\frac{1}{d_{C}(v,w)}\) where \(d_{C}(v,w)\) is the distance between \(v\) and \(w\) inside \(C\).
Social distance games with the aforementioned utility function \(\mathsf{u}\) have been the focus of extensive study to date, with a number of research papers specifically targeting algorithmic and complexity-theoretic aspects of forming coalitions with maximum social welfare [2, 3, 4, 29]. Very recently, Flammini et al. [22, 23] considered a generalization of \(\mathsf{u}\) via an adaptive real-valued scoring vector which weights the contributions to an agent's utility according to the distances of other agents in the coalition, and studied the price of anarchy and stability for non-negative scoring vectors. However, research to date has not revealed any polynomially tractable fragments for the problem of computing coalition structures
with maximum social welfare (with or without stability-based restrictions on the behavior of individual agents), except for the trivial cases of complete (bipartite) graphs [15] and trees [36].
Our Contribution. The indisputable appeal of having an adaptive scoring vector--as opposed to using a single canonical utility function u--lies in the fact that it allows us to capture many different scenarios with different dynamics of coalition formation. However, it would also be useful for such a model to be able to assign negative scores to agents at certain (larger) distances in a coalition. For instance, guests at a gala event may be keen to accept the presence of friends-of-friends (i.e., agents at distance 2) at a table, while friends-of-friends may be less welcome in private user groups on social networks, and the presence of complete strangers in some scenarios may even be socially unacceptable.
Here, we propose the study of social distance games with a family of highly generic non-normalized score-based utility functions. Our aim here is twofold. First of all, these should allow us to better capture situations where agents at larger distances are unwelcome or even unacceptable for other agents. At the same time, we also want to obtain algorithms capable of computing welfare-maximizing coalition structures in such general settings, at least on well-structured networks.
Our model considers a graph \(G\) accompanied with an integer-valued, fixed but adaptive _scoring vector_\(\widetilde{\mathrm{s}}\) which captures how accepting agents are towards other agents based on their pairwise distance.1 The utility function \(u^{\widetilde{\mathrm{s}}}(v,C)\) for an agent \(v\) in coalition \(C\) is then simply defined as \(u^{\widetilde{\mathrm{s}}}(v,C)=\sum_{w\in C\setminus\{v\}}\widetilde{\mathrm{ s}}(d_{C}(v,w))\); we explicitly remark that, unlike previous models, this is not normalized with respect to the coalition size. As one possible example, a scoring vector of \((1,0,-1)\) could be used in scenarios where agents are welcoming towards friends, indifferent to friends-of-friends, slightly unhappy about friends-of-friends-of-friends (i.e., agents at distance 3), and unwilling to group up with agents who are at distance greater than 3 in \(G\). A concrete example which also illustrates the differences to previous SDG models is provided in Figure 1.
Footnote 1: Formal definitions are provided in the Preliminaries.
While non-normalized scoring functions have not previously been considered for social distance games, we view them as a natural way of modeling agent utilities; in fact, similar ideas have been successfully used in models for a variety of other phenomena including, e.g., committee voting [21], resource allocation [14, 13] and Bayesian network structure learning [25, 37]. Crucially, it is not difficult to observe that many of the properties originally established by Branzei and Larson for SDGs also hold for our non-normalized score-based model with every choice of \(\widetilde{\mathrm{s}}\), such as the small-world property [15, 28] and
Figure 1: A social network illustrating the difference of maximising social welfare in our model compared to previous SDG models. (1) In Branzei and Larson’s SDG model, the welfare-maximum outcome is the grand coalition. (2) A welfare-maximum outcome in the normalized model of Flammini et al. with a scoring vector of \((1,0,0,0)\) is marked with dashed lines, while the same scoring vector in our non-normalized model produces the grand coalition. (3) A scoring vector of \(\widetilde{\mathrm{s}}=(1,0,-1)\) in our model produces the welfare-maximizing outcome marked with bold lines, with a welfare of 18. (4) A ‘less welcoming’ scoring vector of \(\widetilde{\mathrm{s}}=(1,-3)\) leads to the welfare maximizing dash-circled partition with a welfare of 14 (compared to only 12 for the bold-circled one).
the property that adding an agent with a close (distant) connection to a coalition positively (negatively) impacts the utilities of agents [15]. In addition, the proposed model can also directly capture the notion of _enemy aversion_ with symmetric preferences [5, 35] by setting \(\overline{s}=(1)\).
Aside from the above, a notable benefit of the proposed model lies on the complexity-theoretic side of things. Indeed, a natural question that arises in the context of SDG is whether we can compute an outcome--a partitioning of the agents into coalitions--which maximizes the social welfare (defined as the sum of the utilities of all agents in the network). This question has been studied in several contexts, and depending on the setting one may also require the resulting coalitions to be stable under _individual rationality_ (meaning that agents will not remain in coalitions if they have negative utility) or _Nash stability_ (meaning that agents may leave to join a different coalition if it would improve their utility). But in spite of the significant advances in algorithmic aspects of other coalition formation problems in recent years [10, 17, 24, 17], we lack any efficient algorithm capable of producing such a welfare-optimal partitioning when using the utility function u even for the simplest types of networks.
To be more precise, when viewed through the refined lens of _parameterized complexity_ [18, 20] that has recently become a go-to paradigm for such complexity-theoretic analysis, no tractable fragments of the problem are known. In particular, the problem of computing a welfare-maximizing outcome under any of the previously considered models is not even known to admit an XP algorithm when parameterized by the minimum size of a vertex cover in the social network \(G\)--implying a significant gap towards potential fixed-parameter tractability. This means that the complexity of welfare-maximization under previous models remains wide open even under the strongest non-trivializing restriction of the network.
As our main technical contribution, we show that non-normalized score-based utility functions do not suffer from this drawback and can in fact be computed efficiently under fairly mild restrictions on \(G\). Indeed, as our first algorithmic result we obtain an XP algorithm that computes a welfare-maximizing partitioning of the agents into coalitions parameterized by the treewidth of \(G\), and we strengthen this algorithm to also handle additional restrictions on the coalitions in terms of individual rationality or Nash stability. As with numerous treewidth-based algorithms, we achieve this result via leaf-to-root dynamic programming along a tree-decomposition. However, the records we keep during the dynamic program are highly non-trivial and require an advanced branching step to correctly pre-compute the distances in the stored records. We remark that considering networks of small treewidth is motivated not only by the fundamental nature of this structural graph measure, but also by the fact that many real-world networks exhibit bounded treewidth [34].
In the next part of our investigation, we show that when dealing with simple scoring functions or bounded-degree networks, these results can be improved to fixed-parameter algorithms for welfare-maximization (including the cases where we require the coalitions to be individually rational or Nash stable). This is achieved by combining structural insights into the behavior of such coalitions with a different dynamic programming approach. Furthermore, we also use an entirely different technique based on quadratic programming to establish the fixed-parameter tractability of all 3 problems under consideration w.r.t. the minimum size of a vertex cover in \(G\). Finally, we conclude with some interesting generalizations and special cases of our model and provide some preliminary results in these directions.
## 2 Preliminaries
We use \(\mathbb{N}\) to denote the set of natural numbers, i.e., positive integers, and \(\mathbb{Z}\) for the set of integers. For \(i\in\mathbb{N}\), we let \([i]=\{1,\ldots,i\}\) and \([i]_{0}=[i]\cup\{0\}\). We assume basic familiarity with graph-theoretic terminology [19].
Social Distance Games.A _social distance game_ (SDG) consists of a set \(N=\{1,\ldots,n\}\) of _agents_, a simple undirected graph \(G=(N,E)\) over the set of agents called a _social network_, and a non-increasing _scoring vector_\(\overline{\mathrm{s}}=(s_{1},\ldots,s_{\delta})\) where a) for each \(a\in[\delta]\), \(s_{a}\in\mathbb{Z}\) and b) for each \(a\in[\delta-1]\), \(s_{a+1}\leq s_{a}\).
In some cases, it will be useful to treat \(\overline{\mathrm{s}}\) as a function from \(\mathbb{N}\) rather than a vector; to this end, we set \(\overline{\mathrm{s}}(a)=s_{a}\) for each \(a\leq\delta\) and \(\overline{\mathrm{s}}(a)=-\infty\) when \(a>\delta\). The value "\(-\infty\)" here represents an inadmissible outcome, and formally we set \(-\infty+z=-\infty\) and \(-\infty<z\) for each \(z\in\mathbb{Z}\).
A _coalition_ is a subset \(C\subseteq N\), and an outcome is a partitioning \(\Pi=(C_{1},\ldots,C_{\ell})\) of \(N\) into coalitions; formally, \(\bigcup_{i=1}^{\ell}C_{i}=N\), every \(C_{i}\in\Pi\) is a coalition, and all coalitions in \(\Pi\) are pairwise disjoint. We use \(\Pi_{i}\) to denote the coalition the agent \(i\in N\) is part of in the outcome \(\Pi\). The _utility_ of an agent \(i\in N\) for a coalition \(\Pi_{i}\in\Pi\) is
\[\mathrm{u}^{\overline{\mathrm{s}}}(i,\Pi_{i})=\sum_{j\in\Pi_{i}\setminus\{i \}}\overline{\mathrm{s}}(\mathrm{dist}_{\Pi_{i}}(i,j)),\]
where \(\mathrm{dist}_{\Pi_{i}}(i,j)\) is the length of a shortest path between \(i\) and \(j\) in the graph \(G[\Pi_{i}]\), i.e., the subgraph of \(G\) induced on the agents of \(\Pi_{i}\). We explicitly note that if \(\Pi_{i}\) is a singleton coalition then \(\mathrm{u}^{\overline{\mathrm{s}}}(i,\Pi_{i})=0\). Moreover, in line with previous work [15] we set \(\mathrm{dist}_{\Pi_{i}}(i,j):=+\infty\) if there is no \(i\)-\(j\) path in \(G[\Pi_{i}]\), meaning that \(\mathrm{u}^{\overline{\mathrm{s}}}(i,\Pi_{i})=-\infty\) whenever \(G[\Pi_{i}]\) is not connected.
For brevity, we drop the superscript from \(u^{\overline{\mathrm{s}}}\) whenever the scoring vector \(\overline{\mathrm{s}}\) is clear from the context. To measure the satisfaction of the agents with a given outcome, we use the well-known notion of _social welfare_, which is the total utility of all agents for an outcome \(\Pi\), that is,
\[\mathrm{SW}^{\overline{\mathrm{s}}}(\Pi)=\sum_{i\in N}\mathrm{u}^{\overline{ \mathrm{s}}}(i,\Pi_{i}).\]
Here, too, we drop the superscript specifying the scoring vector whenever it is clear from the context.
We assume that all our agents are selfish, behave strategically, and their aim is to maximize their utility. To do so, they can perform _deviations_ from the current outcome \(\Pi\). We say that \(\Pi\) admits an _IR-deviation_ if there is an agent \(i\in N\) such that \(\mathrm{u}(i,\Pi_{i})<0\); in other words, agent \(i\) prefers to be in a singleton coalition over its current coalition. If no agent admits an IR-deviation, the outcome is called _individually rational_ (IR). We say that \(\Pi\) admits an _NS-deviation_ if there is an agent \(i\) and a coalition \(C\in\Pi\cup\{\emptyset\}\) such that \(\mathrm{u}(i,C\cup\{i\})>\mathrm{u}(i,\Pi_{i})\). \(\Pi\) is called _Nash stable_ (NS) if no agent admits an NS-deviation. We remark that other notions of stability exist in the literature [14, Chapter 15], but Nash stability and individual rationality are the most basic notions used for stability based on individual choice [30, 39].
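To make these definitions concrete, the following minimal sketch (not code from the paper) computes utilities by breadth-first search inside the induced subgraph and checks individual rationality and Nash stability; the tiny example network is made up.

```python
# Illustrative sketch of the definitions above (not the paper's code): utilities
# via BFS inside the induced subgraph, social welfare, and IR / Nash-stability
# checks. The toy network is made up.
from collections import deque

def dist_in(coalition, adj, i):
    """BFS distances from i inside the subgraph induced by `coalition`."""
    d = {i: 0}
    q = deque([i])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v in coalition and v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

def utility(i, coalition, adj, s):
    d = dist_in(coalition, adj, i)
    total = 0
    for j in coalition:
        if j == i:
            continue
        if j not in d or d[j] > len(s):      # unreachable or farther than delta
            return float("-inf")
        total += s[d[j] - 1]
    return total

def social_welfare(outcome, adj, s):
    return sum(utility(i, C, adj, s) for C in outcome for i in C)

def is_individually_rational(outcome, adj, s):
    return all(utility(i, C, adj, s) >= 0 for C in outcome for i in C)

def is_nash_stable(outcome, adj, s):
    for C in outcome:
        for i in C:
            current = utility(i, C, adj, s)
            targets = [D for D in outcome if D is not C] + [frozenset()]
            if any(utility(i, D | {i}, adj, s) > current for D in targets):
                return False
    return True

# toy network: a triangle {0,1,2} plus a pendant agent 3 attached to 2
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
s = (1, 0, -1)                                   # the scoring vector from the text
outcome = [frozenset({0, 1, 2, 3})]              # the grand coalition
print(social_welfare(outcome, adj, s),
      is_individually_rational(outcome, adj, s),
      is_nash_stable(outcome, adj, s))
```

On this toy network the grand coalition has social welfare 8 and is both individually rational and Nash stable under the scoring vector \((1,0,-1)\).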
Having described all the components in our score-based SDG model, we are now ready to formalize the three classes of problems considered in this paper. We note that even though these are stated as decision problems for complexity-theoretic reasons, each of our algorithms for these problems can also output a suitable outcome as a witness. For an arbitrary fixed scoring vector \(\overline{\mathrm{s}}\), we define:
\(\overline{\mathrm{s}}\)-SDG-WF
Input: A social network \(G=(N,E)\), desired welfare \(b\in\mathbb{N}\).
Question: Does the distance game given by \(G\) and \(\overline{\mathrm{s}}\) admit an outcome with social welfare at least \(b\)?
\(\overline{\mathrm{s}}\)-SDG-WF-IR and \(\overline{\mathrm{s}}\)-SDG-WF-Nash are then defined analogously, but with the additional condition that the outcome must be individually rational or Nash stable, respectively.
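A brute-force reference solver, exponential in the number of agents and intended only as a sanity check on tiny instances (it is not one of the algorithms developed in this paper), can decide \(\overline{\mathrm{s}}\)-SDG-WF by enumerating all partitions and comparing the best achievable social welfare with \(b\):

```python
# Brute-force reference solver for tiny instances of s-SDG-WF (exponential in n):
# enumerate all partitions of the agents and keep the best social welfare.
# A sanity-check sketch only; it is not one of the algorithms of the paper.
from collections import deque
from itertools import combinations

def utility(i, C, adj, s):
    d, q = {i: 0}, deque([i])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v in C and v not in d:
                d[v] = d[u] + 1
                q.append(v)
    if len(d) < len(C):
        return float("-inf")                 # G[C] is disconnected
    return sum(s[d[j] - 1] if d[j] <= len(s) else float("-inf")
               for j in C if j != i)

def partitions(items):
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for k in range(len(rest) + 1):
        for chosen in combinations(rest, k):
            block = frozenset((first,) + chosen)
            remaining = [x for x in rest if x not in block]
            for tail in partitions(remaining):
                yield [block] + tail

def best_outcome(adj, s):
    agents = list(adj)
    return max(
        ((sum(utility(i, C, adj, s) for C in P for i in C), P)
         for P in partitions(agents)),
        key=lambda t: t[0],
    )

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}   # same toy network as above
print(best_outcome(adj, s=(1, 0, -1)))               # answer Yes iff welfare >= b
```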
We remark that for each of the three problems, one may assume w.l.o.g. that \(s_{1}>0\); otherwise the trivial outcome consisting of \(|N|\) singleton coalitions is both welfare-optimal and stable. Moreover,
without loss of generality we assume \(G\) to be connected since an optimal outcome for a disconnected graph \(G\) can be obtained as a union of optimal outcomes in each connected component of \(G\).
Our last remark regarding the definition of our model is that it trivially also supports the well-known _small world_ property [28] that has been extensively studied on social networks. In their original work on SDGs, Branzei and Larson showed that their model exhibits the small world property by establishing a diameter bound of 14 in each coalition in a so-called _core partition_ [15]. Here, we observe that for each choice of \(\overline{s}\), a welfare-maximizing coalition will always have diameter at most \(\delta\).
Parameterized Complexity.The _parameterized complexity_ framework [18, 20] provides the ideal tools for the fine-grained analysis of computational problems which are \(\mathsf{NP}\)-hard and hence intractable from the perspective of classical complexity theory. Within this framework, we analyze the running times of algorithms not only with respect to the input size \(n\), but also with respect to a numerical parameter \(k\in\mathbb{N}\) that describes a well-defined structural property of the instance; the central question is then whether the superpolynomial component of the running time can be confined by a function of this parameter alone.
The most favorable complexity class in this respect is \(\mathsf{FPT}\) (short for "fixed-parameter tractable") and contains all problems solvable in \(f(k)\cdot n^{\mathcal{O}(1)}\) time, where \(f\) is a computable function. Algorithms with this running time are called _fixed-parameter algorithms_. A less favorable, but still positive, outcome is an algorithm with running time of the form \(n^{f(k)}\); problems admitting algorithms with such running times belong to the class \(\mathsf{XP}\).
Structural Parameters.Let \(G=(V,E)\) be a graph. A set \(U\subseteq V\) is a _vertex cover_ if for every edge \(e\in E\) it holds that \(U\cap e\neq\emptyset\). The _vertex cover number_ of \(G\), denoted \(\operatorname{vc}(G)\), is the minimum size of a vertex cover of \(G\). A _nice tree-decomposition_ of \(G\) is a pair \((\mathcal{T},\beta)\), where \(\mathcal{T}\) is a tree rooted at a node \(r\in V(\mathcal{T})\), \(\beta\colon V(\mathcal{T})\to 2^{V}\) is a function assigning each node \(x\) of \(\mathcal{T}\) its _bag_, and the following conditions hold:
* for every edge \(\{u,v\}\in E(G)\) there is a node \(x\in V(\mathcal{T})\) such that \(u,v\in\beta(x)\),
* for every vertex \(v\in V\), the set of nodes \(x\) with \(v\in\beta(x)\) induces a connected subtree of \(\mathcal{T}\),
* \(|\beta(r)|=|\beta(x)|=0\) for every _leaf_\(x\in V(\mathcal{T})\), and
* there are only three kinds of internal nodes in \(\mathcal{T}\):
* \(x\) is an _introduce node_ if it has exactly one child \(y\) such that \(\beta(x)=\beta(y)\cup\{v\}\) for some \(v\notin\beta(y)\),
* \(x\) is a _join node_ if it has exactly two children \(y\) and \(z\) such that \(\beta(x)=\beta(y)=\beta(z)\), or
* \(x\) is a _forget node_ if it has exactly one child \(y\) such that \(\beta(x)=\beta(y)\setminus\{v\}\) for some \(v\in\beta(y)\).
The _width_ of a nice tree-decomposition \((\mathcal{T},\beta)\) is \(\max_{x\in V(\mathcal{T})}|\beta(x)|-1\), and the treewidth \(\operatorname{tw}(G)\) of a graph \(G\) is the minimum width of a nice tree-decomposition of \(G\). Given a nice tree-decomposition and a node \(x\), we denote by \(G^{x}\) the subgraph induced by the set \(V^{x}=\bigcup_{y\text{ is a descendant of }x}\beta(y)\), where we suppose that \(x\) is a descendant of itself. It is well-known that optimal nice tree-decompositions can be computed efficiently [8, 31, 32].
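As a small illustration of the definition (a toy case rather than anything from the paper), a nice tree-decomposition of the path \(a\)-\(b\)-\(c\) can be written down explicitly as a chain of leaf, introduce and forget nodes of width 1:

```python
# Illustration of the definition above: a nice tree-decomposition of the path
# a - b - c, written as a chain of nodes from the (empty-bag) leaf up to the
# (empty-bag) root. Its width is 1, the treewidth of a path.
from dataclasses import dataclass

@dataclass
class Node:
    kind: str          # "leaf", "introduce", "forget" (a path needs no "join")
    bag: frozenset

decomposition = [                                # child -> ... -> root
    Node("leaf", frozenset()),
    Node("introduce", frozenset({"a"})),
    Node("introduce", frozenset({"a", "b"})),
    Node("forget", frozenset({"b"})),            # forget a
    Node("introduce", frozenset({"b", "c"})),
    Node("forget", frozenset({"c"})),            # forget b
    Node("forget", frozenset()),                 # forget c; this is the root r
]

width = max(len(n.bag) for n in decomposition) - 1
assert width == 1
# every edge of the path appears in some bag:
assert any({"a", "b"} <= n.bag for n in decomposition)
assert any({"b", "c"} <= n.bag for n in decomposition)
```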
Integer Quadratic Programming.Integer Quadratic Programming (IQP) over \(d\) dimensions can be formalized as the task of computing
\[\max\left\{x^{T}Qx\mid Ax\leq b,\,x\geq 0,\,x\in\mathbb{Z}^{d}\right\}\,,\] (IQP)
where \(Q\in\mathbb{Z}^{d\times d}\), \(A\in\mathbb{Z}^{m\times d}\), \(b\in\mathbb{Z}^{m}\). That is, IQP asks for an integral vector \(x\in\mathbb{Z}^{d}\) which maximizes the value of a quadratic form subject to satisfying a set of linear constraints.
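To make the notation concrete, the tiny example below evaluates an instance of (IQP) by brute force over a small bounded box; the matrices are made up, and this is of course not the fixed-parameter algorithm behind Proposition 1.

```python
# Tiny illustration of the (IQP) form: maximize x^T Q x subject to Ax <= b,
# x >= 0, x integral, here by brute force over a small box (only to make the
# notation concrete; it is not the algorithm behind Proposition 1).
import itertools
import numpy as np

Q = np.array([[1, -1],
              [-1, 2]])
A = np.array([[1, 1]])     # single constraint: x1 + x2 <= 4
b = np.array([4])

best_val, best_x = float("-inf"), None
for x in itertools.product(range(5), repeat=2):   # 0 <= x_i <= 4 suffices here
    x = np.array(x)
    if np.all(A @ x <= b):
        val = x @ Q @ x
        if val > best_val:
            best_val, best_x = val, x

print(best_val, best_x)    # the optimum here is 32, attained at x = (0, 4)
```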
**Proposition 1** ([32, 39], see also [25]).: Integer Quadratic Programming _is fixed-parameter tractable when parameterized by \(d+\|A\|_{\infty}+\|Q\|_{\infty}\)._
## 3 Structural Properties of Outcomes
As our first set of contributions, we establish some basic properties of our model and the associated problems that are studied within this paper. We begin by showcasing that the imposition of individual rationality or Nash stability as additional constraints on our outcomes does in fact have an impact on the maximum welfare that can be achieved (and hence it is indeed necessary to consider three distinct problems). We do not consider this to be obvious at first glance: intuitively, an agent \(i\)'s own contribution to the social welfare can only improve if they perform an IR- or NS-deviation, and the fact that the distance function \(\operatorname{dist}_{\Pi_{i}}\) is symmetric would seem to suggest that this can only increase the total social welfare.
**Lemma 2**.: _There is a scoring vector \(\overline{s}\) and a social network \(G\) such that the single outcome achieving the maximum social welfare is not individually rational._
Proof.: Consider a scoring function \(\overline{s}\) such that \(\overline{s}=(1,1,-1,-1,-1,-1)\). Consider the social network \(G\) in Figure 2 formed from a path \(P\) on \(5\) vertices and a clique \(K\) on \(5\) vertices by connecting the endpoints of \(P\) to all vertices of \(K\). Let \(x\) be the central agent of \(P\). Let \(C\) be the grand coalition in \(G\). The graph can be viewed as a \(6\)-cycle with \(K\) forming one "bold" agent. All vertices on the cycle contribute positively to the agent's utility, except for the one that is exactly opposite on the cycle. Hence, \(\operatorname{u}(x,C)=4-5=-1\), while utility of all other agents is \(8-1=7\) in \(C\). This gives total social welfare of \(62\) for the grand coalition.
However, if \(x\) leaves the coalition to form its own one, their utility will improve from \(-1\) to \(0\), whereas the total social welfare drops. Indeed, in \(C\setminus\{x\}\) there are \(2\) agents with utility \(6-2=4\), \(2\) agents with utility \(7-1=6\) and \(5\) agents with utility \(8-0=8\), giving total social welfare of \(60\). If any \(y\neq x\) were to be excluded from \(C\) to form the outcome \((\{y\},C\setminus\{y\})\), then \(y\) joining \(C\setminus\{y\}\) would improve social welfare, proving that this outcome is not optimal. Finally, if the outcome consists of several coalitions with the largest one of size \(8\), then the welfare is at most \(8\cdot 7+2\cdot 1=58\); if the largest size is \(7\), then we get at most \(7\cdot 6+3\cdot 2=48\); for \(6\) it is \(6\cdot 5+4\cdot 3=42\) and for \(5\) it is \(5\cdot 4+5\cdot 4=40\).
Hence the grand coalition \(C\) is the only outcome with maximal social welfare, but it is not individually rational (and therefore not Nash stable), as \(\operatorname{u}(x,C)=-1\).
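The arithmetic in this proof is easy to re-check mechanically; the sketch below rebuilds the network of Figure 2 and recomputes the welfare values used above, reusing the same BFS-based utility computation as in the earlier sketch.

```python
# Numerical re-check of the arithmetic in the proof of Lemma 2: the graph from
# Figure 2 (path e1-a-x-b-e2 plus a 5-clique K joined to e1 and e2) with the
# scoring vector (1, 1, -1, -1, -1, -1).
from collections import deque
from itertools import combinations

s = (1, 1, -1, -1, -1, -1)
P = ["e1", "a", "x", "b", "e2"]
K = [f"k{i}" for i in range(5)]
adj = {v: set() for v in P + K}

def add_edge(u, v):
    adj[u].add(v)
    adj[v].add(u)

for u, v in zip(P, P[1:]):          # the path
    add_edge(u, v)
for u, v in combinations(K, 2):     # the clique
    add_edge(u, v)
for k in K:                         # endpoints of the path joined to the clique
    add_edge("e1", k)
    add_edge("e2", k)

def utility(i, C):
    d, q = {i: 0}, deque([i])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v in C and v not in d:
                d[v] = d[u] + 1
                q.append(v)
    # every pair in the coalitions checked here is within distance 6
    return sum(s[d[j] - 1] for j in C if j != i)

grand = set(adj)
print(utility("x", grand))                             # -1
print(sum(utility(i, grand) for i in grand))           # 62
without_x = grand - {"x"}
print(sum(utility(i, without_x) for i in without_x))   # 60 (plus 0 for the singleton {x})
```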
**Lemma 3**.: _There is a scoring vector \(\overline{s}\) and a social network \(G\) such that the single individually rational outcome achieving the maximum social welfare among such outcomes is not Nash stable._
Figure 2: Social Network from Lemma 2. Figure 3: Social Network from Lemma 3.
Proof.: Consider again the scoring function \(\overline{s}=(1,1,-1,-1,-1,-1)\). Similarly to the previous lemma, consider the social network \(G\) in Figure 3 formed from a path \(P\) on \(5\) vertices and a clique \(K\) on \(4\) vertices by connecting the endpoints of \(P\) to all vertices of \(K\) and adding an agent \(y\) only connected to the central agent of \(P\), which we call \(x\). Let \(C\) be the coalition containing all vertices of \(G\) except for \(y\). As in the previous lemma, \(G[C]\) can be viewed as a \(6\)-cycle with \(K\) forming one "bold" agent. Hence, \(\operatorname{u}_{x}(C)=4-4=0\), while the utility of other agents in \(C\) is \(7-1=6\). Trivially \(\operatorname{u}_{y}(\{y\})=0\), hence the outcome \((\{y\},C)\) is individually rational. It has total social welfare of \(48\). However, it is not Nash stable, as \(x\) wants to deviate to \(\{x,y\}\) giving them utility \(1\).
However, the outcome \((\{x,y\},C\setminus\{x\})\), which is Nash stable, has total social welfare only \(46\). Note that \(\operatorname{u}_{z}(C\setminus\{x\})\geq 3\) for every agent \(z\in C\setminus\{x\}\), so any outcome \((\{x,y,z\},C\setminus\{x,z\})\) cannot be Nash stable. While the total social welfare of the grand coalition is \(42\), the utility of \(y\) is \(3-6=-3\) in this coalition, so this outcome is not even individually rational. From the computations in the previous lemma, it follows that to attain the social welfare of \(48\), the largest coalition in the outcome must be of size at least \(7\). Moreover, if it is of size exactly \(7\), then these \(7\) vertices must be at mutual distance at most \(2\). However, there are no \(7\) vertices at mutual distance at most \(2\) in \(G\). Hence, in any outcome with social welfare \(48\) the largest coalition must be of size at least \(8\). Agent \(y\) has only \(3\) agents within distance at most \(2\) in \(G\). Hence, for \(y\) to get a positive utility from some coalition, the coalition must be of size at most \(7\), i.e., \(y\) cannot be part of the largest coalition in any outcome with social welfare at least \(48\). However, for every \(z\in C\), \(z\) joining the coalition \(C\setminus\{z\}\) improves the social welfare of the outcome, proving that it was not optimal.
Hence the outcome \((\{y\},C)\) is the only individually rational outcome with maximal social welfare, but it is not Nash stable.
It should be noted that Lemmas 2 and 3 also contrast with many other models, where outcomes maximizing social welfare are stable for symmetric utilities [12, 7, 16].
As our next two structural results, we prove that on certain SDGs it is possible to bound not only the diameter but also the size of each coalition in a welfare-maximum outcome. Notably, we establish such bounds for SDGs on bounded-degree networks and SDGs which have a simple scoring vector on a tree-like network. While arguably interesting in their own right, these properties will be important for establishing the fixed-parameter tractability of computing welfare-optimal outcomes in the next section.
**Lemma 4**.: _For every scoring vector \(\overline{s}=(s_{1},\ldots,s_{\delta})\), if \(G\) is a graph of maximum degree \(\Delta(G)\) and \(C\) is a coalition of size more than \((s_{1}+1)\cdot\Delta(G)\cdot(\Delta(G)-1)^{\delta-1}\), then for every \(i\in C\) we have \(\operatorname{u}(i,C)<0\)._
Proof.: Let \(i\in C\). There are at most \(\Delta(G)\cdot(\Delta(G)-1)^{\delta-1}\) agents in distance at most \(\delta\) from \(i\). Each of these agents contributes at most \(s_{1}\) to \(\operatorname{u}(i,C)\). Every other agent contributes at most \(-1\). Hence, if there are more than \((s_{1}+1)\cdot\Delta(G)\cdot(\Delta(G)-1)^{\delta-1}\) agents in \(C\), then more than \(s_{1}\cdot\Delta(G)\cdot(\Delta(G)-1)^{\delta-1}\) of them have a negative contribution to \(\operatorname{u}(i,C)\) and
\[\operatorname{u}(i,C)<s_{1}\cdot\Delta(G)\cdot(\Delta(G)-1)^{\delta-1}-1\cdot s _{1}\cdot\Delta(G)\cdot(\Delta(G)-1)^{\delta-1}=0.\qed\]
**Lemma 5**.: _Let \(\overline{s}=(s_{1},\ldots,s_{\delta})\) be such that \(s_{2}<0\). If \(G\) is a graph of treewidth \(\operatorname{tw}\) and \(C\) is a coalition of size more than \(2(s_{1}+1)\cdot\operatorname{tw}+1\), then \(\sum_{i\in C}\operatorname{u}(i,C)<0\)._
Proof.: Each agent adjacent to \(i\) contributes \(s_{1}\) to \(\operatorname{u}(i,C)\), whereas all the other agents contribute at most \(-1\). Since a graph of treewidth \(\operatorname{tw}\) is \(\operatorname{tw}\)-degenerate, there are \(|E(G[C])|\leq|C|\cdot\operatorname{tw}\) pairs of adjacent agents and \(\binom{|C|}{2}-|E(G[C])|\) pairs of non-adjacent agents. We have
\[\sum_{i\in C}\mathrm{u}(i,C) =\sum_{i,j\in C;i\neq j}\overline{s}(\mathrm{dist}(i,j))\] \[\leq 2\left(s_{1}\cdot|E\left(G[C]\right)|-\left(\binom{|C|}{2}-|E \left(G[C]\right)|\right)\right)\] \[=2\left((s_{1}+1)\cdot|E\left(G[C]\right)|-\binom{|C|}{2}\right)\] \[\leq 2(s_{1}+1)\cdot|C|\cdot\mathrm{tw}-|C|(|C|-1)\] \[=|C|\left(2(s_{1}+1)\cdot\mathrm{tw}-(|C|-1)\right)\] \[<|C|\left(2(s_{1}+1)\cdot\mathrm{tw}-(2(s_{1}+1)\cdot\mathrm{tw}+ 1-1)\right)=0.\qed\]
## 4 Computing Optimal Outcomes
### Intractability
As our first step towards an understanding of the complexity of computing a welfare-optimal outcome in an SDG, we establish the \(\mathsf{NP}\)-hardness of \(\overline{s}\)-SDG-WF, \(\overline{s}\)-SDG-WF-IR and \(\overline{s}\)-SDG-WF-Nash even for a very simple choice of \(\overline{s}\).
**Theorem 6**.: _Let \(\overline{s}=(s_{1})\) for any \(s_{1}>0\). Then \(\overline{s}\)-SDG-WF, \(\overline{s}\)-SDG-WF-IR and \(\overline{s}\)-SDG-WF-Nash are \(\mathsf{NP}\)-hard._
Proof Sketch.: As our first step, we prove the \(\mathsf{NP}\)-hardness of the intermediate problem called 3-Coloring Triangle Covered Graph (3CTCG) via an adaptation of a known reduction from Not-All-Equal-3-SAT [38, Theorem 9.8]:
3-Coloring Triangle Covered Graph (3CTCG)
Input: An undirected graph \(G=(V,E)\) with \(|V|=3n\) vertices such that \(G\) contains a collection of \(n\) mutually vertex disjoint triangles.
Question: Does \(G\) have a 3-coloring?
Next, we reduce 3CTCG to our three problems via a single construction. Let \(G\) be an instance of 3CTCG with \(3n\) vertices and \(T_{1},\ldots,T_{n}\) the corresponding collection of triangles. Let \(\overline{G}\) be a complement of \(G\), let \(s_{1}=s_{1}(\overline{s})\) and let \(b=3ns_{1}\cdot(n-1)\). To establish the \(\mathsf{NP}\)-hardness of \(\overline{s}\)-SDG-WF, it suffices to show that \(G\) is a Yes-instance of 3CTCG if and only if \(\overline{G}\) admits an outcome with social welfare at least \(b\); for the remaining two problems, we additionally show that such an outcome will furthermore be individually rational and Nash stable.
### An Algorithm for Tree-Like Networks
We complement Theorem 6 by establishing that all three problems under consideration can be solved in polynomial time on networks of bounded treewidth--in other words, we show that they are \(\mathsf{XP}\)-tractable w.r.t. treewidth. We first describe the "baseline" algorithm for solving \(\overline{s}\)-SDG-WF, and then prove that this may be adapted to also solve the other two problems by expanding on its records and procedures (see the appendix).
**Theorem 7**.: _For every fixed scoring vector \(\overline{s}\), the \(\overline{s}\)-SDG-WF, \(\overline{s}\)-SDG-WF-IR, and \(\overline{s}\)-SDG-WF-Nash problems are in \(\mathsf{XP}\) when parameterized by the treewidth of the social network \(G\)._
Proof Sketch.: Our algorithm is based on leaf-to-root dynamic programming along a nice tree-decomposition of the input social network, with records of a rather complicated structure. In each node \(x\) of the tree-decomposition, we store a set \(\mathcal{R}_{x}\) of partial solutions called _records_. Each record realizes a single _signature_ which is a triple \((C,S,T)\), where
* \(C\) is a partition of bag agents into parts of coalitions; there are at most \(\operatorname{tw}+1\) different coalitions intersecting \(\beta(x)\) and, thus, at most \(\operatorname{tw}^{\mathcal{O}(\operatorname{tw})}\) possible partitions of \(\beta(x)\).
* \(S\) is a function assigning each pair of agents that are part of the same coalition according to \(C\) the shortest intra-coalitional path; recall that for fixed \(\overline{s}\), the diameter of every coalition is bounded by a constant \(\delta\) and, therefore, there are \(n^{\mathcal{O}(\delta)}=n^{\mathcal{O}(1)}\) possible paths for each pair of agents which gives us \(n^{\mathcal{O}(\operatorname{tw}^{2})}\) combinations in total.
* \(T\) is a table storing for every coalition \(P\) and every possible vector of distances to bag agents that are in \(P\) the number of agents from \(P\) that were already forgotten in some node of the tree-decomposition; the number of possible coalitions is at most \(\operatorname{tw}+1\), the number of potential distance vectors is \(\delta^{\operatorname{tw}+1}=2^{\mathcal{O}(\operatorname{tw})}\), and there are at most \(n\) values for every combination of coalition and distance vector which leads to at most \(n^{2^{\mathcal{O}(\operatorname{tw})}}\) different tables \(T\).
The value of every record is a pair \((\pi,w)\), where \(\pi\) is a partition of \(V^{x}\) such that \(\operatorname{SW}(\pi)=w\) and \(\pi\) witnesses that there is a partition of \(V^{x}\) corresponding to the signature of the record, as described above. We store only one record for every signature - the one with the highest social welfare. Therefore, in every node \(x\), there are at most \(n^{2^{\mathcal{O}(\operatorname{tw})}}\) different records.
Once the computation ends, we check the record in the root node \(r\) and, based on the value of \(w\), we return the answer: Yes if \(w\geq b\) and No otherwise. Moreover, as \(G^{r}=G\), the partition \(\pi\) is also an outcome achieving social welfare \(w\).
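One possible (purely illustrative) encoding of such a record as a data structure is sketched below; this is an assumption about how the triple \((C,S,T)\) and the witness pair \((\pi,w)\) could be represented, not the authors' implementation.

```python
# Data-structure sketch (an assumed encoding, not the authors' implementation)
# of the records kept at a node x of the tree-decomposition: the signature
# (C, S, T) plus the best witness (pi, w) realizing it.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Signature:
    # C: partition of the bag agents into (parts of) coalitions
    C: frozenset                 # frozenset of frozensets of bag agents
    # S: for each pair of bag agents in the same part, a shortest
    #    intra-coalitional path between them
    S: frozenset                 # frozenset of (pair, path) entries
    # T: for each part and each vector of distances to the bag agents of that
    #    part, how many already-forgotten agents realize that distance vector
    T: frozenset                 # frozenset of ((part, distances), count) entries

@dataclass
class Record:
    signature: Signature
    witness: list = field(default_factory=list)    # partial outcome pi on V^x
    welfare: float = float("-inf")                  # SW(pi)

# per node x, we keep only the best record for each signature:
records_at_x: dict[Signature, Record] = {}
```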
### Fixed-Parameter Tractability
A natural follow-up question to Theorem 7 is whether one can improve these results to fixed-parameter algorithms. As our final contribution, we show that this is possible at least when dealing with simple scoring vectors, or on networks with stronger structural restrictions. To obtain both of these results, we first show that to obtain fixed-parameter tractability it suffices to have a bound on the size of the largest coalition in a solution (i.e., a welfare-optimal outcome).
**Theorem 8**.: _For every fixed scoring vector \(\overline{s}\), the variants of \(\overline{s}\)-SDG-WF, \(\overline{s}\)-SDG-WF-IR, \(\overline{s}\)-SDG-WF-Nash where we only consider outcomes consisting of coalitions of at most a prescribed size are \(\mathsf{FPT}\) parameterized by the treewidth of the network and the maximum coalition size combined._
Proof Sketch.: Similarly to the previous results, we design a dynamic programming (DP) algorithm on a nice tree decomposition, although the procedure and the records are completely different.
Given a subset of agents \(X\subseteq N\), let \(\Pi=(\pi_{1},\pi_{2},\ldots,\pi_{\ell})\) be a partition of a set containing \(X\) and some "anonymous" agents. We use \(\mathsf{T}(\Pi)\) to denote a set of graph topologies on \(\pi_{1},\pi_{2},\ldots,\pi_{\ell}\) given \(X\). That is, \(\mathsf{T}(\Pi)=\{\mathsf{T}(\pi_{1}),\ldots,\mathsf{T}(\pi_{\ell})\}\) where \(\mathsf{T}(\pi_{i})\) is some graph on \(|\pi_{i}|\) agents, namely \(\pi_{i}\cap X\) and \(|\pi_{i}\setminus X|\) "anonymous" agents, for each \(i\in[\ell]\). The maximum coalition size of any welfare-maximizing partition is denoted by sz. The table \(\mathsf{M}\) contains an entry \(\mathsf{M}[x,C,\mathsf{T}(\Pi)]\) for every node \(x\) of the tree decomposition, each partition \(C\) of \(\beta(x)\), and each set of graph topologies \(\mathsf{T}(\Pi)\) given \(\beta(x)\) where \(\Pi\) is a partition of
at most \(\operatorname{\textsc{sz}}\cdot\operatorname{\textsc{tw}}\) agents. An entry of \(\mathsf{M}\) stores the maximum welfare in \(G^{x}\) under the condition that the partition into coalitions satisfies the following properties. Recall that for a partition \(P\) of agents and an agent \(a\), we use \(P_{a}\) to denote the coalition agent \(a\) is part of in \(P\).
1. \(C\) _and \(\Pi\) are consistent_, i.e., the partition of the bag agents \(\beta(x)\) in \(G^{x}\) is denoted by \(C\) and \(C_{a}=\Pi_{a}\cap\beta(x)\) for each agent \(a\in\beta(x)\).
2. The coalition of agent \(a\in\beta(x)\) in the graph \(G^{x}\) is \(\Pi_{a}\).
3. \(\mathsf{T}(\Pi)\) _is consistent with \(G^{x}\)_ i.e., the subgraph of \(G^{x}\) induced on the agents in coalition of \(a\) is \(\mathsf{T}(\Pi_{a})\), i.e., \(G^{x}[\Pi_{a}]=\mathsf{T}(\Pi_{a})\).
Observe that we do not store \(\Pi\). We only store the topology of \(\Pi\) which is a graph on at most \(\operatorname{\textsc{sz}}\cdot\operatorname{\textsc{tw}}\) agents.
We say an entry of \(\mathsf{M}[x,C,\mathsf{T}(\Pi)]\) is _valid_ if it holds that
1. \(C\) _and \(\Pi\) are consistent_, i.e., \(C_{a}=\Pi_{a}\cap\beta(x)\) for each agent \(a\in\beta(x)\),
2. Either \(C_{a}=C_{b}\), or \(C_{a}\cap C_{b}=\emptyset\) for each pair of agents \(a,b\in\beta(x)\),
3. \(\mathsf{T}(\Pi)\) _is consistent with \(G^{x}\) in \(\beta(x)\)_, i.e., for each pair of agents \(a,b\in\beta(x)\) such that \(\Pi_{a}=\Pi_{b}\), there is an edge \((a,b)\in\mathsf{T}(\Pi_{a})\) if and only if \((a,b)\) is an edge in \(G^{x}\).
Once the table is computed correctly, the solution is given by the value stored in \(\mathsf{M}[r,C,\mathsf{T}(\Pi)]\) where \(C\) is the empty partition and \(\mathsf{T}(\Pi)\) is empty. Roughly speaking, the basis corresponds to the leaves (whose bags are empty), which are initialized to store \(0\). For each entry that is not valid we store \(-\infty\). To complete the proof, it now suffices to describe the computation of the records at each of the three non-trivial types of nodes in the decomposition and prove correctness.
From Lemma 5 it follows that if \(s_{2}<0\) and \(\operatorname{\textsc{tw}}(G)\) is bounded, then the maximum coalition size of a welfare maximizing outcome is bounded. Hence, using Theorem 8 we get the following.
**Corollary 9**.: \(\operatorname{\bar{s}}\)-SDG-WF-Nash, \(\operatorname{\bar{s}}\)-SDG-WF-IR_, and \(\operatorname{\bar{s}}\)-SDG-WF are fixed-parameter tractable parameterized by the treewidth \(\operatorname{\textsc{tw}}(G)\) if \(s_{2}<0\)._
Turning back to general scoring vectors, we recall that Lemma 4 provided a bound on the size of the coalitions in a welfare-optimal outcome in terms of the maximum degree \(\Delta(G)\) of the network \(G\). Applying Theorem 8 again yields:
**Corollary 10**.: \(\operatorname{\bar{s}}\)-SDG-WF-Nash_, \(\operatorname{\bar{s}}\)-SDG-WF-IR_, and \(\operatorname{\bar{s}}\)-SDG-WF are fixed-parameter tractable parameterized by the treewidth \(\operatorname{\textsc{tw}}(G)\) and the maximum degree \(\Delta(G)\) of the social network._
As our final contribution, we provide fixed-parameter algorithms for computing welfare-optimal outcomes that can also deal with networks containing high-degree agents. To do so, we exploit a different structural parameter than the treewidth--namely the vertex cover number of \(G\) (\(\operatorname{\textsc{vc}}(G)\)). We note that while the vertex cover number is a significantly more "restrictive" graph parameter than treewidth, it has found numerous applications in the design of efficient algorithms in coalition formation, including for other types of coalition games [6, 9, 27].
**Theorem 11**.: \(\operatorname{\bar{s}}\)-SDG-WF-Nash_, \(\operatorname{\bar{s}}\)-SDG-WF-IR_, and \(\operatorname{\bar{s}}\)-SDG-WF are fixed-parameter tractable parameterized by the vertex cover number \(\operatorname{\textsc{vc}}(G)\) of the social network._
Proof Sketch.: Let \(k=\operatorname{vc}(G)\) and let \(U\) be a vertex cover for \(G\) of size \(k\). Observe that in each solution there are at most \(k\) non-singleton coalitions, since \(G\) has a vertex cover of size \(k\) and each coalition must be connected. Furthermore, the vertices of \(G-U\) can be partitioned into at most \(2^{k}\) groups according to their neighborhood in the set \(U\). That is, there are \(n_{W}\) vertices in \(G-U\) such that their neighborhood is \(W\) for some \(W\subseteq U\); denote this set of vertices \(I_{W}\).
We perform exhaustive branching to determine certain information about the structure of the coalitions in a solution--notably:
1. which vertices of \(U\) belong to each coalition (i.e., we partition the set \(U\)); note that there are at most \(k^{k}\) such partitions, and
2. whether or not there is at least one agent of \(I_{W}\) in the coalition; note that there are at most \((2^{2^{k}})^{k}\) such assignments of these sets to the coalitions.
We branch over all possible admissible options of the coalitional structure described above possessed by a hypothetical solution. The total number of branches is upper-bounded by a function of the parameter value \(k\) and thus for the problems to be in \(\mathsf{FPT}\) it suffices to show that for each branch we can find a solution (if it exists) by a fixed-parameter subprocedure. To conclude the proof, we show that a welfare-maximum outcome (which furthermore satisfies the imposed stability constraints) with a given coalitional structure can be computed by modeling this as an Integer Quadratic Program where \(d+\|A\|_{\infty}+\|Q\|_{\infty}\) are all upper-bounded by a function of \(k\)--such a program can be solved in \(\mathsf{FPT}\) time using Proposition 1.
The (integer) variables of the program are \(x_{W}^{C}\), which express the number of vertices from the set \(I_{W}\) in the coalition with \(C\subseteq U\); thus, we have \(x_{W}^{C}\in\mathbb{Z}\) and \(x_{W}^{C}\geq 1\). Let \(\mathcal{C}\) be the considered partitioning of the vertex cover \(U\). We use \(C\in\mathcal{C}\) for the set \(C\subseteq U\) in the coalition and \(C^{+}\) for the set \(C\) and the guessed groups having at least one agent in the coalition. We require that the vertices of \(G-U\) are also partitioned in the solution, i.e.,
\[\sum_{C\in\mathcal{C}\colon W\in C^{+}}x_{W}^{C}=n_{W}\qquad\forall W\subseteq U. \tag{1}\]
The quadratic objective expresses the welfare of the coalitions in the solution while the linear constraints ensure the stability of the outcome; for the latter, we rely on the fact that it is sufficient to verify the stability for a single agent from the group \(I_{W}\) in each coalition.
## 5 Conclusions and Future Research Directions
In this work, we studied social distance games through the lens of an adaptable, non-normalized scoring vector which can capture the positive as well as negative dynamics of social interactions within coalitions. The main focus of this work was on welfare maximization, possibly in combination with individual-based stability notions--individual rationality and Nash stability. It is not surprising that these problems are intractable for general networks; we complement our model with algorithms that work well in tree-like environments.
Our work opens up a number of avenues for future research. One can consider other notions of individual-based stability such as individual stability [14, pp. 360-361][24], or various notions of group-based stability such as core stability [14, p. 360][14, 35]. Furthermore, our results do not settle the complexity of finding stable solutions (without simultaneous welfare maximization). Therefore, it remains open if one can find a Nash stable solution for a specific scoring vector. Also, a more complex open
problem is to characterize those scoring vectors that guarantee the existence of a Nash (or individually) stable solution.
Finally, we remark that the proposed score-based SDG model can be generalized further, e.g., by allowing for a broader definition of the scoring vectors. For instance, it is easy to generalize all our algorithms to scoring vectors which are not monotone in their "positive part". One could also consider situations where the presence of an agent that is "far away" does not immediately set the utility of other agents in the coalition to \(-\infty\). One way to model these settings would be to consider "_open_" scoring vectors, for which we set \(\overline{\mathrm{s}}(a)=\overline{\mathrm{s}}(\delta)\) for all \(a>\delta\)--meaning that distances over \(\delta\) are all treated uniformly but not necessarily as unacceptable.
Notice that if \(\overline{\mathrm{s}}(\delta)\geq 0\) for an open scoring vector \(\overline{\mathrm{s}}\), the grand coalition is always a social-welfare maximizing outcome for all three problems--hence here it is natural to focus on choices of \(\overline{\mathrm{s}}\) with at least one negative entry. We note that all of our fixed-parameter algorithms immediately carry over to this setting for arbitrary choices of open scoring vectors \(\overline{\mathrm{s}}\). The situation becomes more interesting when considering the small-world property: while the diameter of every welfare-maximizing outcome can be bounded in the case of Nash stable or individually rational coalitions (as we prove in our final Theorem 12 below), whether the same holds in the case of merely trying to maximize social welfare is open and seems to be a non-trivial question. Because of this, Theorem 7 can also be extended to the \(\overline{\mathrm{s}}\)-SDG-WF-IR and \(\overline{\mathrm{s}}\)-SDG-WF-Nash with open scoring vectors, but it is non-obvious for \(\overline{\mathrm{s}}\)-SDG-WF.
**Theorem 12**.: _Let \(\overline{\mathrm{s}}=(s_{1},\ldots,s_{\delta})\) be an arbitrary open scoring vector and \(G\) be a social network. Every outcome \(\Pi\) containing a coalition \(C\in\Pi\) with diameter exceeding \(\ell=2\cdot s_{1}\cdot\delta\) can be neither Nash-stable nor individually rational._
Proof Sketch.: Consider a shortest path \(P\) in \(C\) whose length exceeds \(\ell\). We identify a set of edge cuts along \(P\) and show that at least one such cut must be near an agent whose utility in \(C\) is negative, due to the presence of a large number of agents that must be distant from the chosen edge cut.
Acknowledgements.All authors are grateful for support from the OeAD bilateral Czech-Austrian WTZ-funding Programme (Projects No. CZ 05/2021 and 8J21AT021). Robert Ganian acknowledges support from the Austrian Science Foundation (FWF, project Y1329). Thekla Hamm also acknowledges support from FWF, project J4651-N. Dusan Knop, Simon Schierreich, and Ondrej Suchy acknowledge the support of the Czech Science Foundation Grant No. 22-19557S. Simon Schierreich was additionally supported by the Grant Agency of the Czech Technical University in Prague, grant No. SGS23/205/OHK3/3T/18.
|
2310.01810 | High angular momentum coupling for enhanced Rydberg-atom sensing in the
VHF band | Recent advances in Rydberg atom electrometry detail promising applications in
radio frequency (RF) communications. Presently, most applications use carrier
frequencies greater than 1~GHz where resonant Autler-Townes splitting provides
the highest sensitivity. This letter documents a series of experiments with
Rydberg atomic sensors to collect and process waveforms from the automated
identification system (AIS) used in maritime navigation in the Very High
Frequency (VHF) band. Detection in this band is difficult with conventional
resonant Autler-Townes based Rydberg sensing and requires a new approach. We
show the results from a new method called High Angular Momentum Matching
Excited Raman (HAMMER), which enhances low frequency detection and exhibits
superior sensitivity compared to the traditional AC Stark effect. From
measurements of electromagnetically induced transparency (EIT) in rubidium and
cesium vapor cells, we show the relationship between incident electric field
strength and observed signal-to-noise ratio and find that the sensitivity of
the HAMMER scheme in rubidium achieved an equivalent single VHF tone
sensitivity of $\mathrm{100~\mu V/m/\sqrt{Hz}}$. With these results, we
estimate the usable range of the atomic vapor cell antenna for AIS waveforms
given current technology and detection techniques. | Nikunjkumar Prajapati, Jakob W. Kunzler, Alexandra B. Artusio-Glimpse, Andrew Rotunno, Samuel Berweger, Matthew T. Simons, Christopher L. Holloway, Chad M. Gardner, Michael S. Mcbeth, Robert A. Younts | 2023-10-03T05:53:54Z | http://arxiv.org/abs/2310.01810v1 | # High angular momentum coupling for enhanced Rydberg-atom sensing in the VHF band
###### Abstract
Recent advances in Rydberg atom electrometry detail promising applications in radio frequency (RF) communications. Presently, most applications use carrier frequencies greater than 1 GHz where resonant Autler-Townes splitting provides the highest sensitivity. This letter documents a series of experiments with Rydberg atomic sensors to collect and process waveforms from the automated identification system (AIS) used in maritime navigation in the Very High Frequency (VHF) band. Detection in this band is difficult with conventional resonant Autler-Townes based Rydberg sensing and requires a new approach. We show the results from a new method called High Angular Momentum Matching Excited Raman (HAMMER), which enhances low frequency detection and exhibits superior sensitivity compared to the traditional AC Stark effect. From measurements of electromagnetically induced transparency (EIT) in rubidium and cesium vapor cells, we show the relationship between incident electric field strength and observed signal-to-noise ratio and find that the sensitivity of the HAMMER scheme in rubidium achieved an equivalent single VHF tone sensitivity of 100 \(\mu\)V/m/\(\sqrt{\mathrm{Hz}}\). With these results, we estimate the usable range of the atomic vapor cell antenna for AIS waveforms given current technology and detection techniques.
## I Introduction
Rydberg atom based electric field sensors have grown in popularity and utility over the past decade. With their ability to offer field measurements traceable to the international system of standards [1; 2; 3] and to beat the Chu limit [4], their application in low SWaP (size, weight, and power) and fixed environments has drawn interest from the antenna and physics community [5]. From their first demonstration for traceable electric field measurements [1; 6], many new applications have arisen [5]. These include, but are not limited to, phase resolved measurements for Angle-of-Arrival (AOA) [7], quadrature amplitude modulation (QAM) reception [8; 9], wide scan range spectrum analyzers [10], radio frequency (RF) power standards [11], alternating and direct current (AC/DC) voltage standards [12], video reception [13], and many more applications [5; 14; 15; 16; 17; 18; 19].
Rydberg atoms boast a large polarizability due to the separation of the nucleus and the highly excited electron [20]. This makes them highly susceptible to external electric fields. Furthermore, one can tune the RF resonance of the Rydberg atoms by selecting the principal quantum number \(n\) of the Rydberg state of the atom's excitation [16]. However, for low-frequency applications, this method requires tuning to very high \(n\), where many deleterious broadening mechanisms occur, such as atomic collisions, charge effects, and various decoherence mechanisms [21]. For broadband applications of the Rydberg atom sensor, an off-resonant AC Stark effect is more suitable [22]. Furthermore, this allows for detection of frequencies in the Very High Frequency (VHF) (50 MHz to 300 MHz) and ultra-high frequency (UHF) (300 MHz to 900 MHz) bands and below without suffering the consequences of high-\(n\) tuning. Yet, with the off-resonant Stark shift, there are limitations to the sensitivity of the Rydberg atoms. Groups have demonstrated weak-field satellite signal detection, using a high gain antenna, low noise amplifiers, a waveguide-coupled vapor cell, and a dressed ladder system to maintain a low \(n\), all to measure XM band (lower) frequency signals [23]. But to show true supremacy as an antenna not bound by the Chu limit, other methods are needed.
We demonstrate the use of "Rydberg state engineering", similar to what is mentioned in [24]. In this paper, we demonstrate VHF detection in two ways for comparison. In the first case, we utilize Stark shifting of resonances as represented in Fig. 1 (a). In the second case, we use a method that we developed,
Figure 1: (a) Level diagram showing the interaction of AC Stark shifting measurements. (b) Level diagram showing the interaction of coupling in the high angular momentum F and G states that cause mixing and enhancement of the measurement.
the High Angular Momentum Matching Excited Raman (HAMMER) method, shown in Fig. 1 (b). This method involves the use of a dressing RF field that couples the Rydberg state to a nearby higher angular momentum Rydberg state through a resonant super high frequency (SHF) RF transition. Then, by applying a strong VHF local oscillator (LO), we Stark shift the F and G Rydberg states to bring the F to G transition into resonance with the VHF signal field (discussed more in Section II). The boost in sensitivity comes from the higher angular momentum coupling provided by the now-resonant dressing. For example, in rubidium (Rb) the polarizability of the \(50\mathrm{D}_{5/2},m_{j}=5/2\) state is 0.02 MHz(V/m)\({}^{-2}\) while the polarizability of the \(49\mathrm{F}_{7/2},m_{j}=1/2\) state is 1.5 MHz(V/m)\({}^{-2}\) and that of the \(49\mathrm{G}_{9/2},m_{j}=1/2\) state is 5.5 MHz(V/m)\({}^{-2}\). This difference is less pronounced in the cesium (Cs) atoms, but still provides a benefit.
We compare the sensitivity and usable signal of the bare-state Stark shifting method to the dressed-state HAMMER method. To study the usability of these schemes for communication, we apply automatic identification system (AIS) signals that are used for maritime navigation.
## II Stark shifting vs. HAMMER
In practice, most Rydberg atom electric field sensor demonstrations have utilized resonant atomic effects. Additionally, there have been recent efforts to allow for the Raman coupling between several Rydberg states through two-photon RF interactions [24]. There have also been efforts utilizing Stark shift based measurements to achieve broad tunability [22]. Here, we demonstrate a mixture of several of these techniques that allows for engineering an atomic level structure that enhances specific sensing capabilities of the atoms.
To provide an example and explain the mixed state effects we see, we focus this discussion on Rb atoms and the line shifting expected from the external fields; however, similar interactions occur in the Cs atoms as well. When we try to estimate the sensitivity of a Rydberg atom electrometry setup, we typically look at the change in transmission of the probe laser with the application of some electric field. The minimum resolvable change determines the electric field sensitivity of the sensor. The larger a shift produced by a given electric field, the better the sensitivity of the method.
In the experiment, we excite to the \(50\mathrm{D}_{5/2}\) Rydberg state. We then apply a field resonant with the \(50\mathrm{D}_{5/2}\rightarrow 49\mathrm{F}_{7/2}\) transition at 18.56 GHz. Following this, we apply a 162 MHz VHF local oscillator (LO) field. This has two purposes. The first is to provide a field reference for phase-based measurements. The second is to serve as the bedrock for the HAMMER method. The field is increased in amplitude until there is a strong Stark shift on the \(49\mathrm{G}_{9/2}\) Rydberg state such that the \(49\mathrm{F}_{7/2}\to 49\mathrm{G}_{9/2}\) transition becomes resonant with the 162 MHz field. This produces a mixing of the G and F states with the D state. In this way, we engineer the states to operate in their most sensitive configuration.
The polarizabilities of the Rb Rydberg states are given in Table 1. These polarizabilities are calculated using the Alkali Rydberg Calculator (ARC) [25]. We plot the AC Stark shifts for the \(49\mathrm{F}_{7/2}\) and \(49\mathrm{G}_{9/2}\) states in Fig. 2 to show the level of field needed to achieve this method of operation. As the strength of the 162 MHz LO is increased, both the \(49\mathrm{F}_{7/2}\) and \(49\mathrm{G}_{9/2}\) states shift, but due to the larger polarizability of the \(49\mathrm{G}_{9/2}\) state, there is a region where the G state crosses the F state. There is also a region, denoted by the gray shaded area, where the F and G states are roughly 162 MHz apart, making the interaction resonant rather than off-resonant. This is the region where we operated experimentally to obtain optimal sensitivity. The field strength needed to move the two states to within 162 MHz of each other is roughly 20 V/m. We calibrate this measurement by measuring the Stark shift of the \(50\mathrm{D}_{5/2},m_{j}=1/2\) Rydberg state.
\begin{table}
\begin{tabular}{c|c|c|c|c|c} m\({}_{j}\)value & 4.5 & 3.5 & 2.5 & 1.5 & 0.5 \\ \hline \(50\mathrm{D}_{5/2}\) & NA & NA & 0.0212 & 0.0425 & -0.042 \\ \(49\mathrm{F}_{7/2}\) & NA & 0.6697 & 1.069 & 1.328 & 1.455 \\ \(49\mathrm{G}_{9/2}\) & 2.553 & 3.864 & 4.802 & 5.365 & 5.552 \\ \end{tabular}
\end{table}
Table 1: Polarizability (MHz(V/m)\({}^{-2}\)) of the Rydberg states calculated using the ARC Rydberg calculator.
Figure 2: Stark map showing the \(49\mathrm{F}_{7/2}\) (red dashed) and \(49\mathrm{G}_{9/2}\) (black solid) states. The additional lines of the same color are the m\({}_{j}\) levels with lowest and highest levels labeled. The gray shaded region shows the first instance (defined by the closest m\({}_{j}\) levels) where the two states are roughly 162 MHz apart.
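As a rough cross-check of the ~20 V/m scale quoted above, one can invert the quadratic AC Stark shift \(\Delta f\approx\frac{1}{2}\alpha E^{2}\) using the Table 1 polarizabilities. The short Python sketch below assumes the zero-field 49F to 49G spacing of roughly 700 MHz mentioned later in the text, uses the \(m_{j}=1/2\) polarizabilities, and ignores \(m_{j}\) mixing, so it only reproduces the order of magnitude of the required LO field.

```python
import numpy as np

# Polarizabilities from Table 1 (MHz per (V/m)^2), m_j = 1/2 components.
alpha_F = 1.455   # 49F_{7/2}
alpha_G = 5.552   # 49G_{9/2}

f_FG_zero_field = 700.0   # MHz, approximate zero-field F-G spacing in Rb (assumed)
f_target = 162.0          # MHz, the AIS carrier to be brought into resonance

# Differential quadratic Stark shift: delta_f = 0.5 * (alpha_G - alpha_F) * E^2
required_shift = f_FG_zero_field - f_target
E_lo = np.sqrt(2.0 * required_shift / (alpha_G - alpha_F))
print(f"LO field needed to Stark-tune the F-G interval to 162 MHz: ~{E_lo:.0f} V/m")
```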
## III Experimental apparatus
### Rydberg Electric Field Detection
We excite Rydberg atoms and probe them using two-photon EIT, as shown in Figs. 1 (a) and 3 (b) and (c). For the rubidium system, we use a 780 nm probe and 480 nm coupling laser. For the cesium system, we use an 850 nm probe and 510 nm coupling laser. The probe laser for each setup was split using a polarizing beam displacer (PBD) to generate a reference and signal arm. Both arms pass through the vapor cells. The beam sizes were approximately 1 mm in both the Cs and Rb systems. The vapor cells used in both the Cs and Rb experiments were 25 mm in diameter, but the Cs cell was 25 mm long while the Rb cell was 75 mm long. Because our goal is not to compare Rb to Cs, but rather the effects of the high angular momentum states, we did not adjust the Rb and Cs setups to be perfectly matched.
The VHF signal field was provided by a pair of parallel copper plates measuring 10 cm by 15 cm placed in a 3D printed cradle and separated by 6 cm. The plates were driven by a 162 MHz AIS signal generated by the radio. We also radiated the atoms with an LO generated by an external signal generator set to a 15 kHz frequency offset from the target AIS signal at 162.025 MHz. The LO signal beats with the AIS in the atoms to provide a single sideband signal centered at 15 kHz at baseband. The single sideband signal contains the modulations of the AIS signal. We additionally applied the SHF dressing field for the HAMMER measurements through a horn antenna mounted above the vapor cell, as shown in Fig. 3 (c).
For the Cs system, the probe laser was tuned to the \(6S_{1/2},F=4\to 6P_{3/2},F=5\) transition and the coupling laser was tuned to the \(6P_{3/2},F=5\to 56D_{5/2}\) transition. The dressing field to reach the high angular momentum state is 4.07 GHz for the \(56D_{5/2}\to 55F_{7/2}\) transition. For the Rb system, the probe laser was tuned to the \(5S_{1/2},F=3\to 5P_{3/2},F=4\) and the coupling laser was tuned to the \(5P_{3/2},F=4\to 50D_{5/2}\) transition. The tuning field to reach the high angular momentum state for the HAMMER measurements is 18.5 GHz for the \(50D_{5/2}\to 49F_{7/2}\) transition.
### Radio Calibration and Implementation
For these experiments, we utilized a software-defined radio to generate multiple waveforms, shown in Fig. 3 (a). We calibrate the electric field generated by the radio output by determining the Stark shifting of the electromagnetically induced transparency (EIT) peak with increasing power. By observing the peak movement for different powers of the radio, we obtain a curve to determine the received field at the atoms. The calibration is performed by stimulating the atoms with a single tone at 162 MHz from the SDR at 100% modulation depth. We calibrate for several analog gain levels of the SDR. By changing the gain on the SDR, we also change the Stark shift on the Rydberg state being measured with EIT. This shift is then used to calculate the incident electric field amplitude inside the test apparatus based on the tone's amplitude. Fig. 9 in Appendix A shows the measured calibration curves in Rb and Cs. With this calibration for a given analog gain of the SDR, we can adjust the modulation depth from 100% to 0.001% with a linear scaling relative to field. This adjustment was made according to the modulation depth of the AIS signal envelope relative to the carrier wave amplitude established during the calibration process.
AIS is a packet protocol containing National Marine Electronics Association (NMEA) navigation messages modulated with Gaussian Minimum Shift Keying (GMSK) at a symbol rate of 9600 Hz. Class A commercial broadcasters radiate 12.5 Watts of power, and class B private crafts radiate 2 W. AIS uses two VHF channels at 161.075 MHz and 162.025 MHz. Most of the power spectral density is contained within the 9600 Hz bandwidth of spectrum around the carrier. The NMEA payloads are encoded with the high-level data link control (HDLC) protocol. This protocol provides a packet header for stream synchronization and a packet checksum to validate correct reception, but the HDLC
Figure 3: (a) Picture of radio, LNA, and cable connections to plates and horn. (b) Experimental schematic showing laser interactions in cell and connections. (c) Picture of setup showing plates and horn that supply fields to cell.
protocol does not provide forward error correction. Any single bit error will cause the checksum to fail and be counted as a packet loss. In practice, AIS re-transmits every few seconds depending on the speed and size of the vessel and is tolerant to lost packets from weak signals at far distances. Commercial AIS systems operate with ranges approaching 100 km.
The synthetic AIS packets used in this experiment were generated from a collection of recorded NMEA payloads obtained along the South Carolina coast. These packets were then regenerated at a high repetition rate to facilitate rapid measurement of the packet loss rate. The probability of detection is estimated by calculating the percentage of dropped packets within a specific time interval at a given transmission rate and electric field strength. The stimulated electric field strength from these packets is controlled by scaling the floating-point amplitude of the GMSK modulation according to the calibration scale. For example, the calibration shows that for a 20 dB gain on the radio, we see a field of roughly 7 V/m on the atoms.
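Because the calibration rests on the quadratic Stark shift of the 50D\({}_{5/2}\) line, converting an observed EIT peak shift back into a field is a one-line inversion. The sketch below assumes the simple relation \(\Delta f=\frac{1}{2}|\alpha|E^{2}\) with the Table 1 polarizability and neglects the wavelength-mismatch scaling of the EIT line, so it only illustrates the shape of the calibration curve.

```python
def field_from_stark_shift(shift_mhz, alpha_mhz_per_v2=0.042):
    """Infer the RF field (V/m) from a measured quadratic Stark shift (MHz).

    Assumes shift = 0.5 * |alpha| * E**2 for the Rb 50D_{5/2}, m_j = 1/2 state.
    """
    return (2.0 * abs(shift_mhz) / alpha_mhz_per_v2) ** 0.5

# Example: a ~1 MHz shift corresponds to roughly 7 V/m, the field quoted for 20 dB gain.
print(f"{field_from_stark_shift(1.0):.1f} V/m")
```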
## IV Results and Discussion
The comparison of the AC Stark shifting and the HAMMER method are done by looking at two parameters: sensitivity (receiver noise floor) and 10% packet success rate of each detection method. The sensitivity is measured using a spectrum analyzer set to 470 Hz resolution bandwidth. We obtain a spectrum of the AIS Gaussian packet spread across the 9600 Hz bandwidth that is received by the atoms, an example of which as measured with the Rb atoms is shown in Fig. 4. We define the sensitivity as the incident electric field strength that causes the peak received power spectral density to be equal that of the receiver noise floor. In our system, the noise floor is set by a low frequency thermal atomic noise that sits 10 dB above the shot noise of the laser, similar to what is seen in [26].
The signal-to-noise ratio (SNR) was obtained by taking the ratio of signal and noise power spectral densities. That is, the ratio of the total received power integrated across the spectral mask and divided by the bandwidth of the spectral mask (to estimate the received signal power spectral density) to that of the noise power spectral density in the region adjacent to the spectral mask. The SNR was recorded at various electric field stimulation levels. Fig. 5 depicts the relationship between stimulated electric field strength and the observed SNR on the spectrum analyzer for the different atomic species and methods used. Notably, in both Rb and Cs, the HAMMER detection method demonstrates superior sensitivity compared to AC Stark shifting detection for the 162 MHz signal.
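This SNR definition is straightforward to reproduce: integrate the measured power spectral density over the 9600 Hz spectral mask, divide by the mask bandwidth, and take the ratio to the noise PSD just outside the mask. The Python sketch below applies it to a generic PSD array; the frequency grid, mask edges, and noise window are placeholders rather than the actual analyzer settings.

```python
import numpy as np

def snr_from_psd(freqs_hz, psd, mask=(15e3 - 4.8e3, 15e3 + 4.8e3),
                 noise_window=(25e3, 35e3)):
    """SNR (dB) of the power inside a spectral mask relative to the adjacent noise PSD."""
    in_mask = (freqs_hz >= mask[0]) & (freqs_hz <= mask[1])
    in_noise = (freqs_hz >= noise_window[0]) & (freqs_hz <= noise_window[1])
    df = freqs_hz[1] - freqs_hz[0]

    signal_psd = np.sum(psd[in_mask]) * df / (mask[1] - mask[0])  # average PSD in mask
    noise_psd = np.mean(psd[in_noise])                            # adjacent noise PSD
    return 10.0 * np.log10(signal_psd / noise_psd)
```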
The improved sensitivity is caused by the large polarizability of the Rydberg G state. For the case of Rb, there is a large discrepancy between the polarizability of the 50D\({}_{5/2}\) state and the 48G\({}_{9/2}\) state. The polarizability for 50D\({}_{5/2}\) is 0.02 GHz(V/m)\({}^{-2}\) while the polarizability for the 48G\({}_{9/2}\) state is roughly 20 GHz(V/m)\({}^{-2}\). This large polarizability of the 48G\({}_{9/2}\) state elicits a strong response from the atoms. However, because the G state is reached through the coupling of two RF resonances, line mixing and broadening from the Stark tuning of the G state resonance overall reduce the full benefits to sensitivity that would be possible with this method. For the case of Cs, there is a less pronounced difference in the polarizability. This is due to the already large polarizability of the Rydberg 55D\({}_{5/2}\) state that the lasers excite to. The polarizability of the Cs 56D\({}_{5/2}\) state is 0.4 MHz(V/m)\({}^{-2}\) while the polarizability of the Cs 54G\({}_{9/2}\) state is roughly 10 MHz(V/m)\({}^{-2}\). Furthermore, the separation of the F and G states in Rb is 700 MHz while it is nearly 1100 MHz in Cs. Because the Cs atoms require a larger Stark shift to move the G state and F state to be resonant with the 162 MHz AIS field, there can be more inhomogeneous broadening effects from nonuniform fields
Figure 4: Sample spectrum of AIS signal reception using atoms. The AIS signal is 10 kHz from the LO with a 9600 Hz Gaussian minimal shift keying modulation. Different traces correspond to different levels of received field strength.
Figure 5: Signal-to-noise ratio (SNR) of the detected field relative to the detected noise (dominated by laser and thermal atomic noise) as a function of calibrated field strength. (a) Data for Cs atoms for the two methods, AC Stark shifting (blue) and HAMMER method (red). (b) Same as (a), but for Rb atoms.
and state mixing. These effects also affect the Rb system, but since there is a larger polarizability and smaller separation of the F and G states, this effect is reduced.
The best sensitivity that we were able to measure was from the Rb system using the HAMMER method. Fig. 5 (b) shows that the 0 dB SNR location occurs with 2 mV/m signal field. To estimate the sensitivity for the system, we integrate the power spectral density that is spread across the 9600 Hz of bandwidth to find the single tone equivalent power and field. For simple signals like QPSK, this would simply be a linear change in the power with respect to bandwidth, roughly a 40 dB change for 9600 Hz. However, for the Gaussian profile, this integration results in a 26 dB change to the SNR. For a representative single tone, our sensitivity is then 100 \(\mu\)V/m/\(\sqrt{Hz}\). However, in recent tests with different beam sizes and optical powers, we have managed to observe this level of sensitivity with just the Stark shifting method in cesium. This leads us to believe the HAMMER method could potentially lead to sensitivity on the order of 30 \(\mu\)V/m/\(\sqrt{Hz}\).
While we may try to make a comparison between Cs and Rb, this is simply to compare the benefits of the HAMMER detection method rather than overall sensitivity of the two systems. On inspection, it appears that Rb exhibits superior sensitivity compared to Cs, shown by Fig. 5. However, it is important to consider that the vapor cell dimensions, volumes, and pressures differ between Cs and Rb. Therefore, these experiments conducted with Cs and Rb should not be directly compared to evaluate the relative sensitivity of each species.
The probability of successfully decoding an AIS packet is determined by the SNR at the receiver. Fig. 6 illustrates the observed rate of successful packet detections in the software-defined radio as a function of the incident electric field strength. Using a 10% probability of detection as the operational threshold, a field strength of 17 mV/m is required in Rb using HAMMER and 114 mV/m is required using AC Stark shifting. In Cs, a field strength of 24 mV/m is needed using HAMMER, while 30 mV/m is required using AC Stark.
At the 10% packet detection threshold for electric field strength, the SNR based on the spectrum analyzer observations is 11 dB. Appendix B provides an analysis that supports the reasonableness of this result given the radio hardware and AIS modulation protocol. For comparison, an 11 dB SNR in an optimized commercial AIS receiver with a noise figure of 4 dB would correspond to an incident electric field strength of approximately 1 \(\mu\)V/m. On the other hand, the observed minimal detectable field for 10% success in Rb of 17 mV/m with 11 dB of SNR is an equivalent RF receiver with a noise figure of 86 dB.
Despite the relatively poor sensitivity at 162 MHz, we suspect that adding resonant structures to the vapor cell may enhance the electric field sensitivity within a narrow band, similar to an optimized classical receiver antenna. These types of structures have been successfully utilized in previous studies [27], and they hold the potential to improve signal performance. We demonstrate this improvement by making a split-ring resonator (SRR) centered at 162 MHz, shown in Fig. 8 (a). We were not able to obtain a calibrated measurement due to the antenna power limitations, but managed to show a 40 dB enhancement in signal when we compared the signal strength received with and without the SRR, shown in Fig. 8 (b). This result suggests an SRR will make the sensor competitive with conventional receivers.
For the end user, the most critical performance criterion of the AIS system is its effective range. This can be predicted by utilizing propagation models and the transmitter specifications of the AIS system to estimate the range corresponding to a given electric field strength. In this study, we present two models for range prediction. The Variable Terrain Radio Parabolic Equation (VTRPE) model [28] is a numerical method developed by Naval Information Warfare Center Pacific that models RF propagation over physical terrain maps. The Friis transmission model[29], on the other hand, is a simpler model that assumes equally dispersed radiation over a sphere. For an electric field strength of 17 mV/m, the minimum detectable field in the Rb HAMMER system that returned at least 10% packet success rate, the predicted effective range approaches 1 km, depending on the propagation model used. Fig. 7 provides a comparison between the higher fidelity VTRPE model and the standard Friis model for a Class A 12.5 W transmitter spread over 9600 Hz operating in open sea conditions at 5 meters elevation with a quarter wave monopole antenna.
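For intuition on the Friis-type estimate, the free-space field from a transmitter of power \(P\) and gain \(G\) falls off as \(E=\sqrt{30PG}/d\). The sketch below assumes a Class A 12.5 W transmitter and a nominal monopole gain of 5.15 dBi, and it ignores the sea-surface and terrain effects captured by VTRPE, so it should be read as an optimistic free-space bound of a few kilometres rather than a prediction.

```python
import numpy as np

def free_space_range(p_watts, gain_dbi, e_required_v_per_m):
    """Free-space distance (m) at which the radiated field falls to the required level."""
    gain_linear = 10.0 ** (gain_dbi / 10.0)
    return np.sqrt(30.0 * p_watts * gain_linear) / e_required_v_per_m

# Class A AIS transmitter (12.5 W), assumed monopole gain of 5.15 dBi,
# and the 17 mV/m threshold found for the Rb HAMMER receiver.
print(f"{free_space_range(12.5, 5.15, 17e-3) / 1e3:.1f} km")
```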
## V Conclusion
In this study, we explored the application of Rydberg atom electrometry for radio-frequency communications, particularly in the context of automated identification system (AIS) waveforms used in maritime navigation. Through experiments with cesium and rubidium vapor cells, we observed the potential of utilizing High Angular Momentum Matching Excited Raman (HAMMER) detection for enhanced low-frequency detection and superior sensitivity compared to traditional AC Stark effect
Figure 6: Packet success rate as a function of field strength. (a) Data for Cs atoms for the two methods, AC Stark shifting (blue) and HAMMER method (red). (b) Same as (a), but for Rb atoms.
detection. The results demonstrated the relationship between incident electric field strength and signal-to-noise ratio (SNR), providing insights into the performance of the atomic vapor cell antenna. Moreover, we assessed the range prediction using propagation models and concluded the current technology provides less than 1 km of operational range. Future research could explore the integration of resonant structures into the vapor cell to further enhance electric field sensitivity. Overall, these findings contribute to the understanding of Rydberg atom electrometry and its potential applications in VHF radio frequency communications.
## Appendix A Field Calibration Measurements
The Rb and Cs calibration was done with the same conditions for the plates and other experimental parameters using the Rydberg D\({}_{5/2}\) state (with different n). Even with the very large differences in polarizabilities between the Cs system and the Rb system, their calibration agrees to within a factor of 2. The Rb shifts were difficult to observe and the rough calibration can be attributed to the ratio of the shift expected and the linewidth of the two-photon EIT resonance.
## Appendix B SNR Packet Loss Basis
Some readers may find it unusual to have an SNR of 11 dB with a 10% probability of packet detection. However, a quick theoretical calculation justifies this value. An AIS packet typically contains a maximum of 168 bits, including protocol overhead and preamble symbols. Without error coding, the probability of successful detection can be modeled as a series of 168 independent Bernoulli trials. Assuming a success rate of 0.10, we find that the corresponding bit error probability is approximately 0.0136. According to Ref. [28], an SNR of
Figure 8: (a) Split-ring resonator structure dimensions and cell. (b) Single tone beat note between an LO and a signal received by the spectrum analyzer. (red) Trace showing data with the SRR present. (blue) Trace showing data without the SRR present.
Figure 7: Modeled electric field strength obtained from a Class A AIS transmitter at 5 meters elevation with a quarter wave monopole antenna radiating over water.
Figure 9: Electric field strength obtained from AC Stark shift produced by the Ettus X310 SDR when driving parallel copper plates around the vapor cell under a constant tone of 0.5 floating point amplitude. The 20 dB gain was used in all the experiments while varying the floating-point amplitude.
about 8.6 dB inside the software defined radio (SDR) corresponds to this bit error rate. The quantization noise of the Ettus X310 radio, with a UBX-160 and an effective number of bits of 11.3, dominates the noise floor at -95.81 dBm/Hz within the radio. The homodyne receiver in this radio has a low-noise amplifier (LNA) with 28 dB of gain, a mixer with a conversion loss of 5.2 dB, and bandpass filters with a net loss of 3 dB. The radio's internal analog gain of 17 dB results in a net receiver gain of 36.8 dB. Subtracting this gain from the ADC noise floor gives an input-referred noise floor of -132.6 dBm/Hz. This is approximately 2.4 dB higher than the noise level observed on the spectrum analyzer, which is -135 dBm/Hz. The 2.4 dB difference between the ADC quantization noise and the noise floor on the spectrum analyzer accounts for the difference between the predicted 8.6 dB SNR and the recorded 11 dB SNR. In simple terms, the radio's analog-to-digital conversion introduces additional noise that is not reflected in the spectrum analyzer plots, thus justifying the observations.
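The arithmetic in this appendix is compact enough to reproduce directly. The short sketch below recomputes the per-bit error probability implied by a 10% packet success rate over 168 independent bits and the input-referred noise floor implied by the quoted gain chain; the values are exactly those stated in the text.

```python
# Bit error probability implied by a 10% packet success rate over 168 independent bits.
n_bits = 168
p_packet = 0.10
p_bit_error = 1.0 - p_packet ** (1.0 / n_bits)
print(f"bit error probability ~ {p_bit_error:.4f}")          # ~0.0136

# Input-referred noise floor from the quoted receiver gain chain.
adc_noise_dbm_hz = -95.81                 # X310 quantization noise floor
net_gain_db = 28.0 - 5.2 - 3.0 + 17.0     # LNA - mixer loss - filter loss + analog gain
print(f"net receiver gain = {net_gain_db:.1f} dB")            # 36.8 dB
print(f"input-referred floor = {adc_noise_dbm_hz - net_gain_db:.2f} dBm/Hz")  # -132.61
```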
## Acknowledgements
This work was partially funded by the NIST-on-a-Chip (NOAC) Program and was developed with funding from the Naval Information Warfare Center's NISE program.
## Conflict of Interest
The authors have no conflicts of interests to disclose.
## Data Availability Statement
The data relevant to the findings of this research project are available from the corresponding author upon reasonable request.
|
2307.15272 | Direct Power Flow Controller with Continuous Full Regulation Range | For enhancing power flow control in power transmission, a simplified new
structure of direct power flow controller with continuous full regulation range
(F-DPFC) was proposed. It has only one-stage power conversion and comprises of
a three-phase transformer in parallel and a three-phase transformer in series
with grid, three single-phase full-bridge ac units, and a three-phase filter.
Compared with previous DPFC, the proposed one dispenses with two complex
three-phase selection switches which connect with high-voltage grid directly,
and has a continuous 360° adjustment range of compensation voltage by
taking place of buck-type ac unit with full-bridge type ac unit, and then
expanding the limit of its duty cycle from [0,1] to [-1,1]. Within a large
smooth zone replacing six separate zones, the proposed F-DPFC can regulate the
amplitude and phase angle of grid node voltage respectively and
simultaneously, and then the active and reactive power flow in grid can be
controlled smoothly and effectively. The new structure is easy to achieve
modular expansion and enables it to operate under high voltage and power
conditions. Its structure and operational principle were analyzed in detail,
and a prototype was developed. The experimental results verified the
feasibility and the correctness of the theoretical analysis. | Chong Yao, Youjun Zhang | 2023-07-28T02:43:21Z | http://arxiv.org/abs/2307.15272v1 | # Direct Power Flow Controller with Continuous Full Regulation Range
###### Abstract
For enhancing power flow control in power transmission, a simplified new structure of direct power flow controller with continuous full regulation range (F-DPFC) was proposed. It has only one-stage power conversion and comprises a three-phase transformer in parallel and a three-phase transformer in series with the grid, three single-phase full-bridge ac units, and a three-phase filter. Compared with the previous DPFC, the proposed one dispenses with two complex three-phase selection switches which connect directly with the high-voltage grid, and has a continuous 360\({}^{\circ}\) adjustment range of compensation voltage by replacing the buck-type ac unit with a full-bridge type ac unit and then expanding the limit of its duty cycle from [0,1] to [-1,1]. Within a large smooth zone replacing six separate zones, the proposed F-DPFC can regulate the amplitude and phase angle of the grid node voltage respectively and simultaneously, and then the active and reactive power flow in the grid can be controlled smoothly and effectively. The new structure readily allows modular expansion, which enables it to operate under high-voltage and high-power conditions. Its structure and operational principle were analyzed in detail, and a prototype was developed. The experimental results verified the feasibility and the correctness of the theoretical analysis.
direct power flow controller (DPFC), compensation voltage, grid voltage, phase regulation, power transmission.
## I Introduction
How to control the power flow fast and accurately in power transmission systems is always the key to improving power grid quality and energy transfer efficiency. Power flow in a power system includes active power flow and reactive power flow, which are determined by the line impedance, transmission angle, and bus voltage [1]. Therefore, the flexible ac-transmission system (FACTS), which can adjust one or more ac-transmission system parameters to increase the stability of the power system, is widely used [2-4], among which the unified power flow controller (UPFC) [5-6] is the most common.
The UPFC consists of a static synchronous compensator (STATCOM) [7,8] and a static synchronous series compensator (SSSC) [7,9], which are connected through a large dc energy storage capacitor. STATCOM, which is one of the shunt FACTS devices [10], can regulate reactive power by being connected in parallel to the power transmission line, and SSSC is able to control the line current and active power flow as a series FACTS device [11]. By combining STATCOM and SSSC, UPFC can realize the function of adjusting active and reactive power respectively and simultaneously. However, the large dc energy storage element between the STATCOM and SSSC has a short life cycle and a high failure rate, causing high maintenance cost; this restricts the further adoption of UPFC even though it has superior control ability.
Accordingly, a three-phase power flow controller with direct PWM AC/AC converters [12] was presented, which is able to regulate the amplitude and phase of voltage without a dc energy storage element. It uses the vector synthesis of two-phase voltages to regulate the voltage and adopts a bipolar matrix chopper circuit, which is more functional than a quadrature shifter [13], to improve the regulation ability. However, this requires a complex structure and control system, and the circuit has low energy transfer efficiency. Moreover, its circuit structure is not easy to expand modularly [14,15] and is difficult to apply in high-voltage and high-power applications.
A new concept called direct power flow controller (DPFC) which does not contain large dc energy storage elements either was described in [4]. As shown in Fig. 1, DPFC is based on single-stage ACCPA [16] and it replaces the boost-type ac converter in two-stage ACCPA [17-19] with an output transformer to achieve the function of boosting voltage and has only a one-stage conversion circuit. DPFC is able to regulate the amplitude and phase angle of output compensation voltage respectively and simultaneously within a 360\({}^{\circ}\) range, and adjust active power and reactive power in the power transmission system by connecting the output compensation voltage in series to the power grid.
The regulation range of output compensation voltage in DPFC is shown in Fig.2, and we can see it is divided into six separate zones. Under a combination of selector switches, the adjustment range of the output compensation voltage phase angle is 60\({}^{\circ}\), and only with six combinations can the phase angle adjustment range be extended to 360\({}^{\circ}\). The adjustment process is only continuous within a separate zone and only by changing the combination of the select switches can the output compensation voltage be adjusted from one zone to another which is apparently not conducive to the stability of
Fig. 1: DPFC with full 360\({}^{\circ}\) regulation zone.
power transmission.
In order to solve the problem of discontinuous control processes in DPFC and simplify the structure for modular expansion, a new structure of direct power flow controller with continuous full regulation range (F-DPFC) was proposed in this paper. F-DPFC replaces the buck-type ac units in DPFC with full-bridge type ac units and removes the selector switches, which have a complex structure and high cost owing to being connected directly to the high-voltage power grid. F-DPFC also does not contain dc energy storage and has a one-stage conversion circuit. Without the help of selection switches, F-DPFC can output a compensation voltage whose phase angle can vary within 360\({}^{\circ}\), and thus adjust the amplitude and phase angle of the grid node voltage individually or simultaneously to control active power flow and reactive power flow in the power transmission system. The topology structure and operational principle of F-DPFC were described in detail. The regulation range of the output compensation voltage and the relationship between the adjustment range and the control parameters were analyzed, and then the selection of control parameters and the closed-loop control strategy were given. Finally, a prototype of F-DPFC and experimental results were presented to verify the correctness of the theory and the feasibility of F-DPFC.
## II Topology Structure and Operational Principle
### _Topology Structure_
The topology structure of F-DPFC is shown in Fig. 3. Similar to DPFC, the structure of F-DPFC also contains a three-phase transformer in parallel and a three-phase transformer in series with the grid. It replaces the buck-type ac units in DPFC with full-bridge type ac units and removes the selection switches. The circuit structure is further simplified and the circuit cost is reduced.
Fig. 4 shows the connection mode of the input transformer winding and output transformer winding. The input terminal of transformer \(T_{\rm i}\) is connected in parallel with the power grid and the output terminal of the transformer \(T_{\rm o}\) is connected in series with the power grid. In this paper, \(T_{\rm i}\) and \(T_{\rm o}\) are of \(\Delta\)/Yn11 type connection group to offset third harmonic current and voltage. Each secondary winding of transformer \(T_{\rm i}\) is connected to the input terminal of a full-bridge type ac unit. The output terminals F (as shown in Fig. 3) of the full-bridge type ac units are connected together and the other output terminals are connected to the input terminals of transformer \(T_{\rm o}\) through the three-phase output filter.
### _Operational Principle_
To facilitate analysis, we assume that:
1) the original grid voltage is sinusoidal with angular frequency \(\omega\) (=2\(\pi f\), where \(f\) is its frequency) and the original line voltages are \(u_{\rm iab}\), \(u_{\rm ibc}\) and \(u_{\rm ica}\).
2) circuit components are ideal and the low-frequency voltage drop across the inductor \(L_{\rm fx}\) (x=a, b, c, where x denotes the phase in lowercase) is not taken into account.
3) \(T_{\rm i}\) and \(T_{\rm o}\) are of \(\Delta\)/Yn11 type connection group with turn ratio \(N_{\rm i}\) and \(N_{\rm o}\) (note that points A\({}_{1}\), B\({}_{1}\) and C\({}_{1}\) or A\({}_{4}\), B\({}_{4}\) and C\({}_{4}\) are not connected together, and the secondary windings of \(T_{\rm o}\) are separately connected with power transmission line in series).
The input voltages \(u_{\rm ia1}\), \(u_{\rm ib1}\), \(u_{\rm ic1}\) and the duty ratios \(d_{\rm a}\), \(d_{\rm b}\), \(d_{\rm c}\) of power flow control units A, B, C are as below:
\[\begin{cases}u_{\rm ia1}=\frac{u_{\rm iab}}{N_{\rm i}}=\frac{U_{\rm inL}}{N_{\rm i}}\sin\omega t=U_{\rm in}\sin\omega t\\ u_{\rm ib1}=\frac{u_{\rm ibc}}{N_{\rm i}}=\frac{U_{\rm inL}}{N_{\rm i}}\sin(\omega t-120^{\circ})=U_{\rm in}\sin(\omega t-120^{\circ})\\ u_{\rm ic1}=\frac{u_{\rm ica}}{N_{\rm i}}=\frac{U_{\rm inL}}{N_{\rm i}}\sin(\omega t+120^{\circ})=U_{\rm in}\sin(\omega t+120^{\circ})\end{cases} \tag{1}\]
Fig. 4: The connection mode of input transformer and output transformer.
Fig. 3: Basic topology structure of the F-DPFC.
Fig. 2: Six basic regulation zones of DPFC.
\[\begin{cases}d_{a}=k_{0}+k_{2}\sin(2\omega t+\beta_{2})\\ d_{b}=k_{0}+k_{2}\sin[2(\omega t-120^{\circ})+\beta_{2}]\\ d_{c}=k_{0}+k_{2}\sin[2(\omega t+120^{\circ})+\beta_{2}]\end{cases} \tag{2}\]
where \(U_{\text{inL}}\) and \(U_{\text{in}}\) are the amplitude of \(u_{\text{inb}}\) and \(u_{\text{in}}\) respectively, \(\beta_{2}\) is the initial phase angle of the ac component of \(d_{\text{a}}\), and parameter \(k_{2}\) is nonnegative. In unit-A in Fig. 3, when switch unit \(S_{1}\) is always on and switch unit \(S_{3}\) is always off and switch units \(S_{2}\) and \(S_{4}\) are alternately on, 0\(<\)\(d_{\text{a}}\)\(<\)1; when switch unit \(S_{3}\) is always on and switch unit \(S_{1}\) is always off and switch units \(S_{2}\) and \(S_{4}\) are alternately on, -1\(<\)\(d_{\text{a}}\)\(<\)0. Therefore, we can get -1\(\leq\)\(d_{\text{a}}\)\(\leq\)1 and further know that \(k_{0}\)+\(k_{2}\)\(\leq\)1 at 0\(\leq\)\(k_{0}\)\(\leq\)1, or \(k_{0}\)-\(k_{2}\)\(>\)-1 at -1\(\leq\)\(k_{0}\)\(\leq\)0. The value range of \(k_{0}\), \(k_{2}\) is shown in Fig. 5.
After the output filter filters out the high-frequency components, the output voltages \(u_{\rm ox2}\) (x=a, b, c) of the full-bridge type ac units, taken between point X\({}_{3}\) and point F (not shown in Fig. 3, X=A, B, C), are obtained:
\[\begin{cases}u_{\rm oa2}=d_{\rm a}u_{\rm ia1}\\ u_{\rm ob2}=d_{\rm b}u_{\rm ib1}\\ u_{\rm oc2}=d_{\rm c}u_{\rm ic1}\end{cases} \tag{3}\]
and here \(U_{\rm rL}\) and \(\varphi_{\rm r}\) would be as:

\[\varphi_{\rm r}=\arctan\frac{\frac{3U_{\rm com}}{N_{\rm i}N_{\rm o}}\sin(\varphi_{\rm t}+60^{\circ})}{U_{\rm inL}+\frac{3U_{\rm com}}{N_{\rm i}N_{\rm o}}\cos(\varphi_{\rm t}+60^{\circ})} \tag{12}\]

\[U_{\rm rL}=\sqrt{[U_{\rm inL}+\frac{3U_{\rm com}}{N_{\rm i}N_{\rm o}}\cos(\varphi_{\rm t}+60^{\circ})]^{2}+[\frac{3U_{\rm com}}{N_{\rm i}N_{\rm o}}\sin(\varphi_{\rm t}+60^{\circ})]^{2}} \tag{13}\]
From (7), (8) we know that \(U_{\rm com}\) and \(\varphi_{\rm t}\) are controlled by three parameters (\(k_{\rm 0}\), \(k_{\rm 2}\) and \(\beta_{\rm 2}\)).
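The third-harmonic content that later sections attribute to \(k_{2}\) follows directly from (1)-(3): multiplying \(U_{\rm in}\sin\omega t\) by the duty ratio \(k_{0}+k_{2}\sin(2\omega t+\beta_{2})\) produces a fundamental component plus a third harmonic of amplitude \(k_{2}U_{\rm in}/2\). The Python sketch below is a quick numerical check of this decomposition with arbitrary example values of \(k_{0}\), \(k_{2}\) and \(\beta_{2}\); it is illustrative only and not part of the original derivation.

```python
import numpy as np

f = 50.0                                   # Hz, grid frequency (example value)
t = np.linspace(0.0, 0.2, 20000, endpoint=False)
w = 2.0 * np.pi * f

U_in, k0, k2, beta2 = 1.0, 0.4, 0.3, np.deg2rad(20.0)   # example parameters

u_ia1 = U_in * np.sin(w * t)                    # eq. (1), A phase
d_a = k0 + k2 * np.sin(2.0 * w * t + beta2)     # eq. (2)
u_oa2 = d_a * u_ia1                             # eq. (3), low-frequency content

# Fourier amplitudes at the fundamental and the third harmonic.
spectrum = np.abs(np.fft.rfft(u_oa2)) * 2.0 / len(t)
freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])
for harmonic in (1, 3):
    idx = np.argmin(np.abs(freqs - harmonic * f))
    print(f"{harmonic}f component: {spectrum[idx]:.3f}")   # ~0.376 and ~0.150 here
```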
### Expansion of Circuit Modular Structure
Due to material constraints, the voltage stress of a power electronic device cannot be significantly increased. Therefore, modular expansion is an effective way to make circuit structures suitable for high-voltage and high-power applications. The structure of modular expansion is briefly introduced here.
The number of secondary windings of \(T_{\rm i}\) and of full-bridge ac circuits in each phase is increased. The output terminals of the full-bridge ac circuits are connected in series, and the input terminals are connected to the secondary windings of the three-phase multi-winding transformer respectively. The A-phase conversion unit of the modular structure is shown in Fig. 6.
By increasing the number of modules, the maximum voltage stress that the circuit can withstand can be multiplied. The modular control strategy is flexible, and due to limited space, detailed explanations will be provided in the future.
## III Adjustment Range and Control Strategy
### Range of Grid Compensation Voltage
For the convenience of analysis, take A-phase as an example. We can simplify the A-phase expression in formula (5) to the following formula (where \(\beta=\beta_{2}+90^{\circ}\)):
\[k_{\rm d}=\frac{u_{\rm oa3}}{U_{\rm in}}=k_{\rm 0}\sin\omega t+\frac{1}{2}k_{\rm 2}\sin(\omega t+\beta) \tag{14}\]
where \(k_{\rm d}\) is a vector with the same phase as \(u_{\rm oa3}\).
The compensation voltage synthesis diagram is shown in Fig. 7. In the diagram, the vector \(OC\) represents \(u_{\rm oa3}/U_{\rm in}\); the vector \(OA\) represents the voltage component \(k_{0}\sin\omega t\) and the vector \(AC\) represents the voltage component \(0.5k_{2}\sin(\omega t+\beta)\). The following can be obtained:
\[\begin{cases}|\overrightarrow{AB}|=\cos\beta\cdot|\overrightarrow{AC}|=\frac{k_{\rm 2}\cos\beta}{2}\\ |\overrightarrow{BC}|=\sin\beta\cdot|\overrightarrow{AC}|=\frac{k_{\rm 2}\sin\beta}{2}\\ |\overrightarrow{OB}|=|\overrightarrow{OA}|+|\overrightarrow{AB}|=k_{\rm 0}+\frac{k_{\rm 2}\cos\beta}{2}\end{cases} \tag{15}\]
Further obtain amplitude ratio \(|k_{\rm d}|\) and phase \(\varphi_{\rm t}\):
\[\begin{split}|k_{\rm d}|&=|\overrightarrow{OC}|=\sqrt{|\overrightarrow{OB}|^{2}+|\overrightarrow{BC}|^{2}}\\ &=\sqrt{(k_{\rm 0}+\frac{k_{\rm 2}\cos\beta}{2})^{2}+(\frac{k_{\rm 2}\sin\beta}{2})^{2}}\\ &=\sqrt{\frac{{k_{\rm 2}}^{2}}{4}+{k_{\rm 0}}^{2}+k_{\rm 0}k_{\rm 2}\cos\beta}\end{split} \tag{16}\]
\[\begin{split}\varphi_{\rm t}&=\arctan\frac{| \overrightarrow{BC}|}{|OB|}=\arctan\frac{|\overrightarrow{BC}|}{|OA|+| \overrightarrow{AB}|}\\ &=\arctan\frac{k_{\rm 2}\sin\beta}{2k_{\rm 0}+k_{\rm 2}\cos\beta} \end{split} \tag{17}\]
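Equations (16) and (17) can be checked numerically against the direct time-domain synthesis of \(k_{\rm d}\). The Python sketch below, with arbitrary example parameters, compares the closed-form amplitude and phase with the values read off the synthesized waveform.

```python
import numpy as np

k0, k2, beta = 0.5, 0.4, np.deg2rad(90.0)   # example parameters

# Closed form, eqs. (16) and (17).
mag = np.sqrt(k2**2 / 4.0 + k0**2 + k0 * k2 * np.cos(beta))
phi = np.arctan2(k2 * np.sin(beta), 2.0 * k0 + k2 * np.cos(beta))

# Direct synthesis: k_d(t) = k0*sin(wt) + 0.5*k2*sin(wt + beta) = A*sin(wt + phi).
wt = np.linspace(0.0, 2.0 * np.pi, 100000, endpoint=False)
kd = k0 * np.sin(wt) + 0.5 * k2 * np.sin(wt + beta)
A_direct = np.max(np.abs(kd))
phi_direct = np.pi / 2.0 - wt[np.argmax(kd)]   # peak of sin(wt + phi) sits at wt = pi/2 - phi

print(f"closed form: A = {mag:.4f}, phi = {np.degrees(phi):.2f} deg")
print(f"direct:      A = {A_direct:.4f}, phi = {np.degrees(phi_direct):.2f} deg")
```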
Combining the value ranges of \(k_{0}\) and \(k_{2}\), we can further obtain the total adjustment range of the compensation voltage. Establish polar coordinates as shown in Fig. 8. Obviously, when \(k_{0}\)=0, the adjustment range is a circle with point O as the centre and 0.5 as the radius. When \(k_{0}\)=1, the adjustment range of compensation voltage is point E (1, 0\({}^{\circ}\)). Draw the tangent line to circle O through point E; the tangent point is F and \(\angle\)EOF=60\({}^{\circ}\). When 0\(<\)\(k_{0}\)<1, assuming OA=\(k_{0}\), as shown in Fig. 9, the adjustment range of the compensation voltage is a circle with point A as the centre, AC as the radius. One knows that AC\({}_{\rm max}\)=0.5(1-\(k_{0}\)) and AE=1-\(k_{0}\). In the right triangle ACE, \(\angle\)ACE=90\({}^{\circ}\), 2AC=AE. One can obtain \(\angle\)CEA=30\({}^{\circ}\) and
Fig. 8: When \(k_{\rm 0}\) is a fixed value, the adjustment range of compensation voltage.
Fig. 6: Structure of phase-A modular expansion.
Fig. 7: Schematic diagram of compensation voltage synthesis.
\(\angle\)FEO=30\({}^{\circ}\), so points F, C and E are on the same straight line. The adjustment range of compensation voltage for other zones can be obtained in the same way. If \(T_{\rm i}\) and \(T_{\rm o}\) are of \(\Delta\)Yn11 type connection group, the total compensation voltage adjustment range as shown in Fig. 9 is obtained. In addition, under different connection group of \(T_{\rm i}\) and \(T_{\rm o}\), up to three different compensation adjustment ranges can be obtained, and phase difference among them is 60\({}^{\circ}\).
### _Influence of Control Parameters on Adjustment Range_
1)The relationship between adjustable range of \(|k_{\rm d}|\) and \(k_{\rm 0}\):
It can be seen from Fig. 8 that the extreme value points of the adjustable voltage amplitude are the two intersections of circle A and the line \(\varphi\)=0\({}^{\circ}\) or 180\({}^{\circ}\), and the adjustment range is the circle with A as centre and 0.5(1-\(k_{0}\)) as radius. When the adjustable range does not include the origin O, we can get:
\[\begin{cases}|k_{\rm d}|_{\rm max}=k_{\rm 0}+\frac{k_{\rm 2}}{2}=\frac{1+k_{\rm 0}}{2}\\ |k_{\rm d}|_{\rm min}=k_{\rm 0}-\frac{k_{\rm 2}}{2}=\frac{3k_{\rm 0}-1}{2}\end{cases}\quad(\frac{1}{3}<k_{\rm 0}<1) \tag{18}\]
When the adjustable range includes the origin O, \(|OC|_{\rm min}\)=0. When -1\(<\)\(k_{\rm 0}\)\(<\)0, the situation is similar. Fig. 10 shows the relationship between the regulation range of \(|k_{\rm d}|\) and \(k_{\rm 0}\).
2)The relationship between adjustable range of \(\varphi\) and \(k_{\rm 0}\):
When the compensation voltage phase takes the extreme value and 0\(<\)\(k_{\rm 0}\)\(<\)1, its relationship with \(k_{\rm 0}\) is shown in Fig.11. At this time, \(k_{\rm 2}\)=1-\(k_{\rm 0}\), OA=\(k_{\rm 0}\), 2AC=1-\(k_{\rm 0}\). We can get:
\[\begin{cases}\varphi\in[-\arcsin\frac{1-k_{\rm 0}}{2k_{\rm 0}},\arcsin\frac{1-k_{\rm 0}}{2k_{\rm 0}}],\quad\frac{1}{3}<k_{\rm 0}<1\\ \varphi\in[0,2\pi],\quad 0<k_{\rm 0}<\frac{1}{3}\end{cases} \tag{19}\]
The situation is similar when -1\(<\)\(k_{0}\)\(<\)0, and the relationship between the regulation range of \(\varphi\) and \(k_{0}\) is shown in Fig. 12.
### _Control Strategy_
1) Selection of initial parameters
As known from formula (4), the value of \(k_{2}\) affects the magnitude of the third harmonic voltage in F-DPFC, so it should be kept as small as possible for grid stability. A reasonable parameter selection strategy is proposed below and the compensation voltage vector synthesis is shown in Fig. 13.
From Fig. 13 and formula (14), in order to minimize the value of \(k_{\rm 2}\), the vector \(AC\) should be perpendicular to the vector \(OA\), that is, \(\beta\)=90\({}^{\circ}\) or -90\({}^{\circ}\). When the output compensation voltage is required to be within the zone I and II, \(\beta\)=90\({}^{\circ}\); When the output compensation voltage is required to be in the zone III and IV, \(\beta\)=-90\({}^{\circ}\). Then obtain the value of \(k_{\rm 0}\) and \(k_{\rm 2}\) based on the amplitude and phase of desired output compensation voltage. The specific process is as follows:
For the convenience of analysis, we assume that \(T_{\rm o}\) is of \(\Delta\)/Yn11 type connection group and the turn ratio of \(T_{\rm o}\) is 1:1. Taking A-phase as an example, set the required output compensation voltage amplitude and phase as \(U_{\rm s}\) and \(\varphi_{\rm s}\) (where \(U_{\rm s}\) is the amplitude of the A-phase output compensation voltage \(u_{\rm oa}\) and \(\varphi_{\rm s}\) is the phase angle of \(u_{\rm oa}\) leading \(u_{\rm ia1}\)). Through the above analysis, one knows that
Fig. 11: Compensation voltage regulation range.
Fig. 12: Relationship between regulation range of \(\varphi\) and \(k_{\rm 0}\).
Fig. 10: Relationship between regulation range of \(|k_{\rm d}|\) and \(k_{\rm 0}\).
Fig. 9: The total adjustment range of compensation voltage.
\(k_{0}\)=\(U_{\text{s}}\cos(\varphi_{\text{s}}-30^{\circ})\) and \(k_{2}\)=\(2U_{\text{s}}\sin(\varphi_{\text{s}}-30^{\circ})\); when the output compensation voltage is required to be in zones I and II, \(\beta\)=90\({}^{\circ}\), and when it is required to be in zones III and IV, \(\beta\)=-90\({}^{\circ}\).
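The initial-parameter selection just described can be wrapped in a tiny helper. The Python sketch below implements the stated rule \(k_{0}=U_{\rm s}\cos(\varphi_{\rm s}-30^{\circ})\), \(k_{2}=2U_{\rm s}\sin(\varphi_{\rm s}-30^{\circ})\) with \(\beta=\pm 90^{\circ}\); it assumes \(U_{\rm s}\) is expressed relative to \(U_{\rm in}\), which is a normalisation choice on our part rather than something fixed by the text.

```python
import numpy as np

def select_parameters(U_s, phi_s_deg):
    """Initial (k0, k2, beta) for a desired compensation amplitude and phase.

    U_s is assumed to be normalised to U_in; phi_s_deg is the phase (deg) of the
    output compensation voltage leading u_ia1.  The 30 deg offset follows the
    rule stated in the text for the Delta/Yn11 output transformer.
    """
    theta = np.deg2rad(phi_s_deg - 30.0)
    k0 = U_s * np.cos(theta)
    k2 = 2.0 * U_s * np.sin(theta)
    beta = 90.0
    if k2 < 0.0:          # zones III/IV: flip beta and keep k2 non-negative
        k2, beta = -k2, -90.0
    return k0, k2, beta

# Example: a zone-I set-point of 0.33 p.u. at 76 degrees.
print(select_parameters(0.33, 76.0))
```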
It should be noted that the adjustment range of the output compensation voltage synthesized based on the above control strategy is slightly smaller than the total compensation voltage adjustment range analyzed previously; this range is a rhombus formed by connecting the four intersection points of the edge of the total adjustment range with the coordinate axes. When the required output adjustment point is outside this range, we can obtain the value ranges of \(k_{0}\), \(k_{2}\) and \(\beta\) based on the previous analysis. Because this area is small and a certain margin is kept during operation, we do not discuss it in detail here.
2) Closed-loop Control Strategy
The control objects of F-DPFC are the amplitude and phase angle of the output compensation voltage. Firstly, initial values are assigned to the parameters \(k_{0}\), \(k_{2}\) and \(\beta\). During the control process, the phase closed-loop is performed first, followed by the amplitude closed-loop. \(\varphi_{\text{o1}}\), the phase angle of \(u_{\text{oa}}\) leading \(u_{\text{ia1}}\), and \(U_{\text{o1}}\), the amplitude of \(u_{\text{oa}}\), are obtained by sampling. Then, by comparing them with the reference phase angle \(\varphi_{\text{ref}}\) and voltage amplitude \(U_{\text{ref}}\), the parameters \(k_{0}\) and \(k_{2}\) are continuously adjusted to generate a new duty cycle signal. If \(k_{0}\) or \(k_{2}\) is 0, the control process is simple because only one parameter needs to be adjusted, so this condition will not be discussed here. One adjustment cycle in detail is as follows:
1. The phase closed-loop: when \(\varphi_{\text{oi}}\)\(>\)\(\varphi_{\text{ref}}\), if \(k_{0}\)\(>\)0 and \(\beta\)=90\({}^{\circ}\), or \(k_{0}\)\(<\)0 and \(\beta\)=-90\({}^{\circ}\), \(k_{2}\) decreases, and if \(k_{0}\)\(<\)0 and \(\beta\)=90\({}^{\circ}\), or \(k_{0}\)\(>\)0 and \(\beta\)=-90\({}^{\circ}\), \(k_{2}\) increases; when \(\varphi_{\text{o1}}\)\(<\)\(\varphi_{\text{ref}}\) if \(k_{0}\)\(>\)0 and \(\beta\)=90\({}^{\circ}\), or \(k_{0}\)\(<\)0 and \(\beta\)=-90\({}^{\circ}\), \(k_{2}\) increases, and if \(k_{0}\)\(<\)0 and \(\beta\)=90\({}^{\circ}\), or \(k_{0}\)\(>\)0 and \(\beta\)=-90\({}^{\circ}\), \(k_{2}\) decreases. When the absolute value of the difference between \(\varphi_{\text{o1}}\) and \(\varphi_{\text{ref}}\) is maintained within a small range \(\Delta\), the ratio of \(k_{2}\) to \(k_{0}\) is saved at this time and the phase closed-loop is completed.
2. The amplitude closed-loop: when \(U_{\text{o1}}\)\(>\)\(U_{\text{ref}}\), if \(k_{0}\)\(>\)0, \(k_{0}\) decreases, and if \(k_{0}\)\(<\)0, \(k_{0}\) increases, then obtain \(k_{2}\) based on the value of \(k_{0}\) and saved ratio of \(k_{2}\) to \(k_{0}\); when \(U_{\text{o1}}\)\(<\)\(U_{\text{ref}}\), if \(k_{0}\)\(<\)0, \(k_{0}\) decreases, and if \(k_{0}\)\(>\)0, \(k_{0}\) increases, then obtain \(k_{2}\) based on the saved ratio of \(k_{2}\) to \(k_{0}\), until the amplitude closed-loop is finished.
The flowchart is shown in Fig. 14. It should be noted that start and finish are the beginning and end of a cycle rather than the adjustment process.
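As a compact restatement of one adjustment cycle, the pseudocode-style Python sketch below mirrors the two loops: phase first (steering \(k_{2}\) with the sign rules above), then amplitude (scaling \(k_{0}\) while preserving the saved \(k_{2}/k_{0}\) ratio). The measurement functions, step size, and tolerances are placeholders, not the prototype's actual DSP code.

```python
def one_control_cycle(measure_phase, measure_amplitude, phi_ref, U_ref,
                      k0, k2, beta, step=0.01, tol_phase=0.5, tol_amp=0.5):
    """One F-DPFC adjustment cycle: phase closed-loop, then amplitude closed-loop."""
    # Phase closed-loop: steer k2 until the measured phase is within tolerance.
    while abs(measure_phase(k0, k2, beta) - phi_ref) > tol_phase:
        need_larger_phase = measure_phase(k0, k2, beta) < phi_ref
        if (k0 > 0) == (beta > 0):          # sign rule from the text
            k2 += step if need_larger_phase else -step
        else:
            k2 += -step if need_larger_phase else step
    ratio = k2 / k0 if k0 != 0 else 0.0     # save the k2/k0 ratio

    # Amplitude closed-loop: scale k0 (and k2 through the saved ratio).
    while abs(measure_amplitude(k0, k2, beta) - U_ref) > tol_amp:
        if measure_amplitude(k0, k2, beta) > U_ref:
            k0 -= step if k0 > 0 else -step     # shrink |k0|
        else:
            k0 += step if k0 > 0 else -step     # grow |k0|
        k2 = ratio * k0
    return k0, k2
```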
## IV Experimental Results
Through the above theoretical analysis, the prototype as shown in Fig. 15 is established and we conduct experiments on the compensation voltage in different zones. \(T_{\text{i}}\) and \(T_{\text{o}}\) are of \(\Delta\)/Yn11 type connection group. To observe the third harmonic voltage, a small capacitor is connected in parallel at the output of each phase transformation unit. Table I lists its specifications.
Fig. 14: Closed-loop regulation flowchart.
Fig. 15: F-DPFC circuit construction.
In order to minimize the content of the third harmonic in the circuit during operation, the parameter \(k_{2}\) needs to take its minimum value, so the parameter \(\beta\) is set to 90\({}^{\circ}\) or -90\({}^{\circ}\) in this experiment. Since the fluctuation of the power grid is generally within 20%, the input voltage of the full-bridge ac unit is set to be about 70\(\sqrt{2}\) V. The parameters \(k_{0}\) and \(k_{2}\) are continuously adjusted to achieve closed-loop control using TMS320F2812 chips. The specific experimental results are as follows.
In zone I, when \(T_{\rm i}\) and \(T_{\rm o}\) are of \(\Delta\)/Yn11 type connection group and \(\varphi_{\rm ref}\) and \(U_{\rm ref}\) are respectively 76\({}^{\circ}\) and 33\(\sqrt{2}\) V (where \(U_{\rm ref}\) is the amplitude of the A-phase output compensation voltage \(u_{\rm oa}\) and \(\varphi_{\rm ref}\) is the phase angle of \(u_{\rm oa}\) leading \(u_{\rm ia1}\)), Fig. 16 shows the experimental waveforms of F-DPFC.
The input voltage \(u_{\rm ia1}\) and the output voltages \(u_{\rm oa1}\) (between point E\({}_{1}\) and point F in Fig. 3) and \(u_{\rm oa2}\) (between point A\({}_{3}\) and point F in Fig. 3) of the A-phase full-bridge ac unit in F-DPFC are shown in Fig. 16(a). Among them, \(u_{\rm oa1}\) is modulated by the A-phase ac unit with a duty cycle of \(d_{\rm a}\): \(u_{\rm oa1}\) is a high-frequency pulse sequence, and \(u_{\rm ia1}\) is its amplitude envelope curve. Most of the high-frequency components of \(u_{\rm oa1}\) are filtered by \(L_{\rm fa}\) to obtain \(u_{\rm oa2}\). \(u_{\rm oa2}\) has not only the fundamental voltage component, but also the third harmonic voltage component and a small number of high-frequency components.
The experimental waveforms of the voltages \(u_{\rm oa2}\), \(u_{\rm ob2}\) (including the third harmonic component and the fundamental component) and \(u_{\rm oa}\) (the A-phase output compensation voltage in Fig. 3) are shown in Fig. 16(b). The phase difference between \(u_{\rm oa2}\) and \(u_{\rm ob2}\) is 120\({}^{\circ}\): their third harmonic components are identical, while their fundamental components differ in phase by 120\({}^{\circ}\). Therefore, by offsetting the third harmonic, the output compensation voltage \(u_{\rm oa}\) can be obtained, where \(u_{\rm oa}\)=\(N_{\rm o}\)(\(u_{\rm oa2}\)\(-\)\(u_{\rm ob2}\)).
The experimental waveforms of \(u_{\rm iab}\) (the original grid line voltage), \(u_{\rm ob}\) (the B-phase output compensation voltage in Fig. 3) and \(u_{\rm ab}\) (the regulated grid line voltage) are shown in Fig. 16(c). One knows that \(u_{\rm ab}\)=\(u_{\rm iab}\)+\(u_{\rm oa}\)-\(u_{\rm ob}\). With the help of Code Composer Studio software and the DSP simulator, it is easy to observe variables. As can be measured from Fig. 16(c), the phase angle of \(u_{\rm ob}\) lags \(u_{\rm iab}\) by 43.5\({}^{\circ}\), so the phase angle of \(u_{\rm oa}\) leads \(u_{\rm ia1}\) by 76.5\({}^{\circ}\) (where \(u_{\rm oa}\) leads \(u_{\rm ob}\) by 120\({}^{\circ}\), and \(u_{\rm ia1}\) and \(u_{\rm iab}\) have the same phase), and the amplitude of \(u_{\rm oa}\) is 34\(\sqrt{2}\) V. \(u_{\rm ab}\) leads \(u_{\rm iab}\) by 9.6\({}^{\circ}\) and has an amplitude of 190\(\sqrt{2}\) V, smaller than that of \(u_{\rm iab}\) (the measured values are basically the same as the reference values). We can see that under the closed-loop control, the difference between the measured value and the calculated value is very small. The correctness of the F-DPFC theoretical analysis is verified.
The experimental waveforms of regulated grid line voltage \(u_{\rm ab}\), \(u_{\rm bc}\) and \(u_{\rm ca}\) (where \(u_{\rm a02}\)=\(u_{\rm a0}\)+\(u_{\rm a0}\)-\(u_{\rm a0}\), \(u_{\rm bc}\)=\(u_{\rm b02}\)+\(u_{\rm b02}\)-\(u_{\rm c02}\), \(u_{\rm a02}\)=\(u_{\rm a02}\)+\(u_{\rm a02}\)-\(u_{\rm a02}\)) are shown in Fig. 16(d), the amplitude of \(u_{\rm ab}\), \(u_{\rm bc}\) and \(u_{\rm ca}\) is 190\(\sqrt{2}\)V. \(u_{\rm a0}\) is compensated by \(u_{\rm ca}\) and \(u_{\rm a0}\) to obtain \(u_{\rm ab}\). \(u_{\rm ab}\) and \(u_{\rm a0}\) have different phases and amplitudes. When the phase difference is 120\({}^{\circ}\), \(u_{\rm ab}\), \(u_{\rm bc}\) and \(u_{\rm ca}\) are positive-sequence symmetrical (usually the original grid line voltage \(u_{\rm a0}\), \(u_{\rm a0}\) and \(u_{\rm ca}\) are symmetrical, which obviously means compensation phase voltages \(u_{\rm aa}\), \(u_{\rm b0}\), and \(u_{\rm osc}\) must be symmetrical).
In zone II, when \(T_{\rm i}\) and \(T_{\rm o}\) are of the \(\Delta\)/Yn11 connection group and \(\varphi_{\rm ref}\) and \(U_{\rm ref}\) are 170\({}^{\circ}\) and 28\(\sqrt{2}\) V respectively, the experimental waveforms of \(u_{\rm a1}\), \(u_{\rm a02}\) and \(u_{\rm ca}\) are as shown in Fig. 17. The \(u_{\rm aa}\) leads \(u_{\rm a1}\) by 172.5\({}^{\circ}\) and the amplitude of \(u_{\rm aa}\) is 28\(\sqrt{2}\) V (the measured values are basically the same as the reference values). |
2303.07381 | LRG-BEASTS: Evidence for clouds in the transmission spectrum of HATS-46
b | We have performed low-resolution ground-based spectroscopy of HATS-46 b in
transmission, using the EFOSC2 instrument on the ESO New Technology Telescope
(NTT). HATS-46 b is a highly-inflated exoplanet that is a prime target for
transmission spectroscopy, having a Jupiter-like radius (0.95 R$_\textrm{Jup}$)
but a much lower mass (0.16 M$_\textrm{Jup}$). It orbits a G-type star with a
4.7 d period, giving an equilibrium temperature of 1100 K. We observed one
transit of HATS-46 b with the NTT, with the time-series spectra covering a
wavelength range of 3900 - 9000 Angstrom at a resolution of $R \sim 380$. We
achieved a remarkably precise transmission spectrum of 1.03 $\times$ photon
noise, with a median uncertainty of $357$ ppm for $\sim 200$ Angstrom wide
bins, despite the relative faintness of the host star with $V_{\mathrm{mag}} =
13.6$. The transmission spectrum does not show strong absorption features and
retrievals favour a cloudy model, ruling out a clear atmosphere with
$3.0\sigma$ confidence. We also place a conservative upper limit on the sodium
abundance under the alternative scenario of a clear atmosphere. This is the
eighth planet in the LRG-BEASTS survey, which uses 4m-class telescopes such as
the NTT to obtain low-resolution transmission spectra of hot Jupiters with
precisions of around one atmospheric scale height. | E. Ahrer, P. J. Wheatley, S. Gandhi, J. Kirk, G. W. King, T. Louden, L. Welbanks | 2023-03-13T18:01:25Z | http://arxiv.org/abs/2303.07381v1 | # LRG-BEASTS: Evidence for clouds in the transmission spectrum of HATS-46 b
###### Abstract
We have performed low-resolution ground-based spectroscopy of HATS-46 b in transmission, using the EFOSC2 instrument on the ESO New Technology Telescope (NTT). HATS-46 b is a highly-inflated exoplanet that is a prime target for transmission spectroscopy, having a Jupiter-like radius (0.95 R\({}_{\rm Jup}\)) but a much lower mass (0.16 M\({}_{\rm Jup}\)). It orbits a G-type star with a 4.7 d period, giving an equilibrium temperature of 1100 K. We observed one transit of HATS-46 b with the NTT, with the time-series spectra covering a wavelength range of 3900 - 9000 A at a resolution of \(R\sim 380\). We achieved a remarkably precise transmission spectrum of 1.03 \(\times\) photon noise, with a median uncertainty of 357 ppm for \(\sim 200\) A wide bins, despite the relative faintness of the host star with \(V_{\rm mag}=13.6\). The transmission spectrum does not show strong absorption features and retrievals favour a cloudy model, ruling out a clear atmosphere with 3.0\(\sigma\) confidence. We also place a conservative upper limit on the sodium abundance under the alternative scenario of a clear atmosphere. This is the eighth planet in the LRG-BEASTS survey, which uses 4 m-class telescopes such as the NTT to obtain low-resolution transmission spectra of hot Jupiters with precisions of around one atmospheric scale height.
keywords: methods: observational - techniques: spectroscopic - planets and satellites: atmospheres - planets and satellites: individual: HATS-46 b
## 1 Introduction
The study of transit depth versus wavelength, or _transmission spectroscopy_, is an essential method to characterise the atmospheres of transiting exoplanets with both ground- and space-based telescopes (e.g. Charbonneau et al., 2002; Snellen et al., 2008; Bean et al., 2010; Stevenson et al., 2014; Sing et al., 2016; May et al., 2018; Weaver et al., 2021; Alam et al., 2022; The JWST Transiting Exoplanet Community Early Release Science Team et al., 2022). Hot Jupiters, especially those with inflated radii, are prime targets for transmission spectroscopy as they have large atmospheric scale heights due to their high temperatures, their hydrogen-dominated atmospheres and their low surface gravities. The sample of hot Jupiters studied to date exhibit a diverse range of atmospheric properties that can include: narrow or pressure-broadened sodium absorption (e.g. Fischer et al., 2016; Nikolov et al., 2018; Alam et al., 2021; McGruder et al., 2022), detections of other atomic species and/or broad molecular bands (e.g. Lendl et al., 2017; Carter et al., 2020; Ahrer et al., 2023; Alderson et al., 2023; Feinstein et al., 2023; Rastamkulov et al., 2023), Rayleigh scattering (e.g. Kirk et al., 2017; Chen et al., 2021) and sometimes super-Rayleigh slopes (e.g. Pont et al., 2013; Alderson et al., 2020; Ahrer et al., 2022), as well as high-altitude clouds muting absorption features (e.g. Gibson et al., 2013; Knutson et al., 2014; Kreidberg et al., 2014; Lendl et al., 2016; Louden et al., 2017; Espinoza et al., 2019; Spyratos et al., 2021).
Transmission spectroscopy of hot Jupiters provides crucial information about the composition and chemistry of these exoplanets to understand their formation and migration process (e.g. Oberg et al., 2011; Madhusudhan et al., 2014; Booth et al., 2017), as well as what processes play a role in cloud and haze formation at these hot temperatures. The processes and parameters governing the presence or absence of clouds and hazes in the atmospheres of gas giants are still debated (e.g. Heng, 2016; Fu et al., 2017; Fisher and Heng, 2018; Pinhas et al., 2019; Gao et al., 2020).
A larger sample size is needed to explore this parameter space, and the aim of the Low-Resolution Ground-Based Exoplanet Atmosphere Survey using Transmission Spectroscopy (LRG-BEASTS; 'large beats') is to contribute to that by characterising a large number of gaseous exoplanets in transmission at optical wavelengths. This includes the detection of hazes, Rayleigh scattering and grey clouds
in the atmospheres of WASP-52 b (Kirk et al., 2016; Louden et al., 2017), HAT-P-18 b (Kirk et al., 2017), WASP-80 b (Kirk et al., 2018), WASP-21 b (Alderson et al., 2020) and WASP-94A b (Ahrer et al., 2022), as well as detections of sodium absorption in the atmospheres of WASP-21 b (Alderson et al., 2020) and WASP-94A b (Ahrer et al., 2022). In addition, within LRG-BEASTS Kirk et al. (2019) analysed the atmosphere of WASP-39 b, revealing a supersolar metallicity, and Kirk et al. (2021) found tentative evidence for TiO in the atmosphere of the ultrahot Jupiter WASP-103 b.
In this paper we present the first transmission spectrum of the exoplanet HATS-46 b. Our observations were made using the EFOSC2 instrument on the New Technology Telescope (NTT) as part of the LRG-BEASTS survey. HATS-46 b was discovered within the HATSouth survey (Bakos et al., 2013) by Brahm et al. (2018). Their photometric observations, together with follow-up radial velocity measurements, confirmed HATS-46 b, which orbits its G-type host star in 4.74 days. Using TESS and Gaia data, HATS-46 b has been re-characterised by Louden & Hartman (2021), who provided revised planetary and orbital parameters: HATS-46 b has a mass of \(0.158\pm 0.042\) M\({}_{\rm Jup}\) and a radius of \(0.951\pm 0.029\) R\({}_{\rm Jup}\), orbiting at a distance of \(0.05272\pm 0.00045\) au; the equilibrium temperature was determined to be \(1082.1\pm 8.2\) K. Stellar and planetary parameters are summarised in Table 1. The star HATS-46 does not appear to be very active, as the RV measurements by Brahm et al. (2018) did not show any evidence for periodic modulation on a rotation period. Unfortunately, the signal-to-noise ratio of the RV spectra was not sufficient to place constraints on the chromospheric activity from the Ca II H&K lines (Brahm et al., 2018). The TESS light curves showed evidence for variability, with a possible period of around 15 d, but if real this signal would also have been expected to be detected in the HATSouth light curve (Louden & Hartman, 2021).
This paper is divided into the following sections. First, we describe the observations in Section 2, then discuss the data reduction and analysis in Sections 3 & 4. This is followed by our discussion and conclusions in Section 5.
## 2 Observations
We observed HATS-46 with the NTT using the EFOSC2 instrument (Buzzoni et al., 1984) on the night of 17 August 20171. EFOSC2 is mounted at the Nasmyth B focus of the ESO NTT in La Silla, Chile, which has a Loral/Lesser CCD detector with a size of \(2048\times 2048\) pixels. The overall field of view is 4.1 arcmin with a resolution of 0.12 arcseconds per pixel and a pixel binning of \(2\times 2\) was applied.
Footnote 1: Based on observations collected at the European Southern Observatory under ESO programme 099.C-0390(A) (PI: Kirk).
At our request, a slit with a width of 27 arcsec was custom-built, with the aim of avoiding differential slit losses between target and comparison star. Grism #13 was used for our spectroscopic measurements, providing a low-resolution (\(R\sim 380\)) spectrum from \(3900-9000\) A.
In total, 93 spectral frames were acquired, each with a relatively long exposure time of 240 s due to the relatively faint magnitudes of both the target and the comparison star. The readout time was 22 seconds. The airmass decreased from 1.60 at the start of the observations to a minimum of 1.12 before rising again to 1.26 by the end. The Moon was 16% illuminated and only rose towards the very end of the observing night, at a distance of 108\({}^{\circ}\) from the target.
For calibration, 67 bias frames were acquired, as well as 112 flat frames (54 lamp, 53 sky, 5 dome) and 3 HeAr arc frames, taken at the beginning of the night. While we experimented with using flat frames in our data reduction, we did not use any in our final reduction as we found it to increase the noise in our data. This is in line with previous reports of similar analyses, both by the LRG-BEASTS and ACCESS surveys (e.g. Rackham et al., 2017; Bixel et al., 2019; Weaver et al., 2020; Kirk et al., 2021).
A nearby star (UCAC4 169-000364) at a distance of 1 arcmin to the target star HATS-46 served as a comparison star and is not known to be a variable star. The two stars are a good match in both magnitude (\(\Delta V_{\rm mag}=0.87\)) and colour (\(\Delta(B-V)=0.09\)), thus well-suited for differential spectro-photometry.
## 3 Data reduction
LRG-BEASTS observations are commonly reduced using a custom-built Python pipeline, which is described in detail by Kirk et al. (2018). The data for HATS-46 have been reduced following this pipeline, but with modifications to the cosmic ray removal and wavelength calibration, introduced in Ahrer et al. (2022). In the following we summarise the reduction steps.
First, the bias frames were median-combined to produce a master bias, which is subtracted from each science frame by the extraction script. Before the spectra were extracted from the individual frames, pixels affected by cosmic rays were identified and replaced with the median of the surrounding pixels.
An aperture width of 32 pixels was applied to extract the spectral counts from each star. To fit the sky background we used a second order polynomial, which was fitted to regions of 50 pixels either side of the stars at a distance of 5 pixels from the edge of the aperture. Outliers of more than three standard deviations were masked from the fit. Extracted properties such as airmass, pixel shift along the slit, Full Width Half Maximum (FWHM), normalised sky background and differential white-light flux and their changes throughout the night are displayed in Fig. 1. Example spectra are plotted in Fig. 2.
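For illustration, the sky fit and aperture sum described above can be sketched in a few lines of Python; this is a schematic rather than the pipeline code itself (function and variable names are illustrative), with only the numerical choices (32-pixel aperture, 50-pixel sky regions offset by 5 pixels from the aperture edge, a second-order polynomial, and 3\(\sigma\) clipping) taken from the text.

```python
import numpy as np

def extract_row(row, centre, ap_half=16, sky_offset=5, sky_width=50, clip=3.0):
    """Sky-subtract and sum one cross-dispersion cut of a 2-D spectral frame.

    row    : 1-D array of counts along the spatial direction at one wavelength
    centre : trace centre (pixel) of the star at this wavelength
    """
    cols = np.arange(row.size)
    dist = np.abs(cols - centre)
    in_aperture = dist <= ap_half
    in_sky = (dist > ap_half + sky_offset) & (dist <= ap_half + sky_offset + sky_width)

    # Iterative 3-sigma clipping before fitting a 2nd-order polynomial to the sky
    x, y = cols[in_sky], row[in_sky].astype(float)
    for _ in range(3):
        coeff = np.polyfit(x, y, 2)
        resid = y - np.polyval(coeff, x)
        keep = np.abs(resid) < clip * resid.std()
        x, y = x[keep], y[keep]

    background = np.polyval(coeff, cols)
    return np.sum(row[in_aperture] - background[in_aperture])
```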
Wavelength calibration follows the spectral extractions and is a
\begin{table}
\begin{tabular}{l c} \hline \hline
**Parameter** & **Value** \\ \hline Stellar parameters & \\ \hline \(V_{\rm mag}\) & \(13.634\pm 0.050\) \\ Spectral type & G \\ Temperature \(T_{\rm eff}\) (K) & \(5451\pm 19\) \\ Age (Gyr) & \(8.4\pm 1.9\) \\ Surface gravity log \(g\) (log\({}_{10}\)(cm s\({}^{-2}\))) & \(4.474\pm 0.019\) \\ Metallicity [Fe/H] & \(-0.029\pm 0.039\) \\ Mass (\(M_{\odot}\)) & \(0.869\pm 0.023\) \\ Radius (\(R_{\odot}\)) & \(0.894\pm 0.010\) \\ \hline Planetary parameters & \\ \hline Period (d) & \(4.7423749\pm 0.0000043\) \\ Mass (\(M_{\rm Jup}\)) & \(0.158\pm 0.042\) \\ Radius (\(R_{\rm Jup}\)) & \(0.951\pm 0.029\) \\ Semi-major axis (au) & \(0.05272\pm 0.00045\) \\ Equilibrium temperature \(T_{\rm eq}\) (K) & \(1082.1\pm 8.2\) \\ Inclination (\({}^{\circ}\)) & \(86.97\pm 0.10\) \\ Surface gravity log g (log\({}_{10}\)(cm s\({}^{-2}\))) & \(2.64\pm 0.14\) \\ \hline \end{tabular}
\end{table}
Table 1: Parameters for the star HATS-46 and its planet HATS-46 b, with \(V_{\rm mag}\) and spectral type as determined by Brahm et al. (2018) and all other parameters as revised by Louden & Hartman (2021).
two-step process. First, RASCAL (Veitch-Michaelis & Lam, 2019) was utilised to find a wavelength solution using the HeAr arc frames. The second step is to optimise the wavelength calibration by fitting the positions of the stellar absorption lines in each frame, adjusting the solution, and then saving the wavelength solution for each frame individually. This allowed us to account for wavelength drifts between the frames throughout the night, which were of the order of \(\sim 5\) pixels or \(\sim 20\) A.
Lastly, the spectra were binned into 26 wavelength bins, computed by summing the flux within the corresponding wavelength range of each frame and dividing by the comparison star's flux in the same wavelength bin to correct for the effects of the Earth's atmosphere. Similarly, a white-light light curve was computed by defining one single bin across the whole wavelength range. Bin widths of \(\sim 200\) Å (avoiding the edges of strong stellar absorption lines) were applied across the whole spectral range, with the exception of two small ranges where we searched for absorption by sodium and potassium (see Fig. 2).
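A schematic of this differential binning step is given below; it is not the pipeline code itself, and for simplicity it assumes a single wavelength solution for all frames, whereas in practice a per-frame solution is used.

```python
import numpy as np

def bin_light_curve(wav, target_flux, comp_flux, lo, hi):
    """Differential flux in one wavelength bin for a stack of extracted spectra.

    wav         : (n_pix,) wavelength solution (Angstrom)
    target_flux : (n_frames, n_pix) extracted target spectra
    comp_flux   : (n_frames, n_pix) extracted comparison-star spectra
    lo, hi      : bin edges (Angstrom)
    """
    in_bin = (wav >= lo) & (wav < hi)
    # Sum the flux in the bin for each frame and divide by the comparison star
    return target_flux[:, in_bin].sum(axis=1) / comp_flux[:, in_bin].sum(axis=1)
```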
Observations with EFOSC2 at wavelengths \(>7200\) A are subject to fringing effects (see Fig. 2). We found that correcting for these effects in the individual spectra using flat fields was not possible as the fringing changed in amplitude and phase during the night and the acquired flat frames were taken before the observations started.
## 4 Data analysis
### Transit model
Each transit light curve was described using the batman Python package (Kreidberg, 2015) in combination with the analytic light curves from Mandel & Agol (2002) and fitted using the nested sampling algorithm PolyChord (Handley et al., 2015). First, the white-light light curve was fitted using the ratio of planet to star radius \(R_{p}/R_{*}\), the inclination of the system \(i\), the scaled stellar radius \(a/R_{*}\), the time of mid-transit \(T_{C}\) and the two quadratic limb-darkening coefficients \(u1\) and \(u2\). We computed the limb-darkening coefficients with the Limb-Darkening Toolkit (LDTk) package (Parviainen & Aigrain, 2015), which uses PHOENIX models (Husser et al., 2013) based on the stellar parameters to determine \(u1\) and \(u2\) and their errors. One of them (\(u2\)) was held fixed to the generated value to avoid degeneracy, while the other (\(u1\)) was fitted for using a uniform prior with four times the generated error (see Table 2) to allow for small inconsistencies between the stellar model and the observation. This quadratic limb-darkening law provides a good fit to the data (see Section 4.2), and the fitted values for \(u1\) were consistent with the model prediction. The Kipping parameterisation (Kipping, 2013) was also tested to check for potential effects in the transmission spectrum due to the chosen limb-darkening parameterisation, but we can confirm that this is not the case.
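For reference, evaluating such a quadratic limb-darkened transit model with batman can be sketched as follows, using representative values from Tables 1 and 2; the prior set-up and the PolyChord sampling are omitted, and the circular orbit is an assumption made only for this illustration.

```python
import numpy as np
import batman

params = batman.TransitParams()
params.t0 = 0.0                    # time of mid-transit (d, relative)
params.per = 4.7423749             # orbital period (d), Table 1
params.rp = 0.1125                 # Rp/R*, white-light fit (Table 2)
params.a = 13.94                   # a/R* (Table 2)
params.inc = 87.60                 # inclination (deg, Table 2)
params.ecc = 0.0                   # circular orbit assumed
params.w = 90.0                    # argument of periastron (deg)
params.u = [0.547, 0.1171]         # quadratic limb darkening u1, u2 (Table 2)
params.limb_dark = "quadratic"

t = np.linspace(-0.1, 0.1, 1000)   # times around mid-transit (d)
model = batman.TransitModel(params, t)
flux = model.light_curve(params)   # relative flux of the transit model
```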
All priors for the system parameters can be found in Table 2, which were chosen to be uniform and wide (\(\pm 5\sigma\)) centred on the previously reported literature values (Table 1; Louden & Hartman, 2021). Depending on the detrending method, additional parameters were added to the fitting (introduced in the following section).
The determined values for \(a/R_{*}\), \(i\) and \(T_{C}\) from the white-light light curve fitting (Table 2) were then held fixed for the spectroscopic light curve fitting, which allowed us to fit for relative changes in transit depths over the wavelength range. Thus the fitting parameters for each of the 26 binned light curves were transit depth \(R_{p}/R_{*}\), limb-darkening coefficient \(u1\) and additional noise modelling parameters.
### Light curve fitting
For detrending the white-light light curve, various approaches were investigated, e.g. different combinations of kernels and kernel inputs for a Gaussian Process (GP), and 1st- and 2nd-order polynomials using airmass, FWHM, derotator angle, etc. However, all of these models retrieved very low amplitudes for their respective noise components; e.g. the amplitude of the best-fitting GP model in the top panel of Fig. 3 is 0.062 %, compared to the transit depth of 1.287 %. In addition, the Bayesian evidence values for each of these fits did not statistically favour a particular GP model or parametric fitting model. The differences in Bayesian evidence across all wavelengths averaged 0.5 (0.67\(\sigma\)) and never exceeded 1 (\(<1.15\sigma\)).
Figure 1: From top to bottom: variations of airmass, pixel shift along the X axis, FWHM, sky background and differential flux across the night. In the middle panels, the target is indicated with dark blue X symbols, and the comparison star with orange + symbols.
Figure 2: Normalised spectra of comparison star (orange) and target star (dark blue), as well as the expected strong telluric lines (black) in the redder part of the wavelength range. Wavelength bin edges are indicated with dashed black lines. Green lines indicate the position of the sodium doublet (5890, 5895 Å) and potassium doublet (7665, 7699 Å).
Consequently, we opted to use only a linear dependence on the FWHM for detrending the white-light light curve (see bottom panel in Fig. 3).
To determine the transit depths for each wavelength bin, we fitted the individual light curves of the 26 bins with a transit model and a detrending model. We conducted an investigation of the systematics modelling, similar to the one done for the white-light light curve fit. This was to ensure that our transmission spectrum is independent of the choice of noise modelling, and to provide the best estimate of the uncertainties.
The light curves show very little evidence for systematic trends such as drifts or correlated noise (see the left panel of Fig. 4 for the raw light curves). We experimented with simple models to account for the small noise amplitudes, as well as using a transit model without any systematics modelling at all. First, linear models in time, airmass and FWHM were investigated, with the linear-in-FWHM model performing best according to the Bayesian evidence value of each spectroscopic light curve fit and an average fitted noise amplitude of 0.06% or 600 ppm. In addition, we looked into GP models and sampled different types of kernels and kernel inputs, out of which an exponential-squared kernel with FWHM as input was the best choice, with an average fitted GP amplitude of 0.03% or 300 ppm. As both the linear-in-FWHM and GP models resulted in similar transit depths and small noise amplitudes, we chose the first, parametric model over the GP model due to its lower uncertainties in the transit depths. This results in transit-depth uncertainties with an average precision of 1.03 \(\times\) the photon noise. The light curves and respective fits are shown in Fig. 4, together with the residual scatter of the fits and their respective Root Mean Square (RMS) values.
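The preferred linear-in-FWHM systematics model amounts to multiplying the transit model by a linear function of the seeing. A schematic least-squares version of this step (our own illustration, with the transit parameters held fixed and all names illustrative) is:

```python
import numpy as np

def fwhm_systematics_fit(flux, transit_model, fwhm):
    """Fit flux ~ transit_model * (c0 + c1 * dFWHM) for the two linear coefficients.

    transit_model is the transit light curve evaluated at the same times as flux.
    """
    dfwhm = fwhm - np.mean(fwhm)
    basis = np.column_stack([transit_model, transit_model * dfwhm])
    (c0, c1), *_ = np.linalg.lstsq(basis, flux, rcond=None)
    return c0, c1

def detrended_light_curve(flux, transit_model, fwhm):
    """Divide out the fitted linear-in-FWHM component, leaving the transit signal."""
    c0, c1 = fwhm_systematics_fit(flux, transit_model, fwhm)
    return flux / (c0 + c1 * (fwhm - np.mean(fwhm)))
```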
The previously described models all favoured only small variations, with FWHM as the detrending source for all spectroscopic bins. This led us to investigate using a common noise model (e.g. as used in Sing et al., 2012; Gibson et al., 2013; Lendl et al., 2016; Nikolov et al., 2016; Nortmann et al., 2016; Huitson et al., 2017; Todorov et al., 2019; Wilson et al., 2020; Kirk et al., 2021; McGruder et al., 2022) in the hope of reducing our uncertainties and removing common noise structures potentially dominating the systematics. In this method the GP component from the white-light light curve fit is subtracted from the spectroscopic light curves before fitting them individually. However, this did not have the desired effect of improving the noise modelling and on average resulted in larger uncertainties. Therefore we did not pursue this method further.
All computed transmission spectra using the GP model, the polynomial model, the common noise model and one without any detrending at all i.e. solely a transit model, are shown in Fig. 5. This demonstrates that our resulting transmission spectrum is independent of our choice of noise modelling. Following the points made above about each detrending approach, we selected a simple polynomial model, 'Linear in FWHM', as the preferred detrending method. The
\begin{table}
\begin{tabular}{l l c c} \hline Parameter & \multicolumn{2}{c}{Prior distribution and range} & Fitted values \\ \hline Scaled stellar radius \(a/R_{*}\) & Uniform & \(a/R_{*}\pm 5\sigma_{a/R_{*}}\) & \(13.94^{+0.24}_{-0.05}\) \\ Inclination \(i\) (\({}^{\circ}\)) & Uniform & \(i\pm 5\sigma_{i}\) & \(87.60^{+0.31}_{-0.03}\) \\ Time of mid-transit \(T_{C}\) (BJD) & Uniform & \(0.9\times T_{C}\), \(1.1\times T_{C}\) & \(2457983.70725^{+0.00035}_{-0.00046}\) \\ Transit depth \(R_{p}/R_{*}\) & Uniform & \(R_{p}/R_{*}\pm 5\sigma_{R_{p}/R_{*}}\) & \(0.11250^{+0.00183}_{-0.00083}\) \\ Limb-darkening coefficient \(u1\) & Uniform & \(u1\pm 4\sigma_{u1}\) & \(0.547\pm 0.014\) \\ Limb-darkening coefficient \(u2\) & Fixed & – & 0.1171 \\ \hline \end{tabular}
\end{table}
Table 2: Parameter values obtained from the white-light curve fitting and the respective priors. Values for semi-major axis \(a\), radius of the star \(R_{*}\) and radius of the planet \(R_{p}\) and inclination \(i\) are listed in Table 1. The retrieved values for the parameters \(a/R_{*}\), \(i\) and \(T_{C}\) listed here were fixed for the spectroscopic light curve fitting.
Figure 3: The white-light light curve fitted with a transit and two different models to account for systematics. The best-fit model is plotted in green, while the individual components of the model are plotted in dashed turquoise for the transit model and dark blue for the respective systematics models. In the top panels, labelled (a), we use a GP model for systematics. In the lower panels, labelled (b), we use a linear function of FWHM.
final transmission spectrum in tabular form is displayed in Table 3. Note that for our final spectrum we chose to dismiss the relatively large transit depth of the bin centred on the potassium doublet due to the high chance of it being affected by the nearby strong telluric signal (O\({}_{2}\) A-band). Previous studies have come to similar conclusions when probing for potassium absorption with ground-based instruments (e.g. Kirk et al., 2017; McGruder et al., 2022).
### Atmospheric Retrieval
We retrieve the transmission spectrum of HATS-46 b using the HyDRA (Gandhi & Madhusudhan, 2018) and Aurora (Welbanks & Madhusudhan, 2021) atmospheric retrieval codes. Our model uses 14 free parameters which describe the atmospheric composition, thermal profile and cloud/haze properties (shown in Table 4) to generate spectra of HATS-46 b to compare against the observations. We use high-temperature molecular line lists to compute the cross sections and hence opacity for the spectrally active species, utilising the Kurucz line list for the atomic species Na and K (Kurucz & Bell, 1995), and the ExoMol POKAZATEL line list for H\({}_{2}\)O (Tennyson et al., 2016; Polyansky et al., 2018). We spectrally broaden each line in the line list with both pressure and temperature, resulting in a Voigt profile (see e.g. Gandhi et al., 2020). We also include collisionally induced absorption from H\({}_{2}\)-H\({}_{2}\) and H\({}_{2}\)-He interactions (Richard et al., 2012), as well as Rayleigh scattering due to H\({}_{2}\).
In addition to these sources of opacity we also include 4 free parameters to model and fit for a partially cloudy and/or hazy atmosphere, as any clouds/hazes can have a strong influence on the overall spectrum. We include a grey (wavelength independent) cloud deck, P\({}_{\rm cl}\), and two parameters which determine a wavelength dependent haze, with \(\alpha_{\rm haze}\) the strength and \(\gamma_{\rm haze}\) the wavelength dependence of the haze (see e.g., Pinhas et al., 2018). Finally, we include the cloud/haze fraction, \(\phi_{\rm cl}\), as a free parameter, with the prior ranging from 0, representing a clear atmosphere, to 1, a fully cloudy/hazy atmosphere (see Table 4).
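To illustrate this parametrisation, a schematic of the haze cross-section and the partial-cloud combination is given below; the reference cross-section and wavelength are values commonly adopted for this parametrisation (approximately the H\({}_{2}\) Rayleigh cross-section at 350 nm) and should be read as assumptions rather than the exact constants used in the retrieval codes, and the function names are ours.

```python
import numpy as np

SIGMA0 = 5.31e-31   # m^2: approximate H2 Rayleigh cross-section at the reference
WAV0 = 0.35         # micron reference wavelength (assumed values)

def haze_cross_section(wav_um, log_a_haze, gamma_haze):
    """sigma(lambda) = a_haze * sigma0 * (lambda / lambda0)^gamma_haze."""
    return (10.0 ** log_a_haze) * SIGMA0 * (wav_um / WAV0) ** gamma_haze

def partial_cloud_depth(depth_cloudy, depth_clear, phi_cl):
    """Linear mix of cloudy/hazy and clear transit depths with coverage fraction phi_cl."""
    return phi_cl * np.asarray(depth_cloudy) + (1.0 - phi_cl) * np.asarray(depth_clear)
```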
We model the temperature profile of the atmosphere using the method described in Madhusudhan & Seager (2009). This parametrisation breaks the atmosphere into three distinct layers, with the temperature at the top of the model atmosphere included as a free parameter. We also retrieve the transition pressures P\({}_{1}\) between the top layers 1 and 2 and P\({}_{3}\) between layers 2 and 3. The top two layers have temperature-pressure gradients \(\alpha_{1}\) and \(\alpha_{2}\) as free parameters.
Figure 4: Left: Our fits (red) of the undetrended spectroscopic light curves (black) using a transit model and a linear in FWHM for detrending to the data with their respective centre wavelengths (blue end at the top) displayed on the right vertical axis. Right: Residuals of the corresponding light curve fitting. The scatter is quantified in the form of the RMS on the right vertical axis.
The final deepest layer of the atmosphere is fixed to an isotherm, and continuity of the temperature between these layers results in 6 free parameters for the temperature profile. We restrict our parametrisation to only allow non-inverted or isothermal temperature profiles given that we do not expect stratospheres for planets with such temperatures (e.g. Fortney et al., 2008), similar to previous work with transmission retrievals (e.g. Pinhas et al., 2019). We also include an additional free parameter for the reference pressure, P\({}_{\rm ref}\), the point in the atmosphere where the radius of the planet is set. We model the atmosphere between 100-10\({}^{-6}\) bar with 100 layers evenly spaced in log pressure, and model the spectrum with 4000 wavelength points between 0.39-0.9 \(\mu\)m. Our Bayesian analysis is carried out using the Nested Sampling algorithm MultiNest(Feroz & Hobson, 2008; Feroz et al., 2009; Buchner et al., 2014).
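For concreteness, the three-layer profile described above can be written as in the following sketch (our own implementation of the parametrisation, with continuity imposed at the layer boundaries; the example values are the median constraints from Table 4 and the variable names are illustrative):

```python
import numpy as np

def tp_profile(P, T_top, alpha1, alpha2, P1, P2, P3, P_top=1e-6):
    """Three-layer temperature-pressure profile (pressures in bar, temperatures in K).

    Layer 1 (P_top..P1): T = T_top + (ln(P / P_top) / alpha1)^2
    Layer 2 (P1..P3)   : T = T2    + (ln(P / P2)    / alpha2)^2
    Layer 3 (P  > P3)  : isothermal at T3
    T2 and T3 follow from continuity at P1 and P3.
    """
    P = np.asarray(P, dtype=float)
    T2 = T_top + (np.log(P1 / P_top) / alpha1) ** 2 - (np.log(P1 / P2) / alpha2) ** 2
    T3 = T2 + (np.log(P3 / P2) / alpha2) ** 2
    layer1 = T_top + (np.log(P / P_top) / alpha1) ** 2
    layer2 = T2 + (np.log(P / P2) / alpha2) ** 2
    return np.where(P <= P1, layer1, np.where(P <= P3, layer2, T3))

# Example: 100 layers evenly spaced in log pressure between 100 and 1e-6 bar
pressures = np.logspace(2, -6, 100)
temperatures = tp_profile(pressures, T_top=1167.0, alpha1=0.67, alpha2=0.61,
                          P1=10**-1.7, P2=10**-4.1, P3=10**0.6)
```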
The retrieved constraints are shown in Table 4, and the posterior distribution is shown in the Appendix, Fig. A1. For our retrievals we considered two competing scenarios: a cloudy/hazy atmosphere and a relatively-clear atmosphere. The first case, where clouds mask atomic and molecular species in the transmission spectrum of HATS-46 b, is statistically preferred to 3.0\(\sigma\) due to the relatively featureless spectrum, when using Bayesian model evidence comparisons (e.g., Benneke & Seager, 2013; Welbanks & Madhusudhan, 2021). In the alternative, less statistically preferred scenario of a clear atmosphere, where clouds do not mask the atomic and molecular species, we can place constraints on the abundance of K and Na. There is no visible feature of Na in the spectrum, hence we place an upper limit on Na abundance of log(Na) \(<-4.45\) to 3\(\sigma\), i.e., less than 20\(\times\)solar Na abundance for this cloud-free scenario. This is a conservative upper limit, since the lack of features in the transmission spectrum drives the atmospheric temperatures in the model to the lower end of the prior, which decreases the atmospheric scale height and thereby the strength of features. There is therefore a degeneracy between temperature and abundance, and an atmospheric temperature closer to the equilibrium temperature would give a tighter limit on abundance.
Additionally, we assess the impact of unocculted star spots and faculae on the transmission spectrum of HATS-46 b using Aurora (Welbanks & Madhusudhan, 2021). We allow for the possibility of a contaminated stellar photosphere and retrieve three additional parameters on top of the fiducial model described above. These are the photospheric temperature (Gaussian prior centred at the effective temperature of the star with a width of 100 K), the fraction of unocculted spots or faculae (uniform prior between 0 and 50 %), and the temperature of these inhomogeneities (uniform prior from 0.5 to 1.5 times the effective temperature of the star). The priors are in line with those recommended by Pinhas et al. (2018). The retrieved stellar properties are in agreement with the possibility of a spotless star. The retrieved photospheric temperature of HATS-46 is consistent with the reported value in Table 1, with a relatively low fraction of spots (i.e., a 2\(\sigma\) upper limit of \(\lesssim\) 22%) whose temperatures are consistent with the photospheric stellar temperature at 2\(\sigma\). The presence of stellar heterogeneities is not preferred since its Bayesian evidence value is lower relative to our fiducial model. Based on these observations and the models considered here, we find no evidence for stellar contamination affecting our observations.
## 5 Discussion & Conclusions
We presented the analysis and results of spectroscopic NTT/EFOSC2 data of HATS-46 b in transmission. The inflated, Jupiter-sized exoplanet orbits its relatively faint (V\({}_{\rm mag}\) = 13.6) G-type host star
Figure 5: Transmission spectra of HATS-46 b using NTT/EFOSC2 observations. Median precisions of transit depths for \(\sim 200\) Å wide bins are quoted in brackets in the description respectively. The orange and blue colours represent the resulting transmission spectrum using Gaussian Process (387 ppm) and a linear in FWHM (357 ppm) to account for systematics modelling, respectively. The black represents the case for when not using any noise modelling i.e. solely a transit model (326 ppm). The green indicates a model where the GP component fitted to the white-light light curve was subtracted (common noise model) from the spectroscopic light curves and then a linear in FWHM was used to fit residual systematics (358 ppm). The ‘Linear in FWHM’ transmission spectrum is used for the retrieval analysis (see text for further details), but note that the bin centred on the potassium doublet (7665, 7699 Å) is not included as it is affected by the close-by strong telluric O\({}_{2}\) line.
\begin{table}
\begin{tabular}{c c c c} \hline Bins (Å) & \(R_{p}/R_{*}\) & u1 & u2 (fixed) \\ \hline
3900 - 4200 & \(0.1137^{+0.0018}_{-0.0017}\) & \(0.92\pm 0.02\) & -0.0737 \\
4200 - 4440 & \(0.1157\pm 0.0017\) & \(0.87\pm 0.02\) & -0.0523 \\
4420 - 4680 & \(0.1130\pm 0.0015\) & \(0.79\pm 0.02\) & 0.0380 \\
4680 - 4910 & \(0.1134^{+0.0014}_{-0.0013}\) & \(0.73\pm 0.02\) & 0.0726 \\
4910 - 5120 & \(0.1151^{+0.0012}_{-0.0015}\) & \(0.71\pm 0.02\) & 0.0721 \\
5120 - 5350 & \(0.1154\pm 0.0014\) & \(0.67\pm 0.02\) & 0.0837 \\
5350 - 5570 & \(0.1117^{+0.0014}_{-0.0015}\) & \(0.63\pm 0.01\) & 0.1050 \\
5570 - 5818 & \(0.1139\pm 0.0013\) & \(0.59\pm 0.01\) & 0.1241 \\
5818 - 5868 & \(0.1155^{+0.0030}_{-0.0030}\) & \(0.58\pm 0.01\) & 0.1330 \\
5868 - 5918 & \(0.1117\pm 0.0029\) & \(0.58\pm 0.01\) & 0.1209 \\
5918 - 5968 & \(0.1156^{+0.0028}_{-0.0029}\) & \(0.58\pm 0.01\) & 0.1295 \\
5968 - 6190 & \(0.1121^{+0.0018}_{-0.0016}\) & \(0.55\pm 0.01\) & 0.1336 \\
6190 - 6400 & \(0.1136^{+0.0016}_{-0.0015}\) & \(0.53\pm 0.01\) & 0.1364 \\
6400 - 6610 & \(0.1128\pm 0.0015\) & \(0.50\pm 0.01\) & 0.1512 \\
6610 - 6820 & \(0.1146\pm 0.0015\) & \(0.50\pm 0.01\) & 0.1433 \\
6820 - 7040 & \(0.1136\pm 0.0015\) & \(0.48\pm 0.01\) & 0.1446 \\
7040 - 7240 & \(0.1146^{+0.0017}_{-0.0017}\) & \(0.47\pm 0.01\) & 0.1449 \\
7240 - 7440 & \(0.1130^{+0.0018}_{-0.0019}\) & \(0.45\pm 0.01\) & 0.1452 \\
7440 - 7649 & \(0.1157^{+0.0020}_{-0.0021}\) & \(0.44\pm 0.01\) & 0.1464 \\
7749 - 7950 & \(0.1127^{+0.0023}_{-0.0023}\) & \(0.42\pm 0.01\) & 0.147 \\
7950 - 8150 & \(0.1111\pm 0.0026\) & \(0.42\pm 0.01\) & 0.1476 \\
8150 - 8350 & \(0.1128^{+0.0029}_{-0.001}\) & \(0.40\pm 0.01\) & 0.1482 \\
8350 - 8550 & \(0.1120^{+0.0030}_{-0.0011}\) & \(0.38\pm 0.01\) & 0.1474 \\
8550 - 8770 & \(0.1111^{+0.0033}_{-0.0033}\) & \(0.37\pm 0.01\) & 0.1488 \\
8770 - 9000 & \(0.1177\pm 0.0035\) & \(0.37\pm 0.01\) & 0.1494 \\ \hline \end{tabular}
\end{table}
Table 3: Retrieved transmission spectrum of HATS-46 b in tabulated form using the ‘Linear in FWHM’ detrending approach, as plotted in Fig. 5, excluding the bin centred on the K doublet.
in a 4.7-day period and has an equilibrium temperature of 1100 K (Louden & Hartman, 2021).
One transit was observed with NTT/EFOSC2 using the method of long-slit spectroscopy and a comparison star was used to conduct differential spectroscopy. A total of 93 spectral frames with exposure times of 240 s were acquired. The resulting light curves did not show noise structures beyond a weak dependence on seeing, with fitted average amplitudes of 600 ppm for our best noise model, which included a linear detrend against FWHM.
We extracted the transmission spectrum in 26 bins, covering the wavelength range of 3900 - 9000 Å with a median transit depth uncertainty of 357 ppm for the \(\sim 200\) Å wide bins. The measured transmission spectrum is relatively featureless: it shows neither a sodium feature nor a scattering slope. The fitted, relatively large transit depth at the wavelength of the potassium doublet was dismissed as an effect of the nearby strong telluric signal due to the O\({}_{2}\) A-band. Our atmospheric retrieval analysis of the transmission spectrum of HATS-46 b favours a cloudy atmosphere with 3.0\(\sigma\) confidence. In an alternative cloud-free model we place a conservative upper limit on the Na abundance of 20\(\times\) solar (3\(\sigma\) confidence). Including stellar activity in our retrievals results in lower Bayesian evidence and no meaningful constraints on the additional parameters. If activity were to play a role in the shape of our transmission spectrum, we would expect to retrieve constraints on the spot coverage fraction or the temperature of the spots. Thus the cloudy atmosphere model without the additional stellar activity parameters is favoured.
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Parameter** & **Prior Range** & **Retrieval Constraint** \\ \hline \(\log(X_{\rm{H_{2}O}})\) & -15 \(\rightarrow\) +1 & \(-8.4^{+4.8}_{-4.2}\) \\ \(\log(X_{\rm{Na}})\) & -15 \(\rightarrow\) -1 & \(-10.1^{+1.5}_{-3.0}\) \\ \(\log(X_{\rm{K}})\) & -15 \(\rightarrow\) +1 & \(-8.6^{+3.3}_{-4.0}\) \\ \(T_{\rm{top}}\) / K & 750 \(\rightarrow\) 2500 & 1167\({}^{+530}_{-300}\) \\ \(\alpha_{1}\) / K\({}^{-\frac{1}{2}}\) & 0 \(\rightarrow\) 1 & 0.67\({}^{+0.21}_{-0.23}\) \\ \(\alpha_{2}\) / K\({}^{-\frac{1}{2}}\) & 0 \(\rightarrow\) 1 & 0.61\({}^{+0.25}_{-0.27}\) \\ \(\log(P_{1}\)/bar) & -6 \(\rightarrow\) 2 & \(-1.7\pm 1.7\) \\ \(\log(P_{2}\)/bar) & -6 \(\rightarrow\) 2 & \(-4.1^{+1.6}_{-1.3}\) \\ \(\log(P_{3}\)/bar) & -2 \(\rightarrow\) 2 & 0.60\({}^{+0.90}_{-1.35}\) \\ \(\log(P_{\rm{ref}}\)/bar) & -4 \(\rightarrow\) 2 & \(-2.51^{+1.02}_{-0.86}\) \\ \(\log(\alpha_{\rm{haze}})\) & -4 \(\rightarrow\) 6 & \(-0.0^{+2.8}_{-2.5}\) \\ \(\gamma_{\rm{haze}}\) & -20 \(\rightarrow\) -1 & \(-11.3^{+6.3}_{-5.5}\) \\ \(\log(P_{\rm{cl}}\)/bar) & -6 \(\rightarrow\) 2 & \(-4.42^{+1.24}_{-0.94}\) \\ \(\phi_{\rm{cl}}\) & 0 \(\rightarrow\) 1 & 0.79\({}^{+0.13}_{-0.19}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Parameters and uniform prior ranges for our retrieval. We retrieve the Na, K and H\({}_{2}\)O abundances, temperature profile, and partial cloud/haze parameters. Our temperature profile includes 6 free parameters, and our cloud/haze parametrisation includes 4 free parameters (see Section 4.3).
Figure 6: Transmission spectrum of HATS-46 b as observed by NTT/EFOSC2 and using linear in FWHM detrending (black), and the median retrieved atmospheric model (red), including the respective 1\(\sigma\) and 2\(\sigma\) confidence intervals. It is shown that the retrieved transmission spectrum is relatively featureless, suggesting high-altitude clouds in the atmosphere. Note that narrower bins around the Na doublet (\(5890,5895\) Å) are used to probe for absorption and the bin centred on the K doublet (7665, 7699 Å) was disregarded due to the close strong O\({}_{2}\) telluric line.
## Acknowledgements
This research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. PJW acknowledges support from STFC under consolidated grants ST/P0004959/1 and ST/T000406/1. SG is grateful to Leiden Observatory at Leiden University for the award of the Oort Fellowship. JK acknowledges financial support from Imperial College London through an Imperial College Research Fellowship grant.
## Data Availability
The raw data used in our analysis are available from the ESO data archive under ESO programme 099.C-0390(A) (PI: Kirk). The reduced light curves presented in this article will be available via VizieR at CDS (Ochsenbein et al., 2000).
|
2305.03269 | Phase Neural Operator for Multi-Station Picking of Seismic Arrivals | Seismic wave arrival time measurements form the basis for numerous downstream
applications. State-of-the-art approaches for phase picking use deep neural
networks to annotate seismograms at each station independently, yet human
experts annotate seismic data by examining the whole network jointly. Here, we
introduce a general-purpose network-wide phase picking algorithm based on a
recently developed machine learning paradigm called Neural Operator. Our model,
called PhaseNO, leverages the spatio-temporal contextual information to pick
phases simultaneously for any seismic network geometry. This results in
superior performance over leading baseline algorithms by detecting many more
earthquakes, picking more phase arrivals, while also greatly improving
measurement accuracy. Following similar trends being seen across the domains of
artificial intelligence, our approach provides but a glimpse of the potential
gains from fully-utilizing the massive seismic datasets being collected
worldwide. | Hongyu Sun, Zachary E. Ross, Weiqiang Zhu, Kamyar Azizzadenesheli | 2023-05-05T03:56:22Z | http://arxiv.org/abs/2305.03269v2 | # Next-Generation Seismic Monitoring
###### Abstract
Seismic phase picking is the task of annotating seismograms with seismic wave arrival times and underpins earthquake monitoring operations globally. State-of-the-art approaches for phase picking use deep neural networks to annotate seismograms at each station independently; this is in stark contrast to the way that human experts annotate seismic data, in which waveforms from the whole network are examined simultaneously. With the performance gains of single-station algorithms approaching saturation, it is clear that meaningful future advances will require algorithms that can naturally examine data for entire networks at once. Here, we introduce a general-purpose network-wide phase picking algorithm, PhaseNO, that is based on a recently developed machine learning paradigm called Neural Operator. PhaseNO can use data from any number of stations arranged in any arbitrary geometry to pick phases across the entire seismic network simultaneously. By leveraging the natural spatial and temporal contextual information, PhaseNO achieves superior performance over leading baseline algorithms by detecting many more earthquakes, picking many more phase arrivals, yet also greatly improving measurement accuracy. Following similar trends being seen across the domains of artificial intelligence, our approach provides but a glimpse of the potential gains from fully-utilizing the massive seismic datasets being collected around the world.
## Introduction
Seismic phase detection and picking are fundamental tasks in earthquake seismology, where the aim is to identify earthquakes in the continuous data and measure the arrival times of seismic waves. These arrival time measurements, or "phase picks", are the basis for earthquake catalogs, which are databases of earthquake attributes including the occurrence
time, source location, and magnitude. The rapidly growing amount of continuous seismic data worldwide has brought new opportunities for building and enriching earthquake catalogs (Ross et al., 2019, Li et al., 2021, Liu et al., 2022). These enhanced catalogs uncover new earthquakes absent from standard catalogs, illuminating fault complexity (Ross et al., 2019, Tan et al., 2021), earthquake behavior (Cochran et al., 2023), and fluid migration (Hotovec-Ellis et al., 2018, Wilding et al., 2023), and much more in the subsurface.
Analyzing these enormous datasets to build seismicity catalogs requires efficient and reliable phase picking methods. Historically, human seismic analysts manually labeled earthquake signals and the arrival times of seismic phases by looking for coherent wavefronts on multiple stations and then picking the onset times of P and S waves at each station. Such analysis, however, is subjective, time-consuming, and prone to errors. Considerable effort has been dedicated to developing accurate, automatic, and timely earthquake detection methods, such as short-term average/long-term average (Withers et al., 1998), template matching (Gibbons and Ringdal, 2006, Shelly et al., 2007), and fingerprint and similarity threshold (Yoon et al., 2015). Recent advances in deep learning have shown promise in automatic and efficient phase picking (Perol et al., 2018, Ross et al., 2018, Zhu and Beroza, 2018, Mousavi et al., 2019, Zhu et al., 2019, Zhou et al., 2019, Wang et al., 2019, Dokht et al., 2019, Mousavi et al., 2020, Xiao et al., 2021, Zhu et al., 2022b, Feng et al., 2022, Munchmeyer et al., 2022). These advances have made deep learning become state of the art for earthquake monitoring tasks. In this context, deep neural networks are trained to recognize complex patterns in the seismic data and extract useful features for earthquake detection, without requiring any prior information about the dataset. In general, neural phase pickers are highly scalable to large data sets. However, the single-station detection strategy used in most of the machine-learning detection algorithms makes them easily fail to detect events buried in a high level of noise or mistakenly detect local noise signals with emergence pulses. Indeed, the performance gains of single-station neural phase pickers have rapidly saturated, leading to the question of where the next breakthroughs in phase picking will come from.
Across the various domains of artificial intelligence, such as natural language processing and computer vision, the largest gains in performance have come from (i) using ever-larger datasets with increasingly detailed labeling/prediction tasks, and (ii) incorporating powerful model architectures (e.g. transformers) that are capable of learning to extract information from these very complex datasets. Translating these successes to the phase picking problem would similarly require formulating the problem more generally, in which the goal is to output phase picks only after examining the seismic data for all available sensors in a network. To accomplish such a general formulation, new models are needed that can naturally consider the spatial and temporal context on a variable arrangement of sensors.
In this paper, we introduce such an approach for general purpose network-wide earthquake detection and phase picking. Our algorithm, called Phase Neural Operator (PhaseNO), builds on Neural Operators (Kovachki et al., 2023), a recent advance of deep learning models that operate directly on functions rather than finite dimensional vectors. PhaseNO learns infinite dimensional function representations of seismic wavefields across the network, allowing us to accurately measure the arrival times of different phases jointly at multiple stations with arbitrary geometry. We evaluate our approach on real-world seismic datasets and compare its performance with state-of-the-art phase picking methods. We demonstrate that PhaseNO outperforms leading baseline algorithms by detecting many more earthquakes, picking many
more phase arrivals, yet also greatly improving measurement accuracy. The runtime of PhaseNO scales linearly with the duration of continuous waveforms, and the memory usage remains constant for a fixed number of stations in a seismic network, making it applicable to large seismic datasets in a timely fashion. Overall, our approach demonstrates the power of leveraging both temporal and spatial information for seismic phase picking and more broadly, highlights the benefits of using advanced machine learning techniques for improving earthquake monitoring systems.
## Results
### Phase Neural Operator
We introduce a deep learning model for network-wide earthquake phase picking. PhaseNO is designed to learn an operator between infinite-dimensional function spaces on a bounded physical domain. The input function is a seismic wavefield observed at some arbitrary collection of points in space and time, \(f(x,y,t)\), and the output function is a probability mask \(g(x,y,t)\) that indicates the likelihood of P- and S-wave arrivals at each point \((x,y,t)\). A powerful advantage of Neural Operators over classical Neural Networks is that they are discretization-invariant, meaning the input and output functions can be discretized on a different (arbitrary) mesh every time a solution is to be evaluated, without having to re-train the model. This critical property allows for Neural Operators to be evaluated at any point within the input physical domain, enabling phase picking on a dynamic seismic network with different geometries.
In our model, we combine two types of Neural Operators to naturally handle the mathematical structure of seismic network data. For the temporal information, we use Fourier Neural Operator (FNO) layers [11], which are ideal for cases in which the domain is discretized on a regular mesh, because fast Fourier transforms are used to quickly compute a solution. Since seismograms are mostly sampled regularly in time, FNO can efficiently process and encode them. For the spatial information, our sensors are generally not on a regular mesh, and so we instead use Graph Neural Operators [10] to model the relationship between seismic waveforms at different stations. This type of neural operator naturally works with irregular sensor layouts, as it uses message passing [14] to aggregate features from multiple stations and construct an operator with kernel integration. FNO and GNO layers are sequentially connected and repeated several times, allowing for sufficient communication and exchange of spatiotemporal information between all stations in a seismic network.
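To make these two building blocks concrete, the sketch below shows a one-dimensional spectral (Fourier) convolution and a simple mean-aggregation message-passing step in PyTorch; it is an illustrative reduction of the FNO and GNO layers rather than the exact PhaseNO implementation, and all names are ours.

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """Fourier layer: FFT along time, linear mix of the lowest modes, inverse FFT."""

    def __init__(self, in_channels, out_channels, modes):
        super().__init__()
        self.modes = modes
        scale = 1.0 / (in_channels * out_channels)
        self.weights = nn.Parameter(
            scale * torch.randn(in_channels, out_channels, modes, dtype=torch.cfloat))

    def forward(self, x):                        # x: (batch, channels, time)
        x_ft = torch.fft.rfft(x)                 # (batch, channels, time//2 + 1)
        out_ft = torch.zeros(x.size(0), self.weights.size(1), x_ft.size(-1),
                             dtype=torch.cfloat, device=x.device)
        out_ft[..., :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[..., :self.modes], self.weights)
        return torch.fft.irfft(out_ft, n=x.size(-1))


def mean_message_passing(h, edges):
    """Aggregate station features over graph edges with a simple mean.

    h     : (num_stations, features) node features
    edges : iterable of (source, target) station index pairs
    """
    out = h.clone()
    counts = torch.ones(h.size(0), device=h.device)
    for src, dst in edges:
        out[dst] = out[dst] + h[src]
        counts[dst] += 1
    return out / counts.unsqueeze(-1)
```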
Figure 1 summarizes the PhaseNO architecture. The model is composed of multiple blocks of operator layers in which FNO and GNO are sequentially connected to extract both spatial and temporal information. These blocks are repeated several times to exchange temporal and spatial information during phase picking. Skip connections are used to connect the blocks, resulting in a U-shape architecture. The skip connection directly concatenates FNO results on the left part of the model with GNO results on the right without going through deep layers, which improves convergence and allows for deeper, more overparameterized models. The data at each station have five channels of information. The first three
channels are either the three-component seismograms, or a single component channel (if applicable). The horizontal Cartesian station coordinates are encoded as the last two channels. The relative locations between stations can be used to learn weights as edge features in a graph for GNO (see Methods).
### Baselines
In this study, we benchmark the performance of PhaseNO against three leading baseline models: EQTransformer [Mousavi et al., 2020], PhaseNet [Zhu and Beroza, 2018], and EdgePhase [Feng et al., 2022]. We summarize key attributes about these baselines here. EQTransformer and PhaseNet are single-station detection and picking models using convolutional layers and other modern deep learning components. PhaseNet was trained on an earthquake dataset from Northern California with several hundred thousand data samples (623,054 P/S picks). EQTransformer was trained on a global dataset of earthquakes called STEAD [Mousavi et al., 2019a]. EdgePhase is a multi-station picking model that incorporates an edge convolution module in the latent space of EQTransformer; it was built on the pre-trained layers of EQTransformer and then fine-tuned on earthquake and noise data of the year 2021 recorded by
Figure 1: **PhaseNO architecture.** The model consists of multiple FNO and GNO layers that are sequentially connected and repeated. Skip connections are used between layers, resulting in a U-shape architecture. \(\mathcal{P}\) and \(\mathcal{Q}\) are up- and down-projections parameterized by neural networks. The model uses seismograms from a seismic network containing multiple stations with an arbitrary geometry as the input and predicts the probabilities of P-phase and S-phase arrival times for all input stations. Station locations are encoded as two channels of the input, in addition to three channels carrying the three-component waveforms. With PhaseNO, the temporal and spatial information on wavefields is exchanged during phase picking, resulting in more true detections and fewer false alarms in a seismic network.
the Southern California Seismic Network (SCSN). The pre-trained EQTransformer compared here has been fine-tuned with the same dataset as EdgePhase, leading to better performance than the original model (Feng et al., 2022).
### Performance evaluation
We trained PhaseNO on an earthquake dataset from the Northern California Earthquake Data Center (NCEDC) spanning the period 1984-2019 (see Methods). We evaluated PhaseNO and each baseline model on an out-of-sample test dataset for the period 2020 containing 43,700 P/S picks of 5,769 events. For all of the models, P- and S-picks were determined from peaks in the predicted probability distributions by setting a pre-determined threshold. Each model used the distinct threshold that maximizes its F1 score, to ensure the models are compared under their best conditions (Figure 2; Figure S1).
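As an illustration of this step, a simple peak-extraction routine is sketched below; the 0.70 threshold corresponds to the optimal P-phase threshold reported for PhaseNO, while the minimum peak separation is an assumed value and the function name is ours.

```python
import numpy as np
from scipy.signal import find_peaks

def extract_picks(prob, threshold=0.70, dt=0.01, min_separation=1.0):
    """Return pick times (s) from a predicted phase-probability trace.

    prob           : 1-D array of P- (or S-) phase probability, sampled every dt seconds
    threshold      : minimum peak height counted as a pick
    min_separation : assumed minimum spacing between consecutive picks, in seconds
    """
    peak_indices, _ = find_peaks(prob, height=threshold,
                                 distance=int(min_separation / dt))
    return peak_indices * dt
```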
Our method results in the highest F1 scores for both P- and S-waves, being 0.99 and 0.98 respectively. This is in addition to having the highest optimal thresholds (0.70 for P and 0.65 for S) of all the models tested (Table 1). Given that similar labeling strategies were used for training the baselines (Gaussian for PhaseNet and triangular for the other models), a higher threshold indicates that PhaseNO has a higher confidence level for detecting and picking seismic arrivals than other methods. The two single station picking models, PhaseNet and EQTransformer, have similar F1 scores, but the former has higher recall and the latter has higher precision. EdgePhase is built on EQTransformer and has better performance in terms of the precision-recall curves. However, the phase picks are less precise in terms of time residuals (Table 1; Figure S2). PhaseNO detects more true positives, fewer false negatives, and fewer false positive picks than the other deep-learning models at almost all signal-to-noise levels (Figure 3). Despite generating more picks, PhaseNO results in the smallest mean absolute error for both P and S phases. Overall, PhaseNO achieves the best performance on all six metrics, with one minor exception. The standard deviation of P phase residuals for PhaseNO is 0.01 s (one time step) larger than PhaseNet. It should be noted that the newly detected phases by PhaseNO are likely to be more challenging cases as their signal-to-noise levels are lower, and thus result in slightly increased standard deviation.
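For completeness, precision, recall, and F1 can be computed by matching predicted and reference picks within a time tolerance, as in the sketch below; the 0.5-s tolerance is an assumption for illustration and not necessarily the value used in this study.

```python
import numpy as np

def pick_scores(predicted, reference, tolerance=0.5):
    """Greedy one-to-one matching of pick times (s) within +/- tolerance seconds."""
    unmatched = sorted(reference)
    true_positives = 0
    for pick in sorted(predicted):
        if not unmatched:
            break
        residuals = np.abs(np.asarray(unmatched) - pick)
        best = int(np.argmin(residuals))
        if residuals[best] <= tolerance:
            unmatched.pop(best)
            true_positives += 1
    precision = true_positives / len(predicted) if len(predicted) else 0.0
    recall = true_positives / len(reference) if len(reference) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) > 0 else 0.0)
    return precision, recall, f1
```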
We compare the predicted probability distributions of each neural phase picker for
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline & & \(\mu\)(s) & \(\sigma\)(s) & MAE(s) & F1 & Precision & Recall \\ \hline \multirow{4}{*}{P-phase} & PhaseNO & **0.00** & 0.05 & **0.02** & **0.99** & **0.99** & **0.99** \\ & PhaseNet & **0.00** & **0.04** & **0.02** & 0.96 & 0.95 & 0.97 \\ & EdgePhase & 0.01 & 0.09 & 0.07 & 0.97 & 0.98 & 0.96 \\ & EQTransformer & 0.02 & 0.08 & 0.06 & 0.96 & 0.97 & 0.95 \\ \hline \multirow{4}{*}{S-phase} & PhaseNO & **0.01** & **0.09** & **0.06** & **0.98** & **0.97** & **0.98** \\ & PhaseNet & 0.02 & **0.09** & **0.06** & 0.93 & 0.90 & 0.96 \\ \cline{1-1} & EdgePhase & 0.07 & 0.12 & 0.11 & 0.95 & 0.96 & 0.95 \\ \cline{1-1} & EQTransformer & **0.01** & 0.12 & 0.09 & 0.94 & 0.95 & 0.93 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Phase picking performance on the test dataset. The best score is highlighted in bold.
several representative events (Figures 4, S3, S4). PhaseNO works very well across different event magnitudes, instrument types, and waveform shapes. PhaseNet generates some false positive picks that are removed by the multi-station methods (PhaseNO and EdgePhase); however, EdgePhase also generates many false negatives (Figure 4). Through exchanging temporal and spatial information multiple times, PhaseNO effectively prevents false picks while improving the detection of true picks. PhaseNO successfully finds picks on low-SNR waveforms by leveraging contextual information from other stations. PhaseNO is generally more successful in picking phases, leading to the highest probabilities around the manual picks compared to the other methods.
S-phases generally arrive in the coda of P-phases and are more challenging to find. Thus, more labeling errors from human analysts are expected on S phases than on P phases. For instance, in Figure S3, three of the models generate consistent S picks, but the predicted peaks are systematically offset from the manual picks on this event. For these example cases, PhaseNO shows significant improvement in S-phase picking and generates higher probabilities than the other two methods. Moreover, the width of the picks predicted by PhaseNO may represent the degree of difficulty in picking the phases from the waveforms: picks with high probabilities may have a wider distribution if the waveforms have low SNR. Also, our model can handle waveforms with more than one pick in a sample, owing to the data augmentation during training (Figure S4).
Figure 2: **Comparison of the precision-recall curves on the NCEDC2020 test dataset.** The best threshold (th) for each model on this test dataset is selected based on the maximum F1 scores (stars labeled on the curves) that the models achieve (Figure S1).
### Application to the 2019 Ridgecrest earthquake sequence
We tested the detection performance and generalization ability of PhaseNO on the 2019 Ridgecrest earthquake sequence. We downloaded continuous waveform data for EH, HH, and HN sensors for the period 4 July, 2019 (15:00:00) to 10 July, 2019 (00:00:00) at 20 SCSN stations, which is a total of 36 distinct sensors. Each of these sensors is treated as a distinct node in the graph, even if they are co-located (Table S1). Waveform data are divided into hourly streams with a sampling rate of 100 Hz. This is a challenging dataset due to the overlap of numerous events. Since no ground-truth catalog is available for the continuous data, we evaluated our results by comparing them with catalogs produced by SCSN, PhaseNet, and two template matching studies [14, 15].
We first divided the entire seismic network into two parts and constructed two graphs for every hour of data, due to the increased computational cost with the number of nodes in a graph (Figure S5). The 36 nodes were randomly divided into two graphs with 18 nodes each. Continuous data were cut into 30-s time windows with an overlap of 10 s, resulting in 180 predictions for one-hour data on 18 nodes. After preprocessing, PhaseNO predicted the probabilities of earthquake phases on 18 nodes at once. We compare representative waveforms with probabilities predicted by PhaseNO and PhaseNet (Figures 5 - 7; Figures S6 - S9). Both models show great generalization ability, as these waveforms were recorded outside of the training region. Our model works very well on continuous data, especially when there is more than one event in a 30-s time window, when the event is located at any position of the window, and when the waveform has different shapes with low SNR.
Figure 3: **Phase picking performance as a function of noise level.** PhaseNO detects the most P and S phases and the fewest false picks compared to other state-of-the-art deep-learning models at almost all signal-to-noise ratio (SNR) levels. SNR is calculated by the ratio of standard deviations of the 5 s following and the 5 s preceding the arrival time of P phases.
Figure 4: **Results of two representative events in the test dataset.** (a) Event nc71112909 with a magnitude of 0.43. (b) Event nc71112684 with a magnitude of 1.41. The station name and epicentral distance are shown on the left part of the three-component waveforms. PhaseNO and EdgePhase predict the results event by event, whereas PhaseNet outputs the results station by station.
In particular, owing to the learned waveform consistency among multiple stations, PhaseNO detects many more picks with meaningful moveout patterns than PhaseNet.
After prediction, we determined phase picks using a threshold of 0.3 for both P and S phases. PhaseNO detected 693,266 P and 686,629 S arrival times, while PhaseNet found 542,793 P and 572,991 S arrival times with the same threshold and the same stations. We evaluated the accuracy of the detected picks by comparing the arrival times with manually reviewed picks from SCSN (Figure S10). The standard deviation of the pick residuals between SCSN and PhaseNO was 0.10 s for P phases and 0.14 s for S phases, calculated from 118,746 P picks and 96,247 S picks. These standard deviations, however, were slightly higher than those of PhaseNet (0.08 s for P from 106,061 picks and 0.13 s for S from 88,438 picks). Since the newly detected picks are more challenging cases with low fidelity, it is reasonable for PhaseNO to show slightly larger arrival-time residuals.
We convert candidate phase detections into events by phase association. For this task, we use the GaMMA method [Zhu et al., 2022a]. We set a minimum of 17 picks per event to filter out low-quality associations. This results in PhaseNet detecting 21,748 events with 37.54 picks per event, whereas PhaseNO detects 26,176 events with 39.37 picks per event (Figure 8a). Many of the unassociated picks are probably a consequence of our strict filtering criteria during association, rather than false detections. With the same association hyperparameters, the additional 4,428 events highlight the advancement of PhaseNO for earthquake detection. Despite the increased number of events, PhaseNO shows high detection quality with around two more picks per event compared to PhaseNet, even though they are smaller events in general (Figure S11). GaMMA calculates magnitudes for events detected by PhaseNO and PhaseNet, and they both show linear Gutenberg-Richter distributions (Figure 8b). Indeed, our results have fewer microearthquakes compared to the template matching catalog by Ross et al. [2019a]. Since microearthquakes usually have limited propagation ranges and can only be recorded by several stations, they would have been filtered out during association and thus not shown on the frequency-magnitude distribution. Moreover, event locations determined by GaMMA are generally consistent between PhaseNO and PhaseNet catalogs (Figure 8c), confirming that the additional events by PhaseNO are reasonable detections of real earthquakes.
Furthermore, we also treat the manually reviewed SCSN catalog as a baseline and evaluate how many earthquakes were successfully recovered. We consider that two events are matched if they occur within 3 s of each other. With such criteria, Shelly, Ross et al., and PhaseNet matched around 81%, 86%, and 88% of the events, respectively. In comparison, even with our strict filtering criteria during association, the PhaseNO catalog, totaling 26,176 events, matched approximately 94% of the events in the SCSN catalog (10,673 of 11,389) while also containing many additional events, indicating that PhaseNO has the highest recall. PhaseNO consistently detects more events than PhaseNet, SCSN, and Shelly's template matching catalog [Shelly, 2020] over time and approaches the number of earthquakes reported by another more detailed template matching catalog [Ross et al., 2019a]. Moreover, PhaseNO achieves much more stable detection with the greatest number of events found when the \(M_{w}\) 7.1 mainshock occurred (Figure 8a) and with the gradually reduced seismicity rate afterwards, indicating the power of the method to illuminate complex earthquake sequences. Examples of events and associated picks detected by PhaseNO can be found in the Supplementary Materials (Figures S12 - S14).
Figure 5: **Representative waveforms selected from one-hour continuous data starting at 04:00:00 on July 6, 2019 from the 2019 Ridgecrest earthquake sequence.** We compare PhaseNO and PhaseNet on a 35-s time window (2190 – 2225 s). The picking threshold is 0.3. PhaseNO detects many more events than PhaseNet on this 35-s time window.
Figure 6: **Representative waveforms selected from one-hour continuous data starting at 06:00:00 on July 6, 2019 from the 2019 Ridgecrest earthquake sequence.** We compare PhaseNO and PhaseNet on a 35-s time window (240 – 275 s). The picking threshold is 0.3. PhaseNO successfully detects phases on various shapes of waveforms with many more picks than PhaseNet. Visual inspection of the waveforms suggests that most of the newly detected phases are meaningful.
Figure 7: **Representative waveforms selected from one-hour continuous data starting at 02:00:00 on July 7, 2019 from the 2019 Ridgecrest earthquake sequence.** We compare PhaseNO and PhaseNet on a 35-s time window (1025 – 1060 s). The picking threshold is 0.3. PhaseNO detects an additional event compared with PhaseNet on these waveforms. The first arrivals of the newly detected event overlap with the coda of the larger event, making them challenging for the single-station detector. More examples can be found in the Supplementary Materials (Figures S6 – S9).
Figure 8: **Comparison of earthquake catalogs of the 2019 Ridgecrest earthquake sequence.** (a) Number of earthquakes. (b) Frequency-magnitude distributions. Triangles and dots show original and cumulative distributions, respectively. (c) Comparison of earthquake hypocenters detected by PhaseNO and PhaseNet. For both PhaseNO and PhaseNet, picks are determined with a threshold of 0.3 and then associated using GaMMA with a minimum of 17 picks per event.
## Discussion
This paper presents PhaseNO, a novel deep learning architecture for network-wide earthquake detection and phase picking. The number of earthquakes detected by PhaseNO is more than twice that of the SCSN catalog. These additional events provide a more complete description of the seismicity of the 2019 Ridgecrest sequence with high resolution. It should be noted that our catalog differs from those of the SCSN and template matching catalogs in the number of stations and association algorithms. However, picks from PhaseNO and PhaseNet are detected on the exact same stations and then associated with GaMMA, providing the fairest comparison. Two post-processing hyperparameters, the threshold in phase picking and the minimum number of picks associated with an event, control the total number of earthquakes in a catalog. A lower threshold and a smaller association minimum provide more events, though likely at the cost of more false positive events (Table S2). PhaseNO consistently detects more events than PhaseNet using the same hyperparameters, pointing out the importance of leveraging the spatial information in addition to the temporal information for phase picking.
With a fixed model architecture, PhaseNO can handle seismic networks with arbitrary geometries; in this paper we demonstrated this by training on the Northern California Seismic Network and evaluating the model on the Southern California Seismic Network, without retraining. This is a critical property of the Neural Operator class of models, which can learn in infinite dimensions. By analyzing the prediction errors as a function of SNR, earthquake magnitude, and epicentral distance (Figure S15), we showed that SNR plays a more important role than the other factors. Errors tend to increase with decreasing SNR, but they are generally consistent at different magnitudes. Large epicentral distances do not necessarily cause large prediction errors, probably because earthquakes that can propagate to long distances are generally large ones and show high SNR signals over a wide area. We further examine the relationship between these factors and the predicted probabilities (Figure S16). We find that low prediction probabilities generally appear at low SNRs. Other factors, such as earthquake magnitude and epicentral distance, seem to have a slight impact on the predicted probabilities. Moreover, most phases can be accurately detected with minor prediction errors, although the prediction probabilities may be low. More confident predictions unexpectedly show larger prediction errors than less confident predictions (smaller probabilities), which may reflect imperfections in the manually picked labels.
PhaseNO shows several distinctive characteristics in terms of network design. First and foremost, compared to most of the currently popular detection algorithms (deep learning based or traditional methods), PhaseNO mimics human learning and decision making by using context from the whole seismic network, rather than seismograms at a single station. By consulting information and searching for consistent waveforms from surrounding stations, PhaseNO greatly improves phase picking on low SNR data, especially S phases that usually are hidden in the coda of P phases.
Apart from the characteristics in the spatial domain, PhaseNO has a unique ability to identify phases from temporal information. The well-known transformer architecture that has brought about major successes in natural language processing [23] can be viewed as a special case of Neural Operators [17]. Just as EQTransformer uses an attention mechanism to investigate global dependencies, PhaseNO captures
the global features with kernel integrals in space and time. Similar to PhaseNet, PhaseNO adopts a U-shape architecture with skip connections, which improves model convergence and allows for a deeper model design with greater expressiveness.
We found that the uncertainty in PhaseNO predictions may correlate with the peak prediction width. However, wider peaks on low fidelity signals may impose challenges on pick determination, resulting in slightly increased errors of arrival times if just the maximum value is saved, particularly for S phases (Figure S10). These uncertainties would thus be of greater value for location or tomography algorithms that consider measurement errors explicitly.
Compared to EdgePhase, a multi-station picking model, our model uses multiple GNO layers, a type of Neural Operator that allows for kernel integration over the network to extract rich spatial features. Each GNO layer is inserted between two FNO layers, forcing the exchange of information between spatial and temporal domains. We also encode station locations as node features to weight the messages constructed between nodes. Additionally, instead of building a graph based on geographic distances and only selecting neighboring nodes within a certain distance from the target node, we construct a graph using all nodes in a seismic network. All these modifications contribute to maximizing the usage of spatial features for phase picking.
A major limitation of PhaseNO, however, is the dependence of memory usage on the number of stations used in one prediction. Spatial information is exchanged between all pairs of nodes in a graph; therefore, the computational cost scales quadratically with the number of nodes, with complexity \(O(n^{2})\). Hence, we suggest selecting a subset of stations from the entire large seismic network for one prediction until all stations have been processed before moving to the next time segment of continuous data, similar to the procedure described in the Ridgecrest case study. If the seismic network covers a wide area, we may select stations based on k-means clustering (Lloyd, 1982). In this way, we can greatly accelerate the prediction procedure and reduce memory usage, particularly when the seismic network contains many stations and when the computational resources are limited.
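As an illustration of this station-selection step, the following minimal sketch groups stations into spatially coherent subgraphs with k-means; the coordinate layout, the cap of 32 nodes per graph, and the use of scikit-learn are assumptions for illustration rather than part of PhaseNO itself.

```python
# Hypothetical sketch: group stations into spatially coherent subsets with
# k-means so that each PhaseNO prediction runs on a manageable number of nodes.
import numpy as np
from sklearn.cluster import KMeans

def split_stations(station_coords, max_nodes_per_graph=32):
    """station_coords: (n_stations, 2) array of [longitude, latitude]."""
    n_stations = len(station_coords)
    n_clusters = int(np.ceil(n_stations / max_nodes_per_graph))
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(station_coords)
    # Each group of station indices becomes one graph (one prediction pass).
    # Note: k-means balances clusters only roughly; very dense networks may
    # still need a further split of large clusters.
    return [np.where(labels == k)[0] for k in range(n_clusters)]
```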
To conclude, we present PhaseNO to detect earthquakes and pick seismic arrivals from a seismic network with arbitrary geometries based on the recent advancement of operator learning. Although PhaseNO is trained on P and S arrival times picked by experienced analysts, it successfully detects phases more reliably on unseen continuous data than human analysts. The increased detection sensitivity allows us to perform a blind search for signals in massive continuous data without prior knowledge of the desired signal. Systematically finding both repeating and unknown sources will substantially enrich seismic catalogs and reveal new insights into earthquake processes. Widespread use of our technique will improve earthquake monitoring and early warning for seismic hazard assessment.
## Methods
Neural operators are generalizations of neural networks to maps between infinite-dimensional function spaces. This new class of models provably satisfies the universal approximation theorem for operators (Kovachki et al., 2023). Here we propose a novel architecture for learning maps from wavefields to phase picks. Neural Operators generally begin with a lifting
operator (\(\mathcal{P}\)) that maps the input function (\(f\)) to one with a larger co-domain, \(v\). These functions are then operated on iteratively with nonlinear kernel integration operators, and finally are passed through a projection operator (\(\mathcal{Q}\)) that maps the hidden representation to the output function (\(g\)). \(\mathcal{P}\) and \(\mathcal{Q}\) are parameterized with fully connected neural networks and act pointwise on the physical domain. The basic formula of the iterative kernel integration is a composition of linear integral operators and non-linear activation functions. Each integral operator has the following form:
\[u(x)=(\kappa*v^{l})(x)=\int\kappa(x,y)v^{l}(y)dy, \tag{1}\]
where \(v\) and \(u\) are the intermediate input and output functions, respectively, and \(\kappa\) is a kernel function. Here, we define \(v^{1}=\mathcal{P}(f)\) as the input to the operator. There are several ways to parameterize the kernel (Kovachki et al., 2023). We treat the seismograms recorded by a seismic network as the input function \(f\), discretized with a regular mesh in the time domain and irregular mesh in the spatial domain. In our architecture, we compute the kernel function separately for space and time.
### Fourier Neural Operators
For the regular mesh in the time domain, we parameterize the kernel in Fourier space and compute the kernel integral operator with fast Fourier transform, leading to efficient computation with almost linear complexity (Li et al., 2020a). From the convolution theorem, we have
\[(\kappa*v)(x)=\mathcal{F}^{-1}(R_{\phi}\cdot(\mathcal{F}(v)))(x), \tag{2}\]
where \(\mathcal{F}\) and \(\mathcal{F}^{-1}\) denote the Fourier transform and its inverse. \(R_{\phi}\) is the Fourier transform of \(\kappa\), parametrized by \(\phi\). With an activation function \(\sigma\) acting locally in each layer, a single FNO layer update is
\[u(x)=\sigma\big(Wv(x)+\mathcal{F}^{-1}(R_{\phi}\cdot(\mathcal{F}(v)))(x)\big), \tag{3}\]
where \(W\) is a local linear operator. In practice, we truncate the Fourier series at a maximal number of modes and parameterize \(R\) with a few lower modes. Starting from the input \(v\), one FNO layer contains two parallel branches (Figure 1): one branch computes the kernel in the Fourier space and performs the global integration; the other applies a pointwise linear transform \(W\) to the input. Results from the two branches are added before applying \(\sigma\).
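A minimal PyTorch sketch of one such layer (Eq. 3) is given below; the tensor shapes follow the description in this section, but the code is only an illustration and not PhaseNO's actual implementation.

```python
# Sketch of a 1-D FNO layer over the time axis (cf. Eq. 3): a spectral branch
# with learnable weights R on a truncated set of Fourier modes, plus a
# pointwise linear branch W; the two branches are summed and passed to GELU.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FNOLayer1d(nn.Module):
    def __init__(self, in_channels, out_channels, n_modes):
        super().__init__()
        self.n_modes = n_modes
        scale = 1.0 / (in_channels * out_channels)
        self.R = nn.Parameter(scale * torch.randn(in_channels, out_channels, n_modes, dtype=torch.cfloat))
        self.W = nn.Conv1d(in_channels, out_channels, kernel_size=1)  # pointwise linear transform

    def forward(self, v):                               # v: (batch, channels, time)
        v_hat = torch.fft.rfft(v, dim=-1)               # Fourier transform over time
        out_hat = torch.zeros(v.size(0), self.R.size(1), v_hat.size(-1),
                              dtype=torch.cfloat, device=v.device)
        m = min(self.n_modes, v_hat.size(-1))           # keep only the lowest m modes
        out_hat[..., :m] = torch.einsum("bim,iom->bom", v_hat[..., :m], self.R[..., :m])
        spectral = torch.fft.irfft(out_hat, n=v.size(-1), dim=-1)
        return F.gelu(spectral + self.W(v))

# e.g. 48 channels with 24 retained modes on a 3000-sample (30 s, 100 Hz) window:
u = FNOLayer1d(48, 48, n_modes=24)(torch.randn(2, 48, 3000))
```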
PhaseNO utilizes seven FNO layers similar to the U-NO architecture (Rahman et al., 2022). The number of modes in each FNO layer is 24, 12, 8, 8, 12, 24, and 24. The width (the channel number) of the discretized \(u\) at each node changes with the dimension of \(R_{\phi}\). At each FNO layer, the discretized \(u\) has a dimension of 48\(\times\)3000, 96\(\times\)750, 192\(\times\)200, 96\(\times\)750, 48\(\times\)3000, 48\(\times\)3000, and 48\(\times\)3000, where the first dimension denotes the width and the second dimension denotes time. All FNO layers include nonlinearity via the Gaussian Error Linear Unit (Hendrycks and Gimpel, 2020), except the last one where no activation function is applied. Note that we did not draw the last FNO layer on Figure 1 for simplicity.
### Graph Neural Operators and the message passing framework
Kernel integration can be viewed as an aggregation of messages with the generalized message passing in graph neural networks [10]. Since \(f(x,y,t)\) in the spatial domain is discretized based on the geometry of a seismic network, we parameterize the kernel with GNOs and implement it with the message passing framework. We consider a seismic network with an arbitrary geometry as a graph. Each station in the seismic network is a node of the graph. Given node features \(v(x)\), we update the value \(v(x_{i})\) of the node \(x_{i}\) to the value \(u(x_{i})\) with the averaging aggregation by
\[u(x_{i})=\tau(v(x_{i}),\frac{1}{|\mathcal{N}(x_{i})|}\sum_{x_{j}\in\mathcal{N }(x_{i})}\varphi(v(x_{i}),v(x_{j}))), \tag{4}\]
where \(\tau\) and \(\varphi\) denote differentiable functions such as multilayer perceptrons (MLPs), \(n\) is the number of nodes in a graph, and \(\mathcal{N}(x_{i})\) denotes the neighborhood of \(x_{i}\). To capture global dependencies, we construct a graph by connecting each node to all nodes in the graph, resulting in a total edge number of \(n^{2}\). In other words, \(\mathcal{N}(x_{i})\) consists of all stations in a seismic network and includes a self-loop, meaning that each node has an edge pointing to itself.
The edge features are computed by \(e_{ij}=\varphi(v(x_{i}),v(x_{j}))\) where \(\varphi:\mathbb{R}^{c}\times\mathbb{R}^{c}\rightarrow\mathbb{R}^{c^{\prime}}\) is a nonlinear function with a set of learnable parameters [23]. We choose \(\varphi\) as an MLP with one hidden layer containing \(4c\) neurons. The function takes the concatenation of two node features \(v(x_{i})\) and \(v(x_{j})\) as the input (with a channel number of \(2c\)) and outputs \(e_{ij}\) with the same channel number as the node features (\(c^{\prime}=c\)). When all the edge features (messages) are available, the target node \(x_{i}\) collects all the messages and aggregates them with an averaging operation. Finally, we use another MLP for \(\tau\) (with the same architecture as \(\varphi\)) to update the node features of \(x_{i}\) using the concatenation of \(v(x_{i})\) and the aggregated message as the input. Message passing allows the exchange of information between neighboring nodes, which enhances the relevant signals shared by adjacent nodes.
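The following sketch illustrates this update (Eq. 4) for a fully connected graph; it is a simplified stand-in for the actual GNO layer, with the node features flattened into a single channel dimension.

```python
# Sketch of the GNO / message-passing update of Eq. (4) on a fully connected
# station graph: phi builds a message for every (target, source) pair
# (including self-loops), messages are averaged per target node, and tau
# combines the average with the target node's own features.
import torch
import torch.nn as nn

def mlp(in_dim, hidden_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, out_dim))

class GNOLayer(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.phi = mlp(2 * channels, 4 * channels, channels)  # edge/message MLP
        self.tau = mlp(2 * channels, 4 * channels, channels)  # node-update MLP

    def forward(self, v):                           # v: (n_nodes, channels)
        n = v.size(0)
        vi = v.unsqueeze(1).expand(n, n, -1)        # v(x_i) repeated along the source axis
        vj = v.unsqueeze(0).expand(n, n, -1)        # v(x_j) repeated along the target axis
        messages = self.phi(torch.cat([vi, vj], dim=-1))  # e_ij for all n^2 pairs
        aggregated = messages.mean(dim=1)                 # average over sources j
        return self.tau(torch.cat([v, aggregated], dim=-1))
```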
### Training and test datasets
An advanced deep-learning model architecture needs a training dataset of sufficient quality and quantity to realize its full potential. Taking the wavefield properties of a seismic network into account, we devise more effective data augmentation strategies for graph-type samples. First, we stack events at the graph level, aiming to preserve the moveout patterns of arrivals at different stations for each event. In this way, waveforms at different stations in a graph may contain different numbers of events. Second, earthquakes follow a power law whereby most events are small and may be observed by only a few stations instead of the entire seismic network. Thus, it is important to add virtual stations at random locations in a graph with noise waveforms to regularize PhaseNO.
We construct a training dataset with three-component earthquake waveforms from NCEDC of the years from 1984 to 2019 and three-component noise waveforms from the STanford EArthquake Dataset (STEAD). The earthquake data are downloaded event by event with stations containing both P and S arrival times picked by human analysts. Gaps are padded with zeros if some segments are missing. We normalize each component by removing its
mean and dividing it by the standard deviation. We use a probability function with a triangular shape to label phase arrivals. Probabilities at manually picked P/S arrivals are labeled 1 and linearly decrease to 0 before and after the manual picks. For each pick, the duration of probabilities larger than 0 is 0.4 s, with the highest probability centered on the middle of the time window. Instead of treating seismograms on a single station as one sample, we construct a graph with all stations in a seismic network and use the graph as one sample.
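A minimal sketch of this labelling function is shown below (assuming the 100 Hz sampling rate and 3000-sample windows described in this section):

```python
# Sketch of the triangular phase label: probability 1 at the manual pick,
# decreasing linearly to 0 over +/-0.2 s (0.4 s of non-zero support) at 100 Hz.
import numpy as np

def triangular_label(pick_index, n_samples=3000, half_width=20):
    """half_width = 0.2 s * 100 Hz = 20 samples on each side of the pick."""
    t = np.arange(n_samples)
    return np.clip(1.0 - np.abs(t - pick_index) / half_width, 0.0, 1.0)
```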
We perform data augmentation during training. We stack the individually downloaded events with the following steps to preserve their moveout patterns on different stations (a minimal sketch of the stacking step is given after the list):
* Randomly select station A from all stations recording event A,
* Randomly select event B from all the events recorded by station A,
* Randomly assign the weights of \(\alpha\) and \(\beta\) (\(0.1<\alpha<0.9\), \(0.1<\beta<0.9\), \(\alpha+\beta\)=1) to the amplitudes of two events,
* Randomly select a time shift between two events to be stacked,
* Stack event A and event B if both events are recorded on the same stations, and
* Keep the waveforms unchanged at stations that record only one of the two events.
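The sketch below illustrates the stacking step for a single station; the shift range and the use of a circular shift are simplifications for illustration. In the graph-level augmentation, the same \(\alpha\), \(\beta\), and time shift would be reused at every station that records both events, so that each event's moveout pattern across the network is preserved.

```python
# Sketch of stacking two events recorded at one station with random amplitude
# weights and a random relative time shift.
import numpy as np

def stack_two_events(wave_a, wave_b, rng=np.random.default_rng()):
    """wave_a, wave_b: (3, n_samples) three-component waveforms of events A and B."""
    alpha = rng.uniform(0.1, 0.9)
    beta = 1.0 - alpha                                   # alpha + beta = 1
    shift = int(rng.integers(-wave_b.shape[1] // 2, wave_b.shape[1] // 2))
    shifted_b = np.roll(wave_b, shift, axis=1)           # circular shift for brevity;
    return alpha * wave_a + beta * shifted_b             # a real shift would pad with zeros
```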
Generally, more than one station records both events in a seismic network, even though we select event B based on one station recording event A. More events can be stacked by repeating the above steps. Around 66% of samples in the training dataset contain two or three events. We also generate up to 16 virtual stations with random locations and assign noise waveforms to these virtual stations. The noise data are randomly selected from the 235K noise samples in STEAD. Except for 6.25% of samples that contain only earthquake waveforms, each sample has both earthquake and noise waveforms recorded at different stations in a seismic network. In one graph-type sample, the number of events on each station ranges from zero to three. Since a seismic network may contain both three-component and one-component seismometers, we randomly select several stations and consider them as one-component stations. On these stations, we randomly select and repeat one component three times. Each sample may have a different number of stations. To save computational cost, we retain no more than 32 but at least 5 stations in one graph. We then cut 30-s waveforms at all stations with a random starting time, so that the positions of phases within the window are varied. With a sampling rate of 100 Hz, both the input waveforms and the output probabilities have 3000 data points for each component at each station. In total we have 57K graphs for training. The edge index is built during training, with all nodes in a graph. If one station contains multiple types of channels, we consider them as different nodes with the same geographic locations. We show two examples of the graph-type samples in the Supplementary Materials (Figures S17 and S18).
The input data at each station consist of five channels: three waveform channels and two location channels. The output has two channels with P-phase and S-phase probabilities. To encode station locations as node features, we define a computational domain of \(2^{\circ}\times 2^{\circ}\) in the longitude and latitude dimensions and convert the geographic locations of stations to their
relative locations on the computational domain. The converted locations \(x_{i}\), \(y_{i}\) are included as the fourth and fifth channels along with waveforms.
The test dataset contains 5,769 samples built with the NCEDC earthquake dataset of the year 2020. Waveforms are preprocessed in a similar way to the training dataset without data augmentation. Each sample in the test dataset contains only one event recorded by multiple stations. In total, we use 43,700 P/S picks of the 5,769 events to evaluate PhaseNO and compare with other methods.
### Encoding station locations as node features
Station locations are encoded as node features along with waveform data. Instead of directly using longitudes and latitudes, here we convert the geographic locations (\(a_{i}\),\(b_{i}\)) of stations on the Earth to their relative locations (\(x_{i}\),\(y_{i}\)) on the computational domain. The converted locations \(x_{i}\),\(y_{i}\) are included as two channels of the input along with three-component waveforms. Each sample has a computational domain varying in its center. The center is selected based on the maximum longitude (\(a_{max}\)), the minimum longitude (\(a_{min}\)), the maximum latitude (\(b_{max}\)), and the minimum latitude (\(b_{min}\)) of all stations in a graph. The physical minimum of the computational domain is
\[a_{0}=\frac{a_{max}+a_{min}}{2}-\frac{l}{2} \tag{5}\]
\[b_{0}=\frac{b_{max}+b_{min}}{2}-\frac{l}{2} \tag{6}\]
where \(l\) denotes the physical range of the computational domain on the Earth. Then the relative location of each station on the computational domain is
\[x_{i}=\frac{a_{i}-a_{0}}{l} \tag{7}\]
\[y_{i}=\frac{b_{i}-b_{0}}{l} \tag{8}\]
The physical range ought to be large enough to include all the selected stations in a graph. Here we choose \(l=2^{\circ}\), corresponding to an area of around 200 km \(\times\) 200 km. The computational domain and the relative locations are calculated independently for each sample during the training process. For practical applications, the locations are determined in the same way but only once for a given seismic network.
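For concreteness, Eqs. (5)-(8) amount to the following small helper (a sketch, not the actual preprocessing code):

```python
# Sketch of Eqs. (5)-(8): convert station longitudes/latitudes to relative
# coordinates on an l x l (here 2-degree) computational domain.
import numpy as np

def encode_locations(lons, lats, l=2.0):
    lons, lats = np.asarray(lons), np.asarray(lats)
    a0 = (lons.max() + lons.min()) / 2.0 - l / 2.0   # Eq. (5)
    b0 = (lats.max() + lats.min()) / 2.0 - l / 2.0   # Eq. (6)
    x = (lons - a0) / l                              # Eq. (7)
    y = (lats - b0) / l                              # Eq. (8)
    return x, y   # values lie in [0, 1] as long as the network fits in the l x l box
```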
### Training details and evaluation metrics
We adopt the binary cross-entropy as our loss function. We choose Adaptive Moment Estimation (Adam) as the optimizer with a batch size of one and a learning rate of \(10e^{-4}\). The training takes around 10 hours for each epoch on one NVIDIA Tesla V100 GPU. The model converges after around 10 epochs (Figure S19). We stop training after 20 epochs and use the result as our final model.
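A minimal sketch of one training step under this setup is given below; the exact learning-rate value is an assumption, since the notation "10e-4" above is ambiguous.

```python
# Sketch of one training step: binary cross-entropy on the predicted P/S
# probability maps, Adam optimizer, batch size of one graph-type sample.
import torch

def train_step(model, optimizer, sample):
    # sample["x"]: (n_nodes, 5, 3000) waveform + location channels
    # sample["y"]: (n_nodes, 2, 3000) triangular P/S probability labels
    model.train()
    optimizer.zero_grad()
    pred = model(sample["x"])                                    # values in [0, 1]
    loss = torch.nn.functional.binary_cross_entropy(pred, sample["y"])
    loss.backward()
    optimizer.step()
    return loss.item()

# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # learning rate assumed
```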
We use six metrics to evaluate the performance: precision, recall, F1 score, mean (\(\mu\)), standard deviation (\(\sigma\)), and mean absolute error (MAE) between resulting picks and human
analysts (labels). A resulting pick is counted as a true positive (TP) if the time residual between the pick and the label is less than 0.5 s. If no pick falls within a time residual of 0.5 s of a label, we count the label as a false negative (FN). Moreover, a pick that does not match any label is counted as a false positive (FP). Then we can evaluate the performance with:
\[Precision=\frac{TP}{TP+FP} \tag{9}\]
\[Recall=\frac{TP}{TP+FN} \tag{10}\]
\[F1score=\frac{2\times Precision\times Recall}{Precision+Recall} \tag{11}\]
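A small sketch of this evaluation, matching picks to labels within 0.5 s and computing Eqs. (9)-(11), could look as follows; the greedy matching is an illustrative choice rather than necessarily the exact procedure used.

```python
# Sketch of the pick evaluation: match predicted picks to manual picks within
# 0.5 s, count TP/FP/FN, and compute precision, recall, and F1 (Eqs. 9-11).
def score_picks(pred_times, true_times, tol=0.5):
    unmatched = sorted(true_times)
    tp = 0
    for p in sorted(pred_times):
        hit = next((t for t in unmatched if abs(p - t) <= tol), None)
        if hit is not None:
            tp += 1
            unmatched.remove(hit)
    fp = len(pred_times) - tp        # picks with no matching label
    fn = len(unmatched)              # labels with no matching pick
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```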
## Acknowledgement
The training and test data are from Northern California Earthquake Data Center. The data of the 2019 Ridgecrest earthquake sequence can be accessed from Southern California Earthquake Data Center and Plate Boundary Observatory Borehole Seismic Network.
|
2306.02427 | Towards Efficient Controller Synthesis Techniques for Logical LTL Games | Two-player games are a fruitful way to represent and reason about several
important synthesis tasks. These tasks include controller synthesis (where one
asks for a controller for a given plant such that the controlled plant
satisfies a given temporal specification), program repair (setting values of
variables to avoid exceptions), and synchronization synthesis (adding
lock/unlock statements in multi-threaded programs to satisfy safety
assertions). In all these applications, a solution directly corresponds to a
winning strategy for one of the players in the induced game. In turn,
\emph{logically-specified} games offer a powerful way to model these tasks for
large or infinite-state systems. Much of the techniques proposed for solving
such games typically rely on abstraction-refinement or template-based
solutions. In this paper, we show how to apply classical fixpoint algorithms,
that have hitherto been used in explicit, finite-state, settings, to a symbolic
logical setting. We implement our techniques in a tool called GenSys-LTL and
show that they are not only effective in synthesizing valid controllers for a
variety of challenging benchmarks from the literature, but often compute
maximal winning regions and maximally-permissive controllers. We achieve
\textbf{46.38X speed-up} over the state of the art and also scale well for
non-trivial LTL specifications. | Stanly Samuel, Deepak D'Souza, Raghavan Komondoor | 2023-06-04T18:19:04Z | http://arxiv.org/abs/2306.02427v2 | # Towards Efficient Controller Synthesis Techniques for Logical LTL Games
###### Abstract
Two-player games are a fruitful way to represent and reason about several important synthesis tasks. These tasks include controller synthesis (where one asks for a controller for a given plant such that the controlled plant satisfies a given temporal specification), program repair (setting values of variables to avoid exceptions), and synchronization synthesis (adding lock/unlock statements in multi-threaded programs to satisfy safety assertions). In all these applications, a solution directly corresponds to a winning strategy for one of the players in the induced game. In turn, _logically-specified_ games offer a powerful way to model these tasks for large or infinite-state systems. Much of the techniques proposed for solving such games typically rely on abstraction-refinement or template-based solutions. In this paper, we show how to apply classical fixpoint algorithms, that have hitherto been used in explicit, finite-state, settings, to a symbolic logical setting. We implement our techniques in a tool called GenSys-LTL and show that they are not only effective in synthesizing valid controllers for a variety of challenging benchmarks from the literature, but often compute maximal winning regions and maximally-permissive controllers. We achieve 46.38X speed-up over the state of the art and also scale well for non-trivial LTL specifications.
reactive synthesis, symbolic algorithms, program synthesis, program repair, two-player games
## I Introduction
Two-player games are games played between two players called the Controller and the Environment, on a game graph or arena. The players generate an infinite sequence of states (a so-called "play") in the game by making moves alternately, from a specified set of legal moves. The Controller wins the play if the sequence of states satisfies a winning condition (e.g., a Linear-Time Temporal Logic (LTL) formula). The central question in these games is whether a player (typically the Controller) has a winning strategy from a given set of initial states (called the realizability problem), or more generally, to compute the set of states from which she wins (i.e. the winning region).
Games are a fruitful way to model and reason about several important problems in Sofware Engineering, like _controller synthesis_[1] (where a winning strategy for the Controller in the associated game directly corresponds to a valid controller for the system); _program repair_[2] (strategy corresponds to corrected program); _synchronization synthesis_[3] (strategy corresponds to appropriate placement of synchronization statements in a concurrent program); and _safety shield synthesis_[4] (winning region corresponds to region in which the neural-network based controller is allowed to operate without the shield stepping in).
Classical techniques for solving games [5, 6, 7], and more recent improvements [8, 9, 10], work on finite-state games, by iteratively computing sets of states till a fixpoint is reached. These algorithms typically allow us to compute the exact winning region and thereby answer the realizability question as well.
In recent years, _logical games_ - where the moves of the players are specified by logical formulas on the state variables - have attracted much attention, due to their ability to model large or infinite-state systems. Techniques proposed for these games range from constraint solving [11], finite unrollings and generalization [12], CEGAR-based abstraction-refinement [13, 14, 15], counterexample-based learning [16], combination of Sygus and classical LTL synthesis [17], and solver-based enumeration [18]. Among these Beyene et al [11] address general LTL specs, while the others handle only safety or reachability specs. Furthermore, none of these techniques are able to compute precise winning regions.
In this paper we show that symbolic fixpoint techniques can be effectively applied to solve logical games with general LTL specifications. We propose a bouquet of techniques that target different classes of LTL specs, from simple specs which directly involve a safety, reachability, Buchi, or Co-Buchi condition on the states of the game, to those for which the formula automata are non-deterministic. The techniques we propose are guaranteed (whenever they terminate) to compute the _exact_ winning region, and, for certain kinds of games, output a finite-memory winning strategy as well.
We show how to implement these algorithms in a logical setting, by leveraging the right tactics in available SMT solvers. We evaluate our prototype tool, called GenSys-LTL, on a host of benchmarks from the literature. Our tool terminates on all benchmarks except one, and takes an average time of 7.1 sec to solve each benchmark. It thus outperforms the state-of-the-art tools in terms of the number of instances solved, and by an order of magnitude in terms of running time.
## II Preliminaries
We will be dealing with standard first-order logic of addition (\(+\)), comparison (\(<\)), and constants \(0\) and \(1\), interpreted over the domain of reals \(\mathbb{R}\) (or a subset of \(\mathbb{R}\) like the integers \(\mathbb{Z}\)). The atomic formulas in this logic are thus of the form \(a_{1}x_{1}+\cdots+a_{n}x_{n}\sim c\), where \(a_{i}\)s and \(c\) are integers, \(x_{i}\)s are variables, and "\(\sim\)" is a comparison symbol in \(\{<,\leq,=,\geq,>\}\). We will refer to such formulas as _atomic constraints_, and to boolean combinations of such formulas (or equivalently, quantifier-free formulas) as _constraints_. We will denote the set of constraints over a set of variables \(V\) by \(\mathit{Constr}(V)\).
For a set of variables \(V\), a \(V\)_-valuation_ (or a \(V\)_-state_) is simply a mapping \(s:V\rightarrow\mathbb{R}\). Given a constraint \(\delta\) over a set of variables \(V\), and a \(V\)-state \(s\), we say \(s\)_satisfies_\(\delta\), written \(s\models\delta\), if the constraint \(\delta\) evaluates to true in \(s\) (defined in the expected way). We denote the set of \(V\)-states by \(\mathbf{V}_{\mathbb{R}}\). A _domain mapping_ for \(V\) is a map \(D:V\to 2^{\mathbb{R}}\), which assigns a domain \(D(x)\subseteq\mathbb{R}\) for each variable \(x\) in \(V\). We will call a \(V\)-state \(s\) whose range respects a domain mapping \(D\), in that for each \(x\in V\), \(s(x)\in D(x)\), a \((V,D)\)-state, and denote the set of such \((V,D)\)-states by \(\mathbf{V}_{D}\). We also denote the cardinality of a set \(S\) as \(|S|\).
We will sometimes write \(\varphi(X)\) to denote that the free variables in a formula \(\varphi\) are among the variables in the set \(X\). For a set of variables \(X=\{x_{1},\ldots,x_{n}\}\) we will sometimes use the notation \(X^{\prime}\) to refer to the set of "primed" variables \(\{x^{\prime}_{1},\ldots,x^{\prime}_{n}\}\). For a constraint \(\varphi\) over a set of variables \(X=\{x_{1},\ldots,x_{n}\}\), we will write \(\varphi[X^{\prime}/X]\) (or simply \(\varphi(X^{\prime})\) when \(X\) is clear from the context) to represent the constraint obtained by substituting \(x^{\prime}_{i}\) for each \(x_{i}\) in \(\varphi\).
Finally, we will make use of standard notation from formal languages. For a (possibly infinite) set \(S\), we will view finite and infinite sequences of elements of \(S\) as finite or infinite _words_ over \(S\). We denote the empty word by \(\epsilon\). If \(v\) and \(w\) are finite words and \(\alpha\) an infinite word over \(S\), we denote the concatenation of \(v\) and \(w\) by \(v\cdot w\), and the concatenation of \(v\) and \(\alpha\) by \(v\cdot\alpha\). We will use \(S^{*}\) and \(S^{\omega}\) to denote, respectively, the set of finite and infinite words over \(S\).
## III LTL and Automata
We will make use of a version of Linear-Time Temporal Logic (LTL) [19] where propositions are atomic constraints over a set of variables \(V\) (as in Holzmann [20], for example).
Let \(V\) be a set of variables. Then the formulas of \(\mathit{LTL}(V)\) are given by:
\[\psi::=\delta\ |\ \neg\psi\ |\ \psi\vee\psi\ |\ X\psi\ |\ \psi U\psi,\]
where \(\delta\) is an atomic constraint over \(V\). The formulas of \(\mathit{LTL}(V)\) are intepreted over an infinite sequence of \(V\)-states. For an \(\mathit{LTL}(V)\) formula \(\psi\) and an infinite sequence of \(V\)-states \(\pi=s_{0}s_{1}\cdots\), we define when \(\psi\) is satisfied at position \(i\) in \(\pi\), written \(\pi,i\vDash\psi\), inductively as follows:
\[\begin{array}{lll}\pi,i\vDash\delta&\text{iff}&s_{i}\vDash\delta\\ \pi,i\vDash\neg\psi&\text{iff}&\pi,i\nvDash\psi\\ \pi,i\vDash\psi\vee\psi^{\prime}&\text{iff}&\pi,i\vDash\psi\ \text{or}\ \pi,i\vDash\psi^{\prime}\\ \pi,i\vDash X\psi&\text{iff}&\pi,i+1\vDash\psi\\ \pi,i\vDash\psi U\psi^{\prime}&\text{iff}&\exists k\geq i\text{ s.t.}\ \pi,k\vDash\psi^{\prime}\text{ and}\\ &&\forall j\colon i\leq j<k\rightarrow\pi,j\vDash\psi.\end{array}\]
We say \(\pi\)_satisfies_\(\psi\), written \(\pi\vDash\psi\), if \(\pi,0\vDash\psi\). We will freely make use of the derived operators \(F\) ("future") and \(G\) ("globally"), defined by \(F\psi\equiv\mathit{true}\,U\,\psi\) and \(G\psi\equiv\neg F\neg\psi\), as well as the boolean operators \(\wedge\) ("and"), \(\rightarrow\) ("implies"), etc.
An \(\omega\)_-automaton_[21]\(\mathcal{A}\) over a set of variables \(V\), is a tuple \((Q,I,\mathcal{T},F)\) where \(Q\) is a finite set of states, \(I\subseteq Q\) is a set of initial states, \(\mathcal{T}\subseteq_{\mathit{fin}}Q\times\mathit{Constr}(V)\times Q\) is a "logical" transition relation, and \(F\subseteq Q\) is a set of final states. The logical transition relation \(\mathcal{T}\) induces a concrete transition relation \(\Delta_{\mathcal{T}}\subseteq Q\times\mathbf{V}_{\mathbb{R}}\times Q\), given by \((q,s,q^{\prime})\in\Delta_{\mathcal{T}}\) iff there exists \((q,\delta,q^{\prime})\in\mathcal{T}\) such that \(s\vDash\delta\). A _run_ of \(\mathcal{A}\) on an infinite sequence of \(V\)-states \(\pi=s_{0}s_{1}\cdots\) is an infinite sequence of states \(\rho=q_{0}q_{1}\cdots\), such that \(q_{0}\in I\), and for each \(i\), \((q_{i},s_{i},q_{i+1})\in\Delta_{\mathcal{T}}\).
We say an \(\omega\)-automaton \(\mathcal{A}=(Q,I,\mathcal{T},F)\) over \(V\) is _deterministic_ if \(I\) is a singleton, and for every \(q\in Q\) and \(V\)-state \(s\), there is at most one \(q^{\prime}\in Q\) such that \((q,s,q^{\prime})\in\Delta_{\mathcal{T}}\). Similarly, we say \(\mathcal{A}\) is _complete_ if for every \(q\in Q\) and \(V\)-state \(s\), there exists a \(q^{\prime}\in Q\) such that \((q,s,q^{\prime})\in\Delta_{\mathcal{T}}\).
An \(\omega\)-automaton can be viewed as either a _Buchi_[22], _Co-Buchi_, _Universal Co-Buchi_, or _Safety_ automaton based on how the _runs_ for a given \(V\)-state sequence \(\pi\) are _accepted_ using the final states \(F\), described as follows. A run \(\rho=q_{0}q_{1}\cdots\) of \(\mathcal{A}\) is _accepting_ by the _Buchi_ acceptance condition if for infinitely many \(i\), we have \(q_{i}\in F\), and a \(V\)-state sequence \(\pi\) is accepted by \(\mathcal{A}\) if there exists such a run \(\rho\) for \(\pi\). A _Buchi Automaton_ is an \(\omega\)-automaton where \(F\) is viewed as a Buchi acceptance condition. Similarly, by the _Co-Buchi_ acceptance condition \(\rho\) is _accepting_ if it visits \(F\) only a finite number of times, and a \(V\)-state sequence \(\pi\) is accepted by \(\mathcal{A}\) if there exists such a run \(\rho\) for \(\pi\). We call such an automaton a _Co-Buchi Automaton (CA)_. The _Universal Co-Buchi_ acceptance condition states that a run \(\rho\) of \(\mathcal{A}\) is _accepting_ if it visits \(F\) only a finite number of times, and a \(V\)-state sequence \(\pi\) is accepted by \(\mathcal{A}\) if _all_ runs \(\rho\) for \(\pi\) are accepting. We call such an automaton a _Universal Co-Buchi Automaton (UCA)_. Finally, we can view an \(\omega\)-automaton \(\mathcal{A}\) as a _safety_ automaton, by saying that \(\mathcal{A}\) accepts \(\pi\) iff there is a run of \(\mathcal{A}\) on \(\pi\) which never visits a state outside \(F\). We denote by \(L(\mathcal{A})\) the set of \(V\)-state sequences accepted by an \(\omega\)-automaton \(\mathcal{A}\).
It is well-known that any LTL formula \(\psi\) can be translated into a (possibly non-deterministic) Buchi automaton \(\mathcal{A}_{\psi}\) that accepts precisely the models of \(\psi\)[23]. The same construction works for \(\mathit{LTL}(V)\) formulas, by treating each atomic constraint as a propositional variable. Henceforth, for
an \(\mathit{LTL}(V)\) formula \(\psi\) we will denote the corresponding formula automaton by \(\mathcal{A}_{\psi}\).
Fig. 1 shows a formula automaton \(\mathcal{A}_{\psi}\) for the \(\mathit{LTL}(V)\) formula \(\psi=G(F(x=1)\wedge F(x=2)\wedge F(x=3))\) from Example IV.1, where \(V=\{x\}\). The automaton can be seen to be deterministic.
## IV LTL Games
In this section we introduce our notion of logically specified games, where moves are specified by logical constraints and winning conditions by LTL formulas. These games are similar to the formulation in Beyene et al [11].
A _2-player logical game with an LTL winning condition_ (or simply an _LTL game_) is of the form
\[\mathcal{G}=(V,D,\mathit{Con},\mathit{Env},\psi),\quad where\]
* \(V\) is a finite set of variables.
* \(D:V\to 2^{\mathbb{R}}\) is a domain mapping for \(V\).
* \(\mathit{Con}\) and \(\mathit{Env}\) are both constraints over \(V\cup V^{\prime}\), representing the moves of Player \(C\) and Player \(E\) respectively.
* \(\psi\) is an \(\mathit{LTL}(V)\) formula.
The constraint \(\mathit{Con}\) induces a transition relation
\[\Delta_{\mathit{Con}}\subseteq\mathbf{V}_{D}\times\mathbf{V}_{D}\]
given by \((s,s^{\prime})\in\Delta_{\mathit{Con}}\) iff \(s\) and \(s^{\prime}\) are \((V,D)\)-states, and \((s,s^{\prime})\vDash\mathit{Con}\). We use the notation \((s,s^{\prime})\vDash\mathit{Con}\) to denote the fact that \(t_{s,s^{\prime}}\vDash\mathit{Con}\), where \(t_{s,s^{\prime}}\) is the valuation over \(V\cup V^{\prime}\) which maps each \(x\in V\) to \(s(x)\) and \(x^{\prime}\in V^{\prime}\) to \(s^{\prime}(x)\). In a similar way, \(\mathit{Env}\) induces a transition relation \(\Delta_{\mathit{Env}}\subseteq\mathbf{V}_{D}\times\mathbf{V}_{D}\). For convenience we will assume that the \(C\)-moves are "complete" in that for every \((V,D)\)-state \(s\), there is a \((V,D)\)-state \(s^{\prime}\) such that \((s,s^{\prime})\in\Delta_{\mathit{Con}}\); and similarly for Player \(E\).
A play in \(\mathcal{G}\) is a sequence of \((V,D)\)-states obtained by an alternating sequence of moves of Players \(C\) and \(E\), with Player \(C\) making the first move. More precisely, an _(infinite) play_ of \(\mathcal{G}\), starting from a \((V,D)\)-state \(s\), is an infinite sequence of \((V,D)\)-states \(\pi=s_{0}s_{1}\cdots\), such that
* \(s_{0}=s\), and
* for all \(i\), \((s_{2i},s_{2i+1})\in\Delta_{\mathit{Con}}\) and \((s_{2i+1},s_{2i+2})\in\Delta_{\mathit{Env}}\).
We similarly define the notion of a _finite_ play \(w\) in the expected manner. We say a play \(\pi\) is _winning_ for Player \(C\) if it satisfies \(\psi\) (i.e. \(\pi\vDash\psi\)); otherwise it is winning for Player \(E\).
A strategy for Player \(C\) assigns to odd-length sequences of states, a non-empty subset of states that correspond to legal moves of \(C\). More precisely, a _strategy_ for Player \(C\) in \(\mathcal{G}\) is a partial map
\[\sigma:((\mathbf{V}_{D}\cdot\mathbf{V}_{D})^{*}\cdot\mathbf{V}_{D})\rightharpoonup 2 ^{\mathbf{V}_{D}}\]
satisfying the following constraints. We first define when a finite play \(w\) is _according to_\(\sigma\), inductively as follows:
* \(s\) is according to \(\sigma\) iff \(s\) belongs to the domain of \(\sigma\).
* if \(w\cdot s\) is of odd length and according to \(\sigma\), and \(s^{\prime}\in\sigma(w\cdot s)\), then \(w\cdot s\cdot s^{\prime}\) is according to \(\sigma\).
* if \(w\cdot s\) is of even length and according to \(\sigma\), and \((s,s^{\prime})\in\Delta_{\mathit{Env}}\), then \(w\cdot s\cdot s^{\prime}\) is according to \(\sigma\).
For \(\sigma\) to be a valid strategy in \(\mathcal{G}\), we require that for every finite play \(w\cdot s\) of odd-length, which is according to \(\sigma\), \(\sigma(w\cdot s)\) must be defined and non-empty, and for each \(s^{\prime}\in\sigma(w\cdot s)\) we must have \((s,s^{\prime})\in\Delta_{\mathit{Con}}\).
Finally, a strategy \(\sigma\) for Player \(C\) is _winning_ from a \((V,D)\)-state \(s\) in its domain, if every play that starts from \(s\) and is according to \(\sigma\), is winning for Player \(C\) (i.e. the play satisfies \(\psi\)). We say \(\sigma\) itself is _winning_ if it is winning from every state in its domain. We say Player \(C\)_wins_ from a \((V,D)\)-state \(s\) if it has a strategy which is winning from \(s\). We call the set of \((V,D)\)-states from which Player-\(C\) wins, the _winning region_ for Player \(C\) in \(\mathcal{G}\), and denote it \(\mathit{winreg}_{C}(\mathcal{G})\). The analogous notions for Player \(E\) are defined similarly.
We close this section with some further notions about strategies. We say that a winning strategy \(\sigma\) for \(C\) is _maximal_ if its domain is \(\mathit{winreg}_{C}(\mathcal{G})\), and for every strategy \(\sigma^{\prime}\) for \(C\) that is winning from a state \(s\) in \(\mathit{winreg}_{C}(\mathcal{G})\), we have \(\sigma^{\prime}(w)\subseteq\sigma(w)\) for each odd-length play \(w\) from \(s\) according to \(\sigma^{\prime}\). A strategy \(\sigma\) for \(C\) is called a _(finitely-representable) finite memory_ strategy, if it can be represented by a "Mealy-style" _strategy automaton_ (see Fig. 2(b)). This is a finite-state automaton similar to a deterministic Buchi automaton, but with a partition of the states into controller and environment states. The initial states are environment states. The states in the domain of the strategy are those that satisfy one of the outgoing guards from the initial state. Each controller state \(q\) has a label \(\mathit{mov}(q)\) associated with it in the form of a constraint over \(V\cup V^{\prime}\), which denotes a subset of moves allowed by \(\mathit{Con}\). The automaton represents a strategy \(\sigma\) in which \(\sigma(w)\) for odd-length \(w\) is given by the label \(\mathit{mov}(q)\) of the state \(q\) reached by the automaton on reading \(w\). Finally, a _memoryless_ strategy is one that is represented by a strategy automaton with a _single_ environment state.
Synthesizing winning strategies will be easier when the controller's moves are _finitely non-deterministic_, in that \(\mathit{Con}\) is given by a disjunction \(\mathit{Con}_{1}\vee\cdots\vee\mathit{Con}_{k}\), where each constraint \(\mathit{Con}_{i}\) is _deterministic_ (in that whenever \((s,s^{\prime})\vDash\mathit{Con}_{i}\) and \((s,s^{\prime\prime})\vDash\mathit{Con}_{i}\), we have \(s^{\prime}=s^{\prime\prime}\)). We call such a game a _finitely non-deterministic_ (FND) game.
Fig. 1: Büchi automaton \(A_{\psi}\) for the LTL formula \(\psi=G(F(x=1)\wedge F(x=2)\wedge F(x=3))\). Final states are indicated with double circles.
We illustrate some of these notions through an example below adapted from [15].
_Example IV.1 (Elevator)_: Consider a game \(\mathcal{G}_{1}\) which models an elevator control problem, where the system's state is represented by a single variable \(x\) of type integer, indicating the floor the elevator is currently positioned at. The controller can choose to move the elevator up or down by one floor, or stay at the same floor. The environment does nothing (simply "skips"). The specification requires us to visit each of Floors 1, 2, and 3 infinitely often. The game \(\mathcal{G}_{1}\) has the following components: the set of variables \(V\) is \(\{x\}\), and the domain map \(D\) is given by \(D(x)=\mathbb{Z}\). The moves of Player \(C\) and Player \(E\) are given by the constraints \(\mathit{Con}\): \(x^{\prime}=x\lor x^{\prime}=x+1\lor x^{\prime}=x-1\), and \(\mathit{Env}\): \(x^{\prime}=x\), respectively. The LTL specification \(\psi\) is \(G(F(x=1)\wedge F(x=2)\wedge F(x=3))\). The game is easily seen to be finitely non-deterministic.
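For concreteness, the moves of this game can be written directly as constraints over \(x\) and \(x^{\prime}\); the short Z3-based sketch below is only an illustration of the logical encoding and is not GenSys-LTL's actual input format.

```python
# Sketch: the Elevator game's moves as constraints over x and x', using the
# Z3 Python API (z3-solver). Illustrative only; not GenSys-LTL's input syntax.
from z3 import Int, Or, Solver, sat

x, xp = Int("x"), Int("x_prime")

Con = Or(xp == x, xp == x + 1, xp == x - 1)   # controller: stay, or move one floor up/down
Env = (xp == x)                               # environment: skip

s = Solver()
s.add(Con, x == 2, xp == 3)                   # is moving from Floor 2 to Floor 3 a legal C-move?
print(s.check() == sat)                       # True
```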
The "game graph" is shown in Fig. 1(a). For convenience we visualize the game as having two copies of the state space, one where it is the turn of Player \(C\) to make a move (denoted by circle states on the left) and the other where it is Player \(E\)'s turn to move (indicated by square states on the right). The moves of \(C\) go from left to right, while those of \(E\) go from right to left.
Player \(C\) has a winning strategy from all states; for example, by playing \(x^{\prime}=x-1\) from Floor 3 and above; \(x^{\prime}=x+1\) from Floor 1 and below; and \(x^{\prime}=x+1\) and \(x^{\prime}=x-1\) from Floor 2, depending on whether it was last at Floor 1 or Floor 3, respectively. This finite-memory strategy is shown by the strategy automaton in Fig. 2(b).
It is easy to see that a memoryless winning strategy does _not_ exist for Player \(C\), as it cannot afford to play the _same_ move from state \(x\mapsto 2\) (it must keep track of the direction from which the lift arrived).
We close this section with a description of the problems we address in this paper. The main problems we address are the following:
1. (_Winning Region_) Given an LTL game \(\mathcal{G}\), compute the winning region for Player \(C\). Wherever possible, also compute a finite-memory winning strategy for \(C\) from this region.
2. (_Realizability_) Given an LTL game \(\mathcal{G}\) and an initial region in the form of a constraint \(\mathit{Init}\) over the variables \(V\) of the game, decide whether Player \(C\) wins from every state in \(\mathit{Init}\). If possible, compute a finite-memory winning strategy for \(C\) from \(\mathit{Init}\).
It is easy to see that these problems are undecidable in general (for example by a reduction from the control-state reachability problem for 2-Counter Machines). Hence the procedures we give in subsequent sections may not always be terminating ones. In the sequel we focus on the problem of computing winning regions, since we can check realizability by checking if the given initial region is contained in the winning region.
## V GenSys-LTL Approach
```
Input : LTL game \(\mathcal{G}=(V,D,\mathit{Con},\mathit{Env},\psi)\)
Output : \(\mathit{winreg}_{C}(\mathcal{G})\) or an approximation of it, and a strategy \(\sigma\) for Player \(C\) from this region.

if \(\mathcal{G}\) is simple then
    Compute \(\mathit{winreg}_{C}(\mathcal{G})\) (i.e. the winning region for \(C\) in \(\mathcal{G}\)).
    Compute a winning strategy \(\sigma\).
    return \(\mathit{winreg}_{C}(\mathcal{G})\), \(\sigma\).
\(\mathcal{A}_{\psi}\) := LTL2BA(\(\psi\)).
\(\mathcal{A}_{\neg\psi}\) := LTL2BA(\(\neg\psi\)).
if \(\mathcal{A}_{\psi}\) is deterministic then
    Construct the simple Buchi product game \(\mathcal{H}=\mathcal{G}\otimes\mathcal{A}_{\psi}\).
    Compute \(\mathit{winreg}_{C}(\mathcal{H})\).
    Extract \(\mathit{winreg}_{C}(\mathcal{G})\) and a winning strategy \(\sigma\) for \(C\) in \(\mathcal{G}\).
    return \(\mathit{winreg}_{C}(\mathcal{G})\), \(\sigma\).
if \(\mathcal{A}_{\neg\psi}\) is deterministic then
    Construct the simple Co-Buchi product game \(\mathcal{H}=\mathcal{G}\otimes\mathcal{A}_{\neg\psi}\).
    Compute \(\mathit{winreg}_{C}(\mathcal{H})\).
    Extract \(\mathit{winreg}_{C}(\mathcal{G})\) and a winning strategy \(\sigma\) for \(C\).
    return \(\mathit{winreg}_{C}(\mathcal{G})\), \(\sigma\).
// Both \(\mathcal{A}_{\psi}\) and \(\mathcal{A}_{\neg\psi}\) are non-deterministic
\(k:=0\); \(W_{U}:=\mathit{false}\); \(W_{O}:=\mathit{true}\);
while \(W_{U}\neq W_{O}\) do
    Construct on-the-fly two \(k\)-safety product automata involving \(\mathcal{G}\) with \(A_{\psi}\) and \(A_{\neg\psi}\),
    respectively, and from these extract an under-approximation \(W_{U}\) of \(\mathit{winreg}_{C}(\mathcal{G})\)
    and an over-approximation \(W_{O}\) of \(\mathit{winreg}_{C}(\mathcal{G})\), respectively.
    From \(W_{U}\) extract a winning strategy \(\sigma\) for Player \(C\).
    \(k\) := \(k\) + 1.
return \(W_{U},W_{O},\sigma\);
```
**Algorithm 1**GenSys-LTL overview
Fig. 2: Game graph and strategy for \(C\) in Elevator game
Our approach consists of a bouquet of techniques. This is motivated by our objective to handle each type of LTL formula with an efficient technique suited to that type. Algorithm 1 is the "main" program or driver of our approach. Fig. 3 also summarizes the approach.
Algorithm 1 takes as input an LTL game \(\mathcal{G}\). Lines 1-4 of the algorithm tackle the scenario when the given game \(\mathcal{G}\) is _simple_. These are games where the formula \(\psi\) is of one of the restricted forms \(G(X)\) (safety), \(F(X)\) (reachability), \(GF(X)\) (Buchi), or \(FG(X)\) (Co-Buchi), where \(X\) is a constraint over the game variables \(V\). For these cases, we propose fixpoint procedures that directly work on the state-space of the given game \(\mathcal{G}\), and that use SMT formulae to encode sets of states. Sec. VI describes these procedures in detail. Due to the infiniteness of the state-space, these fixpoint computations are not guaranteed to terminate. When they do terminate, they are guaranteed to compute the precise winning region \(\mathit{winreg}_{C}(\mathcal{G})\), and, in the case of FND games, to extract a memoryless strategy automaton for this region.
If the given formula \(\psi\) is not simple, we convert the formula as well as its negation, in Lines 5-6, into Buchi automata using a standard procedure [23]. If either of these two automata are _deterministic_ (see Sec. III), we construct a product of the game \(\mathcal{G}\) with the automaton, such that this product-game \(\mathcal{H}\) is a simple LTL game. We then compute the winning region on this product using the fixpoint computations mentioned above. If the fixpoint computation terminates, we extract a winning region for the original game \(\mathcal{G}\), and a strategy \(\sigma\). These steps are outlined in Lines 7-15 of Algorithm 1, and details are presented in Sec. VII.
The hardest scenario is when both the automata are non-deterministic. For this scenario, we propose an on-the-fly determinization and winning-region extraction approach. These steps are outlined in Lines 16-19 of Algorithm 1. We present the details in Section VIII.
We state the following claim which we will substantiate in subsequent sections:
**Theorem 1**: _Whenever Algorithm 1 terminates, it outputs the exact winning region for Player \(C\) when \(\mathcal{G}\) is either simple or deterministic; in other cases it outputs a sound under- and over-approximation of the winning region for Player \(C\) in \(\mathcal{G}\). Additionally, in the case when \(\mathcal{G}\) is FND, upon termination Algorithm 1 outputs a strategy automaton representing a winning strategy for Player \(C\) from this region. \(\Box\)_
## VI Simple LTL Games
Our approach reduces logical LTL games to "simple" LTL games in which the winning condition is _internal_ to the game. In this section we describe this subclass of LTL games and the basic fixpoint algorithms to solve them.
A _simple_ LTL game is an LTL game \(\mathcal{G}=(V,D,\mathit{Con},\mathit{Env},\psi)\), in which \(\psi\) is an \(\mathit{LTL}(V)\) formula of the form \(G(X)\), \(F(X)\), \(GF(X)\), or \(FG(X)\), where \(X\) is a constraint on \(V\). We refer to games with such specifications as _safety_, _reachability_, _Buchi_, and _co-Buchi_ games, respectively.
We can compute (with the possibility of non-termination) the winning regions \(\mathit{winreg}_{C}(\mathcal{G})\) for each of these four types of games, and a strategy automaton representing a memoryless winning strategy for Player \(C\), in the special case of FND games, by extending the classical algorithms for the finite-state versions of these games (see [6, 7]).
The algorithms we describe will make use of the following formulas representing sets of "controllable predecessors" in the context of different types of games. Here \(Y(V)\) is a constraint representing a set of game states.
* The set of controllable predecessors w.r.t. the set of states \(Y\), for a safety specification \(G(X)\) (namely states from which Player \(C\) has a _safe_ move from which all environment moves result in a \(Y\)-state): \[\begin{array}{rcl}\mathit{CP}_{S}^{X}(Y)&\equiv&\exists V^{\prime}(\mathit{ Con}(V,V^{\prime})\ \wedge X(V^{\prime})\ \wedge\\ &&\forall V^{\prime\prime}(\mathit{Env}(V^{\prime},V^{\prime\prime})\implies Y (V^{\prime\prime}))).\end{array}\]
* The set of controllable predecessors w.r.t. \(Y\), for a reachability specification \(F(X)\) (namely states from which \(C\) has a move that either gets into \(X\), or from which all environment moves get into \(Y\)): \[\begin{array}{rcl}\mathit{CP}_{R}^{X}(Y)&\equiv&\exists V^{\prime}( \mathit{Con}(V,V^{\prime})\ \wedge(X(V^{\prime})\lor\\ &&\forall V^{\prime\prime}(\mathit{Env}(V^{\prime},V^{\prime\prime})\implies Y (V^{\prime\prime})))).\end{array}\]
* The set of predecessors w.r.t. \(Y\) for Player \(C\) (namely states from which \(C\) has a move that results in a \(Y\)-state): \[\begin{array}{rcl}\mathit{CP}_{C}(Y)&\equiv&\exists V^{\prime}(\mathit{ Con}(V,V^{\prime})\ \wedge Y(V^{\prime})).\end{array}\]
* The set of predecessors w.r.t. \(Y\) for Player \(E\) (namely states from which all moves of \(E\) result in a \(Y\)-state): \[\begin{array}{rcl}\mathit{CP}_{E}(Y)&\equiv&\forall V^{\prime}(\mathit{ Env}(V,V^{\prime})\implies Y(V^{\prime})).\end{array}\]
Algorithm 2 (ComputeWR-Safety) takes a safety game as input, and iteratively computes the safe controllable predecessors, starting with the given safe set \(X\), until it reaches a fixpoint (\(W_{old}\implies W\)). Here we use a quantifier elimination procedure \(\mathit{QElim}\) which takes a logical formula with quantifiers (like \(\mathit{CP}_{S}^{X}(W)\wedge X\)) and returns an equivalent quantifier-free formula. For example, \(\mathit{QElim}(\exists y(y\leq x\wedge x+y\leq 1\wedge 0\leq y))\) returns \(0\leq x\wedge x\leq 1\). Upon termination the algorithm returns the fixpoint \(W\).
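To make the use of \(\mathit{QElim}\) concrete, the following is a minimal sketch of how a controllable-predecessor formula such as \(\mathit{CP}_{S}^{X}(Y)\) can be built and quantifier-eliminated with the Z3 Python API. It is illustrative only: the one-variable toy game and the variable names are assumptions made here for exposition, not part of the GenSys-LTL implementation.

```python
# Illustrative sketch (not GenSys-LTL code): building CP_S^X(Y) for a toy
# one-variable game and eliminating its quantifiers with Z3's 'qe' tactic.
from z3 import Real, And, Or, Implies, Exists, ForAll, Tactic

x, x1, x2 = Real('x'), Real('x1'), Real('x2')   # V, V', V''

Con = Or(x1 == x - 1, x1 == x + 1)   # controller may move one step down or up
Env = x2 == x1                       # environment leaves the variable unchanged
X = x1 >= 0                          # X(V'): the safe constraint, evaluated at V'
Y = x2 >= 0                          # Y(V''): the current region, evaluated at V''

# CP_S^X(Y) = ∃V'( Con(V,V') ∧ X(V') ∧ ∀V''( Env(V',V'') ⟹ Y(V'') ) )
cp_s = Exists([x1], And(Con, X, ForAll([x2], Implies(Env, Y))))

qe = Tactic('qe')                    # quantifier elimination, playing the role of QElim
print(qe(cp_s).as_expr())            # a quantifier-free formula over x only
```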
Fig. 3: Schematic overview of GenSys-LTL
```
Input : Safety game \(\mathcal{G}=(V,D,\mathit{Con},\mathit{Env},G(X))\) Output :\(\mathit{winreg}_{C}(\mathcal{G})\), strategy \(\sigma\)
1\(W\) := \(X\);
2do
3\(W_{old}\) := \(W\);
4\(W\) := \(\mathit{Qelim}(\mathit{CP}_{S}^{X}(W)\wedge X)\);
5while\(\neg(W_{old}\Rightarrow W)\);
6\(\sigma\) := ExtractStrategy\({}_{G}(W)\);
7return\(W\), \(\sigma\);
```
**Algorithm 2**ComputeWR-Safety
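The following is a minimal sketch of the fixpoint loop of Algorithm 2 on the same toy game, with the termination test \(\neg(W_{old}\Rightarrow W)\) delegated to a Z3 validity check. The helper names (`qelim`, `cp_safety`, `valid`) are assumed here for exposition and do not correspond to identifiers in the actual tool.

```python
# Sketch of Algorithm 2 (ComputeWR-Safety) on a toy one-variable safety game.
# Illustrative only; not the GenSys-LTL implementation.
from z3 import (Real, And, Or, Not, Implies, Exists, ForAll,
                Tactic, Solver, substitute, unsat)

x, x1, x2 = Real('x'), Real('x1'), Real('x2')
Con = Or(x1 == x - 1, x1 == x + 1)      # controller moves
Env = x2 == x1                          # environment moves
X = x >= 0                              # safety objective G(x >= 0)

def qelim(f):
    return Tactic('qe')(f).as_expr()    # quantifier-free equivalent of f

def cp_safety(W):                       # CP_S^X(W), with W a formula over x
    return Exists([x1], And(Con,
                            substitute(X, (x, x1)),            # X(V')
                            ForAll([x2], Implies(Env,
                                   substitute(W, (x, x2))))))  # W(V'')

def valid(f):                           # does f hold for all x?
    s = Solver(); s.add(Not(f)); return s.check() == unsat

W = X
while True:
    W_old = W
    W = qelim(And(cp_safety(W_old), X))
    if valid(Implies(W_old, W)):        # fixpoint reached: W_old => W
        break
print(W)                                # winning region of Player C
```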
```
Input : Reachability game \(\mathcal{G}=(V,D,\mathit{Con},\mathit{Env},F(X))\) Output :\(\mathit{winreg}_{C}(\mathcal{G})\), strategy \(\sigma\)
1\(W\) := \(X\);
2\(C\) := \([W]\);
3do
4\(W_{old}\) := \(W\);
5\(W\) := \(\mathit{Qelim}(\mathit{CP}_{R}^{X}(W)\lor X)\);
6\(C.append(W\wedge\neg W_{old})\);
7while\(\neg(W\Rightarrow W_{old})\);
8\(\sigma\) := ExtractStrategy\({}_{F}(C)\); return\(W\), \(\sigma\);
```
**Algorithm 3**ComputeWR-Reachability
```
Input : Buchi game \(\mathcal{G}=(V,D,\mathit{Con},\mathit{Env},GF(X))\) Output :\(\mathit{winreg}_{C}(\mathcal{G})\), strategy \(\sigma\)
1\(W\) := \(W_{E}\) := \(True\);
2do
3\(W_{old}\) := \(W\);
4\(W\) := \(\mathit{Qelim}(\mathit{CP}_{C}^{X}(W)\lor X)\);
5\(W_{E}\) := \(\mathit{Qelim}(\mathit{CP}_{E}(W)\wedge X)\);
6\(H\) := \(H_{E}\) := \(False\);
7\(C\) := \([H]\);
8
9do
10\(H_{E_{old}},H_{old}\) := \(H_{E},H\);
11\(H\) := \(\mathit{Qelim}(\mathit{CP}_{C}(H_{E})\lor W)\);
12\(H_{E}\) := \(\mathit{Qelim}(\mathit{CP}_{E}(H)\lor W_{E})\);
13\(C.append(H\wedge\neg H_{old})\);
14while\(\neg(H_{E}\Rightarrow H_{E_{old}}\ \wedge\ H\Rightarrow H_{old})\);
15\(W_{E},W\) := \(H_{E},H\);
16while\(\neg(W_{E_{old}}\Rightarrow W_{E}\ \wedge\ W_{old}\Rightarrow W)\);
17\(\sigma\) := ExtractStrategy\({}_{GF}(W,C)\);
18 return\(W,\sigma\);
```
**Algorithm 4**ComputeWR - Buchi
```
Input : Co-Buchi game \(\mathcal{G}=(V,D,\mathit{Con},\mathit{Env},FG(X))\) Output :\(\mathit{winreg}_{C}(\mathcal{G})\), strategy \(\sigma\)
1\(W\) := \(W_{E}\) := \(False\);
2\(C\) := \([W]\);
3do
4\(W_{E_{old}},W_{old}\) := \(W_{E},W\);
5\(W\) := \(\mathit{Qelim}(\mathit{CP}_{C}(W_{E})\lor X)\);
6\(W_{E}\) := \(\mathit{Qelim}(\mathit{CP}_{E}(W)\lor X)\);
7\(H\) := \(H_{E}\) := \(True\);
8
9 do
10\(H_{E_{old}},H_{old}\) := \(H_{E},H\);
11\(H\) := \(\mathit{Qelim}(\mathit{CP}_{C}(H_{E})\wedge W)\);
12\(H_{E}\) := \(\mathit{Qelim}(\mathit{CP}_{E}(H)\wedge W_{E})\);
13while\(\neg(H_{E_{old}}\Rightarrow H_{E}\ \wedge\ H_{old}\Rightarrow H)\);
14\(W_{E},W\) := \(H_{E},H\);
15\(C.append(W\wedge\neg W_{old})\);
16while\(\neg(W_{E}\Rightarrow W_{E_{old}}\ \wedge\ W\Rightarrow W_{old})\);
17\(\sigma\) := ExtractStrategy\({}_{GF}(W,C)\); return\(W,\sigma\);
```
**Algorithm 5**ComputeWR - Co-Buchi
When the input game is FND (with \(\mathit{Con}=\mathit{Con}_{1}\vee\cdots\lor\mathit{Con}_{k}\)), the call to \(\mathit{ExtractStrategy}_{G}(W)\) does the following. Let
\[U_{i}\;=\;W\wedge\mathit{Qelim}\Big(\exists V^{\prime}\big(\mathit{Con}_{i}(V,V^{\prime})\wedge W(V^{\prime})\wedge\forall V^{\prime\prime}(\mathit{Env}(V^{\prime},V^{\prime\prime})\implies W(V^{\prime\prime}))\big)\Big)\]
Then the memoryless strategy \(\sigma\) extracted simply offers the move \(\mathit{Con}_{i}\) whenever Player \(C\) is in region \(U_{i}\). The corresponding strategy automaton essentially maintains a controller state for each constraint \(U_{i}\), labelled by the move \(\mathit{Con}_{i}\). For the strategy extraction in the rest of this section, we assume that the input game is FND.
Similarly, Algorithm 3 (ComputeWR-Reachability) takes a reachability game as input, and iteratively computes the reachable controllable predecessors, starting with the given target set \(X\), until it reaches a fixpoint (\(W\implies W_{old}\)).
To compute the memoryless strategy for reachability, we compute \(C\), which ensures that each move made by the controller from a given state moves one step closer to \(X\).
Let the reachability controllable predecessor for move \(\mathit{Con}_{i}\) be:
\[\begin{array}{rcl}\mathit{CP}_{R_{i}}^{X}(Y)&\equiv&\exists V^{\prime}( \mathit{Con}_{i}(V,V^{\prime})\ \wedge(X(V^{\prime})\ \vee\\ &\forall V^{\prime\prime}(\mathit{Env}(V^{\prime},V^{\prime\prime})\implies Y (V^{\prime\prime})))).\end{array}\]
Then \(\mathit{ExtractStrategy}_{F}(C)\) does the following:
\[U_{i}\;=\;\bigvee_{j=0}^{|C|-2}\mathit{Qelim}\big(\mathit{CP}_{R_{i}}^{X_{j}}(X_{j})\big)\wedge C_{j+1}\]
where \(X_{j}=C_{j}\lor C_{j-1}\lor C_{j-2}\vee\cdots\lor C_{0}\).
Thus, \(U_{i}\) is the set of states exclusively in \(W_{j+1}\) (which we denote by \(C_{j+1}\), and which is constructed in Algorithm 3) from where Player \(C\) has a strategy to reach \(X\) by first ensuring a move to \(W_{j}\), thereby moving one step closer to \(X\). Then the memoryless strategy \(\sigma\) extracted offers
the move \(\mathit{Con}_{i}\) whenever Player \(C\) is in the region \(U_{i}\). The corresponding strategy automaton essentially maintains a controller state for each constraint \(U_{i}\), labelled by the move \(\mathit{Con}_{i}\).
Algorithm 4 (ComputeWR-Buchi) takes a Buchi game as input, and computes a winning region from where Player \(C\) has a strategy to visit \(X\) infinitely often. In this algorithm, we require two levels of nesting to compute the winning region. Using two-step controllable predecessors (such as \(\mathit{CP}_{S}^{X}\) and \(\mathit{CP}_{R}^{X}\)), which reason about two moves at a time, causes unsoundness if used directly. Using \(\mathit{CP}_{S}^{X}\) in the nested Buchi algorithm yields an under-approximation of the winning region, as it is not necessary that the intermediate environment states be safe. Similarly, using \(\mathit{CP}_{R}^{X}\) is too weak, as the intermediate environment states are not reasoned about correctly: it assumes that a finite play reaching an intermediate environment state in \(X\) satisfies the property, which is not true for an infinite Buchi play. Thus, we use the one-step controllable predecessors \(\mathit{CP}_{C}\) and \(\mathit{CP}_{E}\) (for the controller and the environment, respectively), which reason about the game play one move at a time, in the style of [6]. The strategy is also extracted similarly.
As a dual of Algorithm 4, Algorithm 5 (ComputeWR-Co-Buchi) takes a co-Buchi game as input, and computes a winning region from where Player \(C\) has a strategy to eventually stay in \(X\) forever.
We can now state (see App. A for proof):
**Theorem 2**: _Whenever Algorithms 2, 3, 4, and 5 terminate, they compute the exact winning region for Player \(C\) in safety, reachability, Buchi, and co-Buchi games, respectively. For FND games, upon termination, they also output a winning strategy automaton for Player \(C\) for this region. Furthermore, for safety games this strategy is maximally permissive. \({}_{\blacksquare}\)_
## VII Deterministic LTL Games
In this section we discuss how to solve a game with an LTL condition \(\psi\) which is not simple, but is nevertheless _deterministic_ in that either \(\mathcal{A}_{\psi}\) or \(\mathcal{A}_{\neg\psi}\) is deterministic. We begin with the case when \(\mathcal{A}_{\psi}\) is deterministic.
Let \(\mathcal{G}=(V,D,\mathit{Con},\mathit{Env},\psi)\) be an LTL game, and let \(\mathcal{A}_{\psi}=(Q,\{q_{0}\},\mathcal{T},F)\) be a deterministic and complete Buchi automaton for \(\psi\) over the set of variables \(V\). We define the _product game_ corresponding to \(\mathcal{G}\) and \(\mathcal{A}_{\psi}\) to be the game \(\mathcal{G}\otimes\mathcal{A}_{\psi}\) obtained as follows:
* \(q\) is a new variable representing the state of the automaton such that \(D(q)=\{1,2,\cdots|Q|\}\)
* \(\mathit{Con}^{\prime}=\mathit{Con}\land\bigvee_{(p,\delta,p^{\prime})\in \mathcal{T}}(q=p\land\delta\wedge q^{\prime}=p^{\prime})\).
* \(\mathit{Env}^{\prime}=\mathit{Env}\land\bigvee_{(p,\delta,p^{\prime})\in \mathcal{T}}(q=p\land\delta\wedge q^{\prime}=p^{\prime})\).
* \(\psi^{\prime}=GF(\bigvee_{p\in F}q=p)\).
Similarly, for the case when \(\mathcal{A}_{\neg\psi}\) is a deterministic and complete Buchi automaton for \(\neg\psi\), we define the _product game_ corresponding to \(\mathcal{G}\) and \(\mathcal{A}_{\neg\psi}\) to be the game \(\mathcal{G}\otimes\mathcal{A}_{\neg\psi}\), obtained in the same way except that:
* \(\psi^{\prime}=FG(\bigvee_{p\in F}q=p)\).
The definitions of \(\mathit{Con}^{\prime}\) and \(\mathit{Env}^{\prime}\) remain the same as in the product game \(\mathcal{G}\otimes\mathcal{A}_{\psi}\). For the product game \(\mathcal{G}\otimes\mathcal{A}_{\neg\psi}\), in order to satisfy the specification \(\psi\), we need to visit the final states of \(\mathcal{A}_{\neg\psi}\) finitely often. This is equivalent to visiting the non-final states eventually always, as the definition of \(\psi^{\prime}\) states. We note that if \(\mathcal{G}\) is finitely non-deterministic, so are \(\mathcal{G}\otimes\mathcal{A}_{\psi}\) and \(\mathcal{G}\otimes\mathcal{A}_{\neg\psi}\).
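As a small illustration, the product move relation \(\mathit{Con}^{\prime}\) can be assembled from an explicit list of automaton transitions as sketched below; the transition list shown is a made-up fragment, and none of the names come from the tool.

```python
# Illustrative sketch: Con' = Con ∧ ⋁_{(p, δ, p') ∈ T} (q = p ∧ δ ∧ q' = p').
from z3 import Int, Real, And, Or, BoolVal

x, x1 = Real('x'), Real('x1')   # game variables V and V'
q, q1 = Int('q'), Int('q1')     # automaton-state variable and its primed copy

Con = Or(x1 == 1, x1 == 2)      # controller moves of the simplified Elevator game

# hypothetical automaton transitions (p, delta over V, p')
T = [(0, BoolVal(True), 0),
     (0, x != 2, 1),
     (1, x != 2, 1)]

Con_prod = And(Con, Or([And(q == p, delta, q1 == p1) for (p, delta, p1) in T]))
```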
**Theorem 3**: _Let \(\mathcal{G}\), with \(\mathcal{A}_{\psi}\) (resp. \(\mathcal{A}_{\neg\psi}\)) deterministic, be as above. Let \(W^{\prime}\) be the winning region for Player \(C\) in \(\mathcal{G}\otimes\mathcal{A}_{\psi}\) (resp. \(\mathcal{G}\otimes\mathcal{A}_{\neg\psi}\)). Then the winning region for Player \(C\) in \(\mathcal{G}\) is \(W=\{s\mid(s,q_{0})\in W^{\prime}\}\). Furthermore, when \(\mathcal{G}\) is finitely non-deterministic, given a finitely-representable memoryless strategy for \(C\) in \(\mathcal{G}\otimes\mathcal{A}_{\psi}\) (resp. \(\mathcal{G}\otimes\mathcal{A}_{\neg\psi}\)), we can construct a finitely-representable finite-memory strategy for \(C\) in \(\mathcal{G}\)._
Proof:: See Appendix B. \({}_{\blacksquare}\)
## VIII On The Fly Determinization Approach
When the automata \(A_{\psi}\) and \(A_{\neg\psi}\) are non-deterministic, the product game \(\mathcal{H}_{\psi}\) of the given game \(\mathcal{G}\) with \(A_{\psi}\) and the product game \(\mathcal{H}_{\neg\psi}\) of \(\mathcal{G}\) with \(A_{\neg\psi}\) both will be non-deterministic. It has been recognized in the literature [9] that non-deterministic automata need to be determinized to enable a precise winning region to be inferred.
### _Overview of determinization_
We adopt the basic idea of \(k\)-safety determinization from the _Acacia_ approach [10] for finite games and extend it to the setting of infinite games. We introduce our determinized product game construction intuitively below, and later formally in Sec. VIII-B. The underlying game graph \(\mathcal{G}\) we use here for illustration is based on the Elevator game in Example 4. We simplify the example to admit just two controller moves, namely, \(x^{\prime}=1\) and \(x^{\prime}=2\), while the environment does not change the floor \(x\) in its moves. The given LTL property \(\psi\) is \(G(F(x=1)\wedge F(x=2))\). Fig. 4 depicts the Universal Co-Buchi automaton \(A_{\neg\psi}\) for this property, which happens to be non-deterministic.
The approach takes as parameter an integer \(k\geq 0\), and generates a determinized version of the product of the game \(\mathcal{G}\) and the automaton \(A_{\neg\psi}\). A portion of the (infinite) determinized product for our example under consideration is depicted in
Fig. 5, for \(k=2\). Each state of the determinized product is a pair of the form \((s,v)\), where \(s\) is a state of the underlying game \(\mathcal{G}\) (i.e., a value of \(x\) in the example), and \(v\) is a vector of counts (the vectors are depicted within angle brackets). Each vector intuitively represents the subset of automaton states that the game could be in currently, with \(v(i)>-1\) indicating that the automaton state \(q_{i}\in Q\) belongs to the subset. If \(v(i)>-1\), the value \(v(i)\) further indicates the count of the maximum number of final states that can be visited along plays in the underlying game graph that reach automaton state \(q_{i}\) and that correspond to plays of the product graph that reach the current state \((s,v)\). The moves of the two players in the product graph are alternating. For conciseness, we avoid showing the environment states, which do not make any updates to the game state. The initial states of the product graph are the ones whose vector component is \(c_{0}=\langle 0,-1,-1,-1\rangle\), which represents the _initial_ subset \(\{q_{0}\}\). One of the initial states of the product graph is depicted in Fig. 5 (there are an infinite number of them, corresponding to all possible values of the game variable \(x\)).
We pick state \(E\) in Fig. 5 to illustrate the subset construction. \(q_{2}\) is not present in \(E\) because from none of the automaton states that are present in the subset in product state \(D\) (i.e., \(q_{0},q_{2}\) or \(q_{3}\)), transitions to \(q_{2}\) are possible as per the automaton in Fig. 4, when the value of \(x\) is 1 (as \(x\) has value 1 in product state \(D\)). And \(q_{3}\) has a count of 2 in \(E\) because \(q_{2}\) had count of 2 in state \(D\) and a \(q_{2}\) to \(q_{3}\) transition is possible when \(x\) has value 1 as per Fig. 4.
The product game shown in Fig. 5 can be seen to be deterministic. This means that if a product state \((s,v)\) has two successors \((s_{1},v_{1})\) and \((s_{2},v_{2})\), then \(s_{1}\neq s_{2}\). _Safe_ states of the product graph are ones where no element of the vector exceeds \(k\). Successor states of unsafe states will themselves be unsafe. In the figure, unsafe states are indicated in red color (and have entries greater than 2 in the vectors).
A unique product game graph exists as per the construction mentioned above for a given value of \(k\). This product game graph is said to be _winning_ for Player \(C\) if it satisfies the following conditions: (I) At least one of the safe product states has \(c_{0}=\langle 0,-1,\ldots,-1\rangle\) as its vector, (II) For every safe product state from which the controller moves, at least one of the successors is a safe state, and (III) for every safe product state from which the environment moves, none of the successors are unsafe states. Otherwise, higher values of \(k\) will need to be tried, as indicated in the loop in Lines 16-19 in Algorithm 1. The product game graph in Fig. 5 happens to be winning.
If a product game graph is winning, then the game state components \(s\) of the product states of the form \((s,c_{0})\) in the product graph, where \(c_{0}=\langle 0,-1,\ldots,-1\rangle\), constitute, in general, an under-approximation of the winning region \(\mathit{winreg}_{C}(\mathcal{G})\). The under-approximation in general increases in size as the value of \(k\) increases. Note, in the loop we also compute a strategy for Player \(E\) by constructing a determinized product using \(A_{\psi}\). Using this it can be detected when the current value of \(k\) yields the precise region \(\mathit{winreg}_{C}(\mathcal{G})\). (If the underlying game is finite, such a \(k\) is guaranteed to exist.)
### _Formal presentation of deterministic product construction_
We present here our SMT-based fixpoint computation for computing the product game graph of the kind introduced above, for a given bound \(k\). We use formulas to represent (finite or infinite) portions of product graphs symbolically. The free variables in any formula are underlying game variables \(V\) and a vector-typed variable \(c\). The solution to a formula is a (finite or infinite) set of states of a product graph.
\(\mathit{Aut}(P,V,Q)\) is a given formula that encodes the logical transition relation \(\mathcal{T}\) of the Buchi automaton \(A_{\neg\psi}\). A triple \((q,s,q^{\prime})\) is a solution to \(\mathit{Aut}(P,V,Q)\) iff \((q,s,q^{\prime})\in\Delta_{\mathcal{T}}\). For instance, for the automaton in Fig. 4, \(\mathit{Aut}(P,V,Q)\) would be \((P=q_{0}\wedge Q=q_{0})\vee(P=q_{0}\wedge Q=q_{1}\wedge x\neq 2)\vee\cdots\). \(\mathit{final}(P)\) is a given formula that evaluates to 1 if \(P\) is a final state in the automaton \(A_{\neg\psi}\) and otherwise evaluates to 0.
We define a formula \(\mathit{Succ}_{k}(c,V,c^{\prime})\) as follows:
\[\forall q.\ c^{\prime}(q)=\begin{cases}\max\{\min(c(p)+\mathit{final}(q),\,k+1)\mid p\in Q,\ \mathit{Aut}(p,V,q),\ c(p)\geq 0\},&\text{if }\exists p\text{ such that }\mathit{Aut}(p,V,q)\wedge c(p)\geq 0,\\ -1,&\text{otherwise}.\end{cases}\]
Intuitively, a triple \((v,s,v^{\prime})\) is a solution to \(\mathit{Succ}_{k}(c,V,c^{\prime})\) iff the product state \((s^{\prime},v^{\prime})\) is a successor of the product state \((s,v)\) for some \(s^{\prime}\).
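A minimal sketch of this counter update is given below, using an explicit transition predicate in place of the symbolic \(\mathit{Aut}(P,V,Q)\) formula; the function and parameter names are assumed here for exposition.

```python
# Sketch of the counter-vector successor behind Succ_k (illustrative only).
def succ_k(c, game_state, k, transitions, final):
    """c: list of counters, one per automaton state (-1 means "not in the subset").
    transitions(p, s, q): True iff the automaton can move p -> q while reading s.
    final(q): 1 if q is a final state of A_notpsi, else 0."""
    c_next = []
    for q in range(len(c)):
        preds = [c[p] for p in range(len(c))
                 if c[p] >= 0 and transitions(p, game_state, q)]
        if preds:   # max over predecessors, bumped on final states, capped at k+1
            c_next.append(max(min(v + final(q), k + 1) for v in preds))
        else:
            c_next.append(-1)
    return c_next

def safe(c, k):     # a product state (s, c) is safe iff no entry exceeds k
    return all(v <= k for v in c)
```

Starting from \(c_{0}=\langle 0,-1,\ldots,-1\rangle\) and applying `succ_k` along a play yields counter vectors of the kind depicted in Fig. 5.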
Our approach is to use an iterative shrinking fixpoint computation to compute the _greatest fixpoint_ (GFP) \(W\) of the function \(\mathit{CP}_{k}\) defined below.
\[\mathit{CP}_{k}(X)\ \equiv\ G(V,c)\ \wedge\ \exists V^{\prime},c^{\prime}.\ \big(\mathit{Con}(V,V^{\prime})\wedge\mathit{Succ}_{k}(c,V,c^{\prime})\wedge G(V^{\prime},c^{\prime})\ \wedge\ \forall V^{\prime\prime},c^{\prime\prime}.\ \big((\mathit{Env}(V^{\prime},V^{\prime\prime})\wedge\mathit{Succ}_{k}(c^{\prime},V,c^{\prime\prime}))\implies X(V^{\prime\prime},c^{\prime\prime})\big)\big).\]
The argument of and the return value from the function above are both formulas in \(V,c\), representing sets of product graph states. \(G(V,c)\) represents safe product states, and checks that all elements of \(c\) are \(\geq\) -1 and \(\leq k\). The fixpoint computation is not guaranteed to terminate due to the infiniteness in the underlying game graph \(\mathcal{G}\). If it does terminate, the formula \(W\), after replacing the free variable \(c\) with the initial vector \(c_{0}=\langle 0,-1,\ldots,-1\rangle\), is returned. This formula will have solutions iff the value of \(k\) considered was sufficient to identify a non-empty under-approximation of \(\mathit{winreg}_{C}(\mathcal{G})\). The formula's solution is guaranteed to represent the _maximal_ winning product graph that exists (and hence the maximal subset of \(\mathit{winreg}_{C}(\mathcal{G})\)) for the given value of \(k\).
If \(W\) has solutions, we infer a strategy \(\sigma\) for Player \(C\) as follows. The following utility function \(\sigma_{\mathit{prod}}\) returns a formula in free variables \(V\), whose solutions are the next game states to transition to when at a product state \((s_{1},c_{1})\) in order to ensure a winning play.
\[\sigma_{\mathit{prod}}(s_{1},c_{1})\ =\ \exists c_{2}.\ \mathit{Con}(s_{1},V) \wedge\mathit{Succ}_{k}(c_{1},s_{1},c_{2})\wedge W(V,c_{2})\]
We introduce a utility function \(\mathit{DestPair}\), whose argument is a play in the underlying game \(\mathcal{G}\), and that returns the product state in the determinized product graph reached by the play.
\[\mathit{DestPair}(s)=(s,c_{0}),\qquad\mathit{DestPair}(w\cdot s)=(s,c)\ \text{ such that }\ \mathit{DestPair}(w)=(s^{\prime},c^{\prime})\ \wedge\ \mathit{Succ}_{k}(c^{\prime},s^{\prime},c).\]
Finally, the strategy in terms of the underlying game \(\mathcal{G}\) is defined as follows (where \(w\) is a play in the underlying game):
\[\sigma(w)\ =\ \sigma_{\mathit{prod}}(\mathit{DestPair}(w)).\]
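Continuing the earlier counter-update sketch, \(\mathit{DestPair}\) simply replays a play through that construction; the final query to \(\sigma_{\mathit{prod}}\) (a satisfiability check against \(W\)) is omitted here.

```python
# Illustrative companion to the succ_k sketch: DestPair replays a play of the
# underlying game and returns the determinized product state it ends in.
def dest_pair(play, k, transitions, final, n_states):
    c = [0] + [-1] * (n_states - 1)                   # initial vector c0 = <0, -1, ..., -1>
    for prev in play[:-1]:                            # each extension w·s updates c using
        c = succ_k(c, prev, k, transitions, final)    # the state preceding s
    return play[-1], c
```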
## IX Implementation
We implement all fixpoint approaches in our prototype \(\textsc{GenSys-LTL}\), which extends our earlier tool GenSys [24] to support general LTL specifications. \(\textsc{GenSys-LTL}\) is implemented using Python and uses the Z3 theorem prover [25] from Microsoft Research as the constraint solver under the hood. \(\textsc{GenSys-LTL}\) uses Z3 to eliminate quantifiers from formulas resulting from the fixpoint iterations and to check satisfiability. In all fixpoint approaches mentioned in this paper, large formulas are generated in every iteration, containing nested quantifiers. This formula blowup can quickly cause a bottleneck affecting scalability. The reason is that Z3 chokes on large formulas involving nested quantifiers. Thus, it is necessary to eliminate quantifiers at every step. We use quantifier elimination tactics inherent in Z3 to solve this issue. We use variations of [26] and simplification tactics in parallel, to achieve efficient quantifier elimination.
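As a small illustration of chaining quantifier elimination with simplification in Z3 (the exact tactic combination used in GenSys-LTL may differ):

```python
# Illustrative only: quantifier elimination followed by simplification in Z3.
from z3 import Real, And, Exists, Then

x, y = Real('x'), Real('y')
f = Exists([y], And(y <= x, x + y <= 1, 0 <= y))

qe_then_simplify = Then('qe', 'ctx-solver-simplify')
print(qe_then_simplify(f).as_expr())   # a quantifier-free form equivalent to 0 <= x <= 1
```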
To convert a given LTL formula into an equivalent Buchi automaton, we use the Spot library [27] which efficiently returns a complete and state based accepting automaton for a given LTL specification. We also constrain Spot to return a deterministic Buchi automaton whenever possible, and then choose our approach appropriately. However, in this prototype version of \(\textsc{GenSys-LTL}\), this encoding is done manually. \(\textsc{GenSys-LTL}\) is available as an open source tool on GitHub1.
Footnote 1: [https://github.com/stanlysamuel/gensys/tree/gensys-ltl](https://github.com/stanlysamuel/gensys/tree/gensys-ltl)
## X Evaluation
To evaluate \(\textsc{GenSys-LTL}\) we collect from the literature a corpus of benchmarks (and corresponding temporal specifications) that deal with the synthesis of strategies for two-player infinite-state logical LTL games. The first set of benchmarks were used in the evaluation of the _ConSynth_[11] approach. These target program repair applications, program synchronization and synthesis scenarios for single and multi-threaded programs, and variations of the Cinderella-Stepmother game [28, 29], which is considered to be a challenging program for automated synthesis tasks. The second set of benchmarks were used to evaluate the _Raboniel_[15] approach, which contains elevator, sorting, and cyber-physical examples, and specification complexity ranging from simple LTL games to ones that need products with Buchi automata. The third set of benchmarks are from _DTSynth_ approach evaluation [16], and involve safety properties on robot motion planning over an infinite grid.
We compare our tool \(\textsc{GenSys-LTL}\) against two comparable tools from the literature: \(\textsc{ConSynth}\)[11] and \(\textsc{Raboniel}\)[15]. We do not compare against tools such as DTSynth [16] that only handle safety (not general LTL) specifications. We executed \(\textsc{GenSys-LTL}\) and \(\textsc{Raboniel}\) on our benchmarks on a desktop computer having six Intel i7-8700 cores at 3.20GHz each and 64 GB RAM. We were able to obtain a binary for \(\textsc{ConSynth}\) from other authors [16], but were unable to run it due to incompatibilities with numerous versions of Z3 that we tried with it. Hence, for the benchmarks in our suite that previous papers [11, 16] had evaluated \(\textsc{ConSynth}\) on, we directly picked up results from those papers. There is another comparable synthesis tool we are aware of, _Temos_[17]. We were unable to install this tool successfully from their code available on their artifact and from their GitHub repository, due to numerous dependencies that we could not successfully resolve despite much effort.
Table I shows the experimental results of all our approaches in comparison with \(\textsc{ConSynth}\) and \(\textsc{Raboniel}\), with a timeout of 15 minutes per benchmark. The first column depicts the name of the benchmark: each benchmark includes a logical game specification and a temporal property winning condition. Column **Type** indicates whether the game variables in the underlying game \(\mathcal{G}\) are reals or integers. Column **P** indicates the player (\(C\) or \(E\)) for which we are synthesizing a winning
region. Column **S** indicates whether the given benchmark falls in the Simple LTL category (G, F, FG, GF), or whether it needs an automaton to be constructed from the LTL property (Gen). \(|\text{V}|\) is the number of game variables. Letting \(\psi\) denote the given temporal property, Column **DB?** indicates whether the automaton \(A_{\psi}\) is deterministic, while Column **DCB?** indicates whether the automaton \(A_{\neg\psi}\) is deterministic. In both these columns, the numbers within brackets indicate the number of automaton states.
The remainder of this section summarizes our results for the two main problems we address in this paper, namely, winning region computation, and realizability (see Section IV).
### _Winning region computation_
Columns **G-S** to **OTF** in Table I indicate the running times of different variants of our approach, in seconds, for winning region computation (i.e., when an initial set of states is not given). The variant **G-S** is applicable when the given game is a simple game, and it involves no automaton construction or product-game formation (see Section VI). Variant **GF-P** (resp. **FG-P**) involves product constructions with property automata, and is applicable either when the given game is simple or when \(A_{\psi}\) (resp. \(A_{\neg\psi}\)) is deterministic (see Section VII). The **OTF** variant (see Section VIII) is applicable in all cases, as it is the most general. Any entry **T/O** in the table denotes a timeout of 15 minutes, while "N/A" indicates not applicable.
We observe that when the game is simple, computing the winning region is fastest using simple game fixpoint approaches (Variant G-S). If both automata are deterministic, then the FG-P computation is faster than the GF-P computation. This is because the former does not require a nested loop, as compared to the latter. The OTF approach is slower than the other approaches in most of the cases due to the cost of determinization, but is the only approach that was applicable in one of the benchmarks (Cinderella \(C=1.4\) with a non-simple temporal property). OTF took 7.7 seconds in this case, and returned a non-empty under-approximation of the winning region. The \(k\) parameter value given to OTF is indicated in Column **K**.
Our approach is very efficient as per our evaluations. Only on one of the benchmarks did none of the variants terminate within the timeout (Repair-Critical, with a non-simple temporal property). On each of the remaining benchmarks, at least one variant of our approach terminated in at most 43 seconds.
The other approaches ConSynth and Raboniel are only applicable when an initial set of states is given, and not for general winning region computation.
### _Realizability_
Recall that in this problem, a set of _initial states_ is given in addition to the temporal property, with the aim being to check if the chosen player wins from every state in this set. The last three columns in Table I pertain to this discussion. Column **G** indicates the running time of the _most suitable_ variant of our approach for the corresponding benchmark; by this we mean Variant **G-S** whenever it is applicable, else **FG-P** if it is applicable, else **GF-P** if it is applicable, otherwise **OTF**.
Column **C** indicates ConSynth's running times, for the benchmarks for which results were available in other papers. The rows where we show ConSynth's results in red color are ones where we are unsure of its soundness; this is because ConSynth does not determinize non-deterministic automata, whereas in the literature it has been recognized that in general determinization is required for synthesis [9].
Column **R** indicates Raboniel's running times, obtained from our own runs of their tool. We were not able to encode three benchmarks into Raboniel's system due to the higher complexity of manually encoding these benchmarks; in these cases we have indicated dashes in the corresponding rows.
It is observable that our approach is much more efficient than the two baseline approaches. We terminate within the given timeout on all but one benchmark, whereas ConSynth times out on three benchmarks and Raboniel on eight. Considering the benchmarks where both our approach and Raboniel terminate, our approach is **46x** faster on average (arithmetic mean of speedups). Considering the benchmarks where both our approach and ConSynth terminate, our approach is **244x** faster on average.
A case-by-case analysis reveals that we scale in the challenging Cinderella case where the bucket size \(C\) is \(1.9(20)\) (i.e., 9 repeated 20 times). We also scale gracefully in the simple elevator examples (Simple-3 to Simple-10), as the number of floors increases from 3 to 10, as compared to Raboniel. We solve the Cinderella benchmark for \(C=1.4\) with the general LTL specification in 301 seconds (using OTF, with \(k=1\)), which is another challenging case. Raboniel times out for this case.
A detailed list of the specifications used is given in Appendix. E.
### _Discussion on non-termination_
There do exist specifications where GenSys-LTL will not terminate. We share this issue in common with Raboniel. Consider the game specification: \(V=\{x\}\), \(\mathit{Con}(x,x^{\prime}):=x^{\prime}=x-1\lor x^{\prime}=x+1\), \(\mathit{Env}(x,x^{\prime}):=x^{\prime}=x\), \(\mathit{Init}(x):=x\geq 0\), \(\psi(x):=F(x<0)\). This example is realizable. However, GenSys-LTL will not terminate as it will keep generating predicates \(x\leq 1\), \(x\leq 2\), \(x\leq 3\), and so on, which can never cover the initial region \(x\geq 0\).
## XI Related Work
_Explicit-state techniques for finite-state games._ This line of work goes back to Buchi and Landweber [5], who essentially studied finite-state games with a Buchi winning condition, and showed that a player always has a finite-memory strategy (if she has one at all). Games with LTL winning conditions, where the players play symbols from an input/output alphabet respectively, were first studied by Pnueli and Rosner [30] who showed the realizabilty question was decidable in double exponential time in the size of the LTL specification. A recent line of work [8, 9] proposes a practically efficient solution to
these games, which avoids the expensive determinization step, based on Universal Co-Buchi Tree Automata. [10] extend this direction by reducing the problem to solving a series of safety games, based on a \(k\)-safety automaton (\(k\)-UCW) obtained from a Universal Co-Buchi Word (UCW) automaton.
_Symbolic fixpoint techniques._ One of the first works to propose a symbolic representation of the state space in fixpoint approaches was [31] in the setting of discrete-event systems. More recently Spectra [32] uses BDDs to represent states symbolically in finite-state safety games. For infinite-state systems, [1] proposes a logical representation of fixpoint algorithms for boolean and timed-automata based systems, while [33] characterizes classes of arenas for which fixpoint algorithms for safety/reachability/Buchi games terminate. An earlier version of our tool called GenSys [24] uses a symbolic fixpoint approach, but is restricted to safety games only. In contrast to all these works, we target the general class of LTL games. A recent preprint [34] uses acceleration-based techniques to alleviate divergence issues in the fixpoint-based game solving approach. Their approach can terminate in certain cases where our approach does not terminate, such as the one explained in Sec. X-C. However their technique does not attempt to compute the exact winning region.
_Symbolic CEGAR approaches._ [36] considers infinite-state LTL games and proposes a CEGAR-based approach for realizability and synthesis. Several recent works consider games specified in Temporal Stream Logic (TSL). [13] considers uninterpreted functions and predicates and converts the game to a bounded LTL synthesis problem, refining by adding new constraints to rule out spurious environment strategies. [14, 15, 17] consider TSL modulo theories specifications and give techniques based on converting to an LTL synthesis problem, using EUF and Presburger logic, and Sygus-based techniques, respectively. In contrast to our techniques, these techniques are not guaranteed to compute precise winning regions or to synthesize maximally permissive controllers.
_Symbolic deductive approaches._ In [11] Beyene et al propose a constraint-based approach for solving logical LTL games, which encodes a strategy as a solution to a system of extended Horn Constraints. The work relies on user-given templates for the unknown relations. [12] considers reachability games, and tries to find a strategy by first finding one on a finite unrolling of the program and then generalizing it. [16, 18, 37] consider safety games, and try to find strategies using forall-exists solvers, a decision-tree based learning technique, and enumerative search using a solver, respectively. In contrast, our work aims for precise winning regions for general LTL games.
## XII Conclusion
In this paper we have shown that symbolic fixpoint techniques are effective in solving logical games with general LTL specifications. Going forward, one of the extensions we would like to look at is strategy extraction for general (non-FND) games. Here one could use tools like AE-Val [38] that synthesize valid Skolem functions for forall-exists formulas.
\begin{table}
\begin{tabular}{l c c c c c c c|c c c c|c c c} \hline
**Game** & **Type** & **P** & **S** & \(|\)**V** & **DB?(Q)** & **DCB?(Q)** & **G-S** & **GF-P** & **FG-P** & **OTF** & **K** & **C** & **R** & **G** \\ \hline Cinderella (\(C=2\)) & Real & C & G & 5 & Y (2) & Y (2) & 0.4 & 2.4 & 0.8 & 0.7 & 0 & **T/O** & **T/O** & 0.4 \\ Cinderella (\(C=3\)) & Real & C & G & 5 & Y (2) & Y (2) & 0.3 & 2.8 & 0.7 & 0.7 & 0 & 765.3 & **T/O** & 0.3 \\ Repair-Lock & Int & C & G & 3 & Y (2) & Y (2) & 0.3 & 1.0 & 0.4 & 0.4 & 0 & 2.5 & 3.1 & 0.3 \\ Repair-Critical & Int & C & G & 8 & Y (2) & Y (2) & 29.0 & 666.0 & 29.5 & 123.0 & 0 & 19.5 & - & 29.0 \\ Synth-Synchronization & Int & C & G & 7 & Y (2) & Y (2) & 0.3 & 0.6 & 0.3 & 0.4 & 0 & 10.0 & - & 0.3 \\ Cinderella (\(C=1.4\)) & Real & E & F & 5 & Y (2) & Y (2) & 0.3 & 1.0 & 0.3 & 2.7 & 3 & 18.0 & **T/O** & 0.3 \\ Cinderella (\(C=1.4\)) & Real & C & GF & 5 & Y (2) & N (**3**) & 43.0 & 130.0 & N/A & 101.0 & 1 & 436.0 & **T/O** & 43.0 \\ Cinderella (\(C=1.4\)) & Real & C & Gen & 5 & N (**7**) & N (**5**) & N/A & N/A & N/A & N/A & 7.7 & 0 & 4.7 & **T/O** & 301.0 \\ Cinderella (\(C=1.9(20)\)) & Real & C & G & 5 & Y (2) & Y (2) & 42.0 & **T/O** & **T/O** & **T/O** & **T/O** & - & **T/O** & 42.0 \\ Repair-Critical & Int & C & Gen & 8 & Y(40) & N (**6**) & N/A & **T/O** & N/A & **T/O** & **T/O** & 53.3 & - & **T/O** \\ Simple-3 & Int & C & Gen & 1 & Y (5) & N (**6**) & N/A & 3.3 & N/A & 309.0 & 6 & - & 1.8 & 3.3 \\ Simple-4 & Int & C & Gen & 1 & Y (6) & N (**7**) & N/A & 4.1 & N/A & **T/O** & **T/O** & - & 2.2 & 4.1 \\ Simple-5 & Int & C & Gen & 1 & Y (7) & N (**8**) & N/A & 5.8 & N/A & **T/O** & **T/O** & - & 5.1 & 5.8 \\ Simple-8 & Int & C & Gen & 1 & Y(10) & N(**11**) & N/A & 15.6 & N/A & **T/O** & **T/O** & - & 27.4 & 15.6 \\ Simple-10 & Int & C & Gen & 1 & Y(12) & N(**13**) & N/A & 30.3 & N/A & **T/O** & **T/O** & - & 108.0 & 30.3 \\ Wateratn-safety & Real & C & G & 2 & Y (2) & Y (2) & 0.3 & 0.6 & 0.3 & 0.3 & 0 & - & 19.4 & 0.3 \\ Wateratn-liveness & Real & C & Gen & 1 & Y (3) & N (**4**) & N/A & 2.5 & N/A & 0.7 & 0 & - & 51.0 & 2.5 \\ Sort-3 & Int & C & FG & 3 & Y (2) & N/A & 1.2 & 1.1 & N/A & 0.3 & 0 & - & 51.0 & 1.1 \\ Sort-4 & Int & C & FG & 4 & Y (2) & N/A & 2.2 & 1.2 & N/A & 0.4 & 0 & - & 650.1 & 1.2 \\ Sort-5 & Int & C & FG & 5 & Y (2) & N/A & 5.2 & 1.2 & N/A & 0.4 & 0 & - & **T/O** & 1.2 \\ Box & Int & C & G & 2 & Y (2) & Y (2) & 0.3 & 0.6 & 0.3 & 0.4 & 0 & 3.7 & 1.2 & 0.3 \\ Box Limited & Int & C & G & 2 & Y (2) & Y (2) & 0.2 & 0.6 & 0.3 & 0.3 & 0.0 & 0 & 0.4 & 0.3 & 0.2 \\ Diagonal & Int & C & G & 2 & Y (2) & Y (2) & 0.2 & 0.6 & 0.2 & 0.4 & 0 & 1.9 & 6.4 & 0.2 \\ Evasion & Int & C & G & 4 & Y (2) & Y (2) & 0.7 & 1.8 & 0.8 & 0.6 & 0 & 1.5 & 3.4 & 0.7 \\ Follow & Int & C & G & 4 & Y (2) & Y (2) & 0.7 & 1.8 & 0.8 & 0.6 & 0 & **T/O** & 94.0 & 0.7 \\ Solitary Box & Int & C & G & 2 & Y (2) & Y (2) & 0.3 & 0.5 & 0.2 & 0.4 & 0 & 0.4 & 0.3 & 0.3 \\ Square 5 * 5 & Int & C & G & 2 & Y (2) & Y (2) & 0.3 & 0.6 & 0.3 & 0.4 & 0 & **T/O** & **T/O** & 0.3 \\ \hline \end{tabular}
\end{table} TABLE I: Comparison of all approaches of GenSys-LTL with ConSynth and Raboniel. Times are in seconds. **T/O** denotes a timeout after 15 minutes. Abbreviations: **P** for Player, **S** for Specification, **DB** for Deterministic Büchi, **DCB** for Deterministic Co- Büchi, **G-S** for GenSys-Simple Game, **GF-P** for Product Büchi Game, **FG-P** for Product Co- Büchi Game, **OTF** for On-The-Fly approach, **K** for OTF bound for which solution was found, **C** for ConSynth, **R** for Raboniel, **G** for GenSys-LTL.
A theoretical question that appears to be open is whether the class of games we consider (with real domains in general) are _determined_ (in that one of the players always has a winning strategy from a given starting state).
## Acknowledgment
The authors would like to thank Rayna Dimitrova and Philippe Heim for their comments on a preprint of our paper, which helped us improve key parts of our paper.
|
2307.03410 | Scalable High-Dimensional Multivariate Linear Regression for
Feature-Distributed Data | Feature-distributed data, referred to data partitioned by features and stored
across multiple computing nodes, are increasingly common in applications with a
large number of features. This paper proposes a two-stage relaxed greedy
algorithm (TSRGA) for applying multivariate linear regression to such data. The
main advantage of TSRGA is that its communication complexity does not depend on
the feature dimension, making it highly scalable to very large data sets. In
addition, for multivariate response variables, TSRGA can be used to yield
low-rank coefficient estimates. The fast convergence of TSRGA is validated by
simulation experiments. Finally, we apply the proposed TSRGA in a financial
application that leverages unstructured data from the 10-K reports,
demonstrating its usefulness in applications with many dense large-dimensional
matrices. | Shuo-Chieh Huang, Ruey S. Tsay | 2023-07-07T06:24:56Z | http://arxiv.org/abs/2307.03410v2 | # Scalable High-Dimensional Multivariate Linear Regression for Feature-Distributed Data
###### Abstract
Feature-distributed data, referred to data partitioned by features and stored across multiple computing nodes, are increasingly common in applications with a large number of features. This paper proposes a two-stage relaxed greedy algorithm (TSRGA) for applying multivariate linear regression to such data. The main advantage of TSRGA is that its communication complexity does not depend on the feature dimension, making it highly scalable to very large data sets. In addition, for multivariate response variables, TSRGA can be used to yield low-rank coefficient estimates. The fast convergence of TSRGA is validated by simulation experiments. Finally, we apply the proposed TSRGA in a financial application that leverages unstructured data from the 10-K reports, demonstrating its usefulness in applications with many dense large-dimensional matrices.
Based on the rationale that the empirical minimizers of certain optimization problems are desirable statistical estimators, prior works have proposed various optimization algorithms with feature-distributed data. Richtarik and Takac (2016) and Fercoq et al. (2014) employed randomized coordinate descent to solve \(\ell_{1}\)-regularized problems and to exploit parallel computation from the distributed computing system. In addition, random projection techniques were used in Wang et al. (2017) and Heinze et al. (2016) for \(\ell_{2}\)-regularized convex problems. However, for estimating linear models, the existing approaches usually incur a high communication complexity for very large data sets. To illustrate, consider the Lasso problem for example. The Hydra algorithm of Richtarik and Takac (2016) requires \(O(np\log(1/\epsilon))\) bytes of communication to reach \(\epsilon\)-close to the optimal loss, where \(n\) is the sample size and \(p\) is the number of features. For data with extremely large \(p\) and \(n\) that do not fit in a single modern computer, such communication complexity appears prohibitively expensive. Similarly, the distributed iterative dual random projection (DIDRP) algorithm of Wang et al. (2017) needs \(O(n^{2}+n\log(1/\epsilon))\) bytes of total communication for estimating the ridge regression, where the dominating \(n^{2}\) factor comes from each node sending the sketched data matrix to a coordinator node. Thus it incurs not only a high communication cost but also a storage bottleneck.
This paper proposes a two-stage relaxed greedy algorithm (TSRGA) for feature-distributed data to mitigate the high communication complexity. TSRGA first applies the conventional relaxed greedy algorithm (RGA) to feature-distributed data. But we terminate the RGA with the help of a just-in-time stopping criterion, which aims to save excessive communication. In the second stage, we employ a modification of RGA to estimate the coefficient matrices associated with the selected predictors from the first stage. The modified second-stage RGA yields low-rank coefficient matrices, that exploit information across tasks and improve statistical performance.
Instead of treating TSRGA as merely an optimization means, we directly analyze the convergence of TSRGA to the unknown parameters, which in turn implies the communication costs of TSRGA. The key insight of the proposed method is that the conventional RGA often incurs a high communication cost because it takes many iterations to minimize its loss function, but it tends to select relevant predictors in its early iterations. Therefore, one should decide when the RGA has done screening the predictors _before_ it iterates too many steps. To this end, the just-in-time stopping criterion tracks the reduction in training error in each step, and calls for halting the RGA as soon as the reduction becomes smaller than some threshold. With the potential predictors narrowed down in the first stage, the second-stage employs a modified RGA and focuses on the more amenable problem of estimating the coefficient matrices of the screened predictors. The two-stage design enables TSRGA to substantially cut the communication costs and produce even more accurate estimates than the original RGA.
Our theoretical results show that the proposed TSRGA enjoys a communication complexity of \(O_{p}(\mathfrak{s}_{n}(n+d_{n}))\) bytes, up to a multiplicative term depending logarithmically on the problem dimensions, where \(d_{n}\) is the dimension of the response vector (or the number of tasks), and \(\mathfrak{s}_{n}\) is a sparsity parameter defined later. This communication complexity improves that of Hydra by a factor of \(p/\mathfrak{s}_{n}\), and is much smaller than that of DIDRP and other one-shot algorithms (for example, Wang et al. 2016 and Heinze et al. 2016) if \(\mathfrak{s}_{n}\ll n\). The RGA was also employed by Bellet et al. (2015) as a solver for \(\ell_{1}\)-constrained problems,
but it requires \(O(n/\epsilon)\) communication since it only converges at a sub-linear rate (see also Jaggi, 2013 and Garber, 2020), where \(\epsilon\) is again the optimization tolerance. Hence TSRGA offers a substantial speedup for estimating sparse models compared to the conventional RGA.
To validate the performance of TSRGA, we apply it to both synthetic and real-world data sets and show that TSRGA converges much faster than other existing methods. In the simulation experiments, TSRGA achieved the smallest estimation error using the least number of iterations. It also outperforms other centralized iterative algorithms both in speed and statistical accuracy. In a large-scale simulation experiment, TSRGA can effectively estimate the high-dimensional multivariate linear regression model with more than 16 GB data in less than 5 minutes. For an empirical application, we apply TSRGA to predict simultaneously some financial outcomes (volatility, trading volume, market beta, and returns) of the S&P 500 component companies using textual features extracted from their 10-K reports. The results show that TSRGA efficiently utilizes the information provided by the texts and works well with high dimensional feature matrices.
Finally, we also considered applying TSRGA to big feature-distributed data which have not only many features but also a large number of observations. Thus, in addition to separately storing each predictor in different computing nodes, it is also necessary to partition the observations of each feature into chunks that could fit in one node. In this case, the computing nodes shall coordinate both horizontally and vertically, and we show that the communication cost to carry out TSRGA in this setting is still free of \(p\), but could be larger than that of the purely feature-distributed case.
For ease in reading, we collect the notations used throughout the paper here. The transpose of a matrix \(\mathbf{A}\) is denoted by \(\mathbf{A}^{\top}\) and that of a vector \(\mathbf{v}\) is \(\mathbf{v}^{\top}\). The inner product between two vectors \(\mathbf{u}\) and \(\mathbf{v}\) is denoted interchangeably as \(\langle\mathbf{u},\mathbf{v}\rangle=\mathbf{u}^{\top}\mathbf{v}\). If \(\mathbf{A},\mathbf{B}\) are \(\mathbb{R}^{m\times n}\), \(\langle\mathbf{A},\mathbf{B}\rangle=\operatorname{tr}(\mathbf{A}^{\top} \mathbf{B})\) denotes their trace inner product. The minimum and maximum eigenvalues of a matrix \(\mathbf{A}\) are denoted by \(\lambda_{\min}(\mathbf{A})\) and \(\lambda_{\max}(\mathbf{A})\), respectively. We also denote by \(\sigma_{l}(\mathbf{A})\) the \(l\)-th singular value of \(\mathbf{A}\), in descending order. When the argument is a vector, \(\|\cdot\|\) denotes the usual Euclidean norm and \(\|\cdot\|_{p}\) the \(\ell_{p}\) norm. If the argument is a matrix, \(\|\cdot\|_{F}\) denotes the Frobenius norm, \(\|\cdot\|_{op}\) the operator norm, and \(\|\cdot\|_{*}\) the nuclear norm. For a set \(J\), \(\sharp(J)\) denotes its cardinality. For an event \(\mathcal{E}\), its complement is denoted as \(\mathcal{E}^{c}\) and its associated indicator function is denoted as \(\mathbf{1}\{\mathcal{E}\}\). For two positive (random) sequences \(\{x_{n}\}\) and \(\{y_{n}\}\), we write \(x_{n}=o_{p}(y_{n})\) if \(\lim_{n\to\infty}\mathbb{P}(x_{n}/y_{n}<\epsilon)=1\) for any \(\epsilon>0\) and write \(x_{n}=O_{p}(y_{n})\) if for any \(\epsilon>0\) there exists some \(M_{\epsilon}<\infty\) such that \(\limsup_{n\to\infty}\mathbb{P}(x_{n}/y_{n}>M_{\epsilon})<\epsilon\).
## 2 Distributed framework and two-stage relaxed greedy algorithm
In this section, we first formally introduce the multivariate linear regression model considered in the paper and show how the data are distributed across the nodes. Then we lay out the implementation details of the proposed TSRGA, which consists of two different implementations of the conventional RGA and a just-in-time stopping criterion to guide the termination of the first-stage RGA. The case of needing horizontal partition will be discussed in Section 6.
### Model and distributed framework
Consider the following multivariate linear regression model:
\[\mathbf{y}_{t}=\sum_{j=1}^{p_{n}}\mathbf{B}_{j}^{*\top}\mathbf{x}_{t,j}+\mathbf{ \epsilon}_{t},\quad t=1,\ldots,n, \tag{1}\]
where \(\mathbf{y}_{t}\in\mathbb{R}^{d_{n}}\) is the response vector, \(\mathbf{x}_{t,j}\in\mathbb{R}^{q_{n,j}}\) a multivariate predictor, for \(j=1,2,\ldots,p_{n}\), and \(\mathbf{B}_{j}^{*}\) is the \((q_{n,j}\times d_{n})\) unknown coefficient matrix, for \(j=1,\ldots,p_{n}\). In particular, we are most interested in the case \(p_{n}\gg n\) and \(q_{n,j}<n\). Clearly, when \(d_{n}=q_{n,1}=\ldots=q_{n,p_{n}}=1\), (1) reduces to the usual multiple linear regression model. Without loss of generality, we assume \(\mathbf{y}_{t}\), \(\mathbf{x}_{t,j}\) and \(\mathbf{\epsilon}_{t}\) are mean zero.
There are several motivations for considering general \(d_{n}\) and \(q_{n,j}\)'s. First, imposing group-sparsity can be advantageous when the predictors display a natural grouping structure (e.g. Lounici et al., 2011). This advantage is inherited by (1) when only a limited number of \(\mathbf{B}_{j}^{*}\)'s are non-zero. Second, it is not uncommon that we are interested in modeling more than one response variable (\(d_{n}>1\)). In this case, one can gain statistical accuracy if the prediction tasks are related, which is often embodied by the assumption that \(\mathbf{B}_{j}^{*}\)'s are of low rank (see, e.g., Reinsel et al., 2022). Finally, in modern machine learning, some predictors may be constructed from unstructured data sources. For instance, for functional data, \(\mathbf{x}_{t,j}\)'s may be the first few Fourier coefficients (Fan et al., 2015). On the other hand, for textual data, \(\mathbf{x}_{t,j}\)'s may be topic loading or outputs from some pre-trained neural networks (Kogan et al., 2009; Yeh et al., 2020; Bybee et al., 2021).
Next, we specify how the data are distributed across nodes. In matrix notations, we can write (1) as
\[\mathbf{Y}=\sum_{j=1}^{p_{n}}\mathbf{X}_{j}\mathbf{B}_{j}^{*}+ \mathbf{E}, \tag{2}\]
where \(\mathbf{Y}=(\mathbf{y}_{1},\ldots,\mathbf{y}_{n})^{\top}\), \(\mathbf{X}_{j}=(\mathbf{x}_{1,j},\ldots,\mathbf{x}_{n,j})^{\top}\in\mathbb{R} ^{n\times q_{n,j}}\), for \(j=1,2,\ldots,p_{n}\), and \(\mathbf{E}=(\mathbf{\epsilon}_{1},\ldots,\mathbf{\epsilon}_{n})^{\top}\). As discussed in the Introduction, since pooling the large matrices \(\mathbf{X}_{1},\ldots,\mathbf{X}_{p_{n}}\) in a central node may not be feasible, a common strategy is to store them across nodes. In the following, we suppose that \(M\) nodes are available. Furthermore, the \(i\)-th node contains the data \(\{\mathbf{Y},\mathbf{X}_{j}:j\in\mathcal{I}_{i}\}\) for \(i=1,2,\ldots,M\), where \(\cup_{i=1}^{M}\mathcal{I}_{i}=\{1,2,\ldots,p_{n}\}:=[p_{n}]\). For ease in exposition, we assume a master node coordinates the other computing nodes. In particular, each worker node is able to send and receive data from the master node.
### First-stage relaxed greedy algorithm and a just-in-time stopping criterion
We now introduce the first-stage RGA and describe how it can be applied to feature-distributed data. First, initialize \(\hat{\mathbf{G}}^{(0)}=\mathbf{0}\) and \(\hat{\mathbf{U}}^{(0)}=\mathbf{Y}\). For iteration \(k=1,2,\ldots\), RGA finds \((\hat{j}_{k},\tilde{\mathbf{B}}_{\hat{j}_{k}})\) such that
\[(\hat{j}_{k},\tilde{\mathbf{B}}_{\hat{j}_{k}})\in\arg\max_{ \begin{subarray}{c}1\leq j\leq p_{n}\\ \|\mathbf{B}_{j}\|_{*}\leq L_{n}\end{subarray}}|\langle\hat{\mathbf{U}}^{(k-1 )},\mathbf{X}_{j}\mathbf{B}_{j}\rangle|, \tag{3}\]
where \(L_{n}=d_{n}^{1/2}L_{0}\) for some large \(L_{0}>0\). Then RGA constructs updates by
\[\hat{\mathbf{G}}^{(k)}= (1-\hat{\lambda}_{k})\hat{\mathbf{G}}^{(k-1)}+\hat{\lambda}_{k} \mathbf{X}_{\hat{j}_{k}}\tilde{\mathbf{B}}_{\hat{j}_{k}}, \tag{4}\] \[\hat{\mathbf{U}}^{(k)}= \mathbf{Y}-\hat{\mathbf{G}}^{(k)},\]
where \(\hat{\lambda}_{k}\) is determined by
\[\hat{\lambda}_{k}\in\arg\min_{0\leq\lambda\leq 1}\|\mathbf{Y}-(1-\lambda) \hat{\mathbf{G}}^{(k-1)}-\lambda\mathbf{X}_{\hat{j}_{k}}\tilde{\mathbf{B}}_{ \hat{j}_{k}}\|_{F}. \tag{5}\]
RGA has important computational advantages that are attractive for big data computation. First, for fixed \(j\), the maximum in (3) is achieved at \(\mathbf{B}_{j}=L_{n}\mathbf{u}\mathbf{v}^{\top}\), where \((\mathbf{u},\mathbf{v})\) is the leading pair of singular vectors (i.e., corresponding to the largest singular value) of \(\mathbf{X}_{j}^{\top}\hat{\mathbf{U}}^{(k-1)}\). Since computing the leading singular vectors is much cheaper than full SVD, RGA is computationally lighter than algorithms using singular value soft-thresholding, such as alternating direction method of multipliers (ADMM). This feature has already been exploited in Zheng et al. (2018) and Zhuo et al. (2020) for nuclear-norm constrained optimization. Second, \(\hat{\lambda}_{k}\) is easy to compute and has the closed-form \(\hat{\lambda}_{k}=\max\{\min\{\hat{\lambda}_{k,uc},1\},0\}\), where
\[\hat{\lambda}_{k,uc}=\frac{\langle\hat{\mathbf{U}}^{(k-1)},\mathbf{X}_{\hat{j}_ {k}}\tilde{\mathbf{B}}_{\hat{j}_{k}}-\hat{\mathbf{G}}^{(k-1)}\rangle}{\| \mathbf{X}_{\hat{j}_{k}}\tilde{\mathbf{B}}_{\hat{j}_{k}}-\hat{\mathbf{G}}^{(k- 1)}\|_{F}^{2}}\]
is the unconstrained minimizer of (5).
When applied to feature-distributed data, we can leverage these advantages. Observe from (3)-(5) that the history of RGA is encoded in \(\hat{\mathbf{G}}^{(k)}\). That is, to construct \(\tilde{\mathbf{G}}^{(k+1)}\), which predictors were chosen and the order in which they were chosen are irrelevant, provided \(\hat{\mathbf{G}}^{(k)}\) is known. In particular, each node only needs \(\hat{\lambda}_{k+1}\) and \(\mathbf{X}_{\hat{j}_{k+1}}\tilde{\mathbf{B}}_{\hat{j}_{k+1}}\) to construct \(\hat{\mathbf{G}}^{(k+1)}\). As argued in the previous paragraph, \(\mathbf{X}_{\hat{j}_{k+1}}\tilde{\mathbf{B}}_{\hat{j}_{k+1}}\) is a rank-one matrix. Thus transmitting this matrix only requires \(O(n+d_{n})\) bytes of communication, which are much lighter than that of the full matrix with \(O(nd_{n})\) bytes. In addition, each node requires only the extra memory to store \(\hat{\mathbf{G}}^{(k)}\) throughout the training. This is less burdensome than random projection techniques, which require at least one node to make extra room to store the sketched matrix of size \(O(n^{2})\).
The above discussions are summarized in Algorithm 1, detailing how workers and the master node communicate to implement RGA with feature-distributed data. Clearly, each node sends and receives data of size \(O(n+d_{n})\) bytes (line 4 and 15) in each iteration. We remark that Algorithm 1 asks each node to send the potential updates to the master (line 15). This is for reducing rounds of communications, which can be a bottleneck in practice. If bandwidth limit is more stringent, one can instead first ask the workers to send \(\rho_{c}\) to the master. After master decides \(c^{*}\), it only asks the \(c^{*}\)-th node to send the update, so that only one node is transmitting the data.
```
Algorithm 1 First-stage RGA with feature-distributed data
Input: response Y (available at every node); predictors {X_j : j in I_c} at node c;
       nuclear-norm bound L_n; maximum number of iterations K_n.
Initialize G^(0) = 0 and U^(0) = Y at every node.
for k = 1, 2, ..., K_n do
    for each node c = 1, ..., M in parallel do
        for every j in I_c, compute the leading singular pair (u_j, v_j) of X_j' U^(k-1)
        set rho_c = max_{j in I_c} L_n * sigma_1(X_j' U^(k-1)) and let j_c attain the maximum
        send rho_c and the factors of the rank-one candidate X_{j_c} (L_n u_{j_c} v_{j_c}')
            to the master (O(n + d_n) bytes)
    end for
    master: pick c* = argmax_c rho_c, set X_{j_k} B~_{j_k} to node c*'s candidate,
        compute lambda_k by the closed form above, and broadcast (lambda_k, X_{j_k} B~_{j_k})
        to all nodes (O(n + d_n) bytes)
    every node: update G^(k) = (1 - lambda_k) G^(k-1) + lambda_k X_{j_k} B~_{j_k}
        and U^(k) = Y - G^(k)
end for
```
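For illustration, here is a minimal sketch of one communication round of the scheme described above, assuming an mpi4py setup in which every node holds \(\mathbf{Y}\) and its own block of predictors and the node of rank 0 acts as the master; all names are illustrative and error handling is omitted.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

def one_round(Y, G, X_local, L_n):
    """One feature-distributed RGA round: propose, select, broadcast, update."""
    U = Y - G
    # Each node scans only the predictors it owns and keeps its best rank-one candidate.
    best_score, best_factors = -np.inf, None
    for Xj in X_local:
        u, s, vh = np.linalg.svd(Xj.T @ U)
        if L_n * s[0] > best_score:
            best_score = L_n * s[0]
            best_factors = (Xj @ u[:, 0], L_n * vh[0, :])   # X_j B_j = (X_j u)(L_n v)^T
    # Sending the candidate as its two factors costs O(n + d_n) bytes per node.
    proposals = comm.gather((best_score, best_factors), root=0)
    if rank == 0:
        _, (left, right) = max(proposals, key=lambda t: t[0])
        D = np.outer(left, right) - G
        lam = float(np.clip(np.sum(U * D) / np.sum(D * D), 0.0, 1.0))
        msg = (lam, left, right)
    else:
        msg = None
    lam, left, right = comm.bcast(msg, root=0)              # broadcast is also O(n + d_n)
    return (1.0 - lam) * G + lam * np.outer(left, right)
```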
The plain RGA is known to converge only at a sub-linear rate, and there are variants of RGA that converge faster (see Jaggi and Lacoste-Julien, 2015; Lei et al., 2019; Garber, 2020 and references therein). Instead of adapting these increasingly sophisticated optimization schemes to feature-distributed data, we propose to terminate RGA early with the help of a just-in-time stopping criterion. The key insight, as will be shown in Theorem 1, is that RGA is capable of screening the relevant predictors in its early iterations. The stopping criterion is defined as follows. Let \(\hat{\sigma}_{k}^{2}=(nd_{n})^{-1}\|\mathbf{Y}-\hat{\mathbf{G}}^{(k)}\|_{F}^{2}\). We terminate the
first-stage RGA at step \(\hat{k}\), defined as
\[\hat{k}=\min\left\{1\leq k\leq K_{n}:\frac{\hat{\sigma}_{k}^{2}}{\hat{\sigma}_{k- 1}^{2}}\geq 1-t_{n}\right\}, \tag{6}\]
and \(\hat{k}=K_{n}\) if \(\hat{\sigma}_{k}^{2}/\hat{\sigma}_{k-1}^{2}<1-t_{n}\) for all \(1\leq k\leq K_{n}\), where \(t_{n}\) is a threshold specified later and \(K_{n}\) is a prescribed maximum number of iterations. Intuitively, \(\hat{k}\) is determined by whether the current iteration provides sufficient improvement in reducing the training error. Note that \(\hat{k}\) is determined just-in-time, without fully iterating \(K_{n}\) steps: the algorithm is halted as soon as the criterion is triggered, thereby avoiding excessive communication costs. This is in sharp contrast to the model selection criteria used in prior works to terminate greedy-type algorithms, such as information criteria, which compare all \(K_{n}\) models (Ing and Lai, 2011; Ing, 2020).
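As a small illustration, the stopping rule (6) can be wrapped around any single-step routine such as the `rga_step` sketch above; the wrapper below is a minimal Python version in which the assumed helper `step_fn` performs one RGA update and returns the new iterate.

```python
import numpy as np

def first_stage(Y, step_fn, K_n, t_n):
    """Run RGA until the relative drop in training error falls below t_n (criterion (6))."""
    n, d_n = Y.shape
    G = np.zeros_like(Y)
    sigma2_prev = np.sum(Y ** 2) / (n * d_n)            # sigma_0^2, since G^(0) = 0
    for k in range(1, K_n + 1):
        G = step_fn(Y, G)                               # one RGA update
        sigma2 = np.sum((Y - G) ** 2) / (n * d_n)
        if sigma2 / sigma2_prev >= 1.0 - t_n:           # too little improvement: stop
            return G, k
        sigma2_prev = sigma2
    return G, K_n                                       # criterion never triggered
```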
### Second-stage relaxed greedy algorithm
After the first-stage RGA is terminated, the second-stage RGA focuses on estimation of the coefficient matrices. In this stage, we implement a modified version of RGA so that the coefficient estimates are of low rank.
For predictors with "large" coefficient matrices, failing to account for their low-rank structure may result in statistical inefficiency. To see this, let \(\hat{J}:=\hat{J}_{\hat{k}}\) be the set of predictors selected by the first-stage RGA, and let \(\hat{\mathbf{B}}_{j}\), \(j\in\hat{J}\), be the corresponding coefficient estimates produced by the first-stage RGA. Assume for now \(q_{n,j}=q_{n}\). If \(\min\{q_{n},d_{n}\}>\hat{r}=\sum_{j\in\hat{J}}\hat{r}_{j}\), where \(\hat{r}_{j}=\text{rank}(\hat{\mathbf{B}}_{j})\), then estimating one such coefficient matrix alone without regularization amounts to estimating \(d_{n}q_{n}\) parameters. It will be shown later in Theorem 1 that \(\hat{r}_{j}\geq\text{rank}(\mathbf{B}_{j}^{*})\) with probability tending to one. Since \(d_{n}q_{n}\asymp\min\{d_{n},q_{n}\}(q_{n}+d_{n})>\hat{r}(q_{n}+d_{n})\), estimating such a coefficient matrix would cost more than the best achievable degrees of freedom (Reinsel et al., 2022).
To avoid loss in efficiency for these large coefficient estimators, we impose a constraint on the space in which our final estimators reside. Suppose the \(j\)-th predictor, \(j\in\hat{J}\), satisfies \(\min\{q_{n,j},d_{n}\}>\hat{r}\). We require its coefficient estimator to be of the form \(\hat{\mathbf{\Sigma}}_{j}^{-1}\mathbf{U}_{j}\mathbf{S}\mathbf{V}_{j}^{\top}\), where \(\hat{\mathbf{\Sigma}}_{j}=n^{-1}\mathbf{X}_{j}^{\top}\mathbf{X}_{j}\); \(\mathbf{U}_{j}=(\mathbf{u}_{1,j},\ldots,\mathbf{u}_{\hat{r},j})\) and \(\mathbf{V}_{j}=(\mathbf{v}_{1,j},\ldots,\mathbf{v}_{\hat{r},j})\) form the leading \(\hat{r}\) pairs of singular vectors of \(\mathbf{X}_{j}^{\top}\mathbf{Y}\), and \(\mathbf{S}\) is an \(\hat{r}\times\hat{r}\) matrix to be optimized.
The second-stage RGA proceeds as follows. Initialize again \(\hat{\mathbf{G}}^{(0)}=\mathbf{0}\) and \(\hat{\mathbf{U}}^{(0)}=\mathbf{Y}\). For \(k=1,2,\ldots\), choose
\[(\hat{j}_{k},\hat{\mathbf{S}}_{k})\in\arg\max_{\begin{subarray}{c}j\in\hat{J} \\ \|\mathbf{S}\|_{*}\leq L_{n}\end{subarray}}|\langle\hat{\mathbf{U}}^{(k-1)}, \mathbf{X}_{j}\hat{\mathbf{\Sigma}}_{j}^{-1}\mathbf{U}_{j}\mathbf{S}\mathbf{V} _{j}^{\top}\rangle|, \tag{7}\]
where the maximum is searching over \(\mathbf{S}\in\mathbb{R}^{\hat{r}\times\hat{r}}\) if \(\hat{r}<\min\{q_{n,j},d_{n}\}\). For \(j\) such that \(\hat{r}\geq\min\{q_{n,j},d_{n}\}\), we define \(\mathbf{U}_{j}\) and \(\mathbf{V}_{j}\) to be the full set of singular vectors and the maximum is searching over \(\mathbf{S}\in\mathbb{R}^{q_{n,j}\times d_{n}}\). Next, we construct the update by
\[\begin{split}\hat{\mathbf{G}}^{(k)}=&(1-\hat{\lambda }_{k})\hat{\mathbf{G}}^{(k-1)}+\hat{\lambda}_{k}\mathbf{X}_{j_{k}}\hat{ \mathbf{\Sigma}}_{j_{k}}^{-1}\mathbf{U}_{\hat{j}_{k}}\hat{\mathbf{S}}_{k} \mathbf{V}_{\hat{j}_{k}}^{\top},\\ \hat{\mathbf{U}}^{(k)}=&\mathbf{Y}-\hat{\mathbf{G}} ^{(k)},\end{split} \tag{8}\]
where \(\hat{\lambda}_{k}\) is, again, determined by
\[\hat{\lambda}_{k}\in\arg\min_{0\leq\lambda\leq 1}\|\mathbf{Y}-(1- \lambda)\hat{\mathbf{G}}^{(k-1)}-\lambda\mathbf{X}_{\hat{j}_{k}}\hat{\Sigma}_{ \hat{j}_{k}}^{-1}\mathbf{U}_{\hat{j}_{k}}\hat{\mathbf{S}}_{k}\mathbf{V}_{\hat{ j}_{k}}^{\top}\|_{F}^{2}. \tag{9}\]
At first glance, the updating scheme (7)-(9) may appear similar to those proposed by Ding et al. (2021) or Ding et al. (2020), but we note one important difference here: the matrices \(\mathbf{U}_{j}\) and \(\mathbf{V}_{j}\) are fixed at the onset of the second stage. Thus our estimators' ranks remain controlled, which is not the case in the aforementioned works. More comparisons between TSRGA and these works will be made in Section 3.2.
We briefly comment on the computational aspects of the second-stage RGA. First, similarly to the first-stage, for a fixed \(j\) the maximum in (7) is attained at \(\mathbf{S}=L_{n}\mathbf{u}\mathbf{v}^{\top}\), where \((\mathbf{u},\mathbf{v})\) is the leading pair of singular vectors of \(\mathbf{U}_{j}^{\top}\hat{\Sigma}_{j}^{-1}\mathbf{X}_{j}^{\top}\hat{\mathbf{U} }^{(k-1)}\mathbf{V}_{j}\), which can be computed locally by each node. As a result, the per-iteration communication is still \(O(n+d_{n})\) for each node. For \(j\in\hat{J}\) with \(\hat{r}\geq\min\{q_{n,j},d_{n}\}\), since \(\mathbf{U}_{j}\) and \(\mathbf{V}_{j}\) are non-singular, the parameter space is not limited except for the bounded nuclear norm constraint. Indeed, it is not difficult to see that for such \(j\),
\[\max_{\|\mathbf{S}\|_{*}\leq L_{n}}|\langle\hat{\mathbf{U}}^{(k-1) },\mathbf{X}_{j}\hat{\Sigma}_{j}^{-1}\mathbf{U}_{j}\mathbf{S}\mathbf{V}_{j}^{ \top}\rangle|\]
is equivalent to
\[\max_{\|\mathbf{B}\|_{*}\leq L_{n}}|\langle\hat{\mathbf{U}}^{(k-1) },\mathbf{X}_{j}\hat{\Sigma}_{j}^{-1}\mathbf{B}\rangle| \tag{10}\]
with the correspondence \(\mathbf{B}=\mathbf{U}_{j}\mathbf{S}\mathbf{V}_{j}^{\top}\). Thus, for such \(j\), it is not necessary to compute the singular vectors \(\mathbf{U}_{j}\) and \(\mathbf{V}_{j}\). Instead, one can directly solve (10). Finally, it is straightforward to modify Algorithm 1 to implement the second-stage RGA with feature-distributed data. We defer the details to Appendix A.
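A minimal centralized sketch of the second-stage update (7)-(9) is given below; `U_j` and `V_j` are the fixed singular bases computed once at the start of the stage, and all names are illustrative.

```python
import numpy as np

def second_stage_step(Y, G, cand, L_n):
    """cand: list of (Xj, Sigma_j_inv, Uj, Vj) for j in J_hat. Returns the updated G."""
    U_res = Y - G
    best = None
    for Xj, Sig_inv, Uj, Vj in cand:
        # The maximizer of (7) for fixed j is S = L_n * u v^T, with (u, v) the leading
        # singular pair of U_j^T Sigma_j^{-1} X_j^T U^(k-1) V_j.
        M = Uj.T @ Sig_inv @ Xj.T @ U_res @ Vj
        u, s, vh = np.linalg.svd(M)
        score = L_n * s[0]
        if best is None or score > best[0]:
            S_hat = L_n * np.outer(u[:, 0], vh[0, :])
            best = (score, Xj @ Sig_inv @ Uj @ S_hat @ Vj.T)
    _, update = best
    D = update - G
    lam = float(np.clip(np.sum(U_res * D) / np.sum(D * D), 0.0, 1.0))   # step size (9)
    return (1.0 - lam) * G + lam * update                               # update (8)
```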
### Related algorithms
In this subsection, we view TSRGA in several contexts and compare it with related algorithms. By viewing TSRGA as either a novel feature-distributed algorithm, an improvement over the Frank-Wolfe algorithm, a new method to estimate the integrative multi-view regression (Li et al., 2019), or a close relative of the greedy-type algorithms (Temlyakov, 2000), we highlight both its computational ease in applying to feature-distributed data and its theoretical applicability in estimating high-dimensional linear models.
Over the last decade, a few methods for estimating linear regression with feature-distributed data have been proposed. For instance, Richtarik and Takac (2016) and Fercoq et al. (2014) use randomized coordinate descent to solve \(\ell_{1}\)-regularized optimization problem, and Hu et al. (2019) proposes an asynchronous stochastic gradient descent algorithm, to name just a few. These methods either require a communication complexity that scales with \(p_{n}\), or converge only at sub-linear rates, both of which translate to high communication costs. The screen-and-clean approach of Yang et al. (2016), similar in spirit to TSRGA, first applies sure independence screening (SIS, Fan and Lv, 2008) to identify a subset of potentially relevant predictors. Then it uses an iterative procedure similar to the iterative Hessian sketch (Pilanci and Wainwright, 2016) to estimate the associated coefficients. While
SIS does not require communication, it imposes stronger assumptions on the predictors and the error term. In contrast, the proposed TSRGA can be applied at low communication complexity without succumbing to those assumptions.
TSRGA also adds to the line of studies that modify the conventional Frank-Wolfe algorithm (Frank and Wolfe, 1956). RGA, more often called the Frank-Wolfe algorithm in the optimization literature, has been widely adopted in big data applications for its computational simplicity. Recently, various modifications of the Frank-Wolfe algorithm have been proposed to attain a linear convergence rate that does not depend on the feature dimension \(p_{n}\) (Lei et al., 2019; Garber, 2020; Ding et al., 2021, 2020). However, strong convexity or quadratic growth of the loss function is typically assumed in these works, which precludes high-dimensional data (\(n\ll p_{n}\)). The Frank-Wolfe algorithm has also been found useful in distributed systems, though most prior works employed horizontally-partitioned data (Zheng et al., 2018; Zhuo et al., 2020); that is, data are partitioned and stored across nodes by observations instead of by features. A notable exception is Bellet et al. (2015), who found that Frank-Wolfe outperforms ADMM in communication and wall-clock time for sparse scalar regression with feature-distributed data, even though Frank-Wolfe still suffers from sub-linear convergence. In this paper, we neither assume strong convexity (or quadratic growth) nor limit ourselves to scalar regression, and TSRGA demands much less computation than the usual Frank-Wolfe algorithm.
Model (1) was also employed by Li et al. (2019), and they termed it the integrative multi-view regression. They propose an ADMM-based algorithm, integrative reduced-rank regression (iRRR), for optimization in a centralized computing framework. The major drawback, as discussed earlier, is a computationally-expensive step of singular value soft-thresholding. Thus, TSRGA can serve as a computationally attractive alternative. In Section 4, we compare their empirical performance and find that TSRGA is much more efficient.
Other closely related greedy algorithms such as the orthogonal greedy algorithm (OGA) have also been applied to high-dimensional linear regression. OGA, when used in conjunction with an information criterion, attains the optimal prediction error (Ing, 2020) under various sparsity assumptions. However, it is computationally less adaptable to feature-distributed data. To keep the per-iteration communication low, the sequential orthogonalization scheme of Ing and Lai (2011) can be used with feature-distributed data, but the individual nodes would not have the correct coefficients to use at the prediction time when new data, possibly not orthogonalized, become available. Alternatively, one needs to allocate extra memory in each node to store the history of the OGA path to compute the projection in each iteration.
## 3 Communication complexity of TSRGA
In this section we derive the communication complexity of TSRGA by establishing the convergence rate to the unknown parameters. The communication complexity does not depend on the feature dimension \(p_{n}\), but depends instead on the sparsity of the underlying problem. To prove the result, we work with assumptions that are mostly standard in the high-dimensional regression literature, except for a local revelation assumption that is unique to the feature-distributed setting.
### Assumptions
The following assumptions of model (1) will be used in our theoretical derivations.
**(C1)**: There exists some \(\mu<\infty\) such that with probability approaching one,
\[\mu^{-1}\leq\min_{1\leq j\leq p_{n}}\lambda_{\min}(\hat{\mathbf{\Sigma}}_{j}) \leq\max_{1\leq j\leq p_{n}}\lambda_{\max}(\hat{\mathbf{\Sigma}}_{j})\leq\mu,\]
where \(\hat{\mathbf{\Sigma}}_{j}=n^{-1}\mathbf{X}_{j}^{\top}\mathbf{X}_{j}\) with \(\mathbf{X}_{j}\) being defined in (2).
**(C2)**: Put \(\xi_{E}=\max_{1\leq j\leq p_{n}}\left\|\mathbf{X}_{j}^{\top}\mathbf{E}\right\|_{op}\). There exists a sequence \(K_{n}\to\infty\) such that \(K_{n}\xi_{E}=O_{p}(nd_{n}^{1/2})\).
**(C3)**: \[\lim_{n\to\infty}\mathbb{P}\left(\min_{\sharp(J)\leq 2K_{n}}\lambda_{\min}(n^ {-1}\mathbf{X}(J)^{\top}\mathbf{X}(J))>\mu^{-1}\right)=1,\]
where \(\mathbf{X}(J)=(\mathbf{X}_{j}:j\in J)\in\mathbb{R}^{n\times(\sum_{j\in J}q_{n,j})}\).
**(C4)**: There exists some large \(L<\infty\) such that \(d_{n}^{-1/2}\sum_{j=1}^{p_{n}}\|\mathbf{B}_{j}^{*}\|_{*}\leq L\). Moreover, there exists a non-decreasing \(\{s_{n}\}\) such that \(s_{n}^{2}=o(K_{n})\) and
\[\min_{j\in J_{n}}\sigma_{r_{j}^{*}}^{2}\left(d_{n}^{-1/2}\mathbf{B}_{j}^{*}\right)\geq s_{n}^{-1},\]
where \(J_{n}=\{1\leq j\leq p_{n}:\mathbf{B}_{j}^{*}\neq\mathbf{0}\}\) is the set of indices corresponding to the relevant predictors, and \(r_{j}^{*}=\text{rank}(\mathbf{B}_{j}^{*})\).
Let \(\tilde{\mathbf{Y}}=\sum_{j=1}^{p_{n}}\mathbf{X}_{j}\mathbf{B}_{j}^{*}\) be the noiseless part of \(\mathbf{Y}\).
**(C5)**: Let \(\bar{r}_{j}=\text{rank}(\mathbf{X}_{j}^{\top}\tilde{\mathbf{Y}})\) and \(J_{o}=J_{n}\cap\{j:\min\{q_{n,j},d_{n}\}>\bar{r}_{j}\}\). There exists \(\delta_{n}>0\) such that \(\xi_{E}=o_{p}(n\delta_{n})\) and with probability approaching one,
\[\min_{j\in J_{o}}\sigma_{\bar{r}_{j}}(\mathbf{X}_{j}^{\top}\tilde{\mathbf{Y}} )\geq n\delta_{n}.\]
**(C6)**: (Local revelation) If the column vectors of \(\tilde{\mathbf{U}}_{j}\in\mathbb{R}^{q_{n,j}\times\bar{r}_{j}}\) and \(\tilde{\mathbf{V}}_{j}\in\mathbb{R}^{d_{n}\times\bar{r}_{j}}\) are the leading pairs of singular vectors corresponding to the non-zero singular values of \(\mathbf{X}_{j}^{\top}\tilde{\mathbf{Y}}\), then with probability approaching one, there exists an \(\bar{r}_{j}\times\bar{r}_{j}\) matrix \(\mathbf{\Lambda}_{j}\) such that
\[\hat{\mathbf{\Sigma}}_{j}\mathbf{B}_{j}^{*}=\tilde{\mathbf{U}}_{j}\mathbf{ \Lambda}_{j}\tilde{\mathbf{V}}_{j}^{\top} \tag{11}\]
for all \(j\in J_{o}\).
We now explain in more detail what these assumptions may entail. (C1) roughly requires the variances of the predictors to be on the same order of magnitude. This is not very restrictive since in applications the predictors are often normalized. \(\xi_{E}\) in (C2) is typically regarded as the effect size of the noise, which is often controlled by auxiliary concentration inequalities in the literature. We will verify (C2) in the examples following the main result. (C3) assumes a lower bound on the minimum eigenvalue of the covariance matrices formed by "small" subsets of predictors. Note that (C3) could hold when \(p_{n}\gg n\), even with dependent observations. We refer to Ing and Lai (2011) and Ing (2020) for related discussions on (C3). The sequence \(s_{n}\) in (C4) imposes a lower bound on the minimum non-zero singular value of the (normalized) coefficient matrices \(d_{n}^{-1/2}\mathbf{B}_{j}^{*}\). It is easy to show that \(\sharp(J_{n})\leq s_{n}L\), so it also controls the degree of sparsity in the model.
(C5) and (C6) are assumptions that endow the local nodes with sufficient information in the feature-distributed setting. However, both assumptions are often vacuous when the predictors are of small dimensions. For instance, for scalar group-sparse linear regression, \(\min\{d_{n},q_{n,j}\}=\min\{1,q_{n,j}\}=1\leq\bar{r}_{j}\). Hence \(J_{o}=\emptyset\) and the two assumptions are immaterial. Intuitively, (C5) requires that, for relevant predictors of large dimension, the marginal correlations between these predictors and the noiseless part \(\tilde{\mathbf{Y}}\) are sufficiently large. The local revelation condition (C6) assumes each node can use its local data to re-construct \(\hat{\mathbf{\Sigma}}_{j}\mathbf{B}_{j}^{*}\) for \(j\in J_{o}\). This simplifies information sharing between the nodes. In the special case where the \(\mathbf{X}_{j}\)'s are orthogonal (i.e., \(\mathbf{X}_{i}^{\top}\mathbf{X}_{j}=\mathbf{0}\) for \(i\neq j\)), (C6) holds automatically because \(\mathbf{X}_{j}^{\top}\tilde{\mathbf{Y}}=\sum_{l=1}^{p_{n}}\mathbf{X}_{j}^{\top}\mathbf{X}_{l}\mathbf{B}_{l}^{*}=n\hat{\mathbf{\Sigma}}_{j}\mathbf{B}_{j}^{*}\).
To better understand (11), consider the following simple model.
\[\mathbf{Y}=\mathbf{x}_{1}{\boldsymbol{\beta}_{1}^{*}}^{\top}+\mathbf{X}_{2} \mathbf{B}_{2}^{*}+\mathbf{E},\]
in which \(\mathbf{x}_{1}\in\mathbb{R}^{n}\), \({\boldsymbol{\beta}_{1}^{*}}\in\mathbb{R}^{d}\), \(\mathbf{X}_{2}\in\mathbb{R}^{n\times q}\), and \(\mathbf{B}_{2}^{*}\in\mathbb{R}^{q\times d}\). Assume \(\text{rank}(\mathbf{B}_{2}^{*})=1\) and \(\min\{q,d\}>2\geq\bar{r}_{2}\), so \(J_{o}=\{2\}\). We can write \(\mathbf{B}_{2}^{*}=\sigma\mathbf{u}\mathbf{v}^{\top}\) for some unit vectors \(\mathbf{u}\) and \(\mathbf{v}\). Then it can be shown that (C6) holds if \(\mathbf{X}_{2}^{\top}\mathbf{X}_{2}\mathbf{u}\) is linearly independent of \(\mathbf{X}_{2}^{\top}\mathbf{x}_{1}\) and \(\mathbf{v}\) is linearly independent of \({\boldsymbol{\beta}_{1}^{*}}\). In particular, (C6) holds if the vectors \(\mathbf{u}\), \(\mathbf{v}\) and \({\boldsymbol{\beta}_{1}^{*}}\) are independently drawn from the uniform distribution on the unit sphere. Thus, (C6) can be viewed as a requirement that each \(\mathbf{X}_{j}\mathbf{B}_{j}^{*}\), \(j\in J_{o}\), must offer a novel contribution to \(\tilde{\mathbf{Y}}\).
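To illustrate (C6) numerically in this simple model, the following sketch draws the quantities at random and verifies (11) by projecting \(\hat{\mathbf{\Sigma}}_{2}\mathbf{B}_{2}^{*}\) onto the singular subspaces of \(\mathbf{X}_{2}^{\top}\tilde{\mathbf{Y}}\); the dimensions and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, q, d = 200, 30, 40
x1 = rng.standard_normal((n, 1))
beta1 = rng.standard_normal((d, 1))
X2 = rng.standard_normal((n, q))
u = rng.standard_normal((q, 1)); u /= np.linalg.norm(u)
v = rng.standard_normal((d, 1)); v /= np.linalg.norm(v)
B2 = 3.0 * u @ v.T                                    # rank-one B_2^*

Y_tilde = x1 @ beta1.T + X2 @ B2                      # noiseless part of Y
U_full, s, Vt = np.linalg.svd(X2.T @ Y_tilde)         # X_2^T Y_tilde has rank 2 here
U_bar, V_bar = U_full[:, :2], Vt[:2, :].T             # leading singular subspaces
Sig2_B2 = (X2.T @ X2 / n) @ B2                        # \hat{Sigma}_2 B_2^*

# (11) holds iff Sig2_B2 is unchanged by projecting onto these subspaces.
proj = U_bar @ (U_bar.T @ Sig2_B2 @ V_bar) @ V_bar.T
print(np.allclose(proj, Sig2_B2))                     # True for such generic draws
```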
### Main results
We now present some theoretical properties of TSRGA, with proofs relegated to Appendix B. In the following, we assume \(L_{n}\), the hyperparameter input to the TSRGA algorithm, is chosen to be \(L_{n}=d_{n}^{1/2}L_{0}\) with \(L_{0}\geq L/(1-\epsilon_{L})\), where \(1-\epsilon_{L}\leq\mu^{-2}/4\).
Our first result proves that RGA, coupled with the just-in-time stopping criterion, can screen the relevant predictors. Moreover, it provides an upper bound on the rank of the corresponding coefficient matrices.
**Theorem 1**: _Assume (C1)-(C4) hold. Suppose there exists an \(M<\infty\) such that \(M^{-1}\leq(nd_{n})^{-1}\|\mathbf{E}\|_{F}^{2}\leq M\) with probability tending to one. Write \(\hat{\mathbf{G}}^{(k)}=\sum_{j=1}^{p_{n}}\mathbf{X}_{j}\hat{\mathbf{B}}_{j}^{(k)}\), \(k=1,2,\ldots,K_{n}\), for the iterates of the first-stage RGA. If \(\hat{k}\) is defined by (6) with
_for some sufficiently small \(C>0\), then_
\[\lim_{n\to\infty}\mathbb{P}\left(\operatorname{rank}(\mathbf{B}_{j}^{*})\leq \operatorname{rank}(\hat{\mathbf{B}}_{j}^{(\hat{k})})\text{ for all }j\right)=1. \tag{12}\]
Although Theorem 1 only provides an upper bound for the ranks of the \(\mathbf{B}_{j}^{*}\)'s, it offers a useful diagnostic for the ranks of the coefficient matrices in model (1). When \(p_{n}=1\), Bunea et al. (2011) proposed a rank selection criterion (RSC) to select the optimal reduced rank estimator, which is shown to be a consistent estimator of the effective rank. However, rank selection for model (1) with \(p_{n}>1\) is less investigated. Moreover, we can bound \(\hat{k}\) by the following lemma.
**Lemma 2**: _Under the assumptions of Theorem 1, \(\hat{k}=O_{p}(s_{n}^{2})\)._
Lemma 2 ensures the just-in-time stopping criterion is triggered in no more than \(O(s_{n}^{2})\) iterations, which is much smaller than \(O(K_{n})\) by (C4). Thus compared to the model selection rules using information criteria that iterate \(K_{n}\) steps in full, it greatly reduces communication costs.
Next, we derive the required number of iterations for TSRGA to converge near the unknown parameters, which translates to its communication costs. With a slight abuse of notation, we also write the second-stage RGA iterates as \(\hat{\mathbf{G}}^{(k)}=\sum_{j\in\hat{J}}\mathbf{X}_{j}\hat{\mathbf{B}}_{j}^{ (k)}\).
**Theorem 3**: _Assume the conditions of Theorem 1 hold, and additionally that (C5) and (C6) hold. If \(\xi_{E}=O_{p}(\xi_{n})\) and \(m_{n}=\lceil\rho\kappa_{n}\log(n^{2}d_{n}/\xi_{n}^{2})\rceil\) for some sequence \(\{\xi_{n}\}\) of positive numbers, where \(\rho=64\mu^{5}/\tau^{2}\) with \(0<\tau<1\) being arbitrary, and_
\[\kappa_{n}=\sharp(\hat{J})\max\left\{\max_{j\in\hat{J}-\hat{J}_{o}}(q_{n,j}\wedge d_{n}),\hat{r}\mathbf{1}\{\hat{J}_{o}\neq\emptyset\}\right\},\]
with \(a\wedge b=\min\{a,b\}\) and \(\hat{J}_{o}=\{j\in\hat{J}:\hat{r}<\min\{q_{n,j},d_{n}\}\}\), then the proposed second-stage RGA satisfies
\[\sup_{m\geq m_{n}}\frac{1}{d_{n}}\sum_{j=1}^{p_{n}}\|\mathbf{B}_{j}^{*}-\hat{ \mathbf{B}}_{j}^{(m)}\|_{F}^{2}=O_{p}\left(\frac{\kappa_{n}\xi_{n}^{2}}{n^{2} d_{n}}\log\frac{n^{2}d_{n}}{\xi_{n}^{2}}+\frac{\xi_{n}^{2}}{n^{2}\delta_{n}^{2}} \mathbf{1}\{J_{o}\neq\emptyset\}\right).\]
Since the per-iteration communication cost of TSRGA is \(O(n+d_{n})\), Theorem 3, together with Lemma 2, directly implies the communication complexity of TSRGA, which we state as the following corollary.
**Corollary 4**: _If \(\kappa_{n}=O_{p}(\mathfrak{s}_{n})\) for some sequence \(\{\mathfrak{s}_{n}\}\) of positive numbers, then TSRGA achieves an error of order_
\[O_{p}\left(\frac{\mathfrak{s}_{n}\xi_{n}^{2}}{n^{2}d_{n}}\log\frac{n^{2}d_{n}}{ \xi_{n}^{2}}+\frac{\xi_{n}^{2}}{n^{2}\delta_{n}^{2}}\mathbf{1}\{J_{o}\neq \emptyset\}\right),\]
with a communication complexity of order
\[O_{p}\left((n+d_{n})\mathfrak{s}_{n}\log\frac{n^{2}d_{n}}{\xi_{n}^{2}}\right).\]
Thus, the communication complexity, up to a logarithmic factor, scales mainly with the sparsity parameter \(\mathfrak{s}_{n}\). In general, Lemma 2 implies \(\kappa_{n}=O_{p}(s_{n}^{4})\). But in the important special case of sparse linear regression, \(\kappa_{n}=O_{p}(s_{n}^{2})\) since \(d_{n}=1\) and \(\hat{J}_{o}=\emptyset\). To demonstrate this result more concretely, we discuss the communication complexity of TSRGA when applied to several well-known models below.
**Example 1** (High-dimensional sparse linear regression): _Consider the model \(y_{t}=\sum_{j=1}^{p_{n}}\beta_{j}x_{t,j}+\epsilon_{t}\). Under suitable conditions, such as \(\{\epsilon_{t}\}\) being i.i.d. sub-Gaussian random variables, it can be shown that \(\xi_{E}=O_{p}(\sqrt{n\log p_{n}})\) (see, for example, Ing and Lai, 2011 and Ing, 2020). Then TSRGA achieves an error of order_
\[\sum_{j=1}^{p_{n}}|\beta_{j}-\hat{\beta}_{j}|^{2}=O_{p}\left(\frac{s_{n}^{2} \log p_{n}}{n}\right) \tag{13}\]
_with a communication complexity of_
\[O_{p}\left(ns_{n}^{2}\log\frac{n}{\log p_{n}}\right).\]
To get within \(\epsilon\) of the minimizer of the Lasso problem, the communication complexity of the Hydra algorithm (Richtarik and Takac, 2016) is
\[O\left(\frac{np_{n}}{M\tau}\log\frac{1}{\epsilon}\right),\]
where \(M\) is the number of nodes and \(\tau\) is the number of coordinates to update in each iteration. Given limited computational resources, \(\tau M\) may still be of smaller order than \(p_{n}\). Thus the communication complexity of TSRGA, which does not scale with \(p_{n}\), is more favorable for large data sets with huge \(p_{n}\). In our simulation studies, we also observe that TSRGA converges near \((\beta_{1},\ldots,\beta_{p_{n}})\) much faster than Hydra-type algorithms.
**Example 2** (Multi-task linear regression with common relevant predictors): _Suppose we are interested in modeling \(T\) tasks simultaneously. Let \(\mathbf{y}_{1},\mathbf{y}_{2},\ldots,\mathbf{y}_{T}\) be the vectors of \(n\) observations of the \(T\) responses, and \(\mathbf{X}\) be the \(n\times p\) design matrix consisting of \(p\) predictors. Consider the system of linear regressions_
\[\mathbf{y}_{t}= \mathbf{X}\mathbf{b}_{t}+\mathbf{e}_{t},\quad t=1,\ldots,T, \tag{14}\]
_where \(\mathbf{b}_{i}=(\beta_{i,1},\beta_{i,2},\ldots,\beta_{i,p})^{T}\), for \(i=1,2,\ldots,T\), and \(\mathbf{e}_{i}\), for \(1\leq i\leq T\), are independent standard Gaussian random vectors. Let \(\mathbf{x}_{j}\) be the \(j\)-th column vector of \(\mathbf{X}\). Then we may rearrange (14) as_
\[\begin{pmatrix}\mathbf{y}_{1}\\ \mathbf{y}_{2}\\ \vdots\\ \mathbf{y}_{T}\end{pmatrix}=\sum_{j=1}^{p}\mathbf{X}_{j}\mathbf{B}_{j}+ \begin{pmatrix}\mathbf{e}_{1}\\ \mathbf{e}_{2}\\ \vdots\\ \mathbf{e}_{T}\end{pmatrix}, \tag{15}\]
_where \(\mathbf{B}_{j}=(\beta_{1,j},\beta_{2,j},\ldots,\beta_{T,j})^{T}\) and \(\mathbf{X}_{j}=\mathbf{I}_{T}\otimes\mathbf{x}_{j}\), with \(\mathbf{I}_{T}\) the \(T\times T\) identity matrix and \(\mathbf{A}\otimes\mathbf{B}\) denoting the Kronecker product of \(\mathbf{A}\) and \(\mathbf{B}\). Now (15) falls under our general model (1). Sparsity of the \(\mathbf{B}_{j}\)'s encourages each task to be driven by the same small set of predictors, or equivalently, the \(\mathbf{b}_{t}\)'s in (14) to have a common support. By an argument similar to that used in Lemma 3.1 of Lounici et al. (2011), it can be shown that \(\xi_{E}=O_{p}(\sqrt{nT(1+T^{-1}\log p)})\). Hence Corollary 4 implies that TSRGA applied to (15) achieves an error of order_
\[\sum_{j=1}^{p}\|\mathbf{B}_{j}-\hat{\mathbf{B}}_{j}\|^{2}=O_{p}\left(\frac{s_{ n}^{2}}{nT}(1+\frac{\log p}{T})\right) \tag{16}\]
_with the communication complexity_
\[O_{p}\left(nTs_{n}^{2}\log\frac{nT}{1+T^{-1}\log p}\right).\]
Notice again that the iteration complexity scales primarily with the strong sparsity parameter \(s_{n}\), not with \(p\). As illustrated by Lounici et al. (2011), (14) can be motivated from a variety of applications, such as seemingly unrelated regressions (SUR) in econometrics and conjoint analysis in marketing research.
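For concreteness, the following minimal sketch builds the group predictors \(\mathbf{X}_{j}=\mathbf{I}_{T}\otimes\mathbf{x}_{j}\) of Example 2 and checks that the stacked system (15) reproduces (14); the dimensions and random seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, T = 50, 8, 3
X = rng.standard_normal((n, p))                       # common design for the T tasks
b = rng.standard_normal((T, p))                       # row t holds b_t^T from (14)

# Stacked noiseless response (y_1^T, ..., y_T^T)^T and group predictors I_T (x) x_j
y_stack = np.vstack([(X @ b[t]).reshape(-1, 1) for t in range(T)])
X_groups = [np.kron(np.eye(T), X[:, [j]]) for j in range(p)]   # each is (Tn) x T
B_groups = [b[:, [j]] for j in range(p)]                       # B_j = (beta_{1,j}, ..., beta_{T,j})^T
print(np.allclose(y_stack, sum(Xg @ Bg for Xg, Bg in zip(X_groups, B_groups))))  # True
```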
**Example 3** (Integrative multi-view regression): _Consider the general model (1), which is called the integrative multi-view regression by Li et al. (2019). Assume \(\mathbf{E}\) has i.i.d. Gaussian entries, and for simplicity that \(q_{n,1}=q_{n,2}=\ldots=q_{n,p_{n}}=q_{n}\). Then by a similar argument used by Li et al. (2019) it follows that \(\xi_{E}=O_{p}(\sqrt{n\log p_{n}}(\sqrt{d_{n}}+\sqrt{q_{n}}))\). Suppose the predictors \(\mathbf{X}_{j}\), for \(j=1,2,\ldots,p_{n}\), are distributed across computing nodes. TSRGA achieves_
\[\frac{1}{d_{n}}\sum_{j=1}^{p_{n}}\|\mathbf{B}_{j}^{*}-\hat{\mathbf{B}}_{j}\|_ {F}^{2}=O_{p}\left(\frac{s_{n}^{4}(d_{n}+q_{n})\log p_{n}}{nd_{n}}+\frac{(d_{ n}+q_{n})\log p_{n}}{n\delta_{n}}\right) \tag{17}\]
_with a communication complexity of_
\[O_{p}\left((n+d_{n})s_{n}^{4}\log\frac{nd_{n}}{(d_{n}+q_{n})\log p_{n}}\right).\]
Although Li et al. (2019) did not consider feature-distributed data, they offer an ADMM-based algorithm, iRRR, for estimating (1). However, updating many parameters in each iteration causes a significant computational bottleneck. In our Monte Carlo simulations, iRRR is unable to run efficiently with \(p_{n}\geq 50\) even with centralized computing and a moderate sample size, whereas TSRGA can handle such data sizes easily.
In general, the statistical errors of TSRGA in the above examples ((13), (16), and (17)) are sub-optimal compared to the minimax rates unless \(s_{n}=O(1)\), in which case the model is strongly sparse with a fixed number of relevant predictors. One reason is that Theorem 1 only guarantees sure screening instead of predictor and rank selection consistency. In Examples 1 and 2, the statistical error could be improved if one applies hard-thresholding after the second-stage RGA and then estimates the coefficients associated with the surviving predictors again. This would not hurt the communication complexity in terms of its order of magnitude, since this step requires even fewer iterations. Nevertheless, in our simulation studies, TSRGA performs on par with, and in many cases even outperforms, strong benchmarks in the finite-sample case.
Another reason for the sub-optimality comes from the dependence on \(\delta_{n}\) in the error. In the second-stage, TSRGA relies on the sample SVD of the (scaled) marginal covariance \(\mathbf{X}_{j}^{\top}\mathbf{Y}\) to estimate the singular subspaces of the unknown coefficient matrices. How well these sample singular vectors recover their noiseless counterparts depends on the strength of the marginal covariance, which is controlled by \(\delta_{n}\) in Assumption (C5). This is needed because we try to avoid searching for the singular subspaces of the coefficient matrices, a challenging task for greedy algorithms. Unlike the scalar case, for the multivariate linear regression the dictionary for RGA contains all rank-one matrices and therefore the geometric structure is more intricate to exploit. For example, the argument used in Ing (2020) will not work with this dictionary.
Recently, Ding et al. (2020) and Ding et al. (2021) proposed new modifications of the Frank-Wolfe algorithm that directly search within the nuclear norm ball, under the assumptions of strict complementarity and quadratic growth. These algorithms rely on solving more complicated sub-problems. To illustrate one main difference between these modifications and TSRGA, note that for the usual reduced rank regression where \(\min\{d_{n},q_{n,1}\}>1\) and \(p_{n}=1\), one of the leading examples in Ding et al. (2020) and Ding et al. (2021), our theoretical results for TSRGA still hold (though in this case the data are not feature-distributed because \(p_{n}\) is only one). In this case, (C5) and (C6) automatically hold with \(\delta_{n}\leq d_{n}^{1/2}/(\mu s_{n}^{1/2})\). Consequently, Corollary 4 implies the error is of order \(O_{p}(\frac{s_{n}^{2}\xi_{n}^{2}}{n^{2}d_{n}}\log\frac{n^{2}d_{n}}{\xi_{n}^{2}})\) using \(O_{p}(s_{n}^{2}\log\frac{n^{2}d_{n}}{\xi_{n}^{2}})\) iterations, regardless of whether strict complementarity holds. This advantage precisely comes from that TSRGA uses the singular vectors of \(\mathbf{X}_{1}^{\top}\mathbf{Y}\) in its updates in the second stage instead of searching over the intricate space of nuclear norm ball in each iteration.
## 4 Simulation experiments
In this section, we apply TSRGA to synthetic data sets and compare its performance with some existing methods. We first examine how well TSRGA and other distributed as well as centralized algorithms estimate the unknown parameters. Then we investigate the empirical performance by applying TSRGA to large-scale feature-distributed data. In both experiments, TSRGA delivered superior performance.
### Statistical performance of TSRGA
The goal of this subsection is to evaluate the performance of TSRGA on some well-known models. Specifically, we compare the effectiveness of TSRGA in estimating unknown parameters against some existing methods.
Consider first the high-dimensional linear regression model:
\[y_{t}=\sum_{j=1}^{p_{n}}\beta_{j}^{*}x_{t,j}+\epsilon_{t},\quad t=1,\ldots,n,\]
which is sparse with only \(a_{n}=\lfloor p_{n}^{1/3}\rfloor\) non-zero \(\beta_{j}^{*}\)'s, where \(\lfloor x\rfloor\) denotes the largest integer that is less than or equal to \(x\). We also generate \(\{\epsilon_{t}\}\) as i.i.d. \(t\)-distributed random variables with five degrees of freedom.
To estimate this model, we employ the Hydra (Richtarik and Takac, 2016) and Hydra\({}^{2}\)(Fercoq et al., 2014) algorithms to solve the Lasso problem, namely,
\[\min_{\{\beta_{j}\}_{j=1}^{p_{n}}}\left\{\frac{1}{2n}\sum_{t=1}^{n}\left(y_{t} -\sum_{j=1}^{p_{n}}\beta_{j}x_{t,j}\right)^{2}+\lambda\sum_{j=1}^{p_{n}}|\beta _{j}|\right\}. \tag{18}\]
The predictors are divided into 10 groups at random; each of the groups is owned by one node in the Hydra-type algorithm. The step size of the Hydra-type algorithms is set to the lowest value so that we observe convergence of the algorithms instead of divergence. As a benchmark, we also solve the Lasso problem with 5-fold cross validation using glmnet package in R. To further reduce the computational burden, we use the \(\lambda\) selected by 5-fold cross-validated Lasso via glmnet in implementing Hydra-type algorithms.
Choosing the hyperparameter for RGA-type methods is more straightforward, but there is one subtlety. It is well-known that the Lasso problem corresponds to the constrained minimization problem
\[\min_{\{\beta_{j}\}_{j=1}^{p_{n}}}\frac{1}{2n}\sum_{t=1}^{n}\left(y_{t}-\sum_{ j=1}^{p_{n}}\beta_{j}x_{t,j}\right)^{2}\text{ subject to }\sum_{j=1}^{p_{n}}|\beta_{j}|\leq L_{n}.\]
Moreover, setting \(L_{n}\) to \(\sum_{j=1}^{p_{n}}|\beta_{j}^{*}|\), which is nonetheless unknown in practice, would yield the usual Lasso statistical guarantee (see, e.g., Theorem 10.6.1 of Vershynin, 2018). However, our theoretical results in Section 3.2 recommend setting \(L_{n}\) to a larger value than this conventionally recommended value. To illustrate the advantage of a larger \(L_{n}\), we employ two versions of RGA: one with \(L_{n}=500\) and the other with \(L_{n}=\sum_{j=1}^{p_{n}}|\beta_{j}^{*}|\). For TSRGA, we simply set \(L_{n}=500\) and \(t_{n}=1/(10\log n)\), and the performance is not too sensitive to these choices.
**Specification 1**: In the first experiment, we generate \(x_{t,j}\) as i.i.d. \(t(6)\) random variables for all \(t=1,2,\ldots,n\) and \(j=1,2,\ldots,p_{n}\). Hence the predictors have heavy tails with only 6 finite moments. The nonzero coefficients are generated independently by \(\beta_{j}^{*}=z_{j}u_{j}\), where \(z_{j}\) is uniform over \(\{-1,+1\}\) and \(u_{j}\) is uniform over \([2.5,5.5]\). The coefficients are drawn at the start of each of the 100 Monte Carlo simulations. We consider three cases with \((n,p_{n})\in\{(800,1200),(1200,2000),(1500,3000)\}\).
Figure 1 plots the logarithm of the parameter estimation error against the number of iterations. The parameter estimation error is defined as \(\sum_{j=1}^{p_{n}}(\beta_{j}^{*}-\hat{\beta}_{j})^{2}\), where \(\{\hat{\beta}_{j}\}\) are the estimates made by the aforementioned methods. In the plot, the trajectories are averaged across 100 simulations. TSRGA (black) clearly converges using the fewest iterations, which also implies the lowest communication overhead. Furthermore, its parameter estimation error is also the smallest among the employed methods. RGA with \(L_{n}=500\) (solid red) follows the same trajectories as TSRGA at first, but without the two-step design, it suffers from over-fitting in later iterations and hence an increasing parameter estimation
error. On the other hand, RGA with oracle \(L_{n}=\sum_{j=1}^{p_{n}}|\beta_{j}^{*}|\) (dashed red) converges much more slowly than TSRGA and suffers from the sub-linear convergence rate. For the Hydra (blue lines) and Hydra\({}^{2}\) (green lines) algorithms, we consider updating 25% of the coordinates in each node (solid) and updating 50% of the coordinates in each node (dashed). Surprisingly, Hydra achieves even lower estimation error than the centralized Lasso (dashed grey). However, Hydra\({}^{2}\) converges much more slowly.
**Specification 2**: In the second experiment, we generate the predictors by
\[x_{t,j}=\nu_{t}+w_{t,j},\quad t=1,\ldots,n;\quad j=1,\ldots,p_{n},\]
where \(\{\nu_{t}\}\) and \(\{w_{t,j}\}\) are independent \(N(0,1)\) random variables. Consequently, \(\text{Cor}(x_{t,k},x_{t,j})=0.5\) for \(k\neq j\). The coefficients are set to \(\beta_{j}^{*}=2.5+1.2(j-1)\) for \(j=1,2,\ldots,a_{n}\). The rest of the specification is the same as that of Specification 1.

Figure 1: Logarithm of parameter estimation errors of various methods under Specification 1, where \(n\) is the sample size and \(p_{n}\) is the dimension of predictors.
Figure 2 plots the parameter estimation errors under Specification 2. We note that the plots bear a qualitative resemblance to those of Specification 1. TSRGA remains the most effective method for estimating the unknown parameters, converging within 100 iterations in all cases. On the other hand, RGA with \(L_{n}=500\) is still susceptible to overfitting. It is worth noting that the Hydra-type algorithms display a substantially slower rate of convergence under this specification, highlighting their sensitivity to the dependence between predictors and their potentially high computational expense.
Figure 2: Parameter estimation errors of various estimation methods under Specification 2, where \(n\) is the sample size and \(p_{n}\) is the number of predictors.
Next we consider the general model:
\[\mathbf{y}_{t}=\sum_{j=1}^{p_{n}}\mathbf{B}_{j}^{*\top}\mathbf{x}_{t,j}+\mathbf{ \epsilon}_{t},\quad t=1,\ldots,n, \tag{19}\]
where \(\mathbf{y}_{t}\in\mathbb{R}^{d_{n}}\) and \(\mathbf{x}_{t,j}\in\mathbb{R}^{q_{n}}\), for \(j=1,2,\ldots,p_{n}\). We generate \(\mathbf{\epsilon}_{t}\) as i.i.d. random vectors with each entry having independent \(t(5)\) distributions. In the following cases, the model is sparse with \(a_{n}\) non-zero \(\mathbf{B}_{j}^{*}\)'s, each of which is only of rank \(r_{n}\). In particular, we generate \(\mathbf{B}_{j}^{*}\), \(j\leq a_{n}\), independently by
\[\mathbf{B}_{j}^{*}=\sum_{k=1}^{r_{n}}\sigma_{k,n}\mathbf{u}_{k,j}\mathbf{v}_{k,j}^{\top}, \tag{20}\]
where \(\{\mathbf{u}_{k,j}\}_{k=1}^{r_{n}}\) and \(\{\mathbf{v}_{k,j}\}_{k=1}^{r_{n}}\) are independently drawn (\(q_{n}\)- and \(d_{n}\)-dimensional) orthonormal vectors and \(\sigma_{k,n}\) are i.i.d. uniform over [7, 15].
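A minimal sketch of this data-generating step, drawing the orthonormal factors in (20) via QR decompositions of Gaussian matrices, is given below; the seed and sizes are arbitrary.

```python
import numpy as np

def draw_lowrank_coef(q_n, d_n, r_n, rng):
    """Draw B_j^* = sum_k sigma_k u_k v_k^T as in (20), with orthonormal u's and v's."""
    U, _ = np.linalg.qr(rng.standard_normal((q_n, r_n)))   # q_n x r_n orthonormal columns
    V, _ = np.linalg.qr(rng.standard_normal((d_n, r_n)))   # d_n x r_n orthonormal columns
    sigma = rng.uniform(7.0, 15.0, size=r_n)               # i.i.d. uniform over [7, 15]
    return U @ np.diag(sigma) @ V.T

rng = np.random.default_rng(0)
B = draw_lowrank_coef(q_n=25, d_n=20, r_n=2, rng=rng)
print(np.linalg.matrix_rank(B))                            # 2
```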
We also employ the iRRR method (Li et al., 2019) to estimate (19). To select its tuning parameter, we execute iRRR with a grid of tuning parameter values and opt for the one with the lowest mean square prediction error on an independently generated validation set of 500 observations. Since iRRR is not a feature-distributed algorithm, we directly report its parameter estimation errors (averaged across 500 Monte Carlo simulations), defined as
\[\sqrt{\sum_{j=1}^{p_{n}}\|\mathbf{B}_{j}^{*}-\hat{\mathbf{B}}_{j}\|_{F}^{2}}, \tag{21}\]
where \(\{\hat{\mathbf{B}}_{j}\}\) are the estimated coefficient matrices. We consider the cases \((n,d_{n},q_{n},p_{n},a_{n},r_{n})\in\{(200,10,12,20,1,2)\), \((400,15,18,50,2,2)\), \((600,20,25,400,3,2)\), \((1200,40,45,800,3,3)\}\). Although centralized computation is used to implement iRRR, it is too computationally demanding to implement the algorithm for the two cases with \(n=600\) and \(n=1200\). Additionally, we use the least squares estimator with only the relevant variables as another benchmark. Finally, we set \(L_{n}=10^{5}\) for TSRGA, and the outcomes are robust to this choice. For both Specifications 3 and 4, \(t_{n}\) is set to \(1/\log n\).
**Specification 3**: In this specification, we consider (19) with the predictors generated as in Specification 1. Note that \(\{\mathbf{B}_{j}^{*}:j\leq a_{n}\}\) are drawn at the start of each of the 500 Monte Carlo simulations.
Table 1 reports the parameter estimation errors of various methods averaged over 500 Monte Carlo simulations under Specification 3. TSRGA achieved the lowest estimation error in all constellations of problem sizes. On the other hand, iRRR yielded a larger estimation error than the least squares method using exactly the relevant predictors when \(n=200\), but when \(n\) increases, iRRR outperforms least squares. However, the computational costs of iRRR became so high that completing 500 simulations would take days, even when parallelism with 15 cores is used. TSRGA circumvents such computational overhead and delivers superior estimates.
**Specification 4**: In this specification, we generalize (19) to group predictors as follows. Let \(\{\mathbf{\nu}_{t}:t=1,2,\ldots\}\) and \(\{\mathbf{w}_{t,j}:t=1,2,\ldots;j=1,2,\ldots,p_{n}\}\) be independent \(N(\mathbf{0},\mathbf{I}_{q_{n}})\) random vectors. The group predictors are then constructed as \(\mathbf{x}_{t,j}=2\mathbf{\nu}_{t}+\mathbf{w}_{t,j}\), \(1\leq t\leq n\), \(1\leq j\leq p_{n}\). Hence \(\mathbb{E}(\mathbf{x}_{t,j}\mathbf{x}_{t,i}^{\top})=4\mathbf{I}_{q_{n}}\), for \(1\leq i<j\leq p_{n}\). Note that \(\text{Corr}(x_{t,i,l},x_{t,j,l})=0.8\) for \(i\neq j\), \(1\leq l\leq q_{n}\), where \(\mathbf{x}_{t,i}=(x_{t,i,1},\ldots,x_{t,i,q_{n}})^{\top}\). Hence, the \(l\)-th components in each of the group predictors are highly correlated.
Table 2 reports the results for Specification 4. As in the previous specification, TSRGA continues to surpass the benchmarks. When \(n=400\), iRRR gains a clear advantage over the least squares method, despite a high computational cost. The results in Tables 1 and 2 suggest that TSRGA is not only fast, but also a statistically effective tool for parameter estimation for (19).
### Large-scale performance of TSRGA
In this subsection, we apply TSRGA to large feature-distributed data. We have an MPI implementation of TSRGA through OpenMPI and the Python binding mpi4py (Dalcin et al., 2005; Dalcin and Fang, 2021). The algorithm runs on the high-performance computing cluster of the university, which comprises multiple computing nodes equipped with Intel Xeon Gold 6248R processors. We consider again Specification 4 in the previous subsection, with \((n,d_{n},q_{n},p_{n},a_{n},r_{n})=(20000,100,100,1024,4,4)\).
\begin{table}
\begin{tabular}{c c c c} \hline \hline \((n,d_{n},q_{n},p_{n},a_{n},r_{n})\) & TSRGA & iRRR & Oracle LS \\ \hline (200, 10, 12, 20, 1, 2) & 0.7412 & 0.9328 & 0.8400 \\ (400, 15, 18, 50, 2, 2) & 0.9695 & 1.2477 & 1.2904 \\ (600, 20, 25, 400, 3, 2) & 1.1112 & - & 1.7883 \\ (1200, 40, 45, 800, 3, 3) & 1.5914 & - & 2.3768 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Parameter estimation errors of various methods under Specification 3. We do not report the results for iRRR with sample sizes of 600 and 1200 since the computation required for these cases is excessively time-consuming. In the table, \(n,d_{n},q_{n},p_{n},a_{n}\) and \(r_{n}\) are the sample size, number of targeted variables, dimension of predictors, number of predictors, number of non-zero coefficient matrices, and rank of coefficient matrices, respectively.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \((n,d_{n},q_{n},p_{n},a_{n},r_{n})\) & TSRGA & iRRR & Oracle LS \\ \hline (200, 10, 12, 20, 1, 2) & 0.4323 & 0.6190 & 0.4601 \\ (400, 15, 18, 50, 2, 2) & 0.5519 & 0.9925 & 1.1743 \\ (600, 20, 25, 400, 3, 2) & 0.6749 & - & 1.8177 \\ (1200, 40, 45, 800, 3, 3) & 0.7371 & - & 2.4205 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Parameter estimation errors under Specification 4. We do not report the results for iRRR with sample sizes of 600 and 1200 since the computation required for these sample sizes is excessively time-consuming. The same notations as those of Table 1 are used.
In the following experiments, we employ \(M/4\) nodes, each of which runs 4 processes, and each process owns \(p_{n}/M\) predictors, with \(M\) varying from 16 to 64. When combined, the data are over 16 GB in size, exceeding the usual RAM capacity of most laptops.
There are two primary goals for the experiments. The first goal is to investigate the wall-clock time required by TSRGA to estimate (19). The second goal is to examine the effect of the number of nodes on the required wall-clock time. Each experiment is repeated 10 times, and we average the wall-clock time needed to complete the \(k\)-th iteration as well as the parameter estimation error (21) at the \(k\)-th iteration.
Figure 3 plots the (log) estimation errors against the wall-clock time of TSRGA iterations. When using 16 processes, TSRGA took about 16 minutes to estimate (19), and the time reduced to less than 5 minutes when 64 processes were employed. The acceleration primarily occurred in the first stage, because solving (3) becomes faster when each process handles only a small number of predictors. After screening, there is a drastic increase in estimation error because we re-initialized the estimators, but the subsequent second-stage RGA runs extremely fast in all cases and yields accurate estimates. Indeed, Figure 4 shows that the estimation error of TSRGA quickly drops below that of the oracle least squares in the second stage. We remark that with more diligent programming, one can apply the advanced protocols introduced in Section 6 of Richtarik and Takac (2016) to TSRGA, using both multi-process and multi-thread techniques. It is anticipated that the required time will be further shortened.
Figure 3: Logarithm of the average parameter estimation errors at each iteration of TSRGA, plotted against the average time elapsed at the end of each iteration. Various numbers of processes are employed for the feature-distributed implementation.

## 5 Empirical application

This section showcases an application of TSRGA to financial data. In addition to the conventional financial data, we further collect the annual 10-K reports of the firms under study to extract useful features for augmenting the predictors. Thus, in this application, both the response and predictors are multivariate, and the predictors may consist of large dense matrices, leading to potential computational challenges in practice.
### Financial data and 10-K reports
We aim to predict four key financial outcomes for companies in the S&P 500 index: volatility, trading volume, market beta, and return. We obtain daily return series for each company from 2010 through 2019, calculate the sample variances of the daily returns in each month, and transform them by taking the logarithm to get the volatility series \(\{V_{it}(m):m=1,2,\ldots,12\}\) for the \(i\)-th company in the \(m\)-th month of year \(t\in\{2010,\ldots,2019\}\). Next, we regress each company's daily returns on the daily returns of the S&P 500 index for each month and use the slope estimates as market beta, \(\{B_{it}(m):m=1,2,\ldots,12\}\). Finally, we also obtain data of the monthly returns series \(\{R_{it}(m):m=1,2,\ldots,12\}\) and the logarithm of the trading volumes \(\{M_{it}(m):m=1,2,\ldots,12\}\), for the \(i\)th company. All series are obtained from Yahoo! Finance via the tidyquant package in R.
After obtaining these series, some data cleaning is performed to facilitate subsequent analysis. First, the volume series exhibits a high degree of serial dependence, which could be due to unit-roots caused by the persistence in trading activities. Therefore, we apply a year-to-year difference, i.e., \(\Delta M_{it}(m)=M_{i,t}(m)-M_{i,t-1}(m)\) for all \(i\), \(1\leq m\leq 12\), and \(t=2011,\ldots,2019\). Additionally, we remove companies that have outlying values in these series. The histograms of the resulting series are plotted in Figure 5.
Figure 4: Logarithm of the estimation errors of TSRGA (running with 16 processes) and the oracle least squares. The oracle least squares method is performed by applying the second-stage RGA with exactly the relevant predictors and no rank constraints.

In addition to these financial time series, we also capitalize on the information in a pertinent collection of textual data: the 10-K reports. Publicly traded companies in the U.S. are required to file these annual reports with the aim of increasing transparency and satisfying the regulations of exchanges. The reports are maintained by the Securities and Exchange Commission (SEC) in the Electronic Data Gathering, Analysis, and Retrieval system (EDGAR), and provide information about a company's risks, liabilities, and corporate agreements and operations. Due to their significance in communicating information to the public, the 10-K reports have fueled much research in finance, economics, and computational social sciences (Hanley and Hoberg, 2019; Kogan et al., 2009; Gandhi et al., 2019; Jegadeesh and Wu, 2013).
The corpus utilized in this application is sourced from the EDGAR-CORPUS, originally prepared by Loukas et al. (2021). Our analysis specifically focuses on Section 7, titled "Management's Discussion and Analysis." To process the reports, we preprocess each document using the default functionality in the gensim package in Python and discard the documents that consist of fewer than 50 tokens. As a result, we have data of both the financial time series and 10-K reports of 256 companies over the period from 2011 through 2019.
To extract features from the textual data, we employ a technique called Latent Semantic Indexing (LSI, see, e.g., Deerwester et al., 1990). We first construct the term-document matrix as follows. Suppose we have \(D\) documents in the training set, and there are \(V\) distinct tokens in these documents. The term-document matrix \(\boldsymbol{\Theta}\) is a \(V\times D\) matrix, whose entries are given by
\[\boldsymbol{\Theta}_{ij}= (\text{number of times the $i$-th token appears in document $j$})\times\] \[\log\frac{D}{\sharp\{1\leq k\leq D:\text{the $i$-th token appears in document $k$}\}},\]
for \(1\leq i\leq V\), \(1\leq j\leq D\). The entries are known as one form of the term-frequency inverse document frequency (TFIDF, see, e.g., Salton and Buckley, 1988).

Figure 5: Histograms of the four financial variables after data cleaning. The sample period is from 2010 to 2019.

Then, to extract \(K\) features from the text data, LSI uses the singular value decomposition,
\[\mathbf{\Theta}=\mathbf{U}_{\mathbf{\Theta}}\mathbf{\Sigma}_{\mathbf{\Theta}}\mathbf{V}_{\mathbf{ \Theta}}^{\top},\]
and the first \(K\) rows of \(\mathbf{\Sigma}_{\mathbf{\Theta}}\mathbf{V}_{\mathbf{\Theta}}^{\top}\) are used as the features in the training set. For a new document in the test set, we compute its TFIDF representation \(\mathbf{\theta}\in\mathbb{R}^{V}\), and then use \(\mathbf{x}=\mathbf{U}_{K}^{\top}\mathbf{\theta}\) as its textual features, where \(\mathbf{U}_{K}\) is the sub-matrix of the first \(K\) columns of \(\mathbf{U}_{\mathbf{\Theta}}\).
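The following is a minimal sketch of the TFIDF construction and the LSI projection just described, using plain Python and NumPy on toy tokenized documents (in the application the tokenization comes from gensim); all names are illustrative.

```python
import numpy as np

def tfidf_matrix(docs):
    """Build the V x D matrix Theta with entries tf(i, j) * log(D / df(i))."""
    vocab = sorted({tok for doc in docs for tok in doc})
    index = {tok: i for i, tok in enumerate(vocab)}
    tf = np.zeros((len(vocab), len(docs)))
    for j, doc in enumerate(docs):
        for tok in doc:
            tf[index[tok], j] += 1.0
    idf = np.log(len(docs) / np.count_nonzero(tf, axis=1))
    return tf * idf[:, None], vocab

def lsi(Theta, K):
    """Training features are the first K rows of Sigma V^T; U_K projects new documents."""
    U, s, Vt = np.linalg.svd(Theta, full_matrices=False)
    return np.diag(s[:K]) @ Vt[:K, :], U[:, :K]

docs = [["risk", "liability", "revenue"], ["revenue", "growth", "growth"], ["risk", "operations"]]
Theta, vocab = tfidf_matrix(docs)
train_features, U_K = lsi(Theta, K=2)      # K x D matrix of training features
theta_new = Theta[:, 0]                    # stand-in for a new document's TFIDF vector
x_new = U_K.T @ theta_new                  # K-dimensional test features
```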
### Results
For each of the four financial response variables, we estimate the following model.
\[\mathbf{y}_{it}=\mathbf{\beta}_{0}+\mathbf{A}_{1}^{\top}\mathbf{v}_{i,t-1}+ \mathbf{A}_{2}^{\top}\mathbf{m}_{i,t-1}+\mathbf{A}_{3}^{\top}\mathbf{b}_{i,t- 1}+\mathbf{A}_{4}^{\top}\mathbf{r}_{i,t-1}+\mathbf{A}_{5}^{\top}\mathbf{x}_{i,t-1}+\mathbf{\epsilon}_{it}, \tag{22}\]
where \(\mathbf{y}_{it}=(y_{it}(1),\ldots,y_{it}(12))^{\top}\) is the response variable under study, \(\mathbf{v}_{it}=(V_{it}(1),\ldots,V_{it}(12))^{\top}\), \(\mathbf{m}_{it}=(\Delta M_{it}(1),\ldots,\Delta M_{it}(12))^{\top}\), \(\mathbf{b}_{it}=(B_{it}(1),\ldots,B_{it}(12))^{\top}\), \(\mathbf{r}_{it}=(R_{it}(1),\ldots,R_{it}(12))^{\top}\), \(\mathbf{x}_{it}\in\mathbb{R}^{K}\) is the vector of extracted text features, and \(\{\mathbf{\beta}_{0},\mathbf{A}_{1},\ldots,\mathbf{A}_{5}\}\) are unknown parameters. When predicting each of the four financial outcomes, we replace \(\mathbf{y}_{it}\) in (22) with the corresponding vector (\(\mathbf{v}_{it}\), \(\mathbf{m}_{it}\), \(\mathbf{b}_{it}\), or \(\mathbf{r}_{it}\)), while keeping the same model structure. Since predicting next year's financial outcome in one month is related to predicting the same variable in other months, it is natural to expect low-rank coefficient matrices. Model (22) can also be viewed as a multi-step-ahead prediction model, since we are predicting the next twelve months simultaneously.
In addition to applying TSRGA to estimate (22), we also employ several benchmark prediction methods, including the vector autoregression (VAR), the group-wise VAR (gVAR henceforth), and the Lasso. For VAR, we concatenate all response variables and estimate the model
\[\mathbf{z}_{it}=\mathbf{A}^{\top}\mathbf{z}_{i,t-1}+\mathbf{e}_{it},\]
where \(\mathbf{z}_{it}=(\mathbf{v}_{it}^{\top},\mathbf{m}_{it}^{\top},\mathbf{b}_{it}^{\top},\mathbf{r}_{it}^{\top})^{\top}\in\mathbb{R}^{48}\). For the group-wise VAR, we separately estimate the model
\[\mathbf{y}_{it}=\mathbf{A}^{\top}\mathbf{y}_{i,t-1}+\mathbf{e}_{it},\]
for each response variable \(\mathbf{y}_{it}\in\{\mathbf{v}_{it},\mathbf{m}_{it},\mathbf{b}_{it},\mathbf{r }_{it}\}\). Finally, we apply Lasso separately to each row of (22). Namely, we run Lasso on the model
\[y_{it}(m)=\beta_{0}+\sum_{j=1}^{12} \alpha_{j,1}V_{i,t-1}(j)+\sum_{j=1}^{12}\alpha_{j,2}\Delta M_{i, t-1}(j)\] \[+\sum_{j=1}^{12}\alpha_{j,3}B_{i,t-1}(j)+\sum_{j=1}^{12}\alpha_{ j,4}R_{i,t-1}(j)+\epsilon_{it},\]
for \(m=1,2,\ldots,12\).
Table 3 presents the root mean squared prediction errors (RMSE) of different methods on the test set, for which we reserved the last year of data. The results show that gVAR consistently outperformed the usual VAR in all four financial variables, suggesting that using simple least squares could be harmful for prediction when incorporating other financial series as predictors. In the case of predicting volatility, the text data proved to be quite useful, and both TSRGA and Lasso outperformed VAR and gVAR by more than 5% for different numbers of textual features \(K\). TSRGA, utilizing both the text information and low-rank coefficient estimates, yielded the smallest prediction errors. For trading volume and market beta, Lasso and TSRGA did not perform very differently from the VAR-type methods. As for the return series, TSRGA showed a 5% reduction in RMSEs compared to VAR, though the performance was similar to gVAR.
In addition to the prediction performance, we make two remarks on the models selected by TSRGA. First, our finding that textual features are useful in predicting volatility is consistent with previous studies. For instance, Kogan et al. (2009) reported that one-hot text features are already effective in predicting volatility in a scalar linear regression, and Yeh et al. (2020) also observed gains of using neural word embedding to predict volatility. Our results suggest an alternative modeling choice: text data could explain each month's volatility via a low-rank channel. Second, trading volume may not be well-suited for low-rank models as TSRGA iterated more steps for this response variable than the others before the just-in-time stopping criterion was triggered.
The data set used in this application is relatively small and can fit in the memory of most personal computers. However, incorporating more sections of the 10-K reports or other financial corpora may pose computational challenges due to the increased number of dense text feature matrices. TSRGA can easily handle such cases when feature-distributed data are inevitable.
\begin{table}
\begin{tabular}{l r r r r} \hline \hline & Volatility & Volume & Beta & Return \\ VAR & 0.782 & 0.323 & 0.583 & 0.077 \\ gVAR & 0.750 & 0.319 & 0.556 & 0.073 \\ \hline \hline \(K=50\) & & & & \\ Lasso & 0.718\({}^{\dagger}\) & 0.310 & 0.574 & 0.075 \\ TSRGA & **0.703\({}^{\ddagger}\)** & 0.328 & 0.572 & 0.073\({}^{\dagger}\) \\ \(K=100\) & & & & \\ Lasso & **0.700\({}^{\ddagger}\)** & 0.308 & 0.574 & 0.074 \\ TSRGA & **0.678\({}^{\ddagger}\)** & 0.330 & 0.571 & 0.073\({}^{\dagger}\) \\ \(K=150\) & & & & \\ Lasso & **0.693\({}^{\ddagger}\)** & 0.308 & 0.571 & 0.073\({}^{\dagger}\) \\ TSRGA & **0.681\({}^{\ddagger}\)** & 0.332 & 0.573 & 0.073\({}^{\dagger}\) \\ \(K=200\) & & & & \\ Lasso & **0.684\({}^{\ddagger}\)** & 0.309 & 0.574 & 0.073\({}^{\dagger}\) \\ TSRGA & **0.654\({}^{\ddagger}\)** & 0.334 & 0.574 & 0.073\({}^{\dagger}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Root mean squared prediction errors on the test set. Figures in boldface are at least 5% below gVAR; \(\dagger\) means 5% below VAR, and \(\ddagger\) means 10% below VAR.
## 6 Horizontal partition for big feature-distributed data
In this section, we briefly discuss the usage of TSRGA when the sample size \(n\), in addition to the dimension \(p_{n}\), is also large so that storing \((\mathbf{Y},\mathbf{X}_{j})\) in one machine is undesirable. In particular, we also horizontally partition the (feature-distributed) data matrices and employ more computing nodes.
To fix ideas, for \(h=1,2,\ldots,H\), let
\[\mathbf{Y}_{(h)}=(\mathbf{y}_{m_{h-1}+1},\ldots,\mathbf{y}_{m_{h}})^{\top}, \text{ and }\quad\mathbf{X}_{j,(h)}=(\mathbf{x}_{m_{h-1}+1,j},\ldots,\mathbf{x}_{m_{h},j })^{\top}\]
be horizontal partitions of \(\mathbf{Y}\) and \(\mathbf{X}_{j}\), \(j=1,\ldots,p_{n}\), where \(0=m_{0}<m_{1}<\ldots<m_{H}=n\). In the distributed computing system, we label the nodes by \((h,c)\), so that the \((h,c)\)-th node owns data \(\mathbf{Y}_{(h)}\) and \(\{\mathbf{X}_{j,(h)}:j\in\mathcal{I}_{c}\}\), where \(h\in[H]\), \(c\in[M]\) and \(\cup_{c\in[M]}\mathcal{I}_{c}=[p_{n}]\). For ease in illustration, we further assume \(\{\mathcal{I}_{c}:c\in[M]\}\) forms a partition of \([p_{n}]\). Therefore, each computing node only owns a slice of the samples on a subset \(\mathcal{I}_{c}\) of the predictors as well as the same slice of the response variables. Moreover, let \(I(j)=\{(h,c):j\in\mathcal{I}_{c}\}\) be the indices of the nodes that have some observations of predictor \(j\).
We call the nodes that own the \(h\)-th slice of data "segment \(h\)". That is, \(\{(k,c):k=h\}\). Note that each segment is essentially the feature-distributed framework discussed in the previous sections. In what follows, quantities computed at nodes in segment \(h\) carry a subscript \((h)\). For example, \(\hat{\mathbf{\Sigma}}_{j,(h)}=n_{h}^{-1}\mathbf{X}_{j,(h)}^{\top}\mathbf{X}_{j,(h)}\), where \(n_{h}=m_{h}-m_{h-1}\). For simplicity, we also assume \(n_{1}=\ldots=n_{H}\) in this section. Finally, we again assume there is at least one master node to coordinate all the computing nodes \(\{(h,c):h\in[H],c\in[M]\}\).
To estimate (2) with the horizontally partitioned feature-distributed data described above, we suggest the following procedure. First, we obtain a set of potentially relevant predictors \(\hat{J}\) and their respective upper bounds on the coefficient ranks \(\hat{r}_{j}\) by running the first-stage RGA with the just-in-time stopping criterion. This can be done by applying Algorithm 1 to one segment. Alternatively, one can apply it to multiple segments in parallel and set \(\hat{J}=\cap_{h}\hat{J}_{(h)}\) and \(\hat{r}_{j}=\min_{h}\hat{r}_{j,(h)}\). In either case, Theorem 1 ensures the sure-screening property as \(n_{1}\to\infty\) if (C1)-(C4) hold in each of the segments. By Lemma 2, this step costs \(O_{p}(s_{n}^{2}(n_{1}+d_{n}))\) bytes of communication per node in the segment(s) involved.
Next, for each \(j\in\hat{J}\), each node \((h,c)\in I(j)\) computes \(\mathbf{X}_{j,(h)}^{\top}\mathbf{X}_{j,(h)}\) and, if \(q_{n_{j}}\wedge d_{n}>\hat{r}=\sum_{j}\hat{r}_{j}\), additionally computes \(\mathbf{X}_{j,(h)}^{\top}\mathbf{Y}_{(h)}\). Then, send these matrices to the master node. The master node computes \(\hat{\mathbf{\Sigma}}_{j}^{-1}=(\sum_{h=1}^{H}\mathbf{X}_{j,(h)}^{\top} \mathbf{X}_{j,(h)})^{-1}\) and the leading \(\hat{r}\) singular vectors of \(\sum_{h=1}^{H}\mathbf{X}_{j,(h)}^{\top}\mathbf{Y}_{(h)}\), which form the column vectors of \(\mathbf{U}_{j}\) and \(\mathbf{V}_{j}\). Then \((\hat{\mathbf{\Sigma}}_{j}^{-1},\mathbf{U}_{j},\mathbf{V}_{j})\) (or just \(\hat{\mathbf{\Sigma}}_{j}^{-1}\) if \(q_{n,j}\wedge d_{n}\leq\hat{r}\)) are sent back to \(I(j)\). This step costs \(O_{p}(\sum_{j\in\hat{J}}\{q_{n,j}^{2}+(q_{n,j}d_{n}+\hat{r}(q_{n,j}+d_{n})) \mathbf{1}\{q_{n,j}\wedge d_{n}>\hat{r}\}\}\) bytes of communication per node.
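A minimal sketch of this aggregation step for a single predictor \(j\), with synthetic segment slices: each segment contributes its Gram matrix and cross-product, and the master forms \(\hat{\mathbf{\Sigma}}_{j}^{-1}\) (up to the \(1/n\) normalization) and the leading \(\hat{r}\) singular vectors. Dimensions and data here are placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
H, n_h, q_j, d = 4, 50, 8, 12   # segments, rows per segment, predictor width, responses
r_hat = 3                       # total rank budget from the first stage

# Each segment h holds its slice of predictor j and of the responses.
X_slices = [rng.normal(size=(n_h, q_j)) for _ in range(H)]
Y_slices = [rng.normal(size=(n_h, d)) for _ in range(H)]

# Segments send X_{j,(h)}^T X_{j,(h)} and X_{j,(h)}^T Y_(h); the master sums them.
gram = sum(X.T @ X for X in X_slices)                     # q_j x q_j
cross = sum(X.T @ Y for X, Y in zip(X_slices, Y_slices))  # q_j x d

Sigma_inv = np.linalg.inv(gram)                  # Sigma_j^{-1} up to the 1/n factor
U_full, _, Vt_full = np.linalg.svd(cross, full_matrices=False)
U_j, V_j = U_full[:, :r_hat], Vt_full[:r_hat].T  # leading r_hat singular vectors

# (Sigma_inv, U_j, V_j) would then be broadcast back to the nodes in I(j).
print(Sigma_inv.shape, U_j.shape, V_j.shape)
```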
Now we can start the second-stage RGA iterations. Initialize \(\hat{\mathbf{G}}_{(h)}^{(0)}=\mathbf{0}\) and \(\hat{\mathbf{U}}_{(h)}^{(0)}=\mathbf{Y}_{(h)}\) at each computing node. At iteration \(k\), for each \(j\in\hat{J}\), nodes in \(I(j)\) send \(\mathbf{U}_{j}^{\top}\hat{\mathbf{\Sigma}}_{j}^{-1}\mathbf{X}_{j,(h)}^{\top}\hat{\mathbf{U}}_{(h)}^{(k-1)}\mathbf{V}_{j}\) to the master. The master aggregates the matrices
\[\left\{\mathbf{P}_{j}=\sum_{h=1}^{H}\mathbf{U}_{j}^{\top}\hat{\mathbf{\Sigma}}_ {j}^{-1}\mathbf{X}_{j,(h)}^{\top}\hat{\mathbf{U}}_{(h)}^{(k-1)}\mathbf{V}_{j}: j\in\hat{J}\right\},\]
and decides \(\hat{j}_{k}=\arg\max_{j\in\hat{J}}\sigma_{1}(\mathbf{P}_{j})\) and \(\hat{\mathbf{S}}_{k}=L_{n}\mathbf{u}\mathbf{v}^{\top}\), where \((\mathbf{u},\mathbf{v})\) are the leading singular vectors of \(\mathbf{P}_{\hat{j}_{k}}\). The master node sends \(\hat{\mathbf{S}}_{k}\) to the nodes in \(I(\hat{j}_{k})\). Sending the matrix \(\mathbf{U}_{j}^{\top}\hat{\mathbf{\Sigma}}_{j}^{-1}\mathbf{X}_{j,(h)}^{\top}\hat{\mathbf{U}}_{(h)}^{(k-1)}\mathbf{V}_{j}\) requires \(O(\hat{r}^{2})\) bytes of communication if \(q_{n,j}\wedge d_{n}>\hat{r}\), and \(O(q_{n,j}d_{n})\) bytes otherwise. Each computing node also receives \(O(\hat{r})\) or \(O(q_{n,j}+d_{n})\) bytes of data from the master, depending on whether \(q_{n,\hat{j}_{k}}\wedge d_{n}\) is greater than \(\hat{r}\).
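The greedy-selection step at the master can be sketched as follows; the per-segment contributions are synthetic stand-ins for \(\mathbf{U}_{j}^{\top}\hat{\mathbf{\Sigma}}_{j}^{-1}\mathbf{X}_{j,(h)}^{\top}\hat{\mathbf{U}}_{(h)}^{(k-1)}\mathbf{V}_{j}\).

```python
import numpy as np

rng = np.random.default_rng(3)
H, r_hat, L_n = 4, 3, 1.0

# Hypothetical per-segment r_hat x r_hat contributions for two candidate predictors;
# in practice each segment in I(j) sends its own piece to the master.
contributions = {j: [rng.normal(size=(r_hat, r_hat)) for _ in range(H)] for j in (0, 1)}

# Master: sum over segments and pick the predictor with the largest leading singular value.
P = {j: sum(parts) for j, parts in contributions.items()}
j_hat = max(P, key=lambda j: np.linalg.svd(P[j], compute_uv=False)[0])

# S_hat_k = L_n * u v^T from the leading singular vectors of P_{j_hat}.
u_mat, _, vt_mat = np.linalg.svd(P[j_hat])
S_hat = L_n * np.outer(u_mat[:, 0], vt_mat[0])
print("selected predictor:", j_hat, "rank-one update shape:", S_hat.shape)
```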
To compute \(\hat{\lambda}_{k}\), each node \((h,c)\in I(\hat{j}_{k})\) computes and sends to the master
\[\mathbf{A}_{h}=\hat{\mathbf{U}}_{(h)}^{(k-1)\top}\mathbf{X}_{\hat{j}_{k},(h) }\hat{\mathbf{\Sigma}}_{\hat{j}_{k}}^{-1}\mathbf{U}_{\hat{j}_{k}}\hat{\mathbf{ S}}_{k}\mathbf{V}_{\hat{j}_{k}}^{\top}-\hat{\mathbf{U}}_{(h)}^{(k-1)\top}\hat{ \mathbf{G}}_{(h)}^{(k-1)},\]
and
\[a_{h}=\|\mathbf{X}_{\hat{j}_{k},(h)}\hat{\mathbf{\Sigma}}_{\hat{j}_{k}}^{-1} \mathbf{U}_{\hat{j}_{k}}\hat{\mathbf{S}}_{k}\mathbf{V}_{\hat{j}_{k}}^{\top}- \hat{\mathbf{G}}_{(h)}^{(k-1)}\|_{F}^{2}.\]
The master then is able to compute \(\hat{\lambda}_{k}=\max\{\min\{\hat{\lambda}_{k,uc},1\},0\}\), where
\[\hat{\lambda}_{k,uc}=\frac{\operatorname{tr}(\sum_{h=1}^{H}\mathbf{A}_{h})}{ \sum_{h=1}^{H}a_{h}}.\]
Subsequently, \(\hat{\lambda}_{k}\) is sent to all nodes. In this step, because \(\hat{\mathbf{G}}_{(h)}^{(k-1)}\) is of rank at most \(k-1\), sending \(\mathbf{A}_{h}\) costs \(O(d_{n}(k\wedge d_{n}))\) bytes of communication.
Finally, each node \((h,c)\in I(\hat{j}_{k})\) updates
\[\hat{\mathbf{G}}_{(h)}^{(k)}= (1-\hat{\lambda}_{k})\hat{\mathbf{G}}_{(h)}^{(k-1)}+\hat{\lambda }_{k}\mathbf{X}_{\hat{j}_{k},(h)}\hat{\mathbf{\Sigma}}_{\hat{j}_{k}}^{-1} \mathbf{U}_{\hat{j}_{k}}\hat{\mathbf{S}}_{k}\mathbf{V}_{\hat{j}_{k}}^{\top},\] \[\hat{\mathbf{U}}_{(h)}^{(k)}= \mathbf{Y}_{(h)}-\hat{\mathbf{G}}_{(h)}^{(k)},\] \[\hat{\mathbf{B}}_{\hat{j}_{k}}^{(k)}= (1-\hat{\lambda}_{k})\hat{\mathbf{B}}_{\hat{j}_{k}}^{(k-1)}+\hat{ \lambda}_{k}\hat{\mathbf{\Sigma}}_{\hat{j}_{k}}^{-1}\mathbf{U}_{\hat{j}_{k}} \hat{\mathbf{S}}_{k}\mathbf{V}_{\hat{j}_{k}}^{\top},\] \[\hat{\mathbf{B}}_{j}^{(k)}= (1-\hat{\lambda}_{k})\hat{\mathbf{B}}_{j}^{(k-1)},\quad j\in \mathcal{I}_{c}-\{\hat{j}_{k}\},\]
and also sends (possibly via the master node) the matrix \(\mathbf{X}_{\hat{j}_{k},(h)}\hat{\mathbf{\Sigma}}_{\hat{j}_{k}}^{-1}\mathbf{U} _{\hat{j}_{k}}\hat{\mathbf{S}}_{k}\mathbf{V}_{\hat{j}_{k}}^{\top}\) (which is of rank one and costs \(O(n_{1}+d_{n})\) bytes of communication) to the nodes \(\{(h,c^{\prime}):c^{\prime}\neq c\}\). Then the node \((h,c^{\prime})\notin I(\hat{j}_{k})\) is able to update \(\hat{\mathbf{G}}_{(h)}^{(k)}\), \(\hat{\mathbf{U}}_{(h)}^{(k)}\), and \(\hat{\mathbf{B}}_{j}^{(k)}\) as above.
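The line search and the per-node update can be sketched jointly; the matrices below are synthetic placeholders for the segment slices, and the only point is the arithmetic: sum the traces of \(\mathbf{A}_{h}\), divide by \(\sum_{h}a_{h}\), clip to \([0,1]\), and then form the convex combination.

```python
import numpy as np

rng = np.random.default_rng(4)
H, n_h, d = 4, 50, 12

# F[h] stands in for the rank-one candidate fit X_{j_k,(h)} Sigma^{-1} U S_hat V^T
# restricted to segment h; G_prev and U_prev are the segment's current fit and residual.
F = [np.outer(rng.normal(size=n_h), rng.normal(size=d)) for _ in range(H)]
G_prev = [np.zeros((n_h, d)) for _ in range(H)]
U_prev = [rng.normal(size=(n_h, d)) for _ in range(H)]   # equals Y_(h) - G_prev[h]

# Each segment sends A_h = U^T (F - G) and a_h = ||F - G||_F^2; the master
# forms the unconstrained step and clips it to [0, 1].
A = [U_prev[h].T @ (F[h] - G_prev[h]) for h in range(H)]
a = [np.linalg.norm(F[h] - G_prev[h], "fro") ** 2 for h in range(H)]
lam = np.clip(sum(np.trace(Ah) for Ah in A) / sum(a), 0.0, 1.0)

# Per-node update of the running fit and of the residual slice.
G_new = [(1 - lam) * G_prev[h] + lam * F[h] for h in range(H)]
U_new = [U_prev[h] + G_prev[h] - G_new[h] for h in range(H)]  # = Y_(h) - G_new[h]
print("lambda_k =", round(float(lam), 4))
```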
It can be verified the above procedure implements the second-stage RGA. Moreover, the communication cost for node \((h,c)\) at the \(k\)-th iteration is at most
\[O\left(\sum_{j\in\hat{J}\cap\mathcal{I}_{c}}\left(\hat{r}^{2}\mathbf{1}\{q_{n,j }\wedge d_{n}>\hat{r}\}+q_{n,j}d_{n}\mathbf{1}\{q_{n,j}\wedge d_{n}\leq\hat{r} \}\right)+d_{n}k+n_{1}\right).\]
As a result, the above procedure to implement TSRGA has the following guarantee.
**Corollary 5**: _Suppose \(\hat{J}\) and \(\{\hat{r}_{j}:j\in\hat{J}\}\) satisfy the sure-screening property (12) as \(n_{1}\rightarrow\infty\), and assume (C1)-(C6). If \(\max_{1\leq j\leq p_{n}}q_{n,j}=O(n_{1}^{\alpha})\), then the above procedure achieves an error of order_
\[\frac{1}{d_{n}}\sum_{j=1}^{p_{n}}\|\mathbf{B}_{j}^{*}-\hat{\mathbf{B}}_{j}\|_{F }^{2}=O_{p}\left(\frac{\mathfrak{s}_{n}\xi_{n}^{2}}{n^{2}d_{n}}\log\frac{n^{2}d_{ n}}{\xi_{n}^{2}}+\frac{\xi_{n}^{2}}{n^{2}\delta_{n}^{2}}\mathbf{1}\{J_{o}\neq \emptyset\}\right)\]
_with a communication complexity per computing node of order_
\[O_{p}\left(n_{1}^{\max\{2\alpha,1\}}s_{n}^{2}+(s_{n}^{2}n_{1}^{\alpha}d_{n}+n_{1 })\log\frac{n^{2}d_{n}}{\xi_{n}^{2}}+s_{n}^{10}\log\frac{n^{2}d_{n}}{\xi_{n}^{2 }}+d_{n}s_{n}^{8}\left(\log\frac{n^{2}d_{n}}{\xi_{n}^{2}}\right)^{2}\right).\]
The proof of Corollary 5 is an accounting of the communication costs shown above, whose details are relegated to Appendix C. The communication complexity is still free of the ambient dimension \(p_{n}\), but the dimension of the predictors \(\max_{1\leq j\leq p_{n}}q_{n,j}\) comes into play, which was not a factor in the purely feature-distributed case. The additional communication between segments could inflate the communication complexity compared to the purely feature-distributed case. If \(\alpha\leq 0.5\) and \(s_{n}=O(1)\), the communication complexity, up to poly-logarithmic factors, reduces to \(O_{p}(n_{1}+n_{1}^{\alpha}d_{n}+d_{n})\), which is no larger than the purely feature-distributed case \(O_{p}(n_{1}+d_{n})\) if \(d_{n}=O(n_{1}^{1-\alpha})\). On the other hand, if \(\alpha>0.5\) and \(s_{n}=O(1)\), the communication complexity becomes \(O_{p}(n_{1}^{2\alpha}+n_{1}^{\alpha}d_{n})\) (again ignoring poly-logarithmic terms), which is higher than in the purely feature-distributed case. These costs are incurred in the greedy search as well as in the determination of \(\hat{\lambda}_{k}\). Finally, we note that the above procedure is sequential, and certain improvements could be achieved with a carefully designed communication protocol. However, methods for speeding up convergence or lowering the communication of the proposed TSRGA with horizontal partition are left for future research.
## 7 Conclusion
This paper presented a two-stage relaxed greedy algorithm (TSRGA) for estimating high-dimensional multivariate linear regression models with feature-distributed data. Our main contribution is that the communication complexity of TSRGA is independent of the feature dimension, which is often very large in feature-distributed data. Instead, the complexity depends on the sparsity of the underlying model, making the proposed approach a highly scalable and efficient method for analyzing large data sets. We also briefly discussed applying TSRGA to huge data sets that require both vertical and horizontal partitions.
We would like to point out a possible future extension. In some applications, it is of paramount importance to protect the privacy of each node's data. Thus, modifying TSRGA so that privacy can be guaranteed for feature-distributed data is an important direction for future research.
## Acknowledgments and Disclosure of Funding
We acknowledge the University of Chicago Research Computing Center for support of this work.
## Appendix A Second-stage RGA with feature-distributed data
The following algorithm presents the pseudo-code for the implementation of the second-stage RGA with feature-distributed data.
```
Input: number of required iterations \(K_{n}\), \(L_{n}>0\), pre-selected \(\hat{J}\).
Output: each worker \(1\leq c\leq M\) holds the coefficient matrices \(\{\hat{\mathbf{B}}_{j}:j\in\mathcal{I}_{c}\}\) to use for prediction.
Initialization: \(\hat{\mathbf{B}}_{j}=\mathbf{0}\) for all \(j\), and \(\hat{\mathbf{G}}^{(0)}=\mathbf{0}\).

for \(k=1,2,\ldots,K_{n}\) do
    Workers \(c=1,2,\ldots,M\) in parallel do
        if \(k>1\) then
            Receive \((c^{*},\hat{\lambda}_{k-1},\sigma_{\hat{j}_{k-1}},\mathbf{u}_{\hat{j}_{k-1}},\mathbf{v}_{\hat{j}_{k-1}})\) from the master.
            \(\hat{\mathbf{G}}^{(k-1)}=(1-\hat{\lambda}_{k-1})\hat{\mathbf{G}}^{(k-2)}+\hat{\lambda}_{k-1}\sigma_{\hat{j}_{k-1}}\mathbf{u}_{\hat{j}_{k-1}}\mathbf{v}_{\hat{j}_{k-1}}^{\top}\).
            \(\hat{\mathbf{B}}_{j}=(1-\hat{\lambda}_{k-1})\hat{\mathbf{B}}_{j}\) for \(j\in\mathcal{I}_{c}\cap\hat{J}\).
            if \(c=c^{*}\) then
                \(\hat{\mathbf{B}}_{\hat{j}_{k-1}}=\hat{\mathbf{B}}_{\hat{j}_{k-1}}+\hat{\lambda}_{k-1}\hat{\mathbf{\Sigma}}_{\hat{j}_{k-1}}^{-1}\mathbf{U}_{\hat{j}_{k-1}}\hat{\mathbf{S}}_{k-1}^{(c)}\mathbf{V}_{\hat{j}_{k-1}}^{\top}\).
            end if
        end if
        \(\hat{\mathbf{U}}^{(k-1)}=\mathbf{Y}-\hat{\mathbf{G}}^{(k-1)}\).
        \((\hat{j}_{k}^{(c)},\hat{\mathbf{S}}_{k}^{(c)})\in\arg\max_{j\in\mathcal{I}_{c}\cap\hat{J},\,\|\mathbf{S}\|_{*}\leq L_{n}}|\langle\hat{\mathbf{U}}^{(k-1)},\mathbf{X}_{j}\hat{\mathbf{\Sigma}}_{j}^{-1}\mathbf{U}_{j}\mathbf{S}\mathbf{V}_{j}^{\top}\rangle|\).
        \(\rho_{c}=|\langle\hat{\mathbf{U}}^{(k-1)},\mathbf{X}_{\hat{j}_{k}^{(c)}}\hat{\mathbf{\Sigma}}_{\hat{j}_{k}^{(c)}}^{-1}\mathbf{U}_{\hat{j}_{k}^{(c)}}\hat{\mathbf{S}}_{k}^{(c)}\mathbf{V}_{\hat{j}_{k}^{(c)}}^{\top}\rangle|\).
        Compute the leading singular value decomposition
            \(\mathbf{X}_{\hat{j}_{k}^{(c)}}\hat{\mathbf{\Sigma}}_{\hat{j}_{k}^{(c)}}^{-1}\mathbf{U}_{\hat{j}_{k}^{(c)}}\hat{\mathbf{S}}_{k}^{(c)}\mathbf{V}_{\hat{j}_{k}^{(c)}}^{\top}=\sigma_{\hat{j}_{k}^{(c)}}\mathbf{u}_{\hat{j}_{k}^{(c)}}\mathbf{v}_{\hat{j}_{k}^{(c)}}^{\top}\).
        Send \((\sigma_{\hat{j}_{k}^{(c)}},\mathbf{u}_{\hat{j}_{k}^{(c)}},\mathbf{v}_{\hat{j}_{k}^{(c)}},\rho_{c})\) to the master.
    end (parallel workers)
    Master do
        Receive \(\{(\sigma_{\hat{j}_{k}^{(c)}},\mathbf{u}_{\hat{j}_{k}^{(c)}},\mathbf{v}_{\hat{j}_{k}^{(c)}},\rho_{c}):c=1,2,\ldots,M\}\) from the workers.
        \(c^{*}=\arg\max_{1\leq c\leq M}\rho_{c}\); set \(\sigma_{\hat{j}_{k}}=\sigma_{\hat{j}_{k}^{(c^{*})}}\), \(\mathbf{u}_{\hat{j}_{k}}=\mathbf{u}_{\hat{j}_{k}^{(c^{*})}}\), \(\mathbf{v}_{\hat{j}_{k}}=\mathbf{v}_{\hat{j}_{k}^{(c^{*})}}\).
        \(\hat{\mathbf{G}}^{(k)}=(1-\hat{\lambda}_{k})\hat{\mathbf{G}}^{(k-1)}+\hat{\lambda}_{k}\sigma_{\hat{j}_{k}}\mathbf{u}_{\hat{j}_{k}}\mathbf{v}_{\hat{j}_{k}}^{\top}\), where
            \(\hat{\lambda}_{k}\in\arg\min_{0\leq\lambda\leq 1}\|\mathbf{Y}-(1-\lambda)\hat{\mathbf{G}}^{(k-1)}-\lambda\sigma_{\hat{j}_{k}}\mathbf{u}_{\hat{j}_{k}}\mathbf{v}_{\hat{j}_{k}}^{\top}\|_{F}^{2}\).
        Broadcast \((c^{*},\hat{\lambda}_{k},\sigma_{\hat{j}_{k}},\mathbf{u}_{\hat{j}_{k}},\mathbf{v}_{\hat{j}_{k}})\) to all workers.
    end
end for
```
**Algorithm 2** Feature-distributed second-stage RGA
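A minimal in-process sketch of the arithmetic behind Algorithm 2 (no actual master/worker message passing: all predictors are visited in one loop, and the data, dimensions, tuning constants, and iteration count are illustrative only):

```python
import numpy as np

def second_stage_rga_sim(Y, X_list, r_hat, L_n, n_iter=50):
    """In-process simulation of the second-stage RGA updates."""
    n, d = Y.shape
    # Quantities prepared before the second stage starts.
    Sigma_inv = [np.linalg.inv(X.T @ X / n) for X in X_list]
    svds = [np.linalg.svd(X.T @ Y, full_matrices=False) for X in X_list]
    U_j = [u[:, :r_hat] for u, s, vt in svds]
    V_j = [vt[:r_hat].T for u, s, vt in svds]

    G = np.zeros_like(Y)
    B = [np.zeros((X.shape[1], d)) for X in X_list]
    for _ in range(n_iter):
        resid = Y - G
        # Greedy search: best rank-one S with ||S||_* <= L_n for each predictor.
        best = None
        for j, X in enumerate(X_list):
            M = U_j[j].T @ Sigma_inv[j] @ X.T @ resid @ V_j[j]
            u, s, vt = np.linalg.svd(M)
            if best is None or L_n * s[0] > best[0]:
                best = (L_n * s[0], j, L_n * np.outer(u[:, 0], vt[0]))
        _, jk, S = best
        F = X_list[jk] @ Sigma_inv[jk] @ U_j[jk] @ S @ V_j[jk].T
        # Line search clipped to [0, 1], then the convex-combination update.
        lam = np.clip(np.sum((Y - G) * (F - G)) / np.linalg.norm(F - G, "fro") ** 2, 0.0, 1.0)
        B = [(1 - lam) * Bj for Bj in B]
        B[jk] = B[jk] + lam * Sigma_inv[jk] @ U_j[jk] @ S @ V_j[jk].T
        G = (1 - lam) * G + lam * F
    return B, G

rng = np.random.default_rng(5)
n, d = 200, 10
X_list = [rng.normal(size=(n, 6)) for _ in range(5)]
B_true = [np.outer(rng.normal(size=6), rng.normal(size=d)) if j < 2 else np.zeros((6, d))
          for j in range(5)]
Y = sum(X @ B for X, B in zip(X_list, B_true)) + 0.1 * rng.normal(size=(n, d))
B_hat, G_hat = second_stage_rga_sim(Y, X_list, r_hat=2, L_n=50.0)
print("relative fit error:", np.linalg.norm(Y - G_hat) / np.linalg.norm(Y))
```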
## Appendix B Proofs
This section presents the essential elements of the proofs of our main results. Further technical details are relegated to Appendix C.
The analysis of TSRGA relies on what we call the "noiseless updates," a theoretical device constructed as follows. Initialize \(\mathbf{G}^{(0)}=\mathbf{0}\) and \(\mathbf{U}^{(0)}=\tilde{\mathbf{Y}}\). For \(1\leq k\leq K_{n}\), suppose \((\hat{j}_{k},\tilde{\mathbf{B}}_{\hat{j}_{k}})\) is chosen according to (3) by the first-stage RGA. The noiseless updates are defined as
\[\mathbf{G}^{(k)}= (1-\lambda_{k})\hat{\mathbf{G}}^{(k-1)}+\lambda_{k}\mathbf{X}_{ \hat{j}_{k}}\tilde{\mathbf{B}}_{\hat{j}_{k}}, \tag{23}\]
where
\[\lambda_{k}\in\arg\min_{0\leq\lambda\leq 1}\|\tilde{\mathbf{Y}}-(1- \lambda)\hat{\mathbf{G}}^{(k-1)}-\lambda\mathbf{X}_{\hat{j}_{k}}\tilde{ \mathbf{B}}_{\hat{j}_{k}}\|_{F}^{2}. \tag{24}\]
Recall that \(\tilde{\mathbf{Y}}=\sum_{j=1}^{p_{n}}\mathbf{X}_{j}\mathbf{B}_{j}^{*}\) is the noise-free part of the response. Thus \(\mathbf{G}^{(k)}\) is unattainable in practice. Similarly, we can define the noiseless updates for the second-stage RGA, with \(\tilde{\mathbf{B}}_{\hat{j}_{k}}\) replaced by \(\hat{\mathbf{\Sigma}}_{\hat{j}_{k}}^{-1}\mathbf{U}_{\hat{j}_{k}}\hat{S}_{k} \mathbf{V}_{\hat{j}_{k}}^{\top}\) in (23) and (24). By definition of the updates, for first- and second-stage RGA,
\[\|\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k)}\|_{F}^{2}\leq \|\tilde{\mathbf{Y}}-\mathbf{G}^{(k)}\|_{F}^{2}+2\langle\mathbf{E},\hat{\mathbf{G}}^{(k)}-\mathbf{G}^{(k)}\rangle\] \[\leq \|\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k-1)}\|_{F}^{2}+2\langle \mathbf{E},\hat{\mathbf{G}}^{(k)}-\mathbf{G}^{(k)}\rangle \tag{25}\]
Recursively applying (25), we have for any \(1\leq l\leq k\),
\[\|\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k)}\|_{F}^{2}\leq \|\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k-l)}\|_{F}^{2}+2\sum_{j= 1}^{l}\langle\mathbf{E},\hat{\mathbf{G}}^{(k-j+1)}-\mathbf{G}^{(k-j+1)}\rangle. \tag{26}\]
(26) bounds the empirical prediction error at step \(k\) by the empirical prediction error at step \(k-l\) and a remainder term involving the noise and the noiseless updates up to step \(l\). This will be handy in numerous places throughout the proofs.
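The line searches in (9) and (24) admit the closed-form unconstrained minimizer \(\langle\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k-1)},\mathbf{F}-\hat{\mathbf{G}}^{(k-1)}\rangle/\|\mathbf{F}-\hat{\mathbf{G}}^{(k-1)}\|_{F}^{2}\) (afterwards clipped to \([0,1]\)), which is used repeatedly below; a quick numerical check on arbitrary synthetic matrices:

```python
import numpy as np

rng = np.random.default_rng(6)
Ytil = rng.normal(size=(40, 6))     # target
G = rng.normal(size=(40, 6))        # current fit
F = rng.normal(size=(40, 6))        # candidate direction

# Closed-form unconstrained minimizer of || Ytil - (1-l) G - l F ||_F^2 over l.
lam_closed = np.sum((Ytil - G) * (F - G)) / np.linalg.norm(F - G, "fro") ** 2

# Brute-force check on a fine grid.
grid = np.linspace(-2, 2, 40001)
obj = [np.linalg.norm(Ytil - (1 - l) * G - l * F) ** 2 for l in grid]
lam_grid = grid[int(np.argmin(obj))]
print(round(float(lam_closed), 3), round(float(lam_grid), 3))
```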
Two other useful identities are
\[\max_{\begin{subarray}{c}1\leq j\leq p_{n}\\ \|\mathbf{B}_{j}\|_{*}\leq L_{n}\end{subarray}}\langle\mathbf{A},\mathbf{X}_{ j}\mathbf{B}_{j}\rangle=\sup_{\begin{subarray}{c}\mathbf{B}_{j}\in\mathbb{R}^{q_{n},j \times d_{n}},j=1,2,\ldots,p_{n}\\ \sum_{j}\|\mathbf{B}_{j}\|_{*}\leq L_{n}\end{subarray}}\left\langle\mathbf{A},\sum_{j=1}^{p_{n}}\mathbf{X}_{j}\mathbf{B}_{j}\right\rangle \tag{27}\]
and
\[\max_{\begin{subarray}{c}j\in J\\ \|\mathbf{S}\|_{*}\leq L_{n}\end{subarray}}\langle\mathbf{A},\mathbf{X}_{j} \hat{\mathbf{\Sigma}}_{j}^{-1}\mathbf{U}_{j}\mathbf{S}\mathbf{V}_{j}^{\top} \rangle=\sup_{\sum_{j\in J}\|\mathbf{S}_{j}\|_{*}\leq L_{n}}\left\langle\mathbf{ A},\sum_{j\in J}\mathbf{X}_{j}\hat{\mathbf{\Sigma}}_{j}^{-1}\mathbf{U}_{j} \mathbf{S}_{j}\mathbf{V}_{j}^{\top}\right\rangle, \tag{28}\]
where \(\mathbf{A}\in\mathbb{R}^{n\times d_{n}}\) is arbitrary. These identities hold because the maximum of the inner product is attained at the extreme points in the \(\ell_{1}\) ball. The proofs are omitted for brevity.
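Identity (27) can also be checked numerically: the inner supremum over a single nuclear-norm ball has the closed form \(L_{n}\sigma_{1}(\mathbf{X}_{j}^{\top}\mathbf{A})\) (the dual-norm relation used throughout), and no feasible allocation of the budget across predictors does better. A small sketch with synthetic matrices:

```python
import numpy as np

rng = np.random.default_rng(7)
n, d, L = 30, 5, 2.0
A = rng.normal(size=(n, d))
X = [rng.normal(size=(n, 4)) for _ in range(3)]

# Left-hand side of (27): put the entire nuclear-norm budget on one predictor;
# the supremum is then L * sigma_1(X_j^T A) by duality of the nuclear norm.
lhs = max(L * np.linalg.svd(Xj.T @ A, compute_uv=False)[0] for Xj in X)

# Right-hand side: random feasible points with sum of nuclear norms <= L
# never exceed the single-predictor value.
best_random = -np.inf
for _ in range(2000):
    B = [rng.normal(size=(4, d)) for _ in X]
    total = sum(np.linalg.norm(Bj, "nuc") for Bj in B)
    B = [L * Bj / total for Bj in B]           # rescale onto the constraint boundary
    val = sum(np.sum(A * (Xj @ Bj)) for Xj, Bj in zip(X, B))
    best_random = max(best_random, val)

print(best_random <= lhs + 1e-9, round(lhs, 3), round(best_random, 3))
```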
We first prove an auxiliary lemma which guarantees sub-linear convergence of the empirical prediction error, whose proof makes use of the noiseless updates introduced above.
**Lemma 6**: _Assume (C1)-(C2) and that \(\sum_{j=1}^{p_{n}}\|{\bf B}_{j}^{*}\|_{*}\leq d_{n}^{1/2}L\). RGA has the following uniform rate of convergence._
\[\max_{1\leq k\leq K_{n}}\frac{(nd_{n})^{-1}\|\tilde{\bf Y}-\hat{\bf G}^{(k)}\|_{ F}^{2}}{k^{-1}}=O_{p}(1). \tag{29}\]
**Proof** Let \(1\leq m\leq K_{n}\) be arbitrary. Note that for any \(1\leq k\leq K_{n}\),
\[\langle\tilde{\bf Y}-\hat{\bf G}^{(k-1)},{\bf X}_{\hat{j}_{k}} \tilde{\bf B}_{\hat{j}_{k}}-\hat{\bf G}^{(k-1)}\rangle\] \[= \langle{\bf Y}-\hat{\bf G}^{(k-1)},{\bf X}_{\hat{j}_{k}}\tilde{ \bf B}_{\hat{j}_{k}}-\hat{\bf G}^{(k-1)}\rangle-\langle{\bf E},{\bf X}_{\hat{j }_{k}}\tilde{\bf B}_{\hat{j}_{k}}-\hat{\bf G}^{(k-1)}\rangle\] \[\geq \max_{\begin{subarray}{c}1\leq j\leq p_{n}\\ \|{\bf B}_{j}\|_{*}\leq L_{n}\end{subarray}}\left\{\langle{\bf Y}-\hat{\bf G} ^{(k-1)},{\bf X}_{j}{\bf B}_{j}-\hat{\bf G}^{(k-1)}\rangle\right\}-2L_{n} \xi_{E}\] \[\geq \max_{\begin{subarray}{c}1\leq j\leq p_{n}\\ \|{\bf B}_{j}\|_{*}\leq L_{n}\end{subarray}}\left\{\langle\tilde{\bf Y}-\hat {\bf G}^{(k-1)},{\bf X}_{j}{\bf B}_{j}-\hat{\bf G}^{(k-1)}\rangle\right\}-4L_{ n}\xi_{E}. \tag{30}\]
Put
\[{\cal E}_{n}(m)= \left\{\min_{1\leq l\leq m}\max_{\begin{subarray}{c}1\leq j\leq p _{n}\\ \|{\bf B}_{j}\|_{*}\leq L_{n}\end{subarray}}\langle\tilde{\bf Y}-\hat{\bf G}^{ (l-1)},{\bf X}_{j}{\bf B}_{j}-\hat{\bf G}^{(l-1)}\rangle>\tilde{\tau}d_{n}^{1/2 }\xi_{E}\right\}, \tag{31}\]
for some \(\tilde{\tau}>4L_{0}\). It follows from (27) and (30) that on \({\cal E}_{n}(m)\), for all \(1\leq k\leq m\),
\[\langle\tilde{\bf Y}-\hat{\bf G}^{(k-1)},{\bf X}_{\hat{j}_{k}} \tilde{\bf B}_{\hat{j}_{k}}-\hat{\bf G}^{(k-1)}\rangle\] \[\geq (1-\frac{4L_{0}}{\tilde{\tau}})\max_{\begin{subarray}{c}1\leq j \leq p_{n}\\ \|{\bf B}_{j}\|_{*}\leq L\end{subarray}}\left\{\langle\tilde{\bf Y}-\hat{\bf G }^{(k-1)},{\bf X}_{j}{\bf B}_{j}-\hat{\bf G}^{(k-1)}\rangle\right\}\] \[\geq (1-\frac{4L_{0}}{\tilde{\tau}})\|\tilde{\bf Y}-\hat{\bf G}^{(k-1) }\|_{F}^{2}\] \[:= \tau\|\tilde{\bf Y}-\hat{\bf G}^{(k-1)}\|_{F}^{2}\] \[\geq 0, \tag{32}\]
where \(\tau=1-4L_{0}/\tilde{\tau}\). This, together with Lemma 10(iii) in Appendix C, implies
\[\lambda_{k}=\frac{\langle\tilde{\bf Y}-\hat{\bf G}^{(k-1)},{\bf X}_{\hat{j}_{ k}}\tilde{\bf B}_{\hat{j}_{k}}-\hat{\bf G}^{(k-1)}\rangle}{\|{\bf X}_{\hat{j}_{k}} \tilde{\bf B}_{\hat{j}_{k}}-\hat{\bf G}^{(k-1)}\|_{F}^{2}}\]
for \(1\leq k\leq m\) on \({\cal E}_{n}(m)\) except for a vanishing event. This, combined with (25) and (32), yields
\[\|\tilde{\bf Y}-\hat{\bf G}^{(k)}\|_{F}^{2}\leq \|\tilde{\bf Y}-{\bf G}^{(k)}\|_{F}^{2}+2\langle{\bf E},\hat{\bf G}^{(k)}-{\bf G}^{(k)}\rangle\] \[= \|\tilde{\bf Y}-\hat{\bf G}^{(k-1)}-\lambda_{k}({\bf X}_{\hat{j}_{k}}\tilde{\bf B}_{\hat{j}_{k}}-\hat{\bf G}^{(k-1)})\|_{F}^{2}+2\langle{\bf E},\hat{\bf G}^{(k)}-{\bf G}^{(k)}\rangle\] \[= \|\tilde{\bf Y}-\hat{\bf G}^{(k-1)}\|_{F}^{2}-\frac{\langle\tilde{\bf Y}-\hat{\bf G}^{(k-1)},{\bf X}_{\hat{j}_{k}}\tilde{\bf B}_{\hat{j}_{k}}-\hat{\bf G}^{(k-1)}\rangle^{2}}{\|{\bf X}_{\hat{j}_{k}}\tilde{\bf B}_{\hat{j}_{k}}-\hat{\bf G}^{(k-1)}\|_{F}^{2}}+2\langle{\bf E},\hat{\bf G}^{(k)}-{\bf G}^{(k)}\rangle\] \[\leq \|\tilde{\bf Y}-\hat{\bf G}^{(k-1)}\|_{F}^{2}\left\{1-\frac{\tau^{2}\|\tilde{\bf Y}-\hat{\bf G}^{(k-1)}\|_{F}^{2}}{\|{\bf X}_{\hat{j}_{k}}\tilde{\bf B}_{\hat{j}_{k}}-\hat{\bf G}^{(k-1)}\|_{F}^{2}}\right\}+2\langle{\bf E},\hat{\bf G}^{(k)}-{\bf G}^{(k)}\rangle \tag{33}\]
for all \(1\leq k\leq m\) on \(\mathcal{E}_{n}(m)\) except for a vanishing event. By (C1), with probability tending to one, \(\|\mathbf{X}_{\hat{j}_{k}}\mathbf{\tilde{B}}_{\hat{j}_{k}}-\mathbf{\hat{G}}^{(k- 1)}\|_{F}^{2}\leq 4L_{n}^{2}n\mu\) and \(\|\tilde{\mathbf{Y}}\|_{F}^{2}\leq(1-\epsilon_{L})^{2}L_{n}^{2}n\mu\). Now by Lemma 11 and Lemma 10(ii) in Appendix C, we have
\[\frac{1}{nd_{n}}\|\tilde{\mathbf{Y}}-\mathbf{\hat{G}}^{(m)}\|_{F}^ {2}\leq \frac{4L_{0}^{2}\mu}{1+m\tau^{2}}+2\sum_{l=1}^{m}\frac{|\langle \mathbf{E},\mathbf{\hat{G}}^{(l)}-\mathbf{G}^{(l)}\rangle|}{nd_{n}}\] \[= \frac{4L_{0}^{2}\mu}{1+m\tau^{2}}+2\sum_{l=1}^{m}|\hat{\lambda}_{ l}-\lambda_{l}|\frac{|\langle\mathbf{E},\mathbf{X}_{\hat{j}_{l}}\mathbf{\tilde{B}}_{ \hat{j}_{l}}-\mathbf{\hat{G}}^{(l-1)}\rangle|}{nd_{n}}\] \[\leq \frac{4L_{0}^{2}\mu}{1+m\tau^{2}}+\frac{8}{1-\epsilon_{L}}\frac{ m\xi_{E}^{2}}{n^{2}d_{n}}, \tag{34}\]
on \(\mathcal{E}_{n}(m)\) except for a vanishing event. Note that by (C2), \(m\xi_{E}^{2}/(n^{2}d_{n})\leq m^{-1}(K_{n}\xi_{E}/(nd_{n}^{1/2}))^{2}=O_{p}(m^ {-1})\). Furthermore, it is shown in Appendix C that on \(\mathcal{E}_{n}^{c}(m)\) except for a vanishing event,
\[\frac{1}{nd_{n}}\|\tilde{\mathbf{Y}}-\mathbf{\hat{G}}^{(m)}\|_{F}^{2}\leq \frac{\tilde{\tau}\xi_{E}}{n\sqrt{d_{n}}}+\frac{8m\xi_{E}^{2}}{(1-\epsilon_{L })n^{2}d_{n}}. \tag{35}\]
Combining (34) and (35) yields the desired result.
Now we are ready to prove the main results.
**Proof** [Proof of Theorem 1] Since \(d_{n}^{1/2}L\geq\sum_{j=1}^{p_{n}}\|\mathbf{B}_{j}^{*}\|_{*}\geq\sharp(J_{n})\min_{j\in J_{n}}\sigma_{r_{j}^{*}}(\mathbf{B}_{j}^{*})\) and \(s_{n}=o(K_{n}^{2})\), it follows that \(\sharp(J_{n})=o(K_{n})\), and by (C3), with probability tending to one, \(\lambda_{\min}(\mathbf{X}(\hat{J}_{k}\cup J_{n})^{\top}\mathbf{X}(\hat{J}_{k}\cup J_{n}))\geq n\mu^{-1}\), for all \(1\leq k\leq K_{n}\), where \(\hat{J}_{k}=\{\hat{j}_{1},\hat{j}_{2},\ldots,\hat{j}_{k}\}\). Let \(\mathcal{G}_{n}=\{\text{there exists some }j\text{ such that }\text{rank}(\mathbf{B}_{j}^{*})>\text{rank}(\hat{\mathbf{B}}_{j}^{(\hat{k})})\}\). Then on \(\mathcal{G}_{n}\) except for a vanishing event, it follows from (27), (C3), the Eckart-Young theorem and (C4) that
\[\min_{1\leq m\leq\hat{k}}\max_{\begin{subarray}{c}1\leq j\leq p_{ n}\\ \|\mathbf{B}_{j}\|_{*}\leq L\end{subarray}}\langle\tilde{\mathbf{Y}}-\mathbf{\hat{G}}^{(m )},\mathbf{X}_{j}\mathbf{B}_{j}-\mathbf{\hat{G}}^{(m)}\rangle \geq\min_{1\leq m\leq\hat{k}}\|\tilde{\mathbf{Y}}-\mathbf{\hat{G}} ^{(m)}\|_{F}^{2}\] \[\geq n\mu^{-1}\min_{1\leq m\leq\hat{k}}\|\mathbf{B}_{j}^{*}-\hat{ \mathbf{B}}_{j}^{(m)}\|_{F}^{2}\] \[\geq n\mu^{-1}\min_{\text{rank}(\mathbf{B})<r_{j}^{*}}\|\mathbf{B}_{j} ^{*}-\mathbf{B}\|_{F}^{2}\] \[\geq \frac{nd_{n}}{\mu s_{n}}. \tag{36}\]
By (36), (C2) and (C4), we have \(\lim_{n\to\infty}\mathbb{P}\left(\mathcal{G}_{n}\cap\mathcal{E}_{n}^{c}(\hat{ k})\right)\leq\lim_{n\to\infty}\mathbb{P}(nd_{n}^{1/2}\leq\tilde{\tau}\mu s _{n}\xi_{E})=0\), where \(\mathcal{E}_{n}(\cdot)\) is defined in (31). Hence it suffices to show \(\lim_{n\to\infty}\mathbb{P}(\mathcal{G}_{n}\cap\mathcal{E}_{n}(\hat{k}))=0\). By (36) and the same argument as in (33), on \(\mathcal{G}_{n}\cap\mathcal{E}_{n}(\hat{k})\) except for a vanishing event,
\[\|\tilde{\mathbf{Y}}-\mathbf{\hat{G}}^{(k)}\|_{F}^{2}\leq \|\tilde{\mathbf{Y}}-\mathbf{\hat{G}}^{(k-1)}\|_{F}^{2}\left\{1- \frac{\tau^{2}\|\tilde{\mathbf{Y}}-\mathbf{\hat{G}}^{(k-1)}\|_{F}^{2}}{\| \mathbf{X}_{\hat{j}_{k}}\mathbf{\tilde{B}}_{\hat{j}_{k}}-\mathbf{\hat{G}}^{(k- 1)}\|_{F}^{2}}\right\}+2\langle\mathbf{E},\mathbf{\hat{G}}^{(k)}-\mathbf{G}^{(k )}\rangle\] \[\leq \|\tilde{\mathbf{Y}}-\mathbf{\hat{G}}^{(k-1)}\|_{F}^{2}\left\{1- \frac{\tau^{2}s_{n}^{-1}}{4L_{0}^{2}\mu^{2}}\right\}+2\langle\mathbf{E},\mathbf{ \hat{G}}^{(k)}-\mathbf{G}^{(k)}\rangle,\]
and thus
\[nd_{n}\hat{\sigma}_{k}^{2}\leq\|\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k-1)}\|_{F}^{2}\left(1-\frac{\tau^{2}s_{n}^{-1}}{4L_{0}^{2}\mu^{2}}\right)+\|\mathbf{E}\|_{F}^{2}+2\langle\mathbf{E},\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k)}\rangle\]
for \(1\leq k\leq\hat{k}\). It follows that
\[\frac{\hat{\sigma}_{k}^{2}}{\hat{\sigma}_{k-1}^{2}}\leq \frac{(nd_{n})^{-1}\|\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k-1)}\| _{F}^{2}+(nd_{n})^{-1}\|\mathbf{E}\|_{F}^{2}+4L_{0}\xi_{E}/(nd_{n}^{1/2})}{(nd _{n})^{-1}\|\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k-1)}\|_{F}^{2}+(nd_{n})^{- 1}\|\mathbf{E}\|_{F}^{2}-4L_{0}\xi_{E}/(nd_{n}^{1/2})}\] \[-\frac{\tau^{2}s_{n}^{-1}}{4L_{0}^{2}\mu^{2}}\frac{(nd_{n})^{-1} \|\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k-1)}\|_{F}^{2}}{(nd_{n})^{-1}\| \tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k-1)}\|_{F}^{2}+(nd_{n})^{-1}\|\mathbf{ E}\|_{F}^{2}-4L_{0}\xi_{E}/(nd_{n}^{1/2})}\] \[:= A_{k}-B_{k}, \tag{37}\]
for \(1\leq k\leq\hat{k}\) on \(\mathcal{G}_{n}\cap\mathcal{E}_{n}(\hat{k})\) except for a vanishing event. We show in Appendix C that on \(\mathcal{G}_{n}\cap\mathcal{E}_{n}(\hat{k})\) except for a vanishing event, for all \(1\leq k\leq\hat{k}\),
\[A_{k}\leq 1+\frac{12ML_{0}\xi_{E}}{nd_{n}^{1/2}}, \tag{38}\]
and
\[B_{k}\geq\frac{\tau^{2}}{4L_{0}^{2}\mu^{2}}s_{n}^{-1}\frac{1}{1+ \mu Ms_{n}}\left(1-\frac{4ML_{0}\xi_{E}}{nd_{n}^{1/2}}\right). \tag{39}\]
By (37)-(39), \(\max_{1\leq k\leq\hat{k}}\hat{\sigma}_{k}^{2}/\hat{\sigma}_{k-1}^{2}\leq 1-s _{n}^{-2}C_{n}\), where
\[C_{n}=\frac{\tau^{2}}{4L_{0}^{2}\mu^{2}}\frac{1}{\mu M+s_{n}^{-1}}\left(1- \frac{4ML_{0}\xi_{E}}{nd_{n}^{1/2}}\right)-12ML_{0}\frac{s_{n}^{2}\xi_{E}}{nd_ {n}^{1/2}}.\]
By (C2) and (C4), it can be shown that there exists some \(v>0\) such that \(C_{n}\geq v\) with probability tending to one. Therefore, by the definition of \(\hat{k}\),
\[\mathbb{P}(\mathcal{G}_{n}\cap\mathcal{E}_{n}(\hat{k}))\leq \mathbb{P}(\hat{k}<K_{n},\mathcal{G}_{n}\cap\mathcal{E}_{n}(\hat{ k}))+\mathbb{P}(\hat{k}=K_{n},\mathcal{G}_{n}\cap\mathcal{E}_{n}(\hat{k}))\] \[\leq \mathbb{P}(\max_{1\leq k\leq\hat{k}}\hat{\sigma}_{k}^{2}/\hat{ \sigma}_{k-1}^{2}\leq 1-vs_{n}^{-2},\hat{k}<K_{n})+\mathbb{P}(\hat{k}=K_{n},\mathcal{G}_{n} \cap\mathcal{E}_{n}(\hat{k}))+o(1)\] \[= \mathbb{P}(\hat{k}=K_{n},\mathcal{G}_{n}\cap\mathcal{E}_{n}(\hat{ k}))+o(1), \tag{40}\]
if \(t_{n}=Cs_{n}^{-2}\) in (6) is chosen with \(C<v\). In view of (40), it remains to show \(\mathbb{P}(\hat{k}=K_{n},\mathcal{G}_{n}\cap\mathcal{E}_{n}(\hat{k}))=o(1)\). Since \(s_{n}=o(K_{n})\) by (C4), it follows from (36) and Lemma 6 that
\[\mathbb{P}(\hat{k}=K_{n},\mathcal{G}_{n})\leq \mathbb{P}\left(\frac{1}{nd_{n}}\|\tilde{\mathbf{Y}}-\hat{\mathbf{ G}}^{(K_{n})}\|_{F}^{2}\geq\frac{1}{\mu s_{n}}\right)+o(1)\] \[= \mathbb{P}\left(\frac{(nd_{n})^{-1}\|\tilde{\mathbf{Y}}-\hat{ \mathbf{G}}^{(K_{n})}\|_{F}^{2}}{K_{n}^{-1}}\geq\frac{K_{n}}{\mu s_{n}}\right)+o(1)\] \[= o(1),\]
which completes the proof.
**Proof** [Proof of Lemma 2] Letting \(a_{n}=\lfloor Ds_{n}^{2}\rfloor\) for some arbitrary \(D>0\), we have
\[\mathbb{P}(\hat{k}>a_{n})\leq \mathbb{P}\left(\frac{\hat{\sigma}_{a_{n}}^{2}}{\hat{\sigma}_{a_{n}-1}^{2}}<1-Cs_{n}^{-2}\right)\] \[= \mathbb{P}\left(Cs_{n}^{-2}<\frac{\hat{\sigma}_{a_{n}-1}^{2}-\zeta_{n}^{2}-(\hat{\sigma}_{a_{n}}^{2}-\zeta_{n}^{2})}{\zeta_{n}^{2}+\hat{\sigma}_{a_{n}-1}^{2}-\zeta_{n}^{2}}\right)\] \[\leq \mathbb{P}\left(Cs_{n}^{-2}<\frac{\hat{\sigma}_{a_{n}-1}^{2}-\zeta_{n}^{2}}{M^{-1}+\hat{\sigma}_{a_{n}-1}^{2}-\zeta_{n}^{2}}+\frac{4L_{0}\xi_{E}n^{-1}d_{n}^{-1/2}}{M^{-1}+\hat{\sigma}_{a_{n}-1}^{2}-\zeta_{n}^{2}}\right)+o(1). \tag{41}\]
Put \(A_{n}=\{\hat{\sigma}_{a_{n}-1}^{2}-\zeta_{n}^{2}>0\}\). Then (41) implies
\[\mathbb{P}(\hat{k}>a_{n},A_{n})\leq \mathbb{P}\left(M^{-1}+\hat{\sigma}_{a_{n}-1}^{2}-\zeta_{n}^{2}< \frac{\hat{\sigma}_{a_{n}-1}^{2}-\zeta_{n}^{2}}{Cs_{n}^{-2}}+\frac{4L_{0}s_{n }^{2}\xi_{E}}{Cnd_{n}^{1/2}},A_{n}\right)+o(1)\] \[\leq \mathbb{P}\left(M^{-1}<Z_{n}\frac{s_{n}^{2}}{C(a_{n}-1)}+\frac{4L _{0}}{C}\frac{s_{n}^{2}\xi_{E}}{nd_{n}^{1/2}}\right)+o(1),\]
where
\[Z_{n}:=\max_{1\leq k\leq K_{n}}\frac{|(nd_{n})^{-1}\|\mathbf{Y}-\hat{\mathbf{G }}^{(k)}\|_{F}^{2}-\zeta_{n}^{2}|}{k^{-1}}.\]
Since \(|(nd_{n})^{-1}\|\mathbf{Y}-\hat{\mathbf{G}}^{(k)}\|_{F}^{2}-\zeta_{n}^{2}|\leq(nd_{n})^{-1}\|\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k)}\|_{F}^{2}+4L_{0}\xi_{E}n^{-1}d_{n}^{-1/2}\), where \(\zeta_{n}^{2}=(nd_{n})^{-1}\|\mathbf{E}\|_{F}^{2}\), it follows from Lemma 6 that \(Z_{n}=O_{p}(1)\). Thus \(\limsup_{n\to\infty}\mathbb{P}(\hat{k}>a_{n},A_{n})\to 0\) as \(D\to\infty\). On \(A_{n}^{c}\), it is not difficult to show that
\[\hat{\sigma}_{a_{n}}^{2}-\zeta_{n}^{2}\leq\hat{\sigma}_{a_{n}-1}^{2}-\zeta_{n }^{2}\leq 0\]
and
\[\max\left\{\frac{1}{nd_{n}}\|\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(a_{n}-1)}\|_{F}^{2},\frac{1}{nd_{n}}\|\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(a_{n})}\|_{F}^{2}\right\}\leq\frac{4L_{0}\xi_{E}}{nd_{n}^{1/2}}.\]
It follows that on \(A_{n}^{c}\),
\[\frac{\hat{\sigma}_{a_{n}}^{2}}{\hat{\sigma}_{a_{n}-1}^{2}}= 1-\frac{\hat{\sigma}_{a_{n}-1}^{2}-\hat{\sigma}_{a_{n}}^{2}}{ \hat{\sigma}_{a_{n}-1}^{2}}\] \[\geq 1-\frac{\hat{\sigma}_{a_{n}-1}^{2}-\zeta_{n}^{2}-(\hat{\sigma}_{ a_{n}}^{2}-\zeta_{n}^{2})}{\zeta_{n}^{2}-4L_{0}\xi_{E}n^{-1}d_{n}^{-1/2}}\] \[\geq 1-\frac{1}{\zeta_{n}^{2}-4L_{0}\xi_{E}n^{-1}d_{n}^{-1/2}}\frac{ 16L_{0}\xi_{E}}{nd_{n}^{1/2}}.\]
By (C4), we have
\[\mathbb{P}(\hat{k}>a_{n},A_{n}^{c})\leq \mathbb{P}\left(Cs_{n}^{-2}\leq\frac{1}{\zeta_{n}^{2}-4L_{0}\xi_{E} n^{-1}d_{n}^{-1/2}}\frac{16L_{0}\xi_{E}}{nd_{n}^{1/2}}\right)=o(1),\]
which completes the proof.
Before proving Theorem 3, we introduce the following uniform convergence rate for the second-stage RGA, which is also of independent interest.
**Theorem 7**: _Assume the same as Theorem 1, and additionally (C5) and (C6). The second-stage RGA satisfies_
\[\max_{1\leq m\leq K_{n}}\frac{(nd_{n})^{-1}\|\tilde{\mathbf{Y}}- \hat{\mathbf{G}}^{(m)}\|_{F}^{2}}{\left(1-\frac{\tau^{2}}{64\mu^{5}\kappa_{n}} \right)^{m}+\frac{(m+\kappa_{n})\xi_{E}^{2}}{n^{2}d_{n}}+\frac{\xi_{E}^{2}}{ \delta_{n}^{2}n^{2}}\mathbf{1}\{J_{o}\neq\emptyset\}}=O_{p}(1), \tag{42}\]
_where \(\tau<1\) is an absolute constant._
**Proof** By Theorem 1, we can assume \(\text{rank}(\mathbf{B}_{j}^{*})\leq\hat{r}_{j}\) holds for all \(j\) in the following analysis. Let \(1\leq m\leq K_{n}\) be arbitrary. Observe that for the second-stage RGA, each \(\hat{\mathbf{G}}^{(k)}\), \(k=1,2,\ldots\), lies in the set
\[\mathcal{C}_{L}=\left\{\mathbf{H}=\sum_{j\in\hat{J}}\mathbf{X}_{ j}\hat{\mathbf{\Sigma}}_{j}^{-1}\mathbf{U}_{j}\mathbf{D}_{j}\mathbf{V}_{j}^{ \top}:\sum_{j\in\hat{J}}\|\mathbf{D}_{j}\|_{*}\leq L_{n}\right\}. \tag{43}\]
By (28) and a similar argument as (30)-(32), we have, for all \(1\leq k\leq m\),
\[\langle\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k-1)},\mathbf{X}_{ j_{k}}\hat{\mathbf{\Sigma}}_{j_{k}}^{-1}\mathbf{U}_{j_{k}}\hat{\mathbf{S}}_{k} \mathbf{V}_{j_{k}}^{\top}-\hat{\mathbf{G}}^{(k-1)}\rangle\] \[\geq \tau\max_{\begin{subarray}{c}j\in\hat{J}_{k}\\ \|\mathbf{S}\|_{*}\leq L_{n}\end{subarray}}\langle\tilde{\mathbf{Y}}-\hat{ \mathbf{G}}^{(k-1)},\mathbf{X}_{j}\hat{\mathbf{\Sigma}}_{j}^{-1}\mathbf{U}_{j} \mathbf{S}\mathbf{V}_{j}^{\top}-\hat{\mathbf{G}}^{(k-1)}\rangle\] \[= \tau\sup_{\mathbf{H}\in\mathcal{C}_{L}}\langle\tilde{\mathbf{Y}}- \hat{\mathbf{G}}^{(k-1)},\mathbf{H}-\hat{\mathbf{G}}^{(k-1)}\rangle, \tag{44}\]
where \(\tau=1-4\mu L_{0}/\tilde{\tau}\) and \(\tilde{\tau}>4\mu L_{0}\) on the event
\[\mathcal{F}_{n}(m)=\left\{\min_{1\leq k\leq m}\max_{\begin{subarray}{c}j\in \hat{J}_{k}\\ \|\mathbf{S}\|_{*}\leq L_{n}\end{subarray}}\langle\tilde{\mathbf{Y}}-\hat{ \mathbf{G}}^{(k-1)},\mathbf{X}_{j}\hat{\mathbf{\Sigma}}_{j}^{-1}\mathbf{U}_{j} \mathbf{S}\mathbf{V}_{j}^{\top}-\hat{\mathbf{G}}^{(k-1)}\rangle>\tilde{\tau} d_{n}^{1/2}\xi_{E}\right\}.\]
Define
\[\mathcal{B}= \left\{\mathbf{H}=\sum_{j\in\hat{J}_{k}}\mathbf{X}_{j}\hat{ \mathbf{\Sigma}}_{j}^{-1}\mathbf{U}_{j}\mathbf{D}_{j}\mathbf{V}_{j}^{\top}: \|\bar{\mathbf{Y}}-\mathbf{H}\|_{F}^{2}\leq\frac{9nd_{n}L_{0}^{2}}{16\mu^{3} \kappa_{n}}\right\},\]
where
\[\bar{\mathbf{Y}}=\sum_{j\in\hat{J}_{o}}\mathbf{X}_{j}\hat{\mathbf{\Sigma}}_{j}^{-1 }\mathbf{U}_{j}\mathbf{L}_{j}\mathbf{\Lambda}_{j}\mathbf{R}_{j}^{\top}\mathbf{V }_{j}^{\top}+\sum_{j\in\hat{J}-J_{o}}\mathbf{X}_{j}\mathbf{B}_{j}^{*}, \tag{45}\]
in which \(\hat{J}_{o}=\{j\in\hat{J}:\hat{r}<\min\{q_{n,j},d_{n}\}\}\), \(\mathbf{\Lambda}_{j}\) are defined in (C6), and \(\mathbf{L}_{j}\), \(\mathbf{R}_{j}\) are \(\hat{r}\times\bar{r}_{j}\) matrices such that \(\mathbf{L}_{j}^{T}\mathbf{L}_{j}=\mathbf{I}_{\bar{r}_{j}}=\mathbf{R}_{j}^{T} \mathbf{R}_{j}\) to be specified later (recall that \(\hat{r}\geq\bar{r}_{j}=\operatorname{rank}(\mathbf{X}_{j}^{\top}\tilde{ \mathbf{Y}})\) because of Theorem 1). We claim that
\[\lim_{n\to\infty}\mathbb{P}(\mathcal{B}\subseteq\mathcal{C}_{L})=1, \tag{46}\]
whose proof is relegated to Appendix C. Now put \(\mathbf{H}^{(l)}=\hat{\mathbf{G}}^{(l)}+(1+\alpha_{l})(\tilde{\mathbf{Y}}- \hat{\mathbf{G}}^{(l)})\) for \(l=1,2,\ldots\), where
\[\alpha_{l}=\frac{3\sqrt{nd_{n}}L_{0}}{4\mu^{3/2}\sqrt{\kappa_{n}}\|\tilde{ \mathbf{Y}}-\hat{\mathbf{G}}^{(l)}\|_{F}}\geq 0.\]
Then (46) implies that \(\mathbb{P}(\mathbf{H}^{(l)}\in\mathcal{C}_{L},l=1,2,\ldots)\to 1\). Thus by (44),
\[\langle\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k-1)},\mathbf{X}_{ \bar{j}_{k}}\hat{\mathbf{\Sigma}}_{j_{k}}^{-1}\mathbf{U}_{\bar{j}_{k}}\hat{ \mathbf{S}}_{k}\mathbf{V}_{\bar{j}_{k}}^{\top}-\hat{\mathbf{G}}^{(k-1)}\rangle\] \[\geq\tau\langle\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k-1)}, \mathbf{H}^{(k-1)}-\hat{\mathbf{G}}^{(k-1)}\rangle \tag{47}\]
for all \(1\leq k\leq m\) on \(\mathcal{F}_{n}(m)\) except for a vanishing event. Put \(\mathcal{H}_{n}(m)=\{\|\tilde{\mathbf{Y}}-\bar{\mathbf{Y}}\|_{F}<2^{-1}\min_{1\leq l\leq m}\|\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(l-1)}\|_{F}\}\). On \(\mathcal{F}_{n}(m)\cap\mathcal{H}_{n}(m)\) except for a vanishing event, (47) and the Cauchy-Schwarz inequality yield
\[\langle\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k-1)},\mathbf{X}_{ \bar{j}_{k}}\hat{\mathbf{\Sigma}}_{j_{k}}^{-1}\mathbf{U}_{\bar{j}_{k}}\hat{ \mathbf{S}}_{k}\mathbf{V}_{\bar{j}_{k}}^{\top}-\hat{\mathbf{G}}^{(k-1)}\rangle\] \[\geq \tau\langle\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k-1)},\mathbf{ H}^{(k-1)}-\hat{\mathbf{G}}^{(k-1)}\rangle\] \[\geq \tau(1+\alpha_{k-1})\left\{\|\tilde{\mathbf{Y}}-\hat{\mathbf{G}} ^{(k-1)}\|_{F}^{2}-\|\tilde{\mathbf{Y}}-\bar{\mathbf{Y}}\|_{F}\|\bar{\mathbf{Y }}-\hat{\mathbf{G}}^{(k-1)}\|_{F}\right\}\] \[\geq \frac{\tau(1+\alpha_{k-1})}{2}\|\bar{\mathbf{Y}}-\hat{\mathbf{G} }^{(k-1)}\|_{F}^{2}\geq 0\]
for all \(1\leq k\leq m\). Notice that \(\|\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k-1)}\|_{F}\geq(2/3)\|\tilde{\mathbf{Y}}- \hat{\mathbf{G}}^{(k-1)}\|_{F}\) for all \(1\leq k\leq m\) on \(\mathcal{H}_{n}(m)\). Hence, by Lemma 10(ii), (iii), and a similar argument used in (33),
\[\|\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k)}\|_{F}^{2}\leq \|\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k-1)}\|_{F}^{2}-\frac{ \langle\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k-1)},\mathbf{X}_{j_{k}}\hat{ \mathbf{\Sigma}}_{j_{k}}^{-1}\mathbf{U}_{j_{k}}^{\cdot}\hat{\mathbf{S}}_{k} \mathbf{V}_{j_{k}}^{\top}-\hat{\mathbf{G}}^{(k-1)}\rangle^{2}}{\|\mathbf{X}_{ j_{k}}\hat{\mathbf{\Sigma}}_{j_{k}}^{-1}\mathbf{U}_{j_{k}}^{\cdot}\hat{\mathbf{S}}_{k} \mathbf{V}_{j_{k}}^{\top}-\hat{\mathbf{G}}^{(k-1)}\|_{F}^{2}}\] \[+2\langle\mathbf{E},\hat{\mathbf{G}}^{(k)}-\mathbf{G}^{(k)}\rangle\] \[\leq \|\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k-1)}\|_{F}^{2}-\frac{ \tau^{2}\langle\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k-1)},\mathbf{H}^{(k-1)} -\hat{\mathbf{G}}^{(k-1)}\rangle^{2}}{4n\mu L_{n}^{2}}+2\langle\mathbf{E},\hat {\mathbf{G}}^{(k)}-\mathbf{G}^{(k)}\rangle\] \[\leq \|\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k-1)}\|_{F}^{2}-\frac{ \tau^{2}(1+\alpha_{k-1})^{2}}{16n\mu L_{n}^{2}}\|\tilde{\mathbf{Y}}-\hat{ \mathbf{G}}^{(k-1)}\|_{F}^{4}+2\langle\mathbf{E},\hat{\mathbf{G}}^{(k)}- \mathbf{G}^{(k)}\rangle\] \[\leq \|\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k-1)}\|_{F}^{2}-\frac{ \tau^{2}\|\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k-1)}\|_{F}^{2}}{64\mu^{4} \kappa_{n}}\] \[+2(\hat{\lambda}_{k}-\lambda_{k})\langle\mathbf{E},\mathbf{X}_{j _{k}}\hat{\mathbf{\Sigma}}_{j_{k}}^{-1}\mathbf{U}_{j_{k}}\hat{\mathbf{S}}_{k} \mathbf{V}_{j_{k}}^{\top}-\hat{\mathbf{G}}^{(k-1)}\rangle\] \[\leq \|\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k-1)}\|_{F}^{2}\left(1- \frac{\tau^{2}}{64\mu^{4}\kappa_{n}}\right)+\frac{8\mu}{1-\epsilon_{L}}\frac{ \xi_{E}^{2}}{n}\]
for all \(1\leq k\leq m\) on \(\mathcal{F}_{n}(m)\cap\mathcal{H}_{n}(m)\) except for a vanishing event. It follows that, on the same event,
\[\|\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(m)}\|_{F}^{2}\leq\|\tilde{\mathbf{Y}} \|_{F}^{2}\left(1-\frac{\tau^{2}}{64\mu^{4}\kappa_{n}}\right)^{m}+\frac{8\mu} {1-\epsilon_{L}}\frac{m\xi_{E}^{2}}{n}. \tag{48}\]
By (28), on \(\mathcal{F}_{n}^{c}(m)\cap\mathcal{H}_{n}(m)\) there exists some \(1\leq k\leq m\) such that
\[\tilde{\tau}d_{n}^{1/2}\xi_{E}\geq \langle\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k-1)},\mathbf{H}^{(k- 1)}-\hat{\mathbf{G}}^{(k-1)}\rangle\] \[\geq (1+\alpha_{k-1})\langle\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k-1 )},\bar{\mathbf{Y}}-\hat{\mathbf{G}}^{(k-1)}\rangle\] \[\geq \frac{1}{2}(1+\alpha_{k-1})\|\bar{\mathbf{Y}}-\hat{\mathbf{G}}^{( k-1)}\|_{F}^{2}\] \[\geq \frac{3\sqrt{nd_{n}}L_{0}}{8\mu^{3/2}\sqrt{\kappa_{n}}}\|\bar{ \mathbf{Y}}-\hat{\mathbf{G}}^{(k-1)}\|_{F},\]
which implies
\[\|\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(m)}\|_{F}^{2}\leq \|\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k-1)}\|_{F}^{2}+\frac{8\mu} {1-\epsilon_{L}}\frac{(m-k)\xi_{E}^{2}}{n}\] \[\leq 2\|\tilde{\mathbf{Y}}-\bar{\mathbf{Y}}\|_{F}^{2}+2\|\bar{ \mathbf{Y}}-\hat{\mathbf{G}}^{(k-1)}\|_{F}^{2}+\frac{8\mu}{1-\epsilon_{L}}\frac{ (m-k)\xi_{E}^{2}}{n}\] \[\leq \frac{5}{2}\|\bar{\mathbf{Y}}-\hat{\mathbf{G}}^{(k-1)}\|_{F}^{2} +\frac{8\mu}{1-\epsilon_{L}}\frac{(m-k)\xi_{E}^{2}}{n}\] \[\leq \left(\frac{160\tilde{\tau}^{2}\mu^{3}}{9L^{2}}\kappa_{n}+\frac{8 \mu}{1-\epsilon_{L}}(m-k)\right)\frac{\xi_{E}^{2}}{n}. \tag{49}\]
Next, on \({\cal H}^{c}_{n}(m)\), there exists some \(1\leq k\leq m\) such that \(\|\tilde{\bf Y}-\hat{\bf G}^{(k-1)}\|_{F}^{2}\leq 4\|\tilde{\bf Y}-\bar{\bf Y}\|_{F}^{2}\). By (26) and the parallelogram law,
\[\|\tilde{\bf Y}-\hat{\bf G}^{(m)}\|_{F}^{2}\leq \|\tilde{\bf Y}-\hat{\bf G}^{(k-1)}\|_{F}^{2}+2\sum_{j=k}^{m}\langle{\bf E},\hat{\bf G}^{(j)}-{\bf G}^{(j)}\rangle\] \[\leq 10\|\tilde{\bf Y}-\bar{\bf Y}\|_{F}^{2}+\frac{8\mu}{1-\epsilon_{L}}\frac{(m-k)\xi_{E}^{2}}{n} \tag{50}\]
on \({\cal H}^{c}_{n}(m)\) except for a vanishing event. Finally, note that (48)-(50) are valid for any choice of \({\bf L}_{j}\) and \({\bf R}_{j}\) so long as \({\bf L}_{j}^{\top}{\bf L}_{j}={\bf I}_{\tilde{r}_{j}}={\bf R}_{j}^{\top}{\bf R }_{j}\), \(j\in\hat{J}\). In Appendix C, we show that \({\bf L}_{j}\), \({\bf R}_{j}\), \(j\in\hat{J}_{o}\), can be chosen so that
\[\frac{1}{nd_{n}}\|\tilde{\bf Y}-\bar{\bf Y}\|_{F}^{2}\leq 8\mu L^{2}\frac{\xi_{E}^{2}}{(n\delta_{n}-\xi_{E})^{2}}=O_{p}\left(\frac{\xi_{E}^{2}}{n^{2}\delta_{n}^{2}}\right). \tag{51}\]
Hence, by (48)-(51), the desired result follows. \(\blacksquare\)
Now we are ready to prove our last main result.
**Proof** [Proof of Theorem 3] Note first that \({\cal C}_{L}\) (defined in (43)) is a convex compact set almost surely. Thus we can define \({\bf Y}^{*}\) to be the orthogonal projection of \({\bf Y}\) onto \({\cal C}_{L}\). Since \(\hat{\bf G}^{(m)}\in{\cal C}_{L}\) and \(\hat{\sigma}_{m}^{2}\leq\hat{\sigma}_{m_{n}}^{2}\) for \(m\geq m_{n}\), it follows that for \(m\geq m_{n}\),
\[\|{\bf Y}^{*}-\hat{\bf G}^{(m)}\|_{F}^{2}= \|{\bf Y}-\hat{\bf G}^{(m)}\|_{F}^{2}-\|{\bf Y}-{\bf Y}^{*}\|_{F} ^{2}+2\langle{\bf Y}^{*}-{\bf Y},{\bf Y}^{*}-\hat{\bf G}^{(m)}\rangle\] \[\leq \|{\bf Y}-\hat{\bf G}^{(m_{n})}\|_{F}^{2}-\|{\bf Y}-{\bf Y}^{*}\| _{F}^{2}\] \[= \|{\bf Y}^{*}-\hat{\bf G}^{(m_{n})}\|_{F}^{2}-2\langle\tilde{\bf Y }-{\bf Y}^{*},\hat{\bf G}^{(m_{n})}-{\bf Y}^{*}\rangle-2\langle{\bf E},\hat{ \bf G}^{(m_{n})}-{\bf Y}^{*}\rangle\] \[\leq 2\|{\bf Y}^{*}-\hat{\bf G}^{(m_{n})}\|_{F}^{2}+\|{\bf Y}^{*}- \tilde{\bf Y}\|_{F}^{2}-2\langle{\bf E},\hat{\bf G}^{(m_{n})}-{\bf Y}^{*}\rangle. \tag{52}\]
Note that if \({\bf H},{\bf G}\) are in \({\cal C}_{L}\) with \({\bf H}=\sum_{j\in\hat{J}}{\bf X}_{j}\hat{\bf\Sigma}_{j}^{-1}{\bf U}_{j}{\bf S }_{j}^{H}{\bf V}_{j}^{\top}\) and \({\bf G}=\sum_{j\in\hat{J}}{\bf X}_{j}\hat{\bf\Sigma}_{j}^{-1}{\bf U}_{j}{\bf S }_{j}^{G}{\bf V}_{j}^{\top}\), then by Proposition 8 and (C3) we have
\[\|{\bf H}-{\bf G}\|_{F}^{2}\geq\frac{n}{\mu^{3}\kappa_{n}}\left\{\sum_{j\in \hat{J}}\|{\bf S}_{j}^{H}-{\bf S}_{j}^{G}\|_{*}\right\}^{2}.\]
Hence
\[|\langle{\bf E},{\bf H}-{\bf G}\rangle|\leq\mu\xi_{E}\sum_{j\in\hat{J}}\|{ \bf S}_{j}^{H}-{\bf S}_{j}^{G}\|_{*}\leq\xi_{E}\sqrt{\frac{\mu^{5}\kappa_{n}} {n}}\|{\bf H}-{\bf G}\|_{F}. \tag{53}\]
Combining (52) and (53) yields
\[\|{\bf Y}^{*}-\hat{\bf G}^{(m)}\|_{F}^{2}\leq 2\|{\bf Y}^{*}-\hat{\bf G}^{(m_{ n})}\|_{F}^{2}+\|{\bf Y}^{*}-\tilde{\bf Y}\|_{F}^{2}+2\xi_{E}\sqrt{\frac{\mu^{5} \kappa_{n}}{n}}\|{\bf Y}^{*}-\hat{\bf G}^{(m)}\|_{F}.\]
Since \(x^{2}\leq c+bx\) (\(x,b,c\geq 0\)) implies \(x\leq(b+\sqrt{b^{2}+4c})/2\), we have
\[\|\mathbf{Y}^{*}-\hat{\mathbf{G}}^{(m)}\|_{F}^{2}\leq 2\|\mathbf{Y}^{*}- \tilde{\mathbf{Y}}\|_{F}^{2}+4\|\mathbf{Y}^{*}-\hat{\mathbf{G}}^{(m_{n})}\|_{F }^{2}+4\mu^{5}\frac{\kappa_{n}\xi_{E}^{2}}{n}. \tag{54}\]
By (54) and repeated applications of the parallelogram law, it is straightforward to show
\[\frac{1}{nd_{n}}\|\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(m)}\|_{F}^{2}\leq \frac{C_{1}}{nd_{n}}\left\{\|\tilde{\mathbf{Y}}-\mathbf{Y}^{*}\|_{F}^{2}+\| \tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(m_{n})}\|_{F}^{2}+\frac{\mu^{5}\kappa_{ n}\xi_{E}^{2}}{n}\right\}\]
for some absolute constant \(C_{1}\). The right-hand side does not depend on \(m\), so the inequality still holds if we take supremum over \(m\geq m_{n}\) on the left-hand side. Moreover, by (C3) and Theorem 1, we have
\[\sup_{m\geq m_{n}}\frac{1}{d_{n}}\sum_{j=1}^{p_{n}}\|\mathbf{B}_{ j}^{*}-\hat{\mathbf{B}}_{j}^{(m)}\|_{F}^{2}=O_{p}\left(\frac{1}{nd_{n}}\left\{\| \tilde{\mathbf{Y}}-\mathbf{Y}^{*}\|_{F}^{2}+\|\tilde{\mathbf{Y}}-\hat{ \mathbf{G}}^{(m_{n})}\|_{F}^{2}+\frac{\mu^{5}\kappa_{n}\xi_{E}^{2}}{n}\right\}\right) \tag{55}\]
By Theorem 7 and the choice of \(m_{n}\), we have
\[\frac{1}{nd_{n}}\|\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(m_{n})} \|_{F}^{2}=O_{p}\left(\frac{\kappa_{n}\xi_{n}^{2}}{n^{2}d_{n}}\log\frac{n^{2} d_{n}}{\xi_{n}^{2}}+\frac{\xi_{n}^{2}}{n^{2}\delta_{n}^{2}}\right). \tag{56}\]
By (C6), it is not difficult to show that \(\bar{\mathbf{Y}}\), defined in (45), is in \(\mathcal{C}_{L}\). It follows from the definition of \(\mathbf{Y}^{*}\) that
\[\|\tilde{\mathbf{Y}}-\mathbf{Y}^{*}\|_{F}^{2}= \|\mathbf{Y}-\mathbf{Y}^{*}\|_{F}^{2}-\|\mathbf{E}\|_{F}^{2}-2 \langle\mathbf{E},\tilde{\mathbf{Y}}-\mathbf{Y}^{*}\rangle\] \[\leq \|\mathbf{Y}-\bar{\mathbf{Y}}\|_{F}^{2}-\|\mathbf{E}\|_{F}^{2}-2 \langle\mathbf{E},\tilde{\mathbf{Y}}-\mathbf{Y}^{*}\rangle\] \[= \|\tilde{\mathbf{Y}}-\bar{\mathbf{Y}}\|_{F}^{2}+2\langle\mathbf{E },\mathbf{Y}^{*}-\bar{\mathbf{Y}}\rangle. \tag{57}\]
By (53) again,
\[|\langle\mathbf{E},\mathbf{Y}^{*}-\bar{\mathbf{Y}}\rangle|\leq \xi_{E}\left(\frac{\mu^{5}\kappa_{n}}{n}\right)^{1/2}\|\bar{\mathbf{Y}}- \mathbf{Y}^{*}\|_{F}. \tag{58}\]
Now if \(\|\bar{\mathbf{Y}}-\mathbf{Y}^{*}\|_{F}\geq 2\|\tilde{\mathbf{Y}}-\mathbf{Y}^{*}\|_{F}\), then \(\|\bar{\mathbf{Y}}-\mathbf{Y}^{*}\|_{F}\leq 2\|\bar{\mathbf{Y}}-\tilde{\mathbf{Y}}\|_{F}\). This, together with (57), (58), and (51), yields
\[\|\tilde{\mathbf{Y}}-\mathbf{Y}^{*}\|_{F}^{2}\leq \|\tilde{\mathbf{Y}}-\bar{\mathbf{Y}}\|_{F}^{2}+4\xi_{E}\left(\frac{\mu^{5}\kappa_{n}}{n}\right)^{1/2}\|\bar{\mathbf{Y}}-\tilde{\mathbf{Y}}\|_{F}\] \[\leq 2\|\tilde{\mathbf{Y}}-\bar{\mathbf{Y}}\|_{F}^{2}+4\mu^{5}\frac{\kappa_{n}\xi_{E}^{2}}{n}\] \[\leq 16\mu L_{0}^{2}\frac{nd_{n}\xi_{E}^{2}}{(n\delta_{n}-\xi_{E})^{2}}+4\mu^{5}\frac{\kappa_{n}\xi_{E}^{2}}{n}. \tag{59}\]
On the other hand, if \(\|\bar{\mathbf{Y}}-\mathbf{Y}^{*}\|_{F}<2\|\tilde{\mathbf{Y}}-\mathbf{Y}^{*}\|_{F}\), then (57) and (58) imply
\[\|\tilde{\mathbf{Y}}-\mathbf{Y}^{*}\|_{F}^{2}\leq \|\tilde{\mathbf{Y}}-\bar{\mathbf{Y}}\|_{F}^{2}+4\xi_{E}\left( \frac{\mu^{5}\kappa_{n}}{n}\right)^{1/2}\|\tilde{\mathbf{Y}}-\mathbf{Y}^{*}\| _{F}.\]
By a similar argument used to obtain (54), this and (51) yield
\[\|\tilde{\mathbf{Y}}-\mathbf{Y}^{*}\|_{F}^{2}\leq 16\mu^{5}\frac{\kappa_{n}\xi_{E}^{2}}{n}+2\|\tilde{\mathbf{Y}}-\bar{\mathbf{Y}}\|_{F}^{2}\] \[\leq 16\mu^{5}\frac{\kappa_{n}\xi_{E}^{2}}{n}+16\mu L_{0}^{2}\frac{nd_{n}\xi_{E}^{2}}{(n\delta_{n}-\xi_{E})^{2}}. \tag{60}\]
In view of (55), (56), (59), (60) and (C5), the desired result follows.
## Appendix C Further technical details
In this section, we present some additional auxiliary results along with the proofs of (35), (38), (39), (46), (51). Some existing results that are useful in our proofs are also stated here for completeness with the references to their proofs in the literature. These results are stated in the forms that are most convenient for our use, which may not be in full generality.
**Proposition 8** (Ruhe, 1970): _Let \(\mathbf{A},\mathbf{B}\) be matrices with size \(m\times n\) and \(n\times p\) respectively. Then_
\[\sum_{j=1}^{n}\sigma_{j}^{2}(\mathbf{A})\sigma_{j}^{2}(\mathbf{B})\geq\| \mathbf{A}\mathbf{B}\|_{F}^{2}\geq\sum_{j=1}^{n}\sigma_{n-j+1}^{2}(\mathbf{A} )\sigma_{j}^{2}(\mathbf{B}).\]
**Remark 9**: One consequence of this inequality we frequently use is \(\sigma_{1}^{2}(\mathbf{A})\|\mathbf{B}\|_{F}^{2}\geq\|\mathbf{A}\mathbf{B}\|_ {F}^{2}\geq\sigma_{n}^{2}(\mathbf{A})\|\mathbf{B}\|_{F}^{2}\). Note also that by transposition the roles of \(\mathbf{A}\) and \(\mathbf{B}\) can be interchanged on the left- and right-most expressions.
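Proposition 8 is easy to check numerically; the sketch below draws random matrices and verifies that \(\|\mathbf{A}\mathbf{B}\|_{F}^{2}\) sits between the two weighted sums of squared singular values.

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.normal(size=(6, 4))   # m x n
B = rng.normal(size=(4, 5))   # n x p

sa = np.linalg.svd(A, compute_uv=False)   # n singular values of A, descending
sb = np.linalg.svd(B, compute_uv=False)   # n singular values of B, descending

upper = np.sum(sa**2 * sb**2)             # sigma_j(A)^2 sigma_j(B)^2, matched order
lower = np.sum(sa[::-1]**2 * sb**2)       # sigma_{n-j+1}(A)^2 sigma_j(B)^2
mid = np.linalg.norm(A @ B, "fro") ** 2
print(lower <= mid <= upper)
```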
**Lemma 10**: _Assume (C1)-(C2) and that \(\sum_{j=1}^{p_{n}}\|\mathbf{B}_{j}^{*}\|_{*}\leq L\). Suppose \(L_{n}=d_{n}^{1/2}L_{0}\) is chosen so that \(L_{0}\geq L/(1-\epsilon_{L})\) with \(1-\epsilon_{L}\leq 1/(4\mu^{2})\). Then for first- and second-stage RGA, with probability tending to one,_
_(i)_
\[\inf_{k\geq 1}\frac{1}{nd_{n}}\|\mathbf{X}_{\hat{j}_{k}}\tilde{ \mathbf{B}}_{\hat{j}_{k}}-\hat{\mathbf{G}}^{(k-1)}\|_{F}^{2}\geq (1-\epsilon_{L})\mu L_{0}^{2} \tag{61}\] \[\inf_{k\geq 1}\frac{1}{nd_{n}}\|\mathbf{X}_{\hat{j}_{k}}\hat{ \mathbf{\Sigma}}_{\hat{j}_{k}}^{-1}\mathbf{U}_{\hat{j}_{k}}\hat{\mathbf{S}}_{ k}\mathbf{V}_{\hat{j}_{k}}^{\top}-\hat{\mathbf{G}}^{(k-1)}\|_{F}^{2}\geq (1-\epsilon_{L})\mu L_{0}^{2} \tag{62}\]
_(ii)_
\[\sup_{k\geq 1}|\lambda_{k}-\hat{\lambda}_{k}|\leq \frac{2}{(1-\epsilon_{L})L_{0}}\frac{\xi_{E}}{n\sqrt{d_{n}}} \tag{63}\]
_(iii)_
\[\max_{1\leq k\leq K_{n}}\lambda_{k}\leq 1. \tag{64}\]
**Proof** We shall prove the results for the second-stage RGA. The corresponding proofs for the first-stage RGA follow similarly and are thus omitted. It also suffices to prove (i)-(iii) assuming the condition described in (C1) holds almost surely, because the event on which the condition holds has probability tending to one; this greatly simplifies the exposition (we need not repeat that the inequalities hold except on a vanishing event). Note that
\[\langle\mathbf{X}_{\hat{j}_{k}}\hat{\mathbf{\Sigma}}_{\hat{j}_{k}}^{-1}\mathbf{U}_{\hat{j}_{k}}\hat{\mathbf{S}}_{k}\mathbf{V}_{\hat{j}_{k}}^{\top},\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k-1)}\rangle\] \[= \langle\mathbf{X}_{\hat{j}_{k}}\hat{\mathbf{\Sigma}}_{\hat{j}_{k}}^{-1}\mathbf{U}_{\hat{j}_{k}}\hat{\mathbf{S}}_{k}\mathbf{V}_{\hat{j}_{k}}^{\top},\mathbf{Y}-\hat{\mathbf{G}}^{(k-1)}\rangle-\langle\mathbf{X}_{\hat{j}_{k}}\hat{\mathbf{\Sigma}}_{\hat{j}_{k}}^{-1}\mathbf{U}_{\hat{j}_{k}}\hat{\mathbf{S}}_{k}\mathbf{V}_{\hat{j}_{k}}^{\top},\mathbf{E}\rangle\] \[\geq -|\langle\mathbf{X}_{\hat{j}_{k}}\hat{\mathbf{\Sigma}}_{\hat{j}_{k}}^{-1}\mathbf{U}_{\hat{j}_{k}}\hat{\mathbf{S}}_{k}\mathbf{V}_{\hat{j}_{k}}^{\top},\mathbf{E}\rangle|\] \[\geq -\|\hat{\mathbf{\Sigma}}_{\hat{j}_{k}}^{-1}\mathbf{X}_{\hat{j}_{k}}^{\top}\mathbf{E}\|_{op}\|\mathbf{U}_{\hat{j}_{k}}\hat{\mathbf{S}}_{k}\mathbf{V}_{\hat{j}_{k}}^{\top}\|_{*}\] \[\geq -\mu L_{n}\xi_{E},\]
where the first inequality follows because \(\langle\mathbf{X}_{\hat{j}_{k}}\hat{\mathbf{\Sigma}}_{\hat{j}_{k}}^{-1} \mathbf{U}_{\hat{j}_{k}}\hat{\mathbf{S}}_{k}\mathbf{V}_{\hat{j}_{k}}^{T}, \mathbf{Y}-\hat{\mathbf{G}}^{(k-1)}\rangle\geq 0\) with probability one and the second inequality follows because the dual norm of the nuclear norm is the operator norm. By Proposition 8, we have
\[\|\mathbf{X}_{\hat{j}_{k}}\hat{\mathbf{\Sigma}}_{\hat{j}_{k}}^{-1} \mathbf{U}_{\hat{j}_{k}}\hat{\mathbf{S}}_{k}\mathbf{V}_{\hat{j}_{k}}^{\top}- \hat{\mathbf{G}}^{(k-1)}\|_{F}^{2}\] \[\geq \|\mathbf{X}_{\hat{j}_{k}}\hat{\mathbf{\Sigma}}_{\hat{j}_{k}}^{-1 }\mathbf{U}_{\hat{j}_{k}}\hat{\mathbf{S}}_{k}\mathbf{V}_{\hat{j}_{k}}^{\top}\| _{F}^{2}-2\langle\mathbf{X}_{\hat{j}_{k}}\hat{\mathbf{\Sigma}}_{\hat{j}_{k}}^{- 1}\mathbf{U}_{\hat{j}_{k}}\hat{\mathbf{S}}_{k}\mathbf{V}_{\hat{j}_{k}}^{\top},\hat{\mathbf{G}}^{(k-1)}\rangle\] \[\geq n\mu^{-1}\|\mathbf{U}_{\hat{j}_{k}}\hat{\mathbf{S}}_{k}\mathbf{V} _{\hat{j}_{k}}^{\top}\|_{F}^{2}\] \[+2\langle\mathbf{X}_{\hat{j}_{k}}\hat{\mathbf{\Sigma}}_{\hat{j}_ {k}}^{-1}\mathbf{U}_{\hat{j}_{k}}\hat{\mathbf{S}}_{k}\mathbf{V}_{\hat{j}_{k}}^ {\top},\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k-1)}\rangle-2\langle\mathbf{X} _{\hat{j}_{k}}\hat{\mathbf{\Sigma}}_{\hat{j}_{k}}^{-1}\mathbf{U}_{\hat{j}_{k} }\hat{\mathbf{S}}_{k}\mathbf{V}_{\hat{j}_{k}}^{\top},\tilde{\mathbf{Y}}\rangle\] \[\geq n\mu^{-1}L_{n}^{2}-2\mu L_{n}\xi_{E}-2\langle\mathbf{X}_{\hat{j}_ {k}}\hat{\mathbf{\Sigma}}_{\hat{j}_{k}}^{-1}\mathbf{U}_{\hat{j}_{k}}\hat{ \mathbf{S}}_{k}\mathbf{V}_{\hat{j}_{k}}^{\top},\tilde{\mathbf{Y}}\rangle,\]
where the last inequality follows from the fact that \(\hat{\mathbf{S}}_{k}\) is rank-one with singular value \(L_{n}\). Thus, by writing \(\hat{\mathbf{S}}_{k}=L_{n}\mathbf{a}\mathbf{b}^{T}\) for some unit vectors \(\mathbf{a},\mathbf{b}\), we have \(\|\mathbf{U}_{\hat{j}_{k}}\hat{\mathbf{S}}_{k}\mathbf{V}_{\hat{j}_{k}}^{T}\| _{F}^{2}=L_{n}^{2}\|\mathbf{U}_{\hat{j}_{k}}\mathbf{a}\mathbf{b}^{T}\mathbf{V }_{\hat{j}_{k}}^{T}\|_{F}^{2}=L_{n}^{2}\). Next, observe that
\[|\langle\mathbf{X}_{\hat{j}_{k}}\hat{\mathbf{\Sigma}}_{\hat{j}_{k} }^{-1}\mathbf{U}_{\hat{j}_{k}}\hat{\mathbf{S}}_{k}\mathbf{V}_{\hat{j}_{k}}^{ \top},\tilde{\mathbf{Y}}\rangle|= \left|\sum_{j=1}^{p_{n}}\langle\mathbf{X}_{\hat{j}_{k}}\hat{ \mathbf{\Sigma}}_{\hat{j}_{k}}^{-1}\mathbf{U}_{\hat{j}_{k}}\hat{\mathbf{S}}_{k }\mathbf{V}_{\hat{j}_{k}}^{\top},\mathbf{X}_{j}\mathbf{B}_{j}^{*}\rangle\right|\] \[\leq \sum_{j=1}^{p_{n}}\|\mathbf{B}_{j}^{*}\|_{*}\|\mathbf{X}_{j}^{ \top}\mathbf{X}_{\hat{j}_{k}}\hat{\mathbf{\Sigma}}_{\hat{j}_{k}}^{-1}\mathbf{U}_ {\hat{j}_{k}}\hat{\mathbf{S}}_{k}\mathbf{V}_{\hat{j}_{k}}^{\top}\|_{op}\] \[\leq (1-\epsilon_{L})L_{n}^{2}n\mu.\]
Therefore,
\[(nd_{n})^{-1}\|\mathbf{X}_{\hat{j}_{k}}\hat{\mathbf{\Sigma}}_{\hat{j}_ {k}}^{-1}\mathbf{U}_{\hat{j}_{k}}\hat{\mathbf{S}}_{k}\mathbf{V}_{\hat{j}_{k}}^{ \top}-\hat{\mathbf{G}}^{(k-1)}\|_{F}^{2}\geq \mu^{-1}L_{0}^{2}-2(1-\epsilon_{L})L_{0}^{2}\mu-2\mu L_{0}\frac{\xi_{E }}{n\sqrt{d_{n}}}\] \[\geq 2(1-\epsilon_{L})L_{0}^{2}\mu-2\mu L_{0}\frac{\xi_{E}}{n\sqrt{d_ {n}}}.\]
Since \(\xi_{E}=o_{p}(n\sqrt{d_{n}})\) by (C2), (62) follows.
For (63), note first that if the solutions to the line search problems (9) and (24) (with \(\tilde{\mathbf{B}}_{\hat{j}_{k}}\) replaced by \(\hat{\mathbf{\Sigma}}_{\hat{j}_{k}}^{-1}\mathbf{U}_{\hat{j}_{k}}\hat{\mathbf{S} }_{k}\mathbf{V}_{\hat{j}_{k}}^{\top}\)) for second-stage RGA are not constrained to be in \([0,1]\), then they are given by
\[\hat{\lambda}_{k,uc}= \frac{\langle\mathbf{Y}-\hat{\mathbf{G}}^{(k-1)},\mathbf{X}_{\hat {j}_{k}}\hat{\mathbf{\Sigma}}_{\hat{j}_{k}}^{-1}\mathbf{U}_{\hat{j}_{k}}\hat{ \mathbf{S}}_{k}\mathbf{V}_{\hat{j}_{k}}^{\top}-\hat{\mathbf{G}}^{(k-1)} \rangle}{\|\mathbf{X}_{\hat{j}_{k}}\hat{\mathbf{\Sigma}}_{\hat{j}_{k}}^{-1} \mathbf{U}_{\hat{j}_{k}}\hat{\mathbf{S}}_{k}\mathbf{V}_{\hat{j}_{k}}^{\top}- \hat{\mathbf{G}}^{(k-1)}\|_{F}^{2}},\] \[\lambda_{k,uc}= \frac{\langle\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k-1)}, \mathbf{X}_{\hat{j}_{k}}\hat{\mathbf{\Sigma}}_{\hat{j}_{k}}^{-1}\mathbf{U}_{ \hat{j}_{k}}\hat{\mathbf{S}}_{k}\mathbf{V}_{\hat{j}_{k}}^{\top}-\hat{\mathbf{ G}}^{(k-1)}\rangle}{\|\mathbf{X}_{\hat{j}_{k}}\hat{\mathbf{\Sigma}}_{\hat{j}_{k}}^{-1} \mathbf{U}_{\hat{j}_{k}}\hat{\mathbf{S}}_{k}\mathbf{V}_{\hat{j}_{k}}^{\top}- \hat{\mathbf{G}}^{(k-1)}\|_{F}^{2}}.\]
Since \(\hat{\mathbf{G}}^{(l)}\) can always be expressed as \(\hat{\mathbf{G}}^{(l)}=\sum_{j\in\hat{J}}\mathbf{X}_{j}\hat{\mathbf{\Sigma}}_ {j}^{-1}\mathbf{U}_{\hat{j}}\mathbf{A}_{j}\mathbf{V}_{j}^{\top}\) with \(\sum_{j\in\hat{J}}\|\mathbf{A}_{j}\|_{*}\leq L_{n}\), it follows that
\[|\hat{\lambda}_{k}-\lambda_{k}|\leq|\hat{\lambda}_{k,uc}-\lambda_{k,uc}|= \frac{|\langle\mathbf{E},\mathbf{X}_{\hat{j}_{k}}\hat{\mathbf{\Sigma}}_{\hat{j}_{k}}^{-1}\mathbf{U}_{\hat{j}_{k}}\hat{\mathbf{S}}_{k}\mathbf{V}_{\hat{j}_{k}}^{\top}-\hat{\mathbf{G}}^{(k-1)}\rangle|}{\|\mathbf{X}_{\hat{j}_{k}}\hat{\mathbf{\Sigma}}_{\hat{j}_{k}}^{-1}\mathbf{U}_{\hat{j}_{k}}\hat{\mathbf{S}}_{k}\mathbf{V}_{\hat{j}_{k}}^{\top}-\hat{\mathbf{G}}^{(k-1)}\|_{F}^{2}}\] \[\leq \frac{2L_{n}\mu\xi_{E}}{\|\mathbf{X}_{\hat{j}_{k}}\hat{\mathbf{\Sigma}}_{\hat{j}_{k}}^{-1}\mathbf{U}_{\hat{j}_{k}}\hat{\mathbf{S}}_{k}\mathbf{V}_{\hat{j}_{k}}^{\top}-\hat{\mathbf{G}}^{(k-1)}\|_{F}^{2}}\] \[\leq \frac{2\xi_{E}}{nd_{n}^{1/2}(1-\epsilon_{L})L_{0}},\]
with probability tending to one, where the last inequality follows from (62).
For (64), it suffices to prove that \(\lim_{n\to\infty}\mathbb{P}(E_{n})=1\), where \(E_{n}=\{\max_{1\leq k\leq K_{n}}\lambda_{k,uc}\leq 1\}\). On \(E_{n}^{c}\), there exists some \(k\) such that, by Cauchy-Schwarz inequality and (26),
\[\|\mathbf{X}_{\hat{j}_{k}}\hat{\mathbf{\Sigma}}_{\hat{j}_{k}}^{-1} \mathbf{U}_{\hat{j}_{k}}\hat{\mathbf{S}}_{k}\mathbf{V}_{\hat{j}_{k}}^{T}-\hat{ \mathbf{G}}^{(k-1)}\|_{F}^{2}\leq \|\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k-1)}\|_{F}^{2}\] \[\leq \|\tilde{\mathbf{Y}}\|_{F}^{2}+2\sum_{j=1}^{k-1}\langle\mathbf{E},\hat{\mathbf{G}}^{(k-j)}-\mathbf{G}^{(k-j)}\rangle\] \[= \|\tilde{\mathbf{Y}}\|_{F}^{2}+2\sum_{l=1}^{k-1}(\hat{\lambda}_{l }-\lambda_{l})\langle\mathbf{E},\mathbf{X}_{\hat{j}_{l}}\hat{\mathbf{\Sigma}}_{ \hat{j}_{l}}^{-1}\mathbf{U}_{\hat{j}_{l}}\hat{\mathbf{S}}_{l}\mathbf{V}_{\hat{ j}_{l}}^{\top}-\hat{\mathbf{G}}^{(l-1)}\rangle\] \[\leq \|\tilde{\mathbf{Y}}\|_{F}^{2}+4K_{n}L_{n}\mu\xi_{E}\max_{1\leq l \leq k}|\hat{\lambda}_{l}-\lambda_{l}|. \tag{65}\]
It is easy to see that
\[\|\tilde{\mathbf{Y}}\|_{F}=\left\|\sum_{j=1}^{p_{n}}\mathbf{X}_{j} \mathbf{B}_{j}^{*}\right\|_{F}\leq(1-\epsilon_{L})L_{n}\sqrt{n\mu}. \tag{66}\]
Thus, by (62), (63) and (65)-(66), we have
\[\mathbb{P}(E_{n}^{c})\leq\mathbb{P}\left((1-\epsilon_{L})L_{0}^{2} \mu\{1-(1-\epsilon_{L})\}\leq\frac{8\mu}{1-\epsilon_{L}}\frac{K_{n}\xi_{E}^{2}} {n^{2}d_{n}}\right)+o(1)=o(1),\]
where the last equality follows from (C2).
**Lemma 11**: _Let \(\{a_{m}\}\) be a nonnegative sequence of reals. If_
\[a_{0}\leq A,\text{ and }a_{m}\leq a_{m-1}\left(1-\frac{\xi^{2}a_{m-1}}{A}\right)+b_ {m},\]
_for \(m=1,2,\dots,\) where \(b_{m}\geq 0\) with \(b_{0}=0\), then for each \(m\),_
\[a_{m}\leq\frac{A}{1+m\xi^{2}}+\sum_{k=0}^{m}b_{k}. \tag{67}\]
**Proof** We prove (67) by induction on \(m\). When \(m=0\), (67) holds by assumption. Suppose now that (67) holds for some \(m\geq 0\). Then
\[a_{m+1}\leq a_{m}\left(1-\frac{\xi^{2}a_{m}}{A}\right)+b_{m+1}\] \[\leq \frac{1}{a_{m}^{-1}+\xi^{2}/A}+b_{m+1}\] \[\leq \frac{1}{\left(\frac{A}{1+m\xi^{2}}+\sum_{k=0}^{m}b_{k}\right)^{ -1}+\xi^{2}/A}+b_{m+1}\] \[= \frac{\frac{A}{1+m\xi^{2}}+\sum_{k=0}^{m}b_{k}}{1+\frac{\xi^{2}} {A}\left(\frac{A}{1+m\xi^{2}}+\sum_{k=0}^{m}b_{k}\right)}+b_{m+1}\] \[\leq \frac{A}{1+(m+1)\xi^{2}}+\sum_{k=0}^{m+1}b_{k},\]
where the second inequality follows from \(1-x\leq 1/(1+x)\) for \(x\geq 0\).
**Remark 12**: Lemma 11 is a slight modification of Lemma 3.1 of Temlyakov (2000).
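The recursion of Lemma 11 can also be checked numerically. The following Python sketch (not part of the original argument) builds a sequence that satisfies the recursive inequality with equality, for arbitrarily chosen constants \(A\), \(\xi\) and perturbations \(b_{m}\), and verifies the bound (67):

```python
import numpy as np

# Numerical sanity check of Lemma 11 with arbitrary constants (illustration only).
rng = np.random.default_rng(0)
A, xi = 1.0, 0.5
b = np.concatenate(([0.0], rng.uniform(0.0, 0.01, size=200)))  # b_0 = 0, b_m >= 0

a = np.empty_like(b)
a[0] = A  # a_0 <= A
for m in range(1, len(b)):
    # Worst case: the recursive inequality holds with equality.
    a[m] = a[m - 1] * (1.0 - xi**2 * a[m - 1] / A) + b[m]

bound = A / (1.0 + np.arange(len(b)) * xi**2) + np.cumsum(b)  # right-hand side of (67)
assert np.all(a <= bound + 1e-12), "bound (67) violated"
print("minimum slack between bound and sequence:", np.min(bound - a))
```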
**Proof** [Proof of (35)] On \(\mathcal{E}_{n}^{c}(m)\), there exists some \(l\leq m\) such that
\[\tilde{\tau}d_{n}^{1/2}\xi_{E}\geq\max_{\begin{subarray}{c}1\leq j\leq p_{n} \\ \|\mathbf{B}_{j}\|_{*}\leq L_{n}\end{subarray}}\langle\tilde{\mathbf{Y}}- \hat{\mathbf{G}}^{(l-1)},\mathbf{X}_{j}\mathbf{B}_{j}-\hat{\mathbf{G}}^{(l-1) }\rangle\geq\|\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(l-1)}\|_{F}^{2}.\]
By (26) and Lemma 10(ii), it follows that, on \(\mathcal{E}_{n}^{c}(m)\) except for a vanishing event,
\[\|\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(m)}\|_{F}^{2}\leq \|\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(l-1)}\|_{F}^{2}+2\sum_{k=l }^{m}\langle\mathbf{E},\hat{\mathbf{G}}^{(k)}-\mathbf{G}^{(k)}\rangle\] \[\leq \tilde{\tau}d_{n}^{1/2}\xi_{E}+2\sum_{k=l}^{m}(\hat{\lambda}_{k} -\lambda_{k})\langle\mathbf{E},\mathbf{X}_{j_{k}}\hat{\mathbf{B}}_{j_{k}}- \hat{\mathbf{G}}^{(k-1)}\rangle\] \[\leq \tilde{\tau}d_{n}^{1/2}\xi_{E}+\frac{8m\xi_{E}^{2}}{n(1-\epsilon_ {L})},\]
which is the desired result.
[Proof of (38) and (39)] Note first that for any \(D>0\), \((D+x)/(D-x)\leq 1+3x/D\) for all \(0\leq x\leq(1-\sqrt{2/3})D\). It is not difficult to see that
\[\mathbb{P}\left\{\frac{4L_{0}\xi_{E}}{nd_{n}^{1/2}}\leq(1-\sqrt{ \frac{2}{3}})\left((nd_{n})^{-1}\|\tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k)}\|_ {F}^{2}+(nd_{n})^{-1}\|\mathbf{E}\|_{F}^{2}\right),1\leq k\leq\hat{k},\mathcal{ G}_{n}\right\}\] \[\geq \mathbb{P}\left\{\frac{4L_{0}\xi_{E}}{nd_{n}^{1/2}}\leq(1-\sqrt{ \frac{2}{3}})M^{-1}\right\}-o(1)\] \[\rightarrow 1.\]
Thus, on \(\mathcal{G}_{n}\) except for a vanishing event,
\[A_{k}\leq 1+\frac{12L_{0}\xi_{E}/(nd_{n}^{1/2})}{(nd_{n})^{-1}\|\tilde{ \mathbf{Y}}-\hat{\mathbf{G}}^{(k)}\|_{F}^{2}+(nd_{n})^{-1}\|\mathbf{E}\|_{F}^ {2}}\] \[\leq 1+12ML_{0}\frac{\xi_{E}}{nd_{n}^{1/2}},\]
for all \(1\leq k\leq\hat{k}\). This proves (38). We now turn to (39). Since for any positive \(A\) and \(B\), \(A/(B+x)\geq A(1-x/B)/B\) for all \(x\geq 0\), it follows from (36) that on \(\mathcal{G}_{n}\) except for a vanishing event,
\[B_{k}\geq \frac{\tau^{2}s_{n}^{-1}}{4L_{0}^{2}\mu^{2}}\frac{(nd_{n})^{-1}\| \tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k-1)}\|_{F}^{2}}{(nd_{n})^{-1}\|\tilde{ \mathbf{Y}}-\hat{\mathbf{G}}^{(k-1)}\|_{F}^{2}+(nd_{n})^{-1}\|\mathbf{E}\|_{F }^{2}}\] \[\times\left(1-\frac{4L_{0}\xi_{E}/(nd_{n}^{1/2})}{(nd_{n})^{-1}\| \tilde{\mathbf{Y}}-\hat{\mathbf{G}}^{(k-1)}\|_{F}^{2}+(nd_{n})^{-1}\|\mathbf{E }\|_{F}^{2}}\right)\] \[\geq \frac{\tau^{2}s_{n}^{-1}}{4L_{0}^{2}\mu^{2}}\frac{1}{1+\mu Ms_{n }}\left(1-\frac{4ML_{0}\xi_{E}}{nd_{n}^{1/2}}\right)\]
for \(1\leq k\leq\hat{k}\), which proves (39).
[Proof of (46)] Let
\[\mathbf{H}=\sum_{j\in\hat{J}}\mathbf{X}_{j}\hat{\mathbf{\Sigma}}_{j}^{-1}\mathbf{U}_{j}\mathbf{D}_{j}\mathbf{V}_{j}^{\top}\in\mathcal{B}.\]
Note that Proposition 8 and (C3) imply
\[\|\bar{\mathbf{Y}}-\mathbf{H}\|_{F}^{2}\geq n\mu^{-1}\left\{\sum_{j\in\tilde{J}_{o}}\|\hat{\mathbf{\Sigma}}_{j}^{-1} \mathbf{U}_{j}(\mathbf{L}_{j}\mathbf{\Lambda}_{j}\mathbf{R}_{j}^{\top}-\mathbf{D}_ {j})\mathbf{V}_{j}^{\top}\|_{F}^{2}+\sum_{j\in\tilde{J}-\tilde{J}_{o}}\|\hat{ \mathbf{\Sigma}}_{j}^{-1}\mathbf{U}_{j}\mathbf{D}_{j}\mathbf{V}_{j}^{\top}-\mathbf{ B}_{j}^{*}\|_{F}^{2}\right\}\] \[\geq n\mu^{-3}\left\{\sum_{j\in\tilde{J}_{o}}\|\mathbf{L}_{j}\mathbf{ \Lambda}_{j}\mathbf{R}_{j}^{\top}-\mathbf{D}_{j}\|_{F}^{2}+\sum_{j\in\tilde{J} -\tilde{J}_{o}}\|\mathbf{U}_{j}^{\top}\hat{\mathbf{\Sigma}}_{j}\mathbf{B}_{j}^{*} \mathbf{V}_{j}-\mathbf{D}_{j}\|_{F}^{2}\right\}\] \[\geq \frac{n}{\mu^{3}\kappa_{n}}\left\{\sum_{j\in\tilde{J}_{o}}\| \mathbf{L}_{j}\mathbf{\Lambda}_{j}\mathbf{R}_{j}^{\top}-\mathbf{D}_{j}\|_{*}+\sum _{j\in\tilde{J}-J_{o}}\|\mathbf{U}_{j}^{\top}\hat{\mathbf{\Sigma}}_{j}\mathbf{B}_{ j}^{*}\mathbf{V}_{j}-\mathbf{D}_{j}\|_{*}\right\}^{2}.\]
Since \(\mathbf{H}\in\mathcal{B}\), we have
\[\left\{\sum_{j\in\tilde{J}_{o}}\|\mathbf{L}_{j}\mathbf{\Lambda}_{j}\mathbf{R}_{j} ^{\top}-\mathbf{D}_{j}\|_{*}+\sum_{j\in\tilde{J}-\tilde{J}_{o}}\|\mathbf{U}_{ j}^{\top}\hat{\mathbf{\Sigma}}_{j}\mathbf{B}_{j}^{*}\mathbf{V}_{j}-\mathbf{D}_{j}\|_{*} \right\}^{2}\leq\frac{9d_{n}L_{0}^{2}}{16}=\frac{9L_{n}^{2}}{16}.\]
By the triangle inequality, we have \(\sum_{j\in\tilde{J}}\|\mathbf{D}_{j}\|_{*}\leq 3L_{n}/4+\sum_{j\in\tilde{J}_{o} }\|\mathbf{\Lambda}_{j}\|_{*}+\sum_{j\in\tilde{J}-\tilde{J}_{o}}\|\hat{\mathbf{\Sigma} }_{j}\mathbf{B}_{j}^{*}\|_{*}\).
Because of (C6), and \(\hat{J}_{o}\subset J_{o}\) (with probability tending to one), \(\sum_{j\in\tilde{J}_{o}}\|\mathbf{\Lambda}_{j}\|_{*}+\sum_{j\in\tilde{J}-\hat{J}_{o }}\|\hat{\mathbf{\Sigma}}_{j}\mathbf{B}_{j}^{*}\|_{*}\leq\sum_{j\in J}\|\hat{\mathbf{ \Sigma}}_{j}\mathbf{B}_{j}^{*}\|_{*}\leq\mu(1-\epsilon_{L})L_{n}\leq 4^{-1} \mu^{-1}L_{n}\leq L_{n}/4\). Hence \(\sum_{j\in\tilde{J}_{k}}\|\mathbf{D}_{j}\|_{*}\leq L_{n}\), which proves \(\mathbf{H}\in\mathcal{C}_{L}\).
**Proposition 13**: _Let \(\mathbf{A}^{*}\) be an \(m\times n\) matrix and \(\mathbf{A}=\mathbf{A}^{*}+\mathbf{E}\) be its perturbed version. Let \(\mathbf{U}_{*}\mathbf{\Sigma}_{*}\mathbf{V}_{*}^{\top}\) and \(\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{\top}\) be their truncated SVDs of rank \(r_{*}\), respectively. If \(\sigma_{r_{*}}(\mathbf{A}^{*}):=\sigma_{r_{*}}>\sigma_{r_{*}+1}(\mathbf{A}^{*})=0\), and if \(\|\mathbf{E}\|_{op}<\sigma_{r_{*}}\), then_
\[\max\{\mathrm{dist}(\mathbf{U}_{*},\mathbf{U}),\mathrm{dist}(\mathbf{V}_{*}, \mathbf{V})\}\leq\frac{\sqrt{2}\max\{\|\mathbf{E}^{\top}\mathbf{U}_{*}\|_{op}, \|\mathbf{E}\mathbf{V}_{*}\|_{op}\}}{\sigma_{r_{*}}-\|\mathbf{E}\|_{op}},\]
_where \(\mathrm{dist}(\mathbf{Q},\mathbf{Q}_{*})=\min_{\mathbf{R}}\|\mathbf{Q}\mathbf{R}-\mathbf{Q}_{*}\|_{op}\) for any two orthogonal matrices \(\mathbf{Q}\), \(\mathbf{Q}_{*}\) with \(r\) columns, where the minimum is taken over all \(r\times r\) orthonormal matrices \(\mathbf{R}\)._
**Remark 14**: Proposition 13 is a consequence of the perturbation bounds for singular values (Wedin, 1972). A proof can be found in Chen et al. (2021).
[Proof of (51)] Note first that
\[\bar{\mathbf{Y}}-\tilde{\mathbf{Y}}= \sum_{j\in\tilde{J}_{o}}\mathbf{X}_{j}\hat{\mathbf{\Sigma}}_{j}^{-1}( \mathbf{U}_{j}\mathbf{L}_{j}-\tilde{\mathbf{U}}_{j})\mathbf{\Lambda}_{j}\tilde{ \mathbf{V}}_{j}^{\top}\] \[+\sum_{j\in\tilde{J}_{o}}\mathbf{X}_{j}\hat{\mathbf{\Sigma}}_{j}^{-1} \mathbf{U}_{j}\mathbf{L}_{j}\mathbf{\Lambda}_{j}(\mathbf{V}_{j}\mathbf{R}_{j}- \tilde{\mathbf{V}}_{j})^{\top}.\]
By triangle inequality,
\[\|\bar{\mathbf{Y}}-\tilde{\mathbf{Y}}\|_{F}\leq \sqrt{n\mu}\left(\sum_{j\in\hat{J}_{o}}\|\mathbf{\Lambda}_{j}\|_{F }\right)\left\{\max_{j\in\hat{J}_{o}}\|\mathbf{U}_{j}\mathbf{L}_{j}-\tilde{ \mathbf{U}}_{j}\|_{op}+\max_{j\in\hat{J}_{o}}\|\mathbf{V}_{j}\mathbf{R}_{j}- \tilde{\mathbf{V}}_{j}\|_{op}\right\}. \tag{68}\]
Let \(\mathbf{U}_{j,\bar{r}_{j}}\) and \(\mathbf{V}_{j,\bar{r}_{j}}\) be sub-matrices of \(\mathbf{U}_{j}\) and \(\mathbf{V}_{j}\) consisting of the column vectors that correspond to the leading \(\bar{r}_{j}\) singular values. Write \(\mathbf{U}_{j}=(\mathbf{U}_{j,\bar{r}_{j}},\mathbf{U}_{j,-\bar{r}_{j}})\) and \(\mathbf{V}_{j}=(\mathbf{V}_{j,\bar{r}_{j}},\mathbf{V}_{j,-\bar{r}_{j}})\). Since \(\mathbf{X}_{j}^{\top}\tilde{\mathbf{Y}}=\mathbf{X}_{j}^{\top}\mathbf{Y}-\mathbf{X}_{j}^{\top}\mathbf{E}\), it follows from Proposition 13 and (C5) that there exist \(\bar{r}_{j}\times\bar{r}_{j}\) orthonormal matrices \(\tilde{\mathbf{L}}_{j}\) and \(\tilde{\mathbf{R}}_{j}\) such that with probability tending to one,
\[\max\left\{\|\mathbf{U}_{j,\bar{r}_{j}}\tilde{\mathbf{L}}_{j}- \tilde{\mathbf{U}}_{j}\|_{op},\|\mathbf{V}_{j,\bar{r}_{j}}\tilde{\mathbf{R}}_ {j}-\tilde{\mathbf{V}}_{j}\|_{op}\right\}\leq \frac{\sqrt{2}\max\{\|\mathbf{E}^{\top}\mathbf{X}_{j}\tilde{ \mathbf{U}}_{j}\|_{op},\|\mathbf{X}_{j}^{\top}\mathbf{E}\tilde{\mathbf{V}}_{j} \|_{op}\}}{n\delta_{n}-\|\mathbf{X}_{j}^{\top}\mathbf{E}\|_{op}}\] \[\leq \frac{\sqrt{2}\xi_{E}}{n\delta_{n}-\xi_{E}}.\]
Set \(\mathbf{L}_{j}^{\top}=(\tilde{\mathbf{L}}_{j}^{\top},\mathbf{0}_{\bar{r}_{j} \times(\hat{r}-\bar{r}_{j})})\) and \(\mathbf{R}_{j}^{\top}=(\tilde{\mathbf{R}}_{j}^{\top},\mathbf{0}_{\bar{r}_{j} \times(\hat{r}-\bar{r}_{j})})\) for \(j\in\hat{J}_{o}\) in (68). Then by (C4) and (C6), it follows that
\[\|\bar{\mathbf{Y}}-\tilde{\mathbf{Y}}\|_{F}^{2}\leq n\mu\left(\sum_{j\in\hat{J}_{o}}\| \mathbf{\Lambda}_{j}\|_{F}\right)^{2}\left(\frac{2\sqrt{2}\xi_{E}}{n\delta_{n }-\xi_{E}}\right)^{2}\leq 8\mu L^{2}nd_{n}\frac{\xi_{E}^{2}}{(n\delta_{n}-\xi_{E})^{2}}.\]
[Proof of Corollary 5] By Lemma 2, \(\sharp(\hat{J})+\hat{r}=O_{p}(s_{n}^{2})\). Thus running the first-stage RGA with the just-in-time stopping criterion costs
\[O_{p}(s_{n}^{2}(n_{1}+d_{n})) \tag{69}\]
bytes of communication per computing node. In addition, preparing \(\{\hat{\mathbf{\Sigma}}_{j}^{-1}:j\in\hat{J}\}\) and \((\mathbf{U}_{j},\mathbf{V}_{j})\) for \(j\in\hat{J}\) with \(q_{n,j}\wedge d_{n}>\hat{r}\) costs
\[O_{p}\left(\sum_{j\in\hat{J}}\{q_{n,j}^{2}+(q_{n,j}d_{n}+\hat{r}( q_{n,j}+d_{n}))\mathbf{1}\{q_{n,j}\wedge d_{n}>\hat{r}\}\}\right)\] \[= O_{p}(n_{1}^{2\alpha}s_{n}^{2}+n_{1}^{\alpha}d_{n}s_{n}^{2}+s_{ n}^{4}(n_{1}^{\alpha}+d_{n})). \tag{70}\]
Since the communication costs per node at the \(k\)-th iteration of the second-stage RGA are at most
\[O_{p}\left(\sum_{j\in\hat{J}}\left(\hat{r}^{2}\mathbf{1}\{q_{n,j }\wedge d_{n}>\hat{r}\}+q_{n,j}d_{n}\mathbf{1}\{q_{n,j}\wedge d_{n}\leq\hat{r} \}\right)+d_{n}k+n_{1}\right)\] \[= O_{p}\left(s_{n}^{6}+n_{1}^{\alpha}d_{n}s_{n}^{2}+d_{n}k+n_{1} \right),\]
running \(m_{n}=O_{p}(s_{n}^{4}\log(n^{2}d_{n}/\xi_{n}^{2}))\) iterations (see Theorem 3 for the definition of \(m_{n}\)) costs
\[O_{p}\left((s_{n}^{6}+s_{n}^{2}n_{1}^{\alpha}d_{n}+n_{1})s_{n}^{4}\log\frac{n^{2 }d_{n}}{\xi_{n}^{2}}+d_{n}s_{n}^{8}\left(\log\frac{n^{2}d_{n}}{\xi_{n}^{2}} \right)^{2}\right). \tag{71}\]
Combining (69)-(71) yields the desired result.
|
2306.03739 | Learning to Do or Learning While Doing: Reinforcement Learning and
Bayesian Optimisation for Online Continuous Tuning | Online tuning of real-world plants is a complex optimisation problem that
continues to require manual intervention by experienced human operators.
Autonomous tuning is a rapidly expanding field of research, where
learning-based methods, such as Reinforcement Learning-trained Optimisation
(RLO) and Bayesian optimisation (BO), hold great promise for achieving
outstanding plant performance and reducing tuning times. Which algorithm to
choose in different scenarios, however, remains an open question. Here we
present a comparative study using a routine task in a real particle accelerator
as an example, showing that RLO generally outperforms BO, but is not always the
best choice. Based on the study's results, we provide a clear set of criteria
to guide the choice of algorithm for a given tuning task. These can ease the
adoption of learning-based autonomous tuning solutions to the operation of
complex real-world plants, ultimately improving the availability and pushing
the limits of operability of these facilities, thereby enabling scientific and
engineering advancements. | Jan Kaiser, Chenran Xu, Annika Eichler, Andrea Santamaria Garcia, Oliver Stein, Erik Bründermann, Willi Kuropka, Hannes Dinter, Frank Mayet, Thomas Vinatier, Florian Burkart, Holger Schlarb | 2023-06-06T14:56:47Z | http://arxiv.org/abs/2306.03739v1 | Learning to Do or Learning While Doing: Reinforcement Learning and Bayesian Optimisation for Online Continuous Tuning
###### Abstract
Online tuning of real-world plants is a complex optimisation problem that continues to require manual intervention by experienced human operators. Autonomous tuning is a rapidly expanding field of research, where learning-based methods, such as Reinforcement Learning-trained Optimisation (RLO) and Bayesian optimisation (BO), hold great promise for achieving outstanding plant performance and reducing tuning times. Which algorithm to choose in different scenarios, however, remains an open question. Here we present a comparative study using a routine task in a real particle accelerator as an example, showing that RLO generally outperforms BO, but is not always the best choice. Based on the study's results, we provide a clear set of criteria to guide the choice of algorithm for a given tuning task. These can ease the adoption of learning-based autonomous tuning solutions to the operation of complex real-world plants, ultimately improving the availability and pushing the limits of operability of these facilities, thereby enabling scientific and engineering advancements.
## I Introduction
Complex real-world plants are instrumental in facilitating scientific and technological progress. For their successful operation, it is critical that these facilities achieve predefined performance metrics. These are reached through online tuning, i.e. the optimisation of the plant and its subsystems towards a desired system state. Tuning these systems is a challenging optimisation problem due to the non-linear and often dynamic correlations among a large number of tuning parameters. Moreover, the inherent noise in real-world measurements, the time-consuming data acquisition, and the high costs associated with system downtime make the tuning of real-world systems particularly challenging.
To date, online tuning continues to be performed manually, relying on the experience of expert human operators. This leads to suboptimal solutions that are labour intensive to attain and difficult to reproduce.
To reduce downtime and push the limits of their operational capabilities, efforts are made to develop autonomous plant tuning solutions. Existing approaches can improve optimisation results, reproducibility, and reliability for some tuning tasks, but come with their own drawbacks. For example, while grid search and random search are reliable and highly reproducible approaches, they require a large number of samples. As a result, these methods become impractical in the real world, where the cost per sample may be high. Other approaches from the field of numerical optimisation can reduce the number of required samples and have been successfully applied to tuning tasks [1, 2]. While these approaches show promising results, their performance drops as the number of tuning dimensions increases due to the so-called _curse of dimensionality_[3]. Furthermore, many of these methods are sensitive to noise [2], which is omnipresent in real-world measurements.
Learning-based methods have emerged as promising solutions capable of sample-efficient, high-dimensional optimisation under real-world conditions. Bayesian optimisation (BO) [4] is one such learning-based method, that has recently risen in popularity. In BO, the number of samples required for a successful optimisation is reduced by learning a surrogate model of the objective function at the time of optimisation. Another promising approach is _optimiser learning_[5, 6, 7, 8], where the function predicting the next sampling point is learned before the application. A powerful instance of optimiser learning is the use of a neural network that is trained via Reinforcement Learning (RL) [7, 8], allowing for the automated discovery of optimisation algorithms. In this paper, we call the resulting optimisation algorithm Reinforcement Learning-trained Optimisation (RLO). As continued optimisation of a dynamic function can be considered to be equivalent to control, we consider RLO to be equivalent to RL-based control. Both RLO and BO are very actively researched and applied to a variety of real-world plants such as particle accelerators [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22],
fusion reactors [23; 24; 25], optical and radio telescopes [26; 27; 28], chemical reactions [29], additive manufacturing [30], photovoltaic power plants [31], spacecraft [32; 33], airborne wind energy systems [34], telecommunication networks [35] and grid-interactive buildings [36; 37], amongst others. In each of these fields, both RLO and BO have achieved excellent tuning results at high sample efficiency. To the best of our knowledge, however, there is no previous work conducting a detailed comparison of RLO and BO for online continuous optimisation of real-world plants.
In this work, we study RLO and BO for tuning a subsystem of a particle accelerator and compare them in terms of the achieved optimisation result and their convergence speed. In the field of particle accelerators, both methods are gaining notable attention and have led to significant improvements [9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22]. To ensure the reliability of our results, we combine a significant number of simulations with real-world measurements. Based on the results of our study, we ascertain the advantages and disadvantages of each tuning method and identify criteria to guide the choice of algorithm for future applications.
## II Results
In this study, we consider as a benchmark a recurring beam tuning task which is ubiquitous across linear particle accelerators and frequently performed during start-up and operation mode changes, where the goal is to focus and steer the electron beam on a diagnostic screen. While this task can be very time-consuming, it is also well-defined, making it suitable as a proof-of-concept application for RLO and BO. For the study, we use the specific magnet lattice of a section of the ARES (Accelerator Research Experiment at SINBAD) particle accelerator [38; 39] at DESY in Hamburg, Germany, one of the leading accelerator centres worldwide. From here on, we refer to this section as the _accelerator section_. An illustration of the accelerator section is shown in Fig. 1. Further details on ARES and the accelerator section are given in Section IV.1. The lattice of the accelerator section is in downstream order composed of two quadrupole focusing magnets \(Q_{1}\) and \(Q_{2}\), a vertical steering magnet \(C_{v}\), a third quadrupole focusing magnet \(Q_{3}\), and a horizontal steering magnet \(C_{h}\). Downstream of the magnets there is a diagnostic screen capturing a transverse image of the electron beam. A Gaussian distribution \(\mathbf{b}=(\mu_{x},\sigma_{x},\mu_{y},\sigma_{y})\) is fitted to the observed image, where \(\mu_{x,y}\) denote the transverse beam positions and \(\sigma_{x,y}\) denote the transverse beam sizes. The goal of the tuning task is to adjust the quadrupole magnets' field strengths \(k\) and steering magnets' steering angles \(\alpha\) to achieve a target beam \(\mathbf{b}^{\prime}\) chosen by a human operator. We denote the _actuators_, here the magnet settings to be changed by the algorithm, as \(\mathbf{u}=(k_{Q_{1}},k_{Q_{2}},\alpha_{C_{v}},k_{Q_{3}},\alpha_{C_{h}})\). The optimisation problem can be formalised as minimising the objective
\[\min O\left(\mathbf{u}\right)=\min D\left(\mathbf{b},\mathbf{b}^{\prime}\right), \tag{1}\]
which for the benchmark tuning task is defined as the difference \(D\) between the target beam \(\mathbf{b}^{\prime}\) and the observed beam \(\mathbf{b}\). The observed beam \(\mathbf{b}\) is determined by the beam dynamics, which depend on the actuators \(\mathbf{u}\), and environmental factors, such as the magnet misalignments and the incoming beam to the accelerator section. Together with the target beam \(\mathbf{b}^{\prime}\), these define the _state_ of the environment. With most real-world tuning tasks, not all of the state can be observed, i.e. it is _partially observable_. In the case of the benchmark task, the magnet misalignments and the incoming beam cannot be easily measured or controlled, and are therefore part of the environment's hidden state. As a measure of difference between the observed beam \(\mathbf{b}\) and the target beam \(\mathbf{b}^{\prime}\), we use the mean absolute error (MAE) defined as
\[D_{\mathrm{MAE}}(\mathbf{b},\mathbf{b}^{\prime})=\frac{1}{4}\sum_{i=1}^{4}\left|\mathbf{b}^{(i)}-\mathbf{b}^{\prime(i)}\right|, \tag{2}\]
i.e. the mean of the absolute value of the beam parameter differences over all four beam parameters, where \(\mathbf{b}^{(i)}\) denotes the \(i\)-th element of \(\mathbf{b}\).
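As a concrete illustration of this difference measure, a minimal Python sketch of Eq. (2) could look as follows; the numerical values are made up for the example and are not taken from the study:

```python
import numpy as np

def beam_difference_mae(b, b_prime):
    """Mean absolute error between observed and target beam parameters, Eq. (2).

    Both arguments are ordered as (mu_x, sigma_x, mu_y, sigma_y), in metres.
    """
    return float(np.mean(np.abs(np.asarray(b) - np.asarray(b_prime))))

# Example with illustrative values in metres.
b_observed = np.array([1.2e-3, 0.4e-3, -0.8e-3, 0.3e-3])
b_target = np.array([1.0e-3, 0.1e-3, -1.0e-3, 0.1e-3])
print(beam_difference_mae(b_observed, b_target))  # 2.25e-4, i.e. 225 um average difference
```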
For this study, an RLO policy was trained according to previous work [15] and as described in Section IV.4. An implementation of BO with a Gaussian process (GP) model [40], detailed in Section IV.5, was specially designed for this study. In addition to the studied RLO and BO solutions, we consider random search and Nelder-Mead Simplex optimisation [41] as baselines for randomised and heuristic optimisation algorithms. They are presented in Sections IV.6 and IV.7, respectively.
### Simulation study
For the simulation study, we consider a fixed set of 300 randomly generated environment states, each defined by a target beam \(\mathbf{b}^{\prime}\), an incoming beam \(I\) entering the accelerator section from upstream, and transverse misalignments of
the quadrupole magnets and the diagnostic screen \(M\). We refer to these instances of the environment state as _trials_, defined in Eq. (3). The results of the simulation study over RLO and BO, as well as the two baseline algorithms, random search and Nelder-Mead Simplex, are summarised in Table 1.
We find that the learning-based algorithms RLO and BO outperform both baselines in terms of the optimisation result, achieving a final beam difference \(D\) at least 6 times smaller. Furthermore, RLO achieves a median final beam difference \(D\) of \(4\,\mathrm{\SIUnitSymbolMicro m}\), which is more than an order of magnitude smaller than the one achieved by BO. The final beam difference achieved by RLO is smaller than the one achieved by BO in \(96\,\mathrm{\char 37}\) of the trials. Note that the final beam difference achieved by RLO is smaller than the measurement accuracy \(\epsilon=20\,\mathrm{\SIUnitSymbolMicro m}\) of the real-world diagnostic screen.
Based on \(\epsilon\), we construct two metrics to measure the optimisation speed. We define _steps to target_ as the number of steps until the observed beam parameters differ less than an average of \(\epsilon\) from the target beam parameters, and _steps to convergence_ as the number of steps after which the average of the beam parameters never changes by more than \(\epsilon\). We observe that RLO always converges and manages to do so close to the target in \(88\,\mathrm{\char 37}\) of trials. BO also converges on almost all trials, but only does so close to the target in \(12\,\mathrm{\char 37}\) of trials, taking about 4 times longer to do so. Figure 2 indicates why: BO explores the optimisation space instead of fully converging toward the target beam. It is possible to suppress this behaviour by using an acquisition function that favours exploitation, but our experiments have shown that such acquisition functions do not perform well with noisy objective functions. If a sample of the objective value was too high as a result of noise, the surrogate model is likely to overestimate the objective near that sample, causing BO to get stuck instead of finding the true optimum. We further observe that RLO converges more smoothly than BO. While this has little effect in simulation, in the real world, smooth convergence has various advantages like limiting wear on the actuators. In the particle accelerator benchmark, smoother convergence limits the effects of magnet hysteresis, an effect where the ferromagnetic core of an electromagnet retains some magnetisation when the current in its coils is removed, reduced, or reversed. As a result of such effects, the objective function may become noisy or even shift, which is why avoiding them through smooth actuator changes generally stabilises the
Figure 1: **Simplified 3D illustration of the considered section of the ARES particle accelerator.** This section consists of three quadrupole magnets and two steering magnets, followed by a diagnostic screen. The measured beam \(\mathbf{b}\) and the desired beam \(\mathbf{b}^{\prime}\) are provided to the algorithm performing the tuning. In the case of BO, they are used to compute the objective. In the case of RL, they are provided along with the magnet settings as input to the policy and are used to calculate the reward. Both algorithms output either the next settings to the magnets \(\mathbf{u}\) or a change to the magnets \(\Delta\mathbf{u}\).
objective function and improves reproducibility.
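For concreteness, one possible reading of the two convergence metrics defined above, _steps to target_ and _steps to convergence_, is sketched below in Python; the function and variable names are illustrative and not taken from the study's code:

```python
import numpy as np

EPS = 20e-6  # measurement accuracy of the diagnostic screen in metres

def steps_to_target(mae_per_step):
    """First step at which the mean beam parameter difference drops below EPS (-1 if never)."""
    hits = np.flatnonzero(np.asarray(mae_per_step) < EPS)
    return int(hits[0]) if hits.size else -1

def steps_to_convergence(beams_per_step):
    """First step after which the mean change of the beam parameters between
    consecutive steps never exceeds EPS again."""
    beams = np.asarray(beams_per_step)                        # shape (steps, 4)
    changes = np.mean(np.abs(np.diff(beams, axis=0)), axis=1)
    still_moving = np.flatnonzero(changes > EPS)
    return int(still_moving[-1] + 1) if still_moving.size else 0
```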
### Real-world study
In order to evaluate the methods' ability to transfer to the real world and to verify the results obtained in simulation, we also studied the performance of RLO and BO on the ARES particle accelerator. This part of the study is crucial, as even with accurate simulations, the gap between simulation and the real world is often wide enough that algorithms performing well in simulation cannot be transferred to the real plant [42]. We observed this gap between simulation and experiment in the early stages of training the RL policy, where trained policies performed well in simulation but failed altogether in the real world. Similarly, when implementing BO for the tuning task, implementations tuned for exploitation showed faster and better optimisation in simulation but failed during the experiment under real-world conditions.
Given the limited availability of the real accelerator, we considered 22 trials of the 300 used for the simulation study. The magnet misalignments and the incoming beam on the real accelerator can neither be influenced nor measured during the experiment, so they were considered unknown variables. Before every measurement shift, the incoming beam was aligned with the centres of the quadrupole magnets in order to reduce dipole moments induced when the beam passes through a quadrupole magnet off-centre, which can steer the beam too far off the screen. This adjustment is needed for BO to find an objective signal in a reasonable time. In Section II.5, we investigate how the alignment, or the lack thereof, affects the results of this study. The results of the real-world measurements are listed in Table 1. Two example optimisations by RLO and BO on the real accelerator are shown in Fig. 3. On the real particle accelerator, just like in the simulation study, we observe that RLO achieves both a better tuning result and faster convergence than BO. This time, RLO outperforms BO on 13 of 22 trials. The gap between the two, however, is not as pronounced in the real world. While all three performance metrics of BO are almost identical between the real world and simulation, the performance of RLO appears to degrade. This is partially due to the measurement accuracy now limiting the achievable beam difference, with the result of RLO being only slightly larger than \(\epsilon\) at \(24.46\,\mathrm{\SIUnitSymbolMicro m}\). The degradation of RLO performance may, however, also be an indication that despite the use of domain randomisation the RL policy has slightly overfitted on the simulation. BO does not suffer from this issue as it learns at application time. Note also that to use the available machine study time most effectively, both algorithms were given fewer steps on the real accelerator than in simulation, and that BO was given more steps than RLO.
Figure 2: **Beam difference over time for different optimisation algorithms.** The mean beam difference as the MAE of the beam parameters to the target beam is shown by the solid and dashed lines. The envelopes show the \(95\,\mathrm{\char 37}\) confidence intervals of the beam differences. **a** shows the beam differences as measured at each step. **b** shows the best beam differences encountered up to each step, i.e. the beam differences that one would return to if the optimisation was terminated in the respective step. Note that on the real plant, this is an estimate, as the beam difference may not be exactly the same for the same set of actuator settings at different times.
### Sim2real transfer
The transfer of a method, that works well in a simulation environment, to the real-world is a large part of developing tuning algorithms for facilities such as particle accelerators. The challenges posed by this so-called _sim2real_ transfer impact the choice of tuning algorithm.
Successfully transferring a policy trained for RLO to the real ARES accelerator involved a number of engineering decisions detailed in previous work [15] and in Section IV.4. While some of the design choices, such as inferring changes to the actuator settings instead of the actuator settings directly, can be applied to other tuning tasks with relative ease, others, such as domain randomisation [43; 44], require specialised engineering for each considered tuning task. Furthermore, all of these require time-consuming fine-tuning to actually achieve a successful zero-shot sim2real transfer of a policy trained only in simulation. This is illustrated by the fact that many of the policies trained before the one studied here, performed excellently in simulation while sometimes not working at all on the real ARES accelerator.
On the other hand, BO transfers to the real world with relatively little effort. Once it was sorted out how to best deal with faulty measurements, further discussed in Section II.5, most iterations of the BO implementation performed about as well on the real accelerator as they did in simulation. Only some more specialised design decisions, such as tuning the acquisition function strongly towards exploitation, did not transfer as well. The easier sim2real transfer of BO is likely owing to the fact that the GP model is learned entirely on the real plant and therefore will not overfit to a different objective function that deviates from the one under optimisation.
Figure 3: **Example optimisations on the real particle accelerator.****a,c,e,g** show one optimisation with RLO. **b,d,f,h** show one optimisation with BO. **a,b** show the steerer settings and **c,d** show the quadrupole magnet settings. **e,f** show the beam positions and **g,h** show the beam sizes. **i,j** show the beam images before and after the optimisation respectively. The target beam size and position are indicated with dashed lines.
One issue may arise when transferring BO or random search from simulation to the real plant. RLO naturally converges toward an optimum and then stays there, so that if the optimisation is ended at any time, the environment's state is at, or at least close to, the best-seen position. Algorithms like BO and random search, by contrast, are likely to explore further after finding the optimum. It is therefore necessary to return to the best-seen input when the optimisation is terminated. In simulation, this strategy will recover the same objective value, but real-world objective functions are noisy and not always perfectly stationary, e.g. due to slow thermal drifts. As a result, effects such as magnet hysteresis on particle accelerators may shift the objective value when returning to a previously seen point in the optimisation space. In the benchmark tuning task, we experience noisy measurements and magnet hysteresis. We found that for the studied BO trials, the final beam error deviated by a median of \(11\,\mathrm{\SIUnitSymbolMicro m}\) and a maximum of \(42\,\mathrm{\SIUnitSymbolMicro m}\). This means that the deviation is usually smaller than the measurement accuracy \(\epsilon\). At least for the benchmark task, the effect is therefore non-negligible, but also not detrimental to the performance of the tuning algorithms.
### Inference times
The time it takes to infer the next set of actuator settings may also influence the algorithm choice. For the benchmark task, the inference time happens to be negligible, because our benchmarked physical system, specifically the magnets and the beam measurement, is orders of magnitude slower than the inference time. At other facilities, where the physical process takes less time, the time taken for tuning may be dominated by the inference time of the tuning algorithm and there might even be real-time requirements [45].
We measure the average inference times of both algorithms over the \(45\,000\) inferences of the simulation study using a MacBook Pro with an M1 Pro chip running Python 3.9.15. We observe that BO takes an average of \(0.7\,\mathrm{s}\) to infer the next actuator settings, while RLO is more than three orders of magnitude faster at \(0.0002\,\mathrm{s}\). This is because the RLO policy requires only one forward pass of the multilayer perceptron (MLP) with a complexity of \(O(1)\) with respect to the steps taken. By contrast, in each BO inference step, a full optimisation of the acquisition function is performed. This involves inferences with the GP model with complexity \(O(n^{3})\), scaling with the number of steps taken \(n\). Note that the RLO inference can be significantly sped up by using specialised hardware [46].
### Robustness in the presence of sensor blind spots
In any real system, it is possible to encounter states where the available diagnostics deliver false or inaccurate readings, causing erroneous objective values and observations. Transitions to these states can be caused by external factors as well as the tuning algorithm itself. A good tuning algorithm should therefore be able to recover from these states. In the benchmark tuning task, an erroneous measurement occurs when the electron beam is not visible on the diagnostic screen within the camera's field of view. In this case, the beam parameters computed from the diagnostic screen image are false, also resulting in a faulty objective value.
We observed that when the beam is not properly observed, RLO can usually recover it in just a few steps. Presumably, the policy can leverage its experience from training to make an educated guess on the beam's position based on the magnet settings even though faulty beam measurements were not part of its training, where RLO always had access to the correct beam parameters.
In contrast, BO struggles to recover the beam when it is off-screen, as the GP model is learned at application time from faulty observations, resulting in faulty predictions of the objective and acquisition functions. When defining the task's objective function as only a difference measure from the current to the target beam, falsely good objective values are predicted in the blind spot region of the actuator space and BO converges towards their locations. Our implementation, as described in Section IV.5, alleviates this issue by introducing a constant punishment to the objective function when no beam is detected in the camera's field of view. Nevertheless, the lack of information about the objective function's topology results in BO taking many arbitrary steps before the beam is by chance detected and the optimisation starts progressing towards the target beam. While more comprehensive diagnostics can help solve this problem, these are often not available.
Because of BO's insufficient ability to recover from a system state in which there is no informative objective signal, the presented measurements on the real accelerator were taken with the beam aligned to the quadrupole magnets. As a result, the additional dipole moments induced by the quadrupole magnets when increasing the magnitude of their focusing strength are kept minimal, reducing the chance that the beam leaves the camera's field of view during the initial step of the optimisation. As this alignment would not be performed during nominal operation but may change the observed performance of both algorithms, a study was performed in simulation in order to understand how to interpret the reported results given that the beam was aligned to the centres of the quadrupole magnets before
the optimisation. Both algorithms are evaluated over the same 300 trials as in Section II.1. Unlike in the original simulation study, we also simulate erroneous beam parameter measurements when the beam position is detected outside the camera's field of view. Both algorithms are tested once with the original incoming beam and once with an incoming beam that was previously aligned to the quadrupole magnets. The results are reported in Table 1. We conclude that the reported results on the real particle accelerator would be expected to worsen by about \(5\,\%\) to \(33\,\%\) for RLO and by \(12\,\%\) to \(121\,\%\) for BO if the electron beam had not been aligned to the centres of the quadrupole magnets at the beginning of measurement shifts. This does not change how both algorithms compare to each other.
### Failure modes
With tuning algorithms that are intended to be deployed without supervision to enable the autonomous operation of complex plants, it is important to understand how they might fail. We observe that over the entirety of this study, neither RLO nor BO ever produced a final beam that was worse than the beam before the optimisation. Instead, both algorithms clearly improve the beam in most trials, with only a few trials being outliers where the objective was only slightly improved. It was not possible to identify for either RLO or BO a cause for these outliers. Most likely, they are stochastic in nature, owing to the stochastic components of either algorithm. That is, the RLO policy presumably did not gain enough experience in some regions of the state space because they were not explored as much during training. Similarly, BO may be at a disadvantage when the randomly chosen initial samples are unfavourable. We performed grid scans over target beams for both algorithms in simulation to confirm this through the presence of outliers in random locations of the target beam space. They further show that both algorithms perform worse when tasked with tuning towards a large beam than towards a small beam, though this effect is subtle compared to the outliers. The root cause of this observation is presumably the initialisation of the magnets in a focus-defocus-focus (FDF) pattern at the beginning of each optimisation with both RLO and BO. While creating a performance deficit for certain beams compared to others, this initialisation improves the overall performance of both algorithms.
There are two further failure modes that should be discussed for both algorithms. RLO can in rare cases enter an unstable state, in which the policy outputs oscillating actuator settings. These result in the beam parameters oscillating around the target. The cause of these oscillations is yet unknown. We note that the oscillations are also produced by policies trained with different random seeds. BO may fail seriously when the beam is far away instead of just slightly off the screen before the start of the optimisation. In such a case, it can take a long time before the beam is randomly moved into the visible area of the diagnostic screen.
### Running as a feedback
Real-world plants may be subject to drifts caused by unmodelled external factors. Moreover, control can be regarded as the continuous optimisation of a dynamic objective function. Consequently, a tuning algorithm that can run as feedback on a dynamic objective function can be used for drift compensation and control in addition to tuning. Thus, a tuning method's ability to operate as a feedback is an interesting subject of further investigation and could impact algorithm selection.
While BO assumes a static objective function, the benchmarked RLO policy does not rely on memorising previously seen objective values or could alternatively learn to adapt to dynamic objective functions during training. It should therefore be possible to use the policy from RLO as an RL-based feedback controller for a dynamic system. To test this, we ran an optimisation with both methods for 80 steps. After 40 steps, we introduce an instant step-to-step change to the incoming beam, changing the latter such that a different set of actuator settings is required to achieve the same beam on the diagnostic screen. We then observe how the RL policy and our BO implementation react to the upstream change. If the method manages to recover the machine state, it can be considered capable of running as feedback. As can be seen in Table 2 and Fig. 4, the RL-based controller can in fact recover the target beam in about the same time it took to perform the original optimisation, with the final beam difference being comparable to the one achieved in optimisation with a static incoming beam. The beam achieved by BO when the beam instantly changes during the optimisation is, as expected, significantly worse than it is with a constant beam. After the incoming beam changed, the GP model based on the previous 40 samples is no longer correct, effectively breaking BO.
However, the system changes that feedbacks need to react to are not always fast. Often, they occur slowly over time, such that the controller must track the change in order to hold the system near the desired state after attaining it. We therefore also evaluate the RLO policy as a controller and BO in a setup where the incoming beam changes linearly over the course of 80 steps. The results are listed in Table 2. We can see in Fig. 4 that the RL policy is capable of tracking the target beam parameters after attaining them. The reasonably small increase in final MAE can primarily
be explained by the fact that the policy requires a few steps to converge on the desired beam parameters, but in the final step only a single step has passed since the last change to the incoming beam, therefore giving the policy only very little time to correct for the change. As with the instant incoming beam change, BO is not capable of tracking the desired beam parameters. As the incoming beam cannot be included in the GP-model, the learned surrogate is ill-defined, tracking a dynamically changing objective function with a static model. As a result BO optimises an objective function that diverges from the true objective function of the system.
It needs to be mentioned that the slow drifts of the underlying objective function, like the temperature drift of the magnets, can be tackled by adaptive BO with a spatiotemporal GP model [47] or contextual BO [48]. This would require, however, problem-specific implementation and additional engineering effort.
### Robustness to actuator failure
In real-world plants, one also has to deal with the potential failure of components such as the actuators used for tuning. It would therefore be beneficial if a tuning algorithm could handle such an actuator failure and recover the previous state.
We evaluate RLO's and BO's ability to handle both a permanent actuator failure, where the actuator has failed some time before the tuning algorithm was started, and a delayed actuator failure, where the actuator is operational when the tuning starts but fails at a later time during the tuning. Specifically, we simulate the failure of the third quadrupole magnet in the benchmark task, assuming that the magnet's power supply has failed and its quadrupole strength is permanently set to \(0\,\mathrm{m}^{-2}\) after the initial failure. We assume that a failed actuator provides a correct readback to the optimisation algorithm. Table 2 lists the results. We observe that RLO handles actuator failure well, despite never being trained to do so. When the magnet has failed before the start of the optimisation, RLO finds an optimum that is almost as good as it would be without the magnet failure, without using the failed magnet. RLO can recover the beam's state when the magnet fails during the optimisation. An example of RLO reacting to an actuator failure during tuning is shown in Fig. 5. BO even improves in performance, as a failed magnet reduces the dimensions of the search space, but despite this, it performs worse than RLO.
Figure 4: **RLO and BO optimisers running as feedbacks in simulation.****a**-**d**, RLO and BO reacting to an instant change of the incoming beam at step 40, denoted by the vertical dotted lines. **e**-**h**, the optimisers tracking the optimum with respect to a continuously changing incoming beam. **a,b,e,f** show the evolution of the beam positions \(\{\mu_{x},\mu_{y}\}\), and **c,d,g,h** show the beam sizes \(\{\sigma_{x},\sigma_{y}\}\). The horizontal dashed lines denote the target beam parameters respectively.
## III Discussion
The results of our study show that both learning-based optimisation algorithms RLO and BO clearly outperform the baseline methods Nelder-Mead Simplex optimisation and random search. Furthermore, the results indicate that in most cases, RLO is the superior learning-based optimisation method, thanks to its ability to utilise experience acquired before the application time. Nevertheless, BO proves to be a promising alternative for online continuous tuning of complex real-world plants due to its versatility as a black-box optimisation method. In Fig. 6 we illustrate, how both learning-based algorithms relate to each other and the two investigated baselines in terms of different design aspects.
RLO primarily outperforms BO in that it is capable of converging towards the desired plant state faster than BO and closer to the desired state. Furthermore, RLO was found to be more capable of dealing with many of the challenges encountered when working with real-world plants. When presented with false sensor readings, RLO recovers faster than BO. Furthermore, RLO does not continue exploring once the optimum is found, eliminating the problems associated with recovering previously seen objective values in real-world systems. In addition, a trained RLO agent requires no setting of hyperparameters or similar at application time and can therefore be used as a one-click solution by anyone without requiring RL expertise from the user and promising reproducible results. This is in contrast to BO, which is likely to need small hyperparameter adjustments for different instances of the same tuning tasks, thus requiring that a user brings at least some understanding of the chosen BO implementation and its hyperparameters. While not the main focus of this study, policies trained via RL as optimisers may also be used without retraining as controllers, being able to both reach the optimum in a static system and track the optimum in a dynamic system.
The main advantage of BO is the relatively small engineering effort required to deploy it successfully. BO algorithms that adapt hyperparameters automatically during the actual optimisation can be implemented easily and require relatively little hyperparameter tuning between different tuning tasks. In contrast, RLO requires substantial engineering efforts by both RL and domain experts, who must develop a suitable training setup and overcome the sim2real transfer problem. In addition, we observed that both RLO and BO can deal with unexpected situations like actuator failures, thus being robust tuning methods for real-world applications.
The choice of tuning algorithm depends primarily on how much and how often a tuning algorithm is going to be used, and whether the final tuning result and the time saved by a fast tuning algorithm are worth the associated engineering effort. We find that RLO is the overall more capable and faster optimiser, but requires significant upfront engineering. It is therefore better suited to regularly performed tasks, where better tuning results and faster tuning justify the initial investment. For tasks that are only performed a few times, for example on rare occasions during operation or during the commissioning of a system, the engineering effort associated with RLO may not be justified.
Figure 5: **RLO reacting to a simulated actuator failure during the optimisation.** The third quadrupole magnet fails in step 40, denoted by the vertical dotted lines. **a** shows the normalised steerer settings and **b** shows the normalised quadrupole strengths, where, when Q3 fails, the strength of the other focusing quadrupole magnet Q2 is quickly increased and the horizontal steering magnet is used to counter the change in beam position that results from the changing dipole moments induced by the quadrupole magnets. **c** shows the beam positions \(\{\mu_{x},\mu_{y}\}\) and **d** shows the beam sizes \(\{\sigma_{x},\sigma_{y}\}\). The dashed lines denote the target beam parameters respectively.
Our study has shown that BO, despite not performing as well as RLO, is a valid alternative in such cases.
## IV Methods
To collect the data presented in this study, evaluation runs of RLO and BO as well as the baseline methods of Nelder-Mead Simplex and random search were run in simulation and on a real particle accelerator. The following sections introduce the real-world plant used for our study, our experimental setups, and the optimisation algorithms.
### ARES particle accelerator section
The ARES (Accelerator Research Experiment at SINBAD) particle accelerator [38; 39], located at Deutsches Elektronen-Synchrotron DESY in Hamburg, Germany, is an S-band radio frequency linac that features a photoinjector and two independently driven travelling wave accelerating structures. These structures can operate at energies up to \(154\,\mathrm{MeV}\). The primary research focus of ARES is to produce and study sub-femtosecond electron bunches at relativistic energies. The ability to generate such short bunches is of great interest for applications such as radiation generation by free electron lasers. ARES is also used for accelerator component research and development as well as medical applications.
The accelerator section, known as the _Experimental Area_, is a subsection of ARES, shown in Fig. 7 and made up of two quadrupole magnets, followed by a vertical steering magnet that is followed by another quadrupole magnet and a horizontal steering magnet. Downstream of the five magnets, there is a scintillating diagnostic screen observed by a camera. The power supplies of all magnets can be switched in polarity. The quadrupole magnets can be actuated up to a field strength of \(72\,\mathrm{m}^{-2}\). The limit of the steering magnets is \(6.2\,\mathrm{mrad}\). The camera observes an area of about \(8\,\mathrm{mm}\) by \(5\,\mathrm{mm}\) at a resolution of 2448 by 2040 pixels. The effective resolution of the scintillating screen is ca. \(20\,\mathrm{\SIUnitSymbolMicro m}\).
At the downstream end of the section, there is an experimental chamber. This section is regularly used to tune the beam to the specifications required in the experimental chamber or further downstream in the ARES accelerator.
### Simulation evaluation setup
In the simulation, a fixed set of 300 randomly generated trials was used to compare the different optimisation algorithms. Each trial is a tuple
\[(\mathbf{b}^{\prime},M,I) \tag{3}\]
of the target beam \(\mathbf{b}^{\prime}\) that we wish to observe on the diagnostic screen, the misalignments of the quadrupole magnets and the diagnostic screen \(M\), as well as the incoming beam \(I\) entering the accelerator section. The target beam was generated in a range of \(\pm 2\,\mathrm{mm}\) for \(\mu_{x}\) and \(\mu_{y}\), and \(0\,\mathrm{mm}\) to \(2\,\mathrm{mm}\) for \(\sigma_{x}\) and \(\sigma_{y}\). These ranges were chosen to cover a wide range of measurable target beam parameters, which are constrained by the dimensions of the diagnostic screen.
Figure 6: **Design space for large-scale facility tuning algorithms.** Shows qualitative metrics of comparison for all algorithms considered in our study relative to each other and may aid the decision-making process for choosing one of these algorithms based on criteria specific to the desired application.
The incoming beam \(I\) is randomly generated to represent samples from the actual operating range of the real-word accelerator. Both incoming beam and misalignment ranges were chosen to be larger than their estimated ranges present in the real machine.
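A minimal Python sketch of how such a trial could be sampled is shown below; the target beam ranges follow the text, while the misalignment and incoming-beam ranges shown here are illustrative placeholders rather than the values used in the study:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_trial():
    """Draw one evaluation trial (b', M, I) as in Eq. (3)."""
    target_beam = np.array([
        rng.uniform(-2e-3, 2e-3),  # mu_x in m
        rng.uniform(0.0, 2e-3),    # sigma_x in m
        rng.uniform(-2e-3, 2e-3),  # mu_y in m
        rng.uniform(0.0, 2e-3),    # sigma_y in m
    ])
    # Transverse misalignments of the three quadrupoles and the screen (x and y);
    # the +/- 0.4 mm range is an assumption for illustration.
    misalignments = rng.uniform(-4e-4, 4e-4, size=8)
    # A few incoming-beam parameters; ranges are again illustrative.
    incoming_beam = {
        "mu_x": rng.uniform(-1e-3, 1e-3),
        "mu_y": rng.uniform(-1e-3, 1e-3),
        "sigma_x": rng.uniform(1e-5, 5e-4),
        "sigma_y": rng.uniform(1e-5, 5e-4),
    }
    return target_beam, misalignments, incoming_beam
```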
### Real-world evaluation setup
In the real world, the overall machine state was set to an arbitrary normal machine state, usually by leaving it as it was left from previous experiments. This should give a good spread over reasonable working points. The target beams were taken from the trial set used for the simulation study. As the incoming beam and misalignments cannot be influenced in the real world in the same way they can be in simulation, they are left as they are on the real accelerator and considered unknown. Experiments on the real accelerator were conducted on 9 different days over the course of 82 days, running at charges between \(2.6\,\mathrm{pC}\) and \(29.9\,\mathrm{pC}\), and an energy of \(154\,\mathrm{MeV}\). To ensure a fair comparison of the tuning methods, we align the beam to the quadrupole magnets at the beginning of each measurement day. This ensures that the beam remains within the camera's field of view on the diagnostic screen in the initial step, which is also a common operating condition of the accelerator. This reduces the dipole moments produced when increasing the strength of the quadrupole magnets and therefore reduces the likelihood of the beam being steered past the camera's field of view on the diagnostic screen in the very first step when BO changes the quadrupole strengths. The alignment is not necessarily needed for the RLO as it can recover the beam back into the diagnostic screen camera's field of view despite receiving erroneous observations.
Transferability of the experiments between simulation and the real world, as well as between RLO and black-box optimisation, was achieved through a combination of OpenAI Gym [49] environments, an overview of which is shown in Fig. 8. Two different environments were created based on a common parent environment defining the logic of the beam tuning task. One wraps around the _Cheetah_ simulation code [50], allowing for fast training and evaluation. The other environment interfaces with the accelerator's control system. Crucially, both environments present the same interface, meaning that any solution can easily be transferred between the two. While Gym environments are primarily designed for RL policies to interact with their task, the ones used for this work were made configurable in such a way that they can also be interfaced with a BO optimisation. This includes configurable reward formulations and action types that pass actuator settings to the step method and have the latter return an objective value via the reward field of the step method's return tuple.
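The following Python skeleton sketches the idea of this shared interface; the class and method names are illustrative and do not reproduce the actual repository code, and the objective shown here is the simple logarithmic difference measure without the off-screen handling described in Section II.5:

```python
import gym
import numpy as np

class TransverseTuningBaseEnv(gym.Env):
    """Sketch of the shared environment interface (names are illustrative).

    Subclasses only override how magnets are set and how the beam is read back,
    so an agent or optimiser written against this interface runs unchanged in
    simulation (Cheetah) and on the real machine (control system).
    """

    def __init__(self, target_beam):
        self.target_beam = np.asarray(target_beam, dtype=np.float32)
        self.action_space = gym.spaces.Box(low=-1.0, high=1.0, shape=(5,))
        self.observation_space = gym.spaces.Box(low=-np.inf, high=np.inf, shape=(13,))

    # Backend-specific hooks, overridden by the simulation and machine environments.
    def _set_magnets(self, settings):
        raise NotImplementedError

    def _read_beam(self):
        """Return the measured (mu_x, sigma_x, mu_y, sigma_y)."""
        raise NotImplementedError

    # Common logic shared by both backends.
    def reset(self):
        beam = self._read_beam()
        return np.concatenate([beam, np.zeros(5, dtype=np.float32), self.target_beam])

    def step(self, action):
        self._set_magnets(action)
        beam = self._read_beam()
        objective = -float(np.log(np.mean(np.abs(beam - self.target_beam))))
        observation = np.concatenate([beam, action, self.target_beam])
        return observation, objective, False, {}
```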
Figure 7: **Considered accelerator section at ARES.** The electron beam travels downstream from left to right. The five magnets actuated by the optimisation algorithms and the diagnostic screen station are marked.
### Reinforcement learning
The RLO implementation used for this study has been introduced in previous work [15]. In this case, an MLP with two hidden layers of 64 nodes each is used as a policy, observing as input the currently measured beam parameters on the diagnostic screen \(\mathbf{b}=(\mu_{x},\sigma_{x},\mu_{y},\sigma_{y})\), the currently set field strengths and deflection angles of the magnets \(\mathbf{u}=(k_{Q_{1}},k_{Q_{2}},\alpha_{C_{v}},k_{Q_{3}},\alpha_{C_{h}})\) and the desired beam parameters \(\mathbf{b}^{\prime}=(\mu_{x},\sigma_{x},\mu_{y},\sigma_{y})\) set by the human operator. The policy then outputs changes to the magnet settings \(\mathbf{a}_{t}=\Delta\mathbf{u}\). A normalisation of rewards and observations using a running mean and standard deviation is performed over the training. The outputs are normalised to 0.1 times the magnet ranges of \(\pm 30\,\mathrm{m}^{-2}\) for the quadrupole magnets and \(\pm 2\,\mathrm{mrad}\) for the steering magnets. During training and application, optimisations are started from a fixed FDF setting of the quadrupole triplet, with the strengths \((k_{Q_{1}},k_{Q_{2}},k_{Q_{3}})=(10,-10,10)\,\mathrm{m}^{-2}\) and both steering magnets set to \(0\,\mathrm{mrad}\). The policy is trained for \(6\,000\,000\) steps using the Twin Delayed DDPG (TD3) [51] algorithm as implemented by the _Stable Baselines3_ [52] package. Training is run in a simulation provided by the _Cheetah_ [50] particle tracking code, as limited availability makes training on the real particle accelerator infeasible. Domain randomisation [43] is performed during training. Specifically, the magnet and screen misalignments, the incoming beam and the target beam are randomly sampled from a uniform distribution for each episode. The reward function used for training is
\[R\left(\mathbf{s}_{t},\mathbf{a}_{t}\right)=\begin{cases}\hat{R}\left(\mathbf{s}_{t},\mathbf{a }_{t}\right)&\text{if }\hat{R}\left(\mathbf{s}_{t},\mathbf{a}_{t}\right)>0\\ 2\cdot\hat{R}\left(\mathbf{s}_{t},\mathbf{a}_{t}\right)&\text{otherwise}\end{cases} \tag{4}\]
with \(\hat{R}\left(\mathbf{s}_{t},\mathbf{a}_{t}\right)=O\left(\mathbf{u}_{t}\right)-O\left(\bm {u}_{t+1}\right)\) and \(O\left(\mathbf{u}_{t}\right)\) being the natural logarithm of the weighted MAE between observed and target beam on the diagnostic screen. The trained policy is deployed zero-shot, i.e. without any further training or fine tuning, to the real world.
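A sketch of how such a training run could be set up with _Stable Baselines3_ is given below; it assumes the illustrative `ToySimulationEnv` from the environment sketch above, and only the settings stated in the text (two 64-node hidden layers, TD3, running observation and reward normalisation, \(6\,000\,000\) steps) are taken from this work, with everything else left at library defaults.

```python
# Training-loop sketch; ToySimulationEnv is the illustrative environment from
# the previous listing. Only the stated settings (two 64-node hidden layers,
# TD3 from Stable Baselines3, running obs/reward normalisation, 6e6 steps)
# come from the text; everything else is left at library defaults.
from stable_baselines3 import TD3
from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize


def make_env():
    return ToySimulationEnv(target_beam=[0.0, 5e-5, 0.0, 5e-5])  # target beam is illustrative


env = DummyVecEnv([make_env])
env = VecNormalize(env, norm_obs=True, norm_reward=True)          # running mean/std normalisation

model = TD3(
    "MlpPolicy",
    env,
    policy_kwargs=dict(net_arch=[64, 64]),                        # two hidden layers of 64 nodes each
    verbose=1,
)
model.learn(total_timesteps=6_000_000)
model.save("rlo_policy")                                          # afterwards deployed zero-shot
```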
### Bayesian optimisation
The BO version used for this study is a custom implementation using the _BoTorch_[53] package. The objective \(O(\mathbf{u})\) to be optimised is defined as
\[O(\mathbf{u})=-\log\left(\text{MAE}(\mathbf{b},\mathbf{b}^{\prime})\right)+w_{\text{on- screen}}. \tag{5}\]
Figure 8: **Gym environment setup used for the study.** In particular showing how one environment interface facilitates design and training using a simulation of the plant, as well as transferring a developed tuning algorithm to the real plant without modification.
The logarithm is used to properly weigh the fine improvement when BO approaches the target beam. A further on-screen reward \(w_{\text{on-screen}}=10\) is added to the objective when the beam can be observed on the screen, and subtracted
from the objective to penalise the settings when the beam is off the diagnostic screen. To increase the numerical stability of the GP regression, the previous input settings \(\mathbf{u}\) are normalised to \([-1,1]\), projecting the maximum to \(1\) and the minimum to \(-1\), and objective values are standardised. The covariance function of the GP models used in this study is the sum of a Matern-5/2 kernel [54] and a white noise function. The GP hyperparameters, like the length scales and signal noise, are determined dynamically by log-likelihood fits in each step. In each trial, BO is started from the same fixed FDF setting used by RLO. Five random samples are taken to initialize the GP model. Based on the posterior prediction of the GP model, an expected improvement (EI) [55] acquisition function is calculated, which automatically balances the exploration and exploitation of the objective. The next sample is chosen by maximising the acquisition function, where the maximum step sizes are constrained to \(0.1\) times the total action space. Additionally, the quadrupole magnets are only allowed to vary unidirectionally, i.e. in the FDF setting, so that the time-consuming polarity changes of the quadrupole magnets' power supplies due to the exploration behaviour of BO can be avoided. BO is allowed to run \(150\) steps in simulation and \(75\) steps on the real machine, after which we return to the best settings found.
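A minimal _BoTorch_ loop in this spirit could look as follows; the toy objective stands in for the real beam measurement of Eq. (5), the GP relies on `SingleTaskGP`'s default Matern-5/2 kernel with Gaussian observation noise as an approximation of the described covariance, and only the stated settings (five random initial samples, EI, inputs normalised to \([-1,1]\), maximum step size of 0.1, 75 steps) are taken from the text.

```python
# Minimal EI loop with BoTorch (>= 0.8; older versions use fit_gpytorch_model).
# The toy objective stands in for the beam measurement of Eq. (5); the GP uses
# SingleTaskGP's default Matern-5/2 kernel with Gaussian observation noise.
import torch
from botorch.acquisition import ExpectedImprovement
from botorch.fit import fit_gpytorch_mll
from botorch.models import SingleTaskGP
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

torch.set_default_dtype(torch.float64)


def objective(u: torch.Tensor) -> torch.Tensor:
    # Toy stand-in for Eq. (5): negative log MAE plus an on-screen bonus.
    mae = (u - 0.3).abs().mean(dim=-1)
    on_screen = (u.abs().max(dim=-1).values < 0.9).double() * 10.0
    return -torch.log(mae + 1e-9) + on_screen


train_x = torch.rand(5, 5) * 2 - 1                     # 5 random initial samples in [-1, 1]^5
train_y = objective(train_x).unsqueeze(-1)

for _ in range(75):                                    # 75 steps on the real machine (150 in simulation)
    y_std = (train_y - train_y.mean()) / train_y.std() # standardised objective values
    gp = SingleTaskGP(train_x, y_std)
    fit_gpytorch_mll(ExactMarginalLogLikelihood(gp.likelihood, gp))  # refit hyperparameters each step
    ei = ExpectedImprovement(gp, best_f=y_std.max())
    # Constrain the next sample to +-0.1 of the normalised range around the best point so far.
    centre = train_x[train_y.argmax()]
    local_bounds = torch.stack([(centre - 0.1).clamp(-1, 1), (centre + 0.1).clamp(-1, 1)])
    cand, _ = optimize_acqf(ei, bounds=local_bounds, q=1, num_restarts=8, raw_samples=64)
    train_x = torch.cat([train_x, cand])
    train_y = torch.cat([train_y, objective(cand).unsqueeze(-1)])

best_settings = train_x[train_y.argmax()]              # return to the best settings found
```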
Note that the BO routine was designed mostly using a simulation before being deployed to the real accelerator. This was done in an effort to reduce the amount of beam time needed for development.
### Nelder-Mead simplex
The Nelder-Mead Simplex optimisation [41] was implemented using the _SciPy_[56] Python package. The initial simplex was tuned in a random search of \(405\) samples to the one that performed best across the set of \(300\) trials. Nelder-Mead is allowed to run for a maximum of \(150\) steps. After \(150\) steps or after early termination, performance might be improved by returning to the best sample found, but as the simplex is generally converging, this is not strictly necessary. The objective function optimised by Nelder-Mead is the MAE between the measured and target beam parameters.
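A sketch of this baseline with _SciPy_ is shown below; the placeholder objective and the initial simplex values are illustrative, while the method, the 150-step budget, and the use of an explicit initial simplex follow the description above.

```python
# Nelder-Mead baseline sketch with SciPy; the placeholder objective and the
# initial simplex values are illustrative.
import numpy as np
from scipy.optimize import minimize


def beam_mae(u):
    # Placeholder for the measured MAE between observed and target beam parameters.
    return float(np.mean(np.abs(u - np.array([0.3, -0.2, 0.1, 0.0, 0.4]))))


x0 = np.zeros(5)
initial_simplex = np.vstack([x0, x0 + 0.5 * np.eye(5)])   # 6 vertices for 5 parameters

result = minimize(
    beam_mae,
    x0,
    method="Nelder-Mead",
    options={"maxiter": 150, "initial_simplex": initial_simplex},
)
print(result.x, result.fun)
```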
### Random search
For the random search baseline, we sample random magnet settings from a constrained space of magnet settings. Constrained in this case means that we limit the space to a range commonly used during operations, instead of the full physical limits of the magnets; the latter are almost an order of magnitude larger than anything ever used in operation. At the end of the optimisation, we return to the best settings found.
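A corresponding sketch of this baseline is given below; the bounds stand in for the constrained operational ranges and the objective is the same placeholder as in the Nelder-Mead sketch.

```python
# Random-search baseline sketch; the bounds stand in for the constrained
# operational ranges (quadrupole strengths and corrector angles) and the
# objective is the same placeholder as in the Nelder-Mead sketch.
import numpy as np


def beam_mae(u):
    return float(np.mean(np.abs(u - np.array([10.0, -8.0, 1e-3, 9.0, -5e-4]))))


rng = np.random.default_rng(0)
lower = np.array([-30.0, -30.0, -2e-3, -30.0, -2e-3])   # k in 1/m^2, angles in rad
upper = -lower

best_u, best_obj = None, np.inf
for _ in range(150):
    u = rng.uniform(lower, upper)
    obj = beam_mae(u)
    if obj < best_obj:
        best_u, best_obj = u, obj
# At the end of the optimisation, return to the best settings found (best_u).
```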
## Data availability
The data generated for the presented study is available at [https://doi.org/10.5281/zenodo.7853721](https://doi.org/10.5281/zenodo.7853721).
## Code availability
The code used to conduct and evaluate the presented study is available at [https://github.com/desy-ml/rl-vs-bo](https://github.com/desy-ml/rl-vs-bo).
## Acknowledgements
This work has in part been funded by the IVF project InternLabs-0011 (HIR3X) and the Initiative and Networking Fund by the Helmholtz Association (Autonomous Accelerator, ZT-I-PF-5-6). All figures and pictures by the authors are published under a CC-BY license. The authors thank Sonja Jaster-Merz and Max Kellermeier of the ARES team for their great support during shifts as well as their always insightful brainstorms. In addition, the authors acknowledge support from DESY (Hamburg, Germany) and KIT (Karlsruhe, Germany), members of the Helmholtz Association HGF, as well as support through the _Maxwell_ computational resources operated at DESY and the _bwHPC_ at SCC, KIT.
## Author contributions
J.K., C.X., A.S.G., A.E. and E.B. developed the concept of the study. A.E., E.B. and H.S. secured funding. J.K. developed and trained the RL agent with support from O.S. C.X. designed the implementation of BO. J.K. ran the simulated evaluation trials and took the real-world data. J.K. evaluated the measured data. C.X. provided substantial input to the evaluation. A.E. and A.S.G. provided input on the evaluation of the measured data. W.K., H.D., F.M. and T.V. assisted the data collection as ARES operators. F.B. assisted the data collection as ARES machine coordinator. W.K., H.D., F.M., T.V. and F.B. contributed their knowledge of the machine to the implementation of both methods. J.K. wrote the manuscript. C.X. provided substantial edits to the manuscript. J.K. created the presented figures with input from C.X., O.S. and F.M. All authors discussed the results and provided edits and feedback on the manuscript.
## Competing interests
The authors declare no competing interests.
|
2301.10084 | Cavitation-induced microjets tuned by channels with alternating
wettability patterns | A laser pulse focused near the closed end of a glass capillary partially
filled with water creates a vapor bubble and an associated pressure wave. The
pressure wave travels through the liquid toward the meniscus where it is
reflected, creating a fast, focused microjet. In this study, we selectively
coat the hydrophilic glass capillaries with hydrophobic strips along the
capillary. The result after filling the capillary is a static meniscus which
has a curvature markedly different than an unmodified capillary. This tilting
asymmetry in the static meniscus alters the trajectory of the ensuing jets. The
hydrophobic strips also influence the advancing contact line and receding
contact line as the vapor bubble expands and collapses. We present thirteen
different permutations of this system which includes three geometries and four
coating schemes. The combination of geometry and coatings influences the jet
breakup, the resulting drop size distribution, the trajectory of the jet tip,
and the consistency of jet characteristics across trials. The inclusion of
hydrophobic strips promotes jetting in line with the channel axis, with the
most effective arrangement dependent on channel size. | Jelle J. Schoppink, Keerthana Mohan, Miguel A. Quetzeri-Santiago, Gareth McKinley, David Fernandez Rivas, Andrew K. Dickerson | 2023-01-24T15:46:16Z | http://arxiv.org/abs/2301.10084v1 | # Cavitation-induced microjets tuned by channels with alternating wettability patterns
###### Abstract
A laser pulse focused near the closed end of a glass capillary partially filled with water creates a vapor bubble and an associated pressure wave. The pressure wave travels through the liquid toward the meniscus where it is reflected, creating a fast, focused microjet. In this study, we selectively coat the hydrophilic glass capillaries with hydrophobic strips along the capillary. The result after filling the capillary is a static meniscus which has a curvature markedly different than an unmodified capillary. This tilting asymmetry in the static meniscus alters the trajectory of the ensuing jets. The hydrophobic strips also influence the advancing contact line and receding contact line as the vapor bubble expands and collapses. We present thirteen different permutations of this system which includes three geometries and four coating schemes. The combination of geometry and coatings influences the jet breakup, the resulting drop size distribution, the trajectory of the jet tip, and the consistency of jet characteristics across trials. The inclusion of hydrophobic strips promotes jetting in line with the channel axis, with the most effective arrangement dependent on channel size.
## 1 Introduction
The stability of liquid jets has captivated fluid mechanicians for nearly two centuries [1, 2], owing to both their mathematical complexity [3, 4, 5, 6, 7, 8] and usefulness [9, 10, 11]. Recently, jet dynamics have gained attention from the engineering and medical communities for their use in drug delivery [10, 12, 13], ink-jet printers [14, 15], and micro-fabrication [16, 17]. Such microscale jets rely on the sudden acceleration of a liquid column [18, 19], piezoelectric actuation [20], or the rapid vaporization of a portion of liquid upstream with a laser pulse (thermocavitation) [21, 22, 23, 24]. Impulsively created jets are unsteady and are "kinematically focused" by a curved meniscus in which a pressure wave is reflected at the free surface [22]. The focused liquid converges toward the center of curvature, resulting in jets with velocities that can exceed a Mach number \(M=U/c_{\mathrm{s}}=1\) [22, 25], where \(U\) is the jet velocity and \(c_{\mathrm{s}}\) is the speed of sound in air. Jets emerge from the focused menisci in the form of a stretched ligament that breaks into droplets. Numerous theoretical and experimental investigations have been carried out to explain the disintegration of liquid jets, which are inherently unstable [5]. Various forces act on the surface of jets leading to disturbances that are amplified when carried downstream, ultimately leading to jet breakup by the Rayleigh-Plateau instability among others [5, 26, 27].
The number and trajectory of droplets after jet breakup are guided by the characteristics of the initial impulse, bubble retraction in the case of thermocavitation, meniscus shape [28, 29], and contact line motion. When the jet leaves the nozzle, the no-slip boundary condition is relieved at the outer radial edge, leading to the creation of radial velocity components within the jet. This profile relaxation generates instabilities in the jet [30]. Therefore, the jet characteristics can be modulated by modifying the initial meniscus shape and the nature of contact line motion.
The orifice geometry is another variable influencing jet disintegration [31, 32]. For non-circular nozzles, the propagating jet expands along one radial axis, while contracting in the other in an oscillating manner, destabilizing the jet [32]. This so-called axis switching has been modeled as a spring-mass system driven by the competition of surface tension and inertia [33]. Jets produced by non-circular nozzles break up into smaller droplets and have shorter breakup lengths than comparable circular nozzles [32]. These chain-like oscillations in the jet are caused by non-axisymmetric perturbations which are less unstable than Rayleigh-Plateau instabilities. Chain-like oscillations are non-linear in nature and their frequency decreases with increasing amplitude [34]. In a typical jetting experiment, the Rayleigh-Plateau instability is superimposed on non-axisymmetric perturbations to cause jet breakup [34].
Figure 1: Schematic of the experimental setup showing the orientation of the chip with respect to both camera views. The zoom box shows a glass chip in detail. Jets emerge from chips to the right.
In this study we use an infrared laser pulse to create a cavitation bubble at the closed end of a microscale liquid channel and film the expulsion of the jet from two perpendicular views, as shown in **Fig.1**. Our experimental system is similar to that established by Oyarte Galvez _et al._ (2020) [21] and earlier by Tagawa _et al._ (2012) [22], but here we probe how the channel geometry and its wettability
influence the jet characteristics. Two fundamental channel shapes are etched into borosilicate chips for experimental investigation, circular (C) and rounded rectangular (R) cross-sections, as shown in **Fig.2**a. The full length and relative height of each tested geometry, and the variety of jets they produce, are shown in **Fig.2**b-d. Channel surface chemistry is either homogeneous (A1) or has alternating hydrophobic-hydrophilic sections (A2-A8) as depicted in **Fig.2**a. Due to manufacturing limitations, circular cross-sections have only three coating permutations, whereas rounded rectangles have five, for a total of thirteen unique channel configurations.
The jets created under the conditions tested in this study are of interest for the role they will play in needle-free injection and other microscale liquid delivery devices. However, our primary goal is to unravel the connection between channel geometry and subsequent jet properties, which may be useful for other applications such as coating and spraying of surfaces.
The fabrication methods, experimental protocols, and further details of chip design are given in Section 2. We present experimental results and discussion of jet velocity, droplet characteristics, repeatability, and focusing in Section 3, by splitting the observed phenomena into axial (Sections 3.1-3.4) and out-of-axis (Sections 3.5-3.9) behavior. We conclude our work in Section 4.
## 2 Experimental methods
### Chip layout and fabrication
Figure 2: **(a)** Schematic of channel inner surface coating and cross-section permutations, and representative jet images from channels. Diagrams are such that the camera views them looking from left to right. **(b)** CA1, **(c)** R2A2, and **(d)** R3A4. Every channel is 1850 \(\mu\)m long and 100 \(\mu\)m deep into the frame. Other pertinent dimensions are given in **Table 1**.
The overall layout of our experimental microfluidic chips is shown in **Fig.1**. The fiber channel is nearly cylindrical and measures 425 \(\mu\)m tall, 400 \(\mu\)m deep, and 2450 \(\mu\)m long. A 400-\(\mu\)m channel is included to serve as an inlet and flush the fiber channel after fabrication to remove
contaminants. All jet channels have a characteristic cross-section width \(d_{1}=100\)\(\mu\)m and are 1850 \(\mu\)m in length. Rounded rectangles have two size configurations, R2 and R3, such that \(d_{2}/d_{1}=2\) and 3, respectively. Relative channel size is shown in **Fig.2**. The channel cross-sectional area \(A\) and perimeter \(P\) are computed using that of rectangles capped by two half-circles. Channel area \(A\) and hydraulic diameter \(D_{\rm H}=4A/P\) are reported in **Table 1**. Homogeneous glass channels (A1) are modified by selectively depositing atomically thin layers of gold that are thereafter soaked in thiol. The result is that channels have _alternating_ sections of hydrophilic glass, \(\theta_{\rm e}\approx 30^{\circ}\), and hydrophobic gold, \(\theta_{\rm e}\approx 115^{\circ}\), where \(\theta_{\rm e}\) is the equilibrium contact angle. The arclength of coated and uncoated sections \(\ell_{1}\), \(\ell_{2}\), and \(\ell_{3}\) are shown schematically in **Fig.2a** and provided in **Table 1**. We henceforth refer to channels by an abbreviated identifier. For example, a rounded rectangle with a cross-sectional aspect ratio of three and six discrete alternating sections is referred to by R3A6. The jet channel is filled by a 360-\(\mu\)m glass capillary with distilled water, as shown in **Fig.1**. The circular fill channel is 100 \(\mu\)m in diameter and meanders to provide greater hydraulic resistance such that flow is preferential down the jet channel rather than towards the filling channel. Each gold strip begins 100 \(\mu\)m from the closed end of the jet channel, as shown in **Fig.2c**.
Glass chips are fabricated under cleanroom conditions. The channel structures with half depths of 50 and 200 \(\mu\)m are wet etched into 4 inch borosilicate glass wafers with a thickness of 500 \(\mu\)m. Next, a new photoresist is applied to the glass wafers and removed from the intended position of the gold structures. A 15-nm thick coating of tantalum is applied prior to a 45-nm thick coating of gold. The photoresist is removed, and a gold layer remains only on the intended positions. Afterward, two glass wafers are bonded together and diced to create single chips. The gold surface is made hydrophobic according to Notsu _et al._ (2005)[35]. The chips are immersed for one hour into a 10-mM solution of 1H,1H,2H,2H-perfluorodecanethiol (PFDT, Sigma Aldrich) in ethanol, after which the channels are briefly flushed with ethanol to remove any excess PFDT. The PFDT has no effect on the borosilicate glass.
### Jet creation and high-speed imaging
Jets are created by the vaporization of water on the closed end of jetting channels by a 10 ms, 1.95 \(\mu\)m infrared laser pulse with a power of 0.59 \(\pm\) 0.03 W. Due to the high absorption coefficient
of water at this wavelength (\(\alpha\approx 120\) cm\({}^{-1}\))[36], no dye is required, in contrast to our previous work[21, 28, 37]. The laser pulse is produced by a Thulium fiber laser (BKTel Photonics) with an SMF28 optical fiber output. The fiber is cleaved prior to use and the fiber tip is placed at a distance of 250 \(\mu\)m from the closed end of the jet channel. The laser power has a secondary fiber output of 1% of the nominal power, which is monitored by a photodetector (Thorlabs DET05D2). To confirm the actual laser power, the photodetector is read out by an oscilloscope (Tektronix MSO 2014B).
Jetting events are filmed at two perpendicular angles by a Photron SA-X2 (Front view, \(x,y\)) and a Photron Nova S6 (Top view, \(x,z\)) at 144,000 fps. Both cameras are equipped with a Navitar 12\(\times\) zoom lens, operating at magnifications of 3\(\times\) and 2\(\times\) respectively. Backlighting is provided by a Schott Coldvision-LS and SugarCUBE Ultra. All equipment is triggered simultaneously by an Arduino UNO. Only select channel configurations were imaged from the top view. Priority was given to the asymmetrically coated channels. Therefore, the non-coated (CA1, R2A1, R3A1) and some symmetrically coated (R2A4, R2A8) are only imaged from the front. For the other eight chips, the jet was imaged from the front and top simultaneously. Videos were processed by custom code in MATLAB.
## 3 Results & discussion
We filmed approximately twenty jetting events from each of the thirteen channel configurations. The inclusion of PFDT-bonded gold coating in ten of the thirteen channels results in channels with a heterogeneous wetting condition. A typical jetting event is depicted in **Fig.4**, where all the stages of the process are signaled. Channels are filled to half their length, \(\sim 850\)\(\mu\)m, before activation of the laser pulse. Prior to bubble formation, a spot of non-homogeneous light intensity appears near the laser spot position, which we posit is due to an augmented refractive index caused by localized heating. Expansion of the laser-induced bubble drives the kinematically focused meniscus forward. We set \(t=0\) at the moment the jet emerges from the channel. A half-fill in our channels is done deliberately such that all moving menisci are allowed an equal and substantial runway length along the coated or uncoated channel walls. Channels that are fully filled do not experience the same degree of meniscus focusing and are only affected weakly by channel coating as the jet tip exits the channel, likely the result of not having a statically curved meniscus[28]. An example of CA1 fully filled is provided in **Fig.3** (Multimedia View). Jets from completely filled channels tend to be larger in diameter, with a thicker tip[28]. In the case of **Fig.3** (Multimedia View), the large tip flattens against air resistance as it emerges. On the other extreme, channels not sufficiently filled experience the breakup of the liquid plug before the jet exits, a phenomenon more likely as channel cross-sections grow in area from chip to chip.
Figure 3: The jetting sequence of a fully filled CA1 channel. The panels show the moments of jet emergence from the channel (top), maximum bubble size (middle), and complete bubble collapse. (Multimedia View)
The sheer amount of data produced in this study precludes a full presentation of our results below, and we thus select six from the thirteen nozzles fabricated to feature in this main text. A comprehensive presentation of plots from all nozzle permutations is provided in Figs.S1-S13. For each geometric configuration, we feature the uncoated (A1) and a coated configuration providing the lowest focusing factor \(F\), which is to be described in Section 3.6. Our featured nozzles are labeled in the top left of the panels in **Fig.7**.
### Bubble size and jet velocity
The maximum bubble size, bubble expansion velocity, and filling level all influence jetting from a given channel geometry. Small bubbles (less than 20% of channel length) create jets with little volume, often a few discrete droplets (of diameter \(\sim 50~\mu\)m), **Fig.5**. However, all jets presented here are well above the transition from dripping to jetting [38], which occurs at velocities of 2.5 m/s or less. By contrast, larger bubbles produce a more complete emptying of the channel, often with a curved jet tail (which we refer to as tail sway), see **Fig.6**. Bubbles with sizes comparable to the channel length can empty the channel almost completely, and the jet exits as a plug. The jet tip in this case is also thicker, similar to that of a fully filled channel. There exists an optimum range of bubble sizes versus filling levels for a given channel geometry in which a jet can be produced without the extreme cases of droplet or plug formation. For example, in the R2 channels, the transition to plug flow was observed for bubbles between 1.1 and 1.4 times the initial filling level, but its precise definition will be the topic of future work.
The relation between jet velocity (\(U\)) and the average velocity of the bubble front during its growth phase (\(\bar{U}_{\rm bub}\)) is plotted in **Fig.7**. We determine the jet velocity by tracking the leading edge of the jet tip from the moment it exits the chip at \(t=0\) for ten frames (69.4 \(\mu\)s). We here note the first distinction between jets from circular (C) and rectangular (R) channels. Both coated circular channels exhibit a lower aggregate \(U/\bar{U}_{\rm bub}\) when compared to CA1. In rectangular channel R3, coatings increase \(U/\bar{U}_{\rm bub}\) over the uncoated configuration. For all configurations \(U/\bar{U}_{\rm bub}\) and correlation coefficients for \(U=j\bar{U}_{\rm bub}\) are given in **Table 2**, where \(j\) is a fitting constant.
Figure 4: Sequence of images showing the jetting event. Each element of the complete jetting event will be discussed in a specific subsection: Bubble growth and initial jet formation, etc.
We use jet velocity \(U\) and hydraulic diameter \(D_{\rm H}\) to define other common dimensionless groups used in jetting studies: Reynolds number \(Re=\rho UD_{\rm H}/\mu\), Weber number \(We=\rho U^{2}D_{\rm H}/\sigma\), and Ohnesorge number \(Oh=\sqrt{We}/Re=\mu/\sqrt{\rho\sigma D_{\rm H}}\). Here, the density, viscosity, and surface tension of distilled water are taken to be \(\rho=1\) g/mL, \(\mu=0.89\) cP, and \(\sigma=72.9\) dyne/cm, respectively. We report average \(Re\) and \(We\) for all channels in **Table 2**. The range of Reynolds number, \(175-3125\), indicates inertia dominates viscosity in our jets. We note, however, that flow focusing at the meniscus creates jets that have a characteristic size \(\sim\)1/3 the diameter of the rectangular (R) channels. A reduction of our calculated Reynolds numbers by a factor of three does little to stifle the apparent inertia dominance. Our experimental range in Weber number, \(2-664\), indicates that for our slowest jets, surface tension plays a large role in their behavior. The slowest jets arise from R3 channels and experience rapid ligament disintegration into droplets. It is of no surprise from the dominance of inertia and surface tension that the Ohnesorge number is low for all channels. The Ohnesorge number is \(Oh=0.0104\), \(0.0088\), and \(0.0083\) for (C), (R2), and (R3), respectively.
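The following short script checks the quoted groups, assuming the rounded-rectangle geometry described in Section 2.1 and a representative jet velocity; only the fluid properties and channel dimensions are taken from the text.

```python
# Quick check of the quoted dimensionless groups; the jet velocity U is only a
# representative value, while fluid properties and channel dimensions are those
# given in the text (rectangles capped by two half-circles).
import numpy as np

rho, mu, sigma = 1000.0, 0.89e-3, 72.9e-3          # kg/m^3, Pa s, N/m
d1 = 100e-6                                        # characteristic width d_1


def hydraulic_diameter(aspect):
    w = (aspect - 1) * d1                          # straight-wall length of the rounded rectangle
    area = w * d1 + np.pi * (d1 / 2) ** 2
    perimeter = 2 * w + np.pi * d1
    return 4 * area / perimeter


U = 10.0                                           # m/s, representative jet velocity
for name, aspect in [("C", 1), ("R2", 2), ("R3", 3)]:
    dh = hydraulic_diameter(aspect)
    Re = rho * U * dh / mu
    We = rho * U**2 * dh / sigma
    Oh = np.sqrt(We) / Re                          # = mu / sqrt(rho * sigma * D_H), independent of U
    print(f"{name}: D_H = {dh*1e6:.0f} um, Re = {Re:.0f}, We = {We:.0f}, Oh = {Oh:.4f}")
```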
Figure 5: The jetting sequence produced by a relatively small bubble, less than 20% of the channel length. The panels show the moments of maximum bubble size (top), jet emerge from the channel (middle), and when the last droplet breaks from the main jet body within the channel (bottom). (Multimedia View)
Figure 6: The jetting sequence produced by a relatively large bubble, resulting in a swaying jet tail. The panels show the moments of jet emergence (top), complete bubble collapse (middle), and when the final droplet breaks from the main jet body within the channel (bottom). (Multimedia View)
### Jet breakup and the breakup factor
For a quantitative measure of how coherently jets break into drops throughout the jetting event, we define the 'breakup factor' \(B\). The breakup factor is the ratio of the total liquid parcel length along the jetting axis to the distance from the tip of the leading drop to the tail of the trailing drop. To make \(B\) comparable across different channels we define the _primary jetting event_ from the moment the jet tip leaves the chip at \(t=0\) to the moment the leading drop leaves the frame at \(t=T\). The average value of \(T\) for each channel \(\bar{T}\) across \(N\) trials is reported in **Table 2**. Most often jets
Figure 7: Jet velocity versus average bubble velocity for featured nozzles. For the circular channels, the hydrophobic coatings lower \(U/\bar{U}_{\rm b}\) compared to uncoated channels. For R2 the effect of the coatings is negligible. In the case of larger R3 channels, coatings increase \(U/\bar{U}_{\rm b}\) compared to uncoated channels.
exit the FOV at the rightmost edge (approximately 4850 \(\mu\)m from the channel exit) but may exit earlier through the top or bottom of the FOV. An example of the window taken to measure \(B\) is shown by the bounding red lines in the top panel of **Fig.8**. Mathematically the breakup factor is represented by,
\[B(t)=\sum_{i=1}^{\rm n}D_{i}\frac{1}{(x(t)_{\rm c,1}+D_{1}/2)-(x(t)_{\rm c,i}-D_{ i}/2)}, \tag{1}\]
where \(n\) is the number of liquid parcels (either drops or ligaments) in the observation window, \(D_{i}\) is the equivalent diameter of the \(i\)-th liquid parcel, and \(x_{\rm c,i}\) is the lateral centroid location of a liquid parcel. The denominator of Eq.(1) is the width of the window over which the breakup factor is measured. A nozzle emitting a single drop, or an unbroken column of liquid, has a breakup factor of unity for all time. We average the breakup factor across \(N\) videos and plot \(\bar{B}\) versus jetting time \(t\) in **Fig.8** for our featured channels. Since each individual jet leaves the frame at a different time, the number of trials used to calculate the \(\bar{B}\) curve reduces as time progresses. The blue curves represent the fraction of trials \(N\) that contribute to \(\bar{B}(t)\). The red '\(\times\)' on the vertical axes represents the fraction of videos in which the leading drop exits the frame on the right, rather than the top or bottom. The area under the \(\bar{B}(t)\) curve can be compared to an unbroken jet that maintains \(B=1\) for some specified time \(\tau\). Accordingly, we define this ratio as
\[B^{*}=\frac{1}{\tau}\int_{0}^{\tau}\bar{B}\mathrm{d}t. \tag{2}\]
We set \(\tau=1/3\) ms such that we can compare \(B^{*}\) values across all channel configurations; after this time \(\bar{B}\) is undefined for some channels because all their respective leading drops have reached the boundary of the FOV. We report the values of \(B^{*}\) in the plots of **Fig.8** and **Table 2**. The choice of \(\tau\) plays a large role in the value of \(B^{*}\). At short times (up to \(\tau\approx 0.1\) ms), \(B^{*}\) tends toward unity, while for the circular channels \(B^{*}(0.5\) ms\()=0.55\pm 0.02\) except for CA1 where \(B^{*}(0.5\) ms) is not defined. Similarly, for the rectangular channels \(B^{*}(0.5\) ms\()=0.72\pm 0.02\) except for R2A4 for which \(B^{*}(0.5\) ms\()\) is not defined and R3A4 for which \(B^{*}(0.5\) ms\()=0.51\). In other words \(B^{*}\) decreased \(\approx\) 25 \(\pm\) 5 % from \(\tau=1/3\) ms to \(\tau=1/2\) ms. Of our featured channels, R3A6 retains the most videos through time of any channel but has the lowest average jet velocity, a likely contributor to its propensity for producing axially focused jet trajectories.
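Although the video processing was done with custom MATLAB code, a minimal NumPy sketch of Eqs. (1) and (2) is given below, assuming the parcel diameters and centroids have already been extracted from the binarized frames; the function names and the example values are ours.

```python
# Minimal NumPy sketch of Eqs. (1) and (2); parcel equivalent diameters D_i and
# lateral centroids x_c,i (leading parcel first) are assumed to have been
# extracted from the binarized frames already. Names and example values are ours.
import numpy as np


def breakup_factor(diameters, centroids):
    """B(t): total parcel length divided by the leading-tip to trailing-tail distance."""
    D = np.asarray(diameters, dtype=float)
    x = np.asarray(centroids, dtype=float)
    window = (x[0] + D[0] / 2) - (x[-1] - D[-1] / 2)
    return D.sum() / window


def breakup_ratio(B_of_t, dt, tau):
    """B*: time average of B(t) over [0, tau] (Eq. 2), via the trapezoidal rule."""
    n = int(round(tau / dt)) + 1
    t = np.arange(n) * dt
    return np.trapz(B_of_t[:n], t) / tau


# Example: three 50-um drops spread over ~400 um at one instant gives B = 0.375.
print(breakup_factor([50e-6, 50e-6, 50e-6], [400e-6, 250e-6, 50e-6]))
```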
From the values of B*, we find that hydrophobic coatings do not promote or delay breakup in comparison to homogeneous channels.
Figure 8: Breakup factor versus jetting time for featured channel configurations. Dotted lines represent standard deviation bounds, which are limited to not exceed \(B=1\). Values printed beneath curves correspond to Eq.(2). Blue curves represent the fraction of trials contributing to \(\bar{B}(t)\) and the red \(\times\) on the ordinates correspond to the number of trials in which leading drops leave the right-hand side of the FOV. **Top Panel:** A representative photograph of jet breakup with red lines bounding the breakup window in Eq.(1). The jet, emitted from an R2A6 channel has \(U=8.64\) m/s, \(Re=1350\), \(We=142\), and \(T=583\)\(\mu\)s.
### Droplet size distribution
The distribution of equivalent drop diameter at the end frame of the primary jetting event, \(t=T\) is shown for featured nozzles in **Fig.9**. Bin sizes are 20 \(\mu\)m, starting at 20 \(\mu\)m. Below 20 \(\mu\)m drops are less than three pixels across and are filtered by our binarization algorithm. The majority of
drops present at \(t=T\) range from 40-60 \(\mu\)m, a dominance that is generally enhanced by coating. The droplet breakup in our system is highly dependent on multiple factors and varies even for jets ejected at similar conditions. This leads to a wide size distribution that is typical for uncontrolled breakups[39]. As done previously for these random breakup processes that involve fragmentation and coalescence[39, 40, 41], we fit a gamma distribution to the drop size histogram,
\[n=\frac{1}{k^{\alpha}\Gamma(\alpha)}D^{\alpha-1}e^{\frac{-D}{k}}, \tag{3}\]
where \(n\) is the number of drops in a bin, \(\alpha\) is the shape parameter and \(k\) the scale parameter. In this case, \(\alpha\) gives an idea of how 'corrugated' the jet is, where \(\alpha=1\) for a smooth jet and for a corrugated one \(\alpha>1\). While \(k\) is directly related to the width of the distribution. We expect the shape factor \(\alpha\) to be correlated with jet corrugation \(B^{*}\), as both measure the jet roughness. However, we found that \(\alpha\) and \(B^{*}\) are not related. There are two reasons for this: i) both parameters are measured at different times, \(B^{*}\) is measured at \(\tau=1/3\) ms whereas \(\alpha\) is evaluated at \(t=T\). ii) \(B^{*}\) depends on the ratio between white pixels and black pixels in the viewing window, regardless of the jet structure. In contrast, \(\alpha\) depends on how the jet breaks up, smaller and larger numbers of droplets lead to larger \(\alpha\). We report \(\alpha\) and \(k\) for all channels in **Table 2**.
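A sketch of such a fit with _SciPy_ is shown below, assuming the raw equivalent drop diameters at \(t=T\) are available and fitting with the location fixed at zero; the sample data are synthetic.

```python
# Gamma-fit sketch for Eq. (3) with SciPy, assuming the raw equivalent drop
# diameters at t = T are available; the location is fixed at zero so that `a`
# plays the role of the shape parameter alpha and `scale` that of k. The
# sample data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
diameters_um = rng.gamma(shape=5.0, scale=12.0, size=200)   # synthetic drop diameters in um

alpha, _, k = stats.gamma.fit(diameters_um, floc=0.0)       # alpha: shape, k: scale
print(f"alpha = {alpha:.2f}, k = {k:.1f} um")

# Expected counts per 20-um bin, for comparison with the histograms of Fig. 9.
edges = np.arange(20, 201, 20)
expected_counts = np.diff(stats.gamma.cdf(edges, a=alpha, scale=k)) * diameters_um.size
```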
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
Geometric configuration & \multicolumn{3}{c|}{**C**} & \multicolumn{5}{c|}{**R2**} & \multicolumn{5}{c|}{**R3**} \\
\hline
Coating configuration & **A1** & **A2** & **A4** & **A1** & **A2** & **A4** & **A6** & **A8** & **A1** & **A2** & **A4** & **A6** & **A8** \\
\hline
\(N\) & 21 & 20 & 20 & 20 & 20 & 20 & 42 & 20 & 20 & 22 & 20 & 20 & 21 \\
\hline
\(U/\bar{U}_{\rm bub}\) & 0.87 & 0.58 & 0.68 & 0.83 & 0.75 & 0.80 & 0.81 & 0.85 & 0.90 & 1.52 & 1.29 & 1.07 & 1.16 \\
\hline
R\({}^{2}\), \(U\sim\bar{U}_{\rm bub}\) & 0.84 & 0.79 & 0.75 & 0.95 & 0.73 & 0.94 & 0.92 & 0.90 & 0.91 & 0.43 & 0.70 & 0.27 & 0.35 \\
\hline
\(Re\) & \(2000\pm 303\) & \(1409\pm 234\) & \(1325\pm 359\) & \(1812\pm 238\) & \(1323\pm 144\) & \(2105\pm 412\) & \(1545\pm 301\) & \(1660\pm 234\) & \(1967\pm 347\) & \(892\pm 596\) & \(1501\pm 465\) & \(1210\pm 259\) & \(1352\pm 344\) \\
\hline
\(We\) & \(444\pm 134\) & \(221\pm 71\) & \(204\pm 116\) & \(261\pm 67\) & \(138\pm 29\) & \(359\pm 148\) & \(172\pm 69\) & \(219\pm 64\) & \(278\pm 99\) & \(79\pm 70\) & \(171\pm 124\) & \(106\pm 41\) & \(135\pm 52\) \\
\hline
\(\bar{T}\) (\(\mu\)s) & \(263\pm 43\) & \(358\pm 102\) & \(432\pm 121\) & \(365\pm 170\) & \(370\pm 162\) & \(334\pm 89\) & \(458\pm 180\) & \(427\pm 157\) & \(408\pm 167\) & \(484\pm 266\) & \(521\pm 184\) & \(563\pm 196\) & \(491\pm 178\) \\
\hline
\(B^{*}\) (\(\tau=1/3\) ms) & 0.81 & 0.71 & 0.75 & 0.89 & 0.91 & 0.89 & 0.87 & 0.89 & 0.87 & 0.84 & 0.64 & 0.85 & 0.84 \\
\hline
shape parameter, \(\alpha\) & 4.62 & 16.87 & 13.73 & 3.70 & 6.06 & 6.11 & 8.48 & 5.17 & 3.81 & 5.90 & 7.23 & 38.04 & 7.95 \\
\hline
scale parameter, \(k\) & & & & & & & & & & & & & \\
\hline
\(F\) [\(\mu\)m] & 138.9 & 92.6 & 92.1 & 238.2 & 195.2 & 103.8 & 121.4 & 111.6 & 161.1 & 208.4 & 82.2 & 69.3 & 124.2 \\
\hline
\end{tabular}
\end{table}
Table 2: Jet characterization parameters. Featured channels are highlighted in red. \(N\) indicates the number of videos. \(U/\bar{U}_{\rm bub}\) indicates the slope of the linear fit of \(U(\bar{U}_{\rm bub})\), and R\({}^{2}\), \(U\sim\bar{U}_{\rm bub}\) indicates the R\({}^{2}\)-value representing the quality of this fit. \(B^{*}\) indicates the breakup factor, \(F\) the focusing factor. \(\alpha\) and \(k\) respectively indicate the shape and scale factor for the gamma distribution of the drop distribution.
### Jet trajectories
We find the shape of the static meniscus formed during channel filling to be the primary factor influencing the directional bias of jet tips. Symmetric static menisci wet the opposing walls equally to produce jet tips that exit the channel aligned with the channel centerline. Channels R3A1 and R3A6 pictured in **Fig.10** have a symmetric coating pattern when viewed from the front and thus when filled have a symmetric static menisci. From **Fig.10** the formation of self-focused jet tips is observed at \(t=-56\)\(\mu\)s and the exit of these focused tips from the channels is seen at \(t>0\)\(\mu\)s.
Asymmetrically coated channels form asymmetric static menisci that bias the jet tip trajectory. We present the average trajectory of the jet tip in **Fig.11**. The variance in trajectories indicates that coatings, by way of the static menisci shape, influence the direction of the average leading
Figure 9: Drop distribution histograms at \(t=T\) for featured nozzles. Histogram bin sizes are 20 \(\mu\)m. Solid lines represent a gamma distribution of the histogram. \(\tilde{T}\) is provided for all channels in **Table 2**.
drop. Average trajectories are calculated as follows: first, the individual jet tip trajectories are obtained from the videos. Then, for all \(x\)-values, the average is taken of all individual jet tips. In some cases, the jet tip leaves the FOV through the top or bottom edge (\(y=\pm\) 250 \(\mu\)m or \(z=\pm\) 320 \(\mu\)m) instead of the rightmost edge (at \(x=\) 4750 \(\mu\)m). In this case, an individual trial does not contribute to the average trajectory plot for \(x\)-values larger than where it left the FOV. The shaded regions in **Fig.11** indicate the standard deviations from the average trajectory. Front view trajectories (\(x,y\) view) are available for all channels and are shown in orange. The trajectories of the top view (\(x,z\)), where available, are shown in blue.
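A minimal sketch of this averaging procedure (in NumPy rather than the original MATLAB) is given below; the grid spacing and the two example trajectories are illustrative.

```python
# Sketch of the trajectory averaging (NumPy instead of the original MATLAB):
# individual tip trajectories are resampled onto a common x grid and averaged,
# and a trial stops contributing (NaN) beyond the x where it left the FOV.
# The two short example trajectories are synthetic.
import numpy as np

x_grid = np.arange(0, 4750, 50)                              # um, common abscissa


def resample(traj_x, traj_y):
    return np.interp(x_grid, traj_x, traj_y, right=np.nan)   # NaN after the last measured x


trials = [
    resample(np.array([0, 1000, 2500, 4700]), np.array([0, 10, 30, 60])),
    resample(np.array([0, 800, 2000]), np.array([0, -20, -250])),   # left the FOV early
]

stack = np.vstack(trials)
mean_y = np.nanmean(stack, axis=0)                           # average trajectory
std_y = np.nanstd(stack, axis=0)                             # shaded-region half-width
```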
#### Front view \(y\) trajectories
For all channels, including those with hydrophobic coatings, their static menisci are expected to be symmetric in the \(y\) direction, due to the symmetry of the coatings in \(y\). Gravitational forces in our system are negligible. Therefore, we do not expect any systematic directionality in the front view (**Fig.10**). However, local surface defects can generate non-systematic exceptions because they may cause an initial asymmetric static meniscus. The defects are usually microscopic imperfections on the glass or gold surfaces formed during fabrication. If the defects are present at the exact locations where a channel is filled, the asymmetric static meniscus that is subsequently formed can change jet trajectories. The presence of these defects and their significance can be discerned only by viewing jetting behavior after the experiment trial. We find this to be the case for the trajectory deviations in R3A8 and R3A6. Otherwise, as expected, the deviation of the jet in the vertical \(y\)-direction from the centerline does not show a systematic bias, with the exception of CA1. The CA1 directionality can be attributed to the upward tilt of the chip in its holder.
#### Top view \(z\) trajectories
Figure 10: Time sequences of bubble expansion and partial collapse for R3A1 (left, \(T=382\)\(\mu\)s, \(U=10.96\) m/s, \(Re=1920\), \(We=257\)) and R3A6 (right, \(T=500\)\(\mu\)s, \(U=8.36\) m/s, \(Re=1470\), \(We=150\)) at comparable time steps following nucleation (\(t=0\)).
In contrast to the deviation in the \(y\)-direction, trajectories in \(z\) show a clear bias away from the channel centerline, especially the asymmetrically coated channels (A2 and A6). Trajectories bias towards \(-z\)-direction, or to the right side in the channel schematics in **Fig.2**a. For the three symmetric channel configurations with top views (CA4, R3A4, R3A8), only R3A4 shows a small bias toward \(-z\) (1.25\({}^{\circ}\)). Therefore, we can conclude that the bias of the asymmetric channels is caused by coating asymmetry and not by camera or chip misalignment. For asymmetrically
coated channels, the bias is towards the hydrophobic gold coating for A2 and towards the centered hydrophobic gold strip for A6. Of these two patterns, the A6 channels show a greater bias, and R3A6 exhibits the most extreme case of bias toward the \(-z\)-direction. The extreme bias of R3A6 is lost in **Fig.11** because almost all the jet tips leave the FOV in the \(-z\)-direction and most at \(x<3000\)\(\mu\)m, after which they no longer contribute to the average trajectory. Thereafter, the contributions of the jet tips closer to the centerline become more significant, and the average trajectory shifts back towards the centerline.
Figure 11: Average trajectory of jet tips from front (red, \(y\)) and top (blue, \(z\)) views with standard deviation shaded regions in orange and light blue, respectively. The \(y,z\)-dimension for each panel is \(\pm 250\)\(\mu\)m, with an inverted axis, such that negative \(y,z\)-values are up. Featured channels are labeled with red text. Coatings generate no systematic bias in \(y\). Coating configurations which are asymmetric about the \(x-y\) plane bias jets toward \(-z\) as seen from a top view.
#### Contact line effects on jet trajectory
Here, we explain how the contact line dynamics and the wettability of the channels affect the meniscus focusing, the tip bias and the tail movement out of axis. As discussed previously, for asymmetric channels, the jet tips and body trajectories have a bias towards the hydrophobic coating (see Figs. 11 and 16). Initially, this bias may seem counter-intuitive, however, the static meniscus shape holds the key. For asymmetrically coated channels, the contact line at the hydrophilic walls is further advanced down the channel axis compared to the hydrophobic channel wall, as depicted in **Fig.12**. Therefore, the static meniscus is slightly tilted toward the \(-z\)-direction, with its surface normal directed towards the hydrophobic wall. At the moment of jet formation, the tilted meniscus results in an off-axis jet. Thus meniscus shape dictates the tip direction and governs the initial stages of the jet ejection.
Figure 12: Front \((x,y)\), top \((x,z)\) and isometric \((x,y,z)\) schematics of the static meniscus (upper panels) and flow-focusing effect during bubble expansion (bottom panels). A non-coated channel (A1) is shown on the left column and a coated channel (A2) on the right.
As the jet continues to exit the channel following the leading drop, liquid is expelled and the bubble begins to retract. The contact line on the jet side of the liquid remaining in the channel often detaches from the channel walls at different times and locations, a phenomenon we term 'asymmetric contact line detachment,' which is shown for R3A1 in the image sequences of **Fig.10** (left) at \(t=21\) and \(76~{}\mu\)s. Asymmetric detachment creates a deviation in the jet body away from the trajectory set by the jet tip, exhibited by the R3A1 jet at \(t=104~{}\mu\)s in **Fig.10**. The jet body shifts toward the upper channel wall, and at \(285~{}\mu\)s the curved jet tail leaves the FOV. The sway does not affect the highly inertial jet tip, which continues on its course. Altogether, the jets that experience asymmetric contact line detachment have a large spread of intensities in the STDs in \(y\) and a large focusing factor \(F\).
Asymmetric detachment has been observed to arise from: i) The presence of \(<\)20 \(\mu\)m sized air bubbles present on the upper or lower walls, introduced during channel filling. These bubbles burst during the rapid advancement of the contact line, contributing to detachment. ii) Contact line hysteresis due to local surface defects originating from chip fabrication. iii) A vertical eccentricity of approximately \(20-30~{}\mu\)m in the location of bubble nucleation, resulting in asymmetric bubble expansion and therefore asymmetric contact-line detachment. The presence of hydrophobic coatings can help reduce the extent of the effects of asymmetric detachment on the jet body. An example of contact line detachment stabilization can be seen **Fig.10** (right) for R3A6, where an asymmetric detachment at \(t=76~{}\mu\)s is not propagated to the jet body. The contrast in wetting between the hydrophilic and hydrophobic strips stabilizes the liquid bulk supplying the jet. This wetting contrast effectively creates energy barriers at the hydrophobic strips that maintain the liquid within the hydrophilic strip. Therefore the jet body continues to exit from the liquid bulk centerline, as seen at \(t=104\) & \(285~{}\mu\)s.
For channel R3A8, in most cases, we observe that the jet body remains centered by adhering to the central hydrophilic strips. However, in cases with an asymmetric static meniscus, the jet tip has a bias towards the top or bottom, for which reason the jet leaves the center strip and moves toward the hydrophilic strip in the top or bottom of the channel. This deviation can be explained due to the smaller size of the centered hydrophilic strip compared to the R3A6 (\(89\) and \(98~{}\upmu\)m for R3A8 and R3A6 respectively), as well as the smaller extent of the hydrophobic strips surrounding the hydrophilic strip (\(89\) and \(97~{}\upmu\)m respectively). Therefore, the energy barrier for moving toward the top or bottom surfaces and wetting them is smaller, resulting in a larger fraction of the jets leaving the channel biased toward the top or bottom. For R3A4 and R3A2, the initial jet formation is centered. However, in the case of asymmetric liquid detachment, the tail of the jet sways. The sway occurs as there are no energy barriers, i.e., there is no hydrophilic strip that keeps the liquid in the center of the channel, in contrast to A6 and A8 channels.
Compared to R3 chips, the R2 chips experience a greater average bubble velocity \(\bar{U}_{\rm bub}\). In some cases, this larger average bubble velocity results in the formation of plug flow instead of a focused jet tip. Furthermore, the centered hydrophilic strip in R2A6 and R2A8 is smaller compared to their R3 counterparts. Therefore, the diameter of the plug is larger than the centered hydrophilic strip, resulting in the wetting of one or both hydrophobic strips. This means that the contact line is no longer contained between the interface of the hydrophilic-hydrophobic strips and the energy barriers are overcome by the initial inertia of the system. Thus, the jets are not centered and can exit along the top or bottom of the channel. For the circular channels, due to their smaller cross-sectional area (less than half that of R2 channels), average bubble velocities are larger than for rectangular channels, resulting in plug flow in all cases. The jet diameter is of similar size to the channel (\(100~{}\mu\)m), therefore the contact lines slide along the channel walls to the channel exit. Because the initial cavitation bubble is of the same size as the channel length, all liquid is expelled and there is no receding contact line. Therefore, there is little sway of the jet body and tail.
### Spatiotemporal diagrams in \(x\)
The jets produced from our system behave like a high momentum fluid ligament, with pinch-off occurring as the jet travels forward [5, 42]. The liquid remaining in the channel acts as a 'reservoir' that feeds the ligament with the expansion of the cavitation bubble. With the bubble collapse and the ejection of the remaining liquid, the ligament pinches off from the reservoir and breaks up into a string of droplets as shown in **Fig.2c**,d. A convenient way to visualize the dynamical behavior of this ligament is a spatiotemporal diagram (STD) or kymograph, which shows the evolution of the jets in a single space dimension and time. In the STDs, \(x\) is the coordinate parallel to the channel's long axis, and \(t\) is the perpendicular coordinate. For convenience, the edge of the channel (or nozzle) from which the liquid emerges is set as \(x=0\); the time at which this occurs corresponds to \(t=0\). In the STDs, therefore, we visualize the jet as it emerges from the nozzle and travels in time through the field-of-view (FOV) to the right of the nozzle.
STDs are created from the binarized video frames of a jetting event. Every binarized frame consists of the liquid (ligament or drops) in white against a black background. An STD created for a single video is shown adjacent to representative frames in **Fig.13** (top). For the STD in \(x\), the binary matrix for a frame is summed along each column, resulting in a row vector with a range of 'intensities'. Row vectors for each frame are stacked onto one another to form the STD in \(x\), which is an \(i\times j\) matrix where \(i\) is the number of frames and \(j\) is the number of \(x\)-pixels in the FOV. Liquid parcels which are longer in the direction of travel create a larger footprint in \(x\) (spanning more pixels). Long unbroken lengths generally indicate ligaments, while individual lines indicate the motion of droplets. The height normal to the jetting axis (in \(y\)) of a liquid parcel, drop or ligament, for a single frame is represented by intensity values. Breakup is indicated by the splitting of a line and the velocity of a liquid parcel is given by the inverse of the slope of its line in the STD. Aggregated STDs in \(x\) are shown for our featured channels in **Fig.13** (bottom) which show the average axial behavior of the channel. Aggregates are formed by combining individual STDs for every trial (usually twenty) for a given channel. Individual STDs are truncated to the shortest captured video in time and averaged by the number of videos (**Table 2**), then normalized by the maximum intensity to create the aggregated STDs for that channel configuration, see **Fig.13** (bottom). Therefore all STDs in **Fig.13** (bottom) have an intensity range from 0 to 1.
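The construction can be sketched in a few lines of NumPy (the original processing used MATLAB), assuming a stack of binarized frames ordered as (time, \(y\), \(x\)); the random frames below only stand in for real video data.

```python
# NumPy sketch of the STD construction (the original processing used MATLAB):
# from a stack of binarized frames ordered (time, y, x), column sums give the
# x-t STD and row sums give the y-t STD; aggregation averages several trials
# truncated to the shortest and normalises to [0, 1]. Random frames stand in
# for real video data.
import numpy as np

frames = (np.random.default_rng(2).random((120, 64, 256)) > 0.98).astype(float)  # (t, y, x)

std_x = frames.sum(axis=1)            # (t, x): one row vector per frame, stacked in time
std_y = frames.sum(axis=2).T          # (y, t): one column vector per frame, stacked in time


def aggregate(single_trial_stds):
    """Average single-trial x-t STDs, truncated to the shortest in time, normalised to [0, 1]."""
    t_min = min(s.shape[0] for s in single_trial_stds)
    mean = np.mean([s[:t_min] for s in single_trial_stds], axis=0)
    return mean / mean.max()


aggregated_x = aggregate([std_x, std_x])   # stand-in for the ~20 trials of one channel
```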
In the aggregated STDs in **Fig.13**, trajectories of individual drops remain distinguishable. The leading drops are found at the top surface of the wedge-like spray emanating from the origin. A shallower slope indicates a faster drop. For example, the fastest drop in R2A4 is faster than the fastest drop in R2A1, labeled by (A) and (B) in **Fig.13**, respectively; a fact likewise confirmed by **Fig.7**. Jets that break up with greater consistency or repeatability across trials create aggregated STDs with fewer lines or tracks, with each track having a higher intensity due to repeated superposition of individual jetting events. R3A1 and R3A6 channels, for example, break up more repeatably across trials than the other featured channels.
### Spatiotemporal diagrams in \(y\)
Another convenient means of visualizing the trajectories of jets is an STD in the \(y\)-direction. STDs in \(y\) are created following the same approach as STDs in \(x\), but the binary matrix in each frame is now summed along each row and reduced to a column vector. Column vectors are stacked in time to form the STD in \(y\), which is a \(k\times i\) matrix where \(k\) is the number of \(y\)-pixels in the FOV and \(i\) is the number of frames. Aggregated STDs in \(y\) are shown for our featured channels in **Fig.14**. The \(y\) origin runs along the nozzle centerline. Dimensional or breakup information of the jet is not obtained from the STD in \(y\); the plot instead gives the tendency to find a liquid parcel at a given \(y\) location in the FOV. Moreover, individual trajectories cannot be discerned, but net deviations can be visualized. Initially, when jets emerge from the nozzle, intensity values gradually increase
as the liquid is drained from the channel and into the ligament. The intensity values reduce as the liquid parcels either exit the FOV or deviate from the centerline. The intensity reduction in the case of centerline deviation is also complemented by an increase in non-zero pixel rows in the column vector of each frame.
Figure 13: Spatiotemporal diagrams (STDs) in \(x\). **Top:** A spatiotemporal diagram for a single video with key attribute labels. Video snapshots correspond to times indicated in the STD. **Bottom:** Aggregated \(x\)-\(t\) STDs for featured channels. The number of trials \(N\) comprising each aggregate is given in Table 2. Typically, \(N=20\). Labels (A) and (B) denote trajectories of the fastest leading drops.
Individual plots are then aggregated by the same method as STDs in \(x\). In the aggregated \(y\)-\(t\) STDs, focused and repeatable jets are those where the trajectories are narrow and have greater intensities as a result of superimposition. Such is the case when trailing drops follow the leading drop, and an overall jet trajectory can be distinguished. The STD can either be centered or skewed in one direction if there is a preferential jetting direction. In cases where there is a different motion of the tail with respect to the jet tip, there is a spread in the STDs in \(y\). In **Fig.14**, the most repeatable jets are emitted from CA1 and R3A6. R2A1 has a large spread and therefore lower repeatability. The cause for the upward trajectory of jets produced by CA1 is unknown but again is likely the result of a slight upward tilt of the chip in its holder (approximately \(0.5^{\circ}\)).
### Focusing factor
Each column in **Fig.14**, a snapshot in time, has a corresponding intensity curve across \(y\). If time is collapsed and we take the maximum intensity value at each \(y\) position, we render a single curve that represents the STD, as shown in **Fig.15**. We may quantify the aggregate focus of a jet by the focusing factor \(F\), defined as the full width at half maximum (FWHM) of the intensity curves in **Fig.15**. Lower values of \(F\) correspond to more focused jets. We denote \(F\), which has units of \(\mu\)m, as a red line in **Fig.15**, and report \(F\) for all channels in **Table 2**. The location of the peaks in the FWHM curves, and their skewness in one direction indicate a preferential jetting direction for the chip configuration (similar to the STDs in \(y\)); jets that emerge and travel at an angle from the chip, have peaks at an offset from 0, a trait that is observed in almost all chips. A slight tilt in the positioning of the CA1 chip gives its jets an artificially high \(F\). Otherwise, the shape of intensity curves and \(F\) values correspond well to the diffusive nature of jets shown in **Fig.14**. All
coated channels improve the focusing factor \(F\) over their uncoated (A1) counterparts, save R3A2. Overall, the most focused jets are produced by R3A4 and R3A6, indicating that the tallest channel (R3) inherently produces the most repeatable jets. In the upcoming subsections, we discuss in detail the behavior of the jet tip and body and find the underlying mechanisms influencing their behavior.
Figure 14: Aggregated \(y\)-\(t\) spatiotemporal diagrams for featured channels.
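A sketch of the computation of \(F\) (in NumPy rather than the original MATLAB), assuming an aggregated \(y\)-\(t\) STD is available; the pixel size and the synthetic Gaussian test case are illustrative.

```python
# Sketch of the focusing factor F (NumPy in place of the original MATLAB):
# collapse a y-t STD by taking the maximum over time at each y, then measure
# the full width at half maximum of that curve. Pixel size and test data are
# illustrative.
import numpy as np


def focusing_factor(std_yt, pixel_um):
    curve = std_yt.max(axis=1)                    # maximum intensity at each y over jetting time
    curve = curve / curve.max()
    above = np.flatnonzero(curve >= 0.5)          # samples at or above half maximum
    return (above[-1] - above[0]) * pixel_um      # FWHM in micrometres


y = np.linspace(-250, 250, 126)                   # um
synthetic = np.exp(-(y[:, None] - 20.0) ** 2 / (2 * 40.0**2)) * np.ones((1, 80))
print(focusing_factor(synthetic, pixel_um=y[1] - y[0]))   # ~2.355 * 40 um ~ 94 um (grid-limited)
```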
Figure 15: Maximum normalized intensity derived from STDs in \(y\), across jetting time \(t\) for all featured channels. Negative values indicate the top of the FOV (see **Fig.4**)
Since the effect of the bias in **Fig.11** is lost, we again turn to outline curves discussed above to make a comparison with the front view trajectories. To be able to compare front and top views meaningfully, we focus on an area \(1575\times 482\)\(\mu\)m, \(\approx 1/3\)rd of the FOV presented in the previous sections, located 2300 \(\mu\)m away from the chip edge (**Fig.16**, top). The choice of this focus area is
due to our inability to visualize the channel exit from the top. The corresponding outline curves are calculated with the same procedure as in Section 3.7 and are presented in **Fig.16**. Here, as in Section 3.7, the deviation of the peak from the center of these curves represents a bias in jetting direction. However, in contrast to curves in **Fig.15**, the intensity values are not normalized. The intensity values correlate to the amount of fluid passing through the window, and therefore the difference between these values allows us to compare the extent of out-of-axis behavior between both views.
We find that for the tall R3 channels, the jet body follows the directional bias (toward the top in the front view and toward the left in the top view) set by the jet tip in all cases (**Fig.16**). For the asymmetric channels R3A2 and R3A6, the bias towards the \(-z\) direction is marked by the location of the maximum values at \(\approx-100\)\(\mu\)m for both channels. The intensity curve derived from the top camera view is lowest for R3A6 among other R3 channels, indicating that most of the jet has left the FOV before entering this window. In contrast, the symmetric channels R3A4 and R3A8 have sharp distinct peaks near the center (deviation \(<50\mu m\)), with greyscale intensities \(>200\), again supporting that symmetric coatings do not bias the jet trajectory. For the lower aspect ratio R2 channels, we also find a bias of the jet body towards the \(-z\) direction for both asymmetrically coated channels. For R2A2 channel, bias in \(-z\) in the top view is denoted by the presence of a sharp peak at \(\approx-80\)\(\mu\)m. For R2A6, this \(-z\) bias is seen as well; the top outline curve looks similar to that of R3A6, following an extreme out-of-axis behavior. The maximum value of the front outline curves is at \(y=0\), with a wide distribution over \(y\). This distribution is owed to plug flow in R2A6 instead of a focused jet tip, which makes the jet exit from the top or bottom of the channel.
For the circular channels, we find the bias towards the \(-z\) direction expected for the asymmetrically coated CA2, and the centerline trajectory expected for the symmetrically coated CA4. The gradual decrease in intensities as we go from the circular and rectangular R2 to the rectangular R3 is owing to the greater tendency to form plug flows in the smaller channels, which is not so in the larger channels. The more complete emptying of the smaller channels due to the plug flow induced in the channel following the initial cavitation event gives rise to their greater intensity values.
Figure 16: Maximum intensity derived from partial-view STDs in \(y\) (red) and \(z\) (blue), across jetting time \(t\). Negative values indicate the top of the FOV. Red channel labels indicate featured channels. **Top panel**: Diagram denoting the fields of analysis in both camera views for the partial-view STDs.
## Concluding remarks
In this experimental study, we present jet behaviors observed from micro-channels of three geometries with up to five coating configurations each. Channels are coated with alternating hydrophobic and hydrophilic bands along their periphery. Jets are generated by laser-induced thermocavitation, and the channels are initially partially filled such that the advancing meniscus is kinematically focused. Modifications of the rapidly accelerated meniscus by the different coatings influence the jet breakup, the resulting drop size distribution, the trajectory of the jet tip, and the consistency of jet characteristics across trials. Our findings agree with previous studies that the jet velocity \(U\) has a linear relationship with the bubble growth velocity \(\bar{U}_{\rm bub}\), \(U\sim\bar{U}_{\rm bub}\), as shown in **Fig.7**. No effect of the hydrophobic coatings is observed for either the circular or the rectangular cross-sectioned R2 channels. In contrast, for the higher aspect ratio R3 coated channels, the ratio of jet to average bubble velocity \(U/\bar{U}_{\rm bub}\) increases compared to the uncoated channels, indicating less hydrodynamic resistance to the rapid thermocavitation event.
We assessed how the coatings and their wettability influence the initial meniscus shape and contact line dynamics. These two factors are critical for understanding the jet tip direction and the jet body behavior. Asymmetrically coated channels produce an off-axis jet tip trajectory with a clear bias towards the hydrophobic channel wall. Although we could not image the meniscus from the top, we suggest that the asymmetrically shaped meniscus results in the observed flow focusing toward the hydrophobic wall. Furthermore, rectangular channels with a hydrophilic strip in the middle, such as R3A6, reduce the out-of-axis trajectory. This is due to the hydrophobic strips geometrically delimiting the flow of the jet in the middle of the channel. The effect of the energy barriers is reduced for circular channels and R2 channels due to their tendency to produce plug-like jets; the emitted jets wet the whole perimeter of the channel and are wider than the hydrophilic strips, resulting in low flow focusing.
For the analysis of the jet dynamics in time and space, we have developed a spatiotemporal diagram (STD) representation, which can be generated in both the \(x\) and \(y\) directions. STDs in \(x\) give information about the jet breakup, coalescence, and the trajectory of individual drops. STDs in \(y\) give information about the off-axis direction of the entire jet, both tip and body. By extracting the maximum intensity across jetting time in the \(y\)-\(t\) STDs, we obtain profiles that concisely show the jet bias off the centerline and the focusing factor \(F\), as sketched below.
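For illustration, the snippet below is a minimal sketch of one way to compute such an outline curve, assuming the \(y\)-\(t\) STD is available as a two-dimensional array of grayscale intensities indexed by transverse position and time; the function and variable names are illustrative and are not taken from our analysis scripts.

```js
// Sketch: outline curve from a y-t spatiotemporal diagram (STD).
// std[y][t] holds the grayscale intensity at transverse position index y and time index t.
function outlineCurve(std) {
  // For every y position, keep the maximum intensity reached across jetting time t.
  const curve = std.map(row => Math.max(...row));
  // The deviation of the peak location from the channel centerline indicates the jetting bias.
  const peakIndex = curve.indexOf(Math.max(...curve));
  const centerOffset = peakIndex - (std.length - 1) / 2;
  return { curve, peakIndex, centerOffset };
}
```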
We avoid referring to any one channel as superior. The jetting characteristics of any particular channel may well be optimally suited to a particular application. For example, needle-free dermal injections will work best with jets that remain coherent over greater distances and exhibit limited off-axis behavior such as tail sway. Jets aimed at uniformly coating surfaces may work best with tails that deviate from the trajectory of their leading drops. The exquisite tunability of the present system through variation of geometry, heterogeneous surface chemistry, laser properties, and more paves a bright future for its adaptation to a wide range of applications.
## Acknowledgements
We would like to thank the National Science Foundation CBET-1941341 and the European Research Council (ERC) under the European Union Horizon 2020 Research and Innovation Programme (grant agreement no. 851630) for support, and McKenna E.M. Goss for text edits. Furthermore, we would like to thank Stefan Schlautmann for the fabrication of the microfluidic chips.
## Data access
Raw experimental videos and data are publicly available in perpetuity via OneDrive. Interested parties should contact the corresponding author for access. |
2307.13603 | Blockchain inspired secure and reliable data exchange architecture for
cyber-physical healthcare system 4.0 | A cyber-physical system is considered to be a collection of strongly coupled
communication systems and devices that poses numerous security trials in
various industrial applications including healthcare. The security and privacy
of patient data is still a big concern because healthcare data is sensitive and
valuable, and it is most targeted over the internet. Moreover, from the
industrial perspective, the cyber-physical system plays a crucial role in the
exchange of data remotely using sensor nodes in distributed environments. In
the healthcare industry, Blockchain technology offers a promising solution to
resolve most securities-related issues due to its decentralized, immutability,
and transparency properties. In this paper, a blockchain-inspired secure and
reliable data exchange architecture is proposed in the cyber-physical
healthcare industry 4.0. The proposed system uses the BigchainDB, Tendermint,
Inter-Planetary-File-System (IPFS), MongoDB, and AES encryption algorithms to
improve Healthcare 4.0. Furthermore, blockchain-enabled secure healthcare
architecture for accessing and managing the records between Doctors and
Patients is introduced. The development of a blockchain-based Electronic
Healthcare Record (EHR) exchange system is purely patient-centric, which means
the entire control of data is in the owner's hand which is backed by blockchain
for security and privacy. Our experimental results reveal that the proposed
architecture is robust to handle more security attacks and can recover the data
if 2/3 of nodes are failed. The proposed model is patient-centric, and control
of data is in the patient's hand to enhance security and privacy, even system
administrators can't access data without user permission. | Mohit Kumar, Hritu Raj, Nisha Chaurasia, Sukhpal Singh Gill | 2023-06-28T14:47:59Z | http://arxiv.org/abs/2307.13603v1 | Blockchain Inspired Secure and Reliable Data Exchange Architecture for Cyber-Physical Healthcare System 4.0
###### Abstract
A cyber-physical system is considered to be a collection of strongly coupled communication systems and devices that poses numerous security challenges in various industrial applications, including healthcare. The security and privacy of patient data is still a big concern because healthcare data is sensitive and valuable, and it is among the most targeted data over the internet. Moreover, from the industrial perspective, the cyber-physical system plays a crucial role in the exchange of data remotely using sensor nodes in distributed environments. In the healthcare industry, Blockchain technology offers a promising solution to resolve most security-related issues due to its decentralization, immutability, and transparency properties. In this paper, a blockchain-inspired secure and reliable data exchange architecture is proposed for the cyber-physical healthcare industry 4.0. The proposed system uses BigchainDB, Tendermint, the Inter-Planetary File System (IPFS), MongoDB, and the AES encryption algorithm to improve Healthcare 4.0. Furthermore, a blockchain-enabled secure healthcare architecture for accessing and managing the records between Doctors and Patients is introduced. The developed blockchain-based Electronic Healthcare Record (EHR) exchange system is purely patient-centric, which means the entire control of data is in the owner's hand, backed by blockchain for security and privacy. The experimental results show that the proposed architecture is robust enough to handle modern security attacks and can recover the data even if 2/3 of the nodes fail. The proposed model is patient-centric, and control of data is in the patient's hand to enhance security and privacy; even system administrators cannot access data without user permission.
Cyber-Physical System, Blockchain Security, Healthcare 4.0, Electronic Health Records, BigchainDB, Data Privacy.
## 1 Introduction
In the growing world of technology, things around us are becoming smarter than we think. Sectors like the healthcare industry are also being revolutionized by the latest technologies. As technology grows, the quality and efficiency of the healthcare industry are also increasing rapidly. Both doctors and patients are getting the benefits of technological advancement in the healthcare industry. We now get lab reports, MRI, and CT scans in less time, and they are more efficient as well as more accurate than earlier; digital X-ray has revolutionized the way we look at fractures and tumors in bone, and digital storage of healthcare records opens a new way for patient care using deep learning and AI technologies. In addition, continuous remote monitoring of patients, collecting real-time data from the patients using IoT sensors, and performing the analysis without delay are possible due to advancements in technology [1]. We can now predict severe diseases (like cancer) more accurately and can prescribe medicine at a very early stage. Although storing healthcare data digitally offers many benefits, it also opens doors to security threats and data loss. As we know, healthcare data is critical data; it consists of confidential and sensitive information related to patients. Hence, we require a reliable mechanism to ensure the integrity, security, and privacy of such sensitive data. Integration of blockchain technology with the healthcare industry can solve problems related to data integrity and security [2]. We can now exchange health-related patient data more efficiently and securely with doctors and healthcare providers.
Initially (in the 1970s), the healthcare system, referred to as Healthcare 1.0, suffered from a severe shortage of resources and a restricted ability to cooperate with digital systems. Costs and time both increased in the absence of embedded bio-medical sensors, as healthcare companies relied on paper-based prescriptions and reporting during that period. Healthcare 2.0 spanned roughly 1991 to 2005. Digital tracking was used in this phase, enabling physicians to use imaging equipment to examine the health of patients. With the adoption of the internet platform, healthcare providers started to establish online communities and use cloud servers to store patient information, which allowed ubiquitous access for both the patient and the practitioner. Healthcare 3.0 gave rise to the concept of user-customization of patient healthcare records. New user interfaces enabled customized and optimized experiences. In addition to these advancements, healthcare record systems were implemented that can track patients' medical data in real time and at a universal level.
Similarly, stand-alone non-networked systems, such as social media channels, began to emerge alongside EHR systems, such as HL7, that were integrated to hold patient information. This reduced the sharing of health data, whether on the network or between clinicians using HL7. These methods also improved the ability to interact and communicate with patients. The Healthcare 4.0 era began in 2016 and continues to the present day [3]. In this period, a number of different technologies, including fog computing, edge computing, cloud computing, the Internet of Things, advanced analytics, artificial intelligence, machine learning, and blockchain, have been combined to make it a smart healthcare system, or Healthcare Industry 4.0. The primary focus is on wearable health sensors, so customized healthcare in real time is possible. Figure 1 presents an illustrated view of the healthcare industry.
1. Motivation: Blockchain has become a prominent technology to ensure the security and privacy of user data and is used in several applications like healthcare, transportation, agriculture, smart homes, supply chains, etc. The healthcare data of the patient is sensitive and valuable and cannot be accessed by unauthorized users. The security and privacy of data is a challenging issue because healthcare data is the most targeted data over the internet. The emergence of thrust technologies, especially blockchain along with IoT, edge computing, and AI, has provided huge development in smart healthcare systems that were suffering from several issues like security attacks, reliability, centralized systems, cost, latency, etc. The proposed healthcare system addresses the mentioned challenges and enhances security, reliability, and transparency along with a decentralized architecture. This research gives rise to blockchain-based solutions that include privacy protection measures, data integrity, resistance to single-point-of-failure vulnerabilities, and safe information exchange among healthcare facilities. Also, the research benefits healthcare industry stakeholders in better managing healthcare systems. It leads to the transformation from a traditional healthcare system (hospital-centric) towards a digital healthcare system (patient-centric). Also, this work helps to further scientific understanding by enabling other researchers to extend their research. The main contributions of the article are given as follows:
* We have proposed a robust blockchain-enabled healthcare architecture for a patient-centric method that provides a cryptographic access control mechanism for a healthcare provider with a distinct medical institution.
* The proposed blockchain-based technique is used for creating a permission-based electronic health record-sharing system and for investigating how the suggested system meets the requirements of patients, healthcare professionals, and others.
* The proposed blockchain model secures healthcare data from modern attacks and ensures integrity, authentication, and resistance to single-point-of-failure vulnerabilities.
* The objective of the proposed approach is to provide a platform that is free from modern types of attack.
* We finally decide the best course of action to optimize the blockchain system's performance based on the quality of service (QoS) parameters.
The remaining structure of the article is as follows: Section 2 gives a description of the related work in the problem domain. The preliminary study is presented in Section 3. Section 4 describes the proposed system, including advanced technologies such as the Interplanetary File System, BigchainDB, Tendermint, MongoDB, and cryptographic algorithms. The
Fig. 1: Growth of the healthcare industry over time
proposed blockchain-enabled secure architecture for the healthcare system is discussed in Section 5, and Section 6 presents the implementation details, results, and discussion. Finally, the conclusion and future work are discussed in Section 7.
## 2 Literature Review
The healthcare system has improved with the adoption of evolving technologies such as the Internet of Things (IoT), edge computing, artificial intelligence, machine learning, etc., but the security and privacy of healthcare-related data is still a challenging issue [36]. This data consists of sensitive and confidential information about the patients, and it could be risky for a patient's life if some data is tampered with or leaked through modern attacks by attackers over the internet. Hence, researchers have proposed several approaches for managing and storing healthcare data. B. Shickel et al. provide a summary of deep learning-based approaches that have been applied in clinical applications to analyze EHRs [6]. The authors have elaborated on the limitations of existing frameworks and models applied to heterogeneous data, patient records, and certificates. Basic image processing research is focused on more complicated and hierarchical representations of pictures and creative image processing with greater and more sophisticated structures. Z. Ying et al. proposed an efficient strategy for cloud computing environments using CP-ABE schemes, which maintains the CP-ABE for medical record exchange in a cloud environment [7]. The experimental results successfully demonstrated that the system can provide a policy-preserving as well as attribute-recovery implementation with little performance cost. X. Yang et al. published an article on sharing Electronic Health Records (EHRs) in the cloud with the help of blockchain technology [8]. They designed a secure EHR sharing system termed BVO-ABSC by using blockchain and ABSC. The proposed model ensures the secrecy, accuracy, and unforgeability of EHRs. The cloud services authenticate the users' identities while preserving their privacy, with EHRs stored in the cloud. ABS enables the EHRs to be uploaded by authorized users and keeps the signers' identities anonymous, but cloud-based systems have high latency and cannot be used where the condition of patients is severe.
Y. Wang et al. describe a method for exchanging medical records using a blockchain-enabled cloud computing approach [9]. Security and privacy are further addressed through authorization and other methods using blockchain. The basic design structure for the consortium blockchain consists of the following critical components: the networking model, data creation, and the consensus mechanism. The proposed approach was tested only on the Ethereum platform and does not guarantee high efficiency on other platforms; also, storing and retrieving the data from a cloud platform requires high bandwidth. Y. Zhuang et al. proposed a framework for exchanging information securely using blockchain technology and overcoming the barriers of the current system [10]. The authors introduce two modules: a Request Module and a Linkage Module. The developed model not only ensures the privacy of patient data but also provides patients complete control of their healthcare data. The simulation results of the proposed framework proved that the blockchain-based approach improves the current healthcare system in terms of QoS parameters like security, stability, and robustness. The main limitation of the proposed approach is that the performance of the system depends upon the properties of the blockchain nodes. Y. Yang et al. proposed a cloud-based patient data exchange system named MedShare that can deal with interoperability issues and overcome the barriers of the existing system [11], mediating the situation where independent healthcare providers are uninterested in sharing patient data with their consumers and show a lack of desire to pass the data to their rivals. However, the reliability of the proposed scheme depends upon the public cloud, and the approach requires extra cost to implement data transformers.
F. Deng et al. introduced personal health record exchange using attribute-based signcryption [12]. They formally establish the CP-OABSC framework, which relies on ABE schemes and verified outsourced decryption but uses server-aided signature verification. A hybrid encryption technique is used that utilizes an attribute-based encryption approach, and the system is proved to be accurate, secure, and robust. Z. Wu et al. [13] proposed a group-oriented bilinear pairing-based cryptosystem that can accommodate four encryption and decryption models. Instead of maintaining multiple keys, members only need to maintain one private key, and a sender only has to conduct one round of encryption regardless of the model used. A. Ali et al. have proposed a novel patient-centric framework that guarantees the security and privacy of patient data while ensuring minimum cost [23]. To enhance the security mechanism, the authors have used a deep learning-based approach along with homomorphic encryption [24]. Further, group theory along with binary spring search (BSS) is applied to achieve trustworthiness, reliability, and confidentiality [25]. The whole idea was implemented using smart contracts to reduce anomalies in the IoT system. To access health records securely, multiple certificate authorities (CAs) along with Blockchain have been presented by the authors to overcome the limitations of a single certificate authority [26]. To ensure the security of IoT devices, a lightweight authentication approach has been proposed that validates the latency and improves communication statistics and data preservation [27].
Two-level security techniques have been proposed, where a blockchain-based approach is used to register and validate the patients using smart contract-based proof of work, and further, deep learning along with a Variational AutoEncoder (VAE) based approach is used to detect intruders [28]. Verifying medical certificates and avoiding the possibility of forging certificates of healthcare records was a challenging issue in the traditional system; this has now been addressed in Industry 4.0 with the help of thrust technologies like IoT, Blockchain, and AI [29-30]. The proposed approaches guarantee a security solution with authentication and access control using smart contracts [31]. Blockchain has been applied in various domains, such as secure sharing without any trusted third party [32], secure IoT infrastructure for advanced healthcare systems [33], and many more. Two techniques (node-based matrices and safeness scores) have been introduced for complex industrial systems by the authors to conceal the nodes from various community detection algorithms [34]. The objective of that approach was to minimize the persistence score and maximize the safeness score to achieve the desired result. The authors have introduced a privacy-preserving Distributed Application (DA) to generate, verify, and maintain the healthcare-related medical certificates of patients using smart contracts [35]. Blockchain technology has been applied by researchers in diverse areas; these are discussed in Table 1 along with techniques and limitations.
## 3 Preliminary Study
**3.1 Security and Privacy Issues in Healthcare System:** Patient healthcare data is very sensitive and needs substantial security and privacy measures. Maintaining the individual privacy of patient information is the starting point when it comes to deciding who should be granted access. To overcome this challenge, a number of security standards have been developed, including HIPAA, COBIT, and DISHA, to protect patient data. The privacy of patients' information involves maintaining controls on access to the patient information, guarding against illegal access to patient data, and removing or destroying data that is no longer needed. Figure 2 illustrates the hierarchical structure of security measures in Electronic Health Records.
**3.2 Statistics of Data Breaches:** The U.S. Department of Health and Human Services (HHS) reported 3,687 health information thefts of 500 or more records between 2010 and 2020 [4]. As a consequence, about 250 million healthcare records have been lost, stolen, exposed, or disclosed in error. As of 2018, there was around one data breach every day that involved 500 or more records. By December 2020, that rate had doubled. Figure 3 represents the graphical view of the number of breaches per year. Record exposure increased dramatically in 2015, which was the worst year in history for healthcare breaches due to the exposed, stolen, or impermissibly disclosed records of over 100 million individuals. Owing to major cyber-attacks at three main health insurance providers, Anthem, Premera Blue Cross, and Excellus, 2015 was a particularly bad year. A better illustration is provided by the graph shown in Figure 4.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Year** & **Technique Used** & **Application** & **Limitations** \\ \hline
2018 [11] & MedShare & Extracting, securely sharing, and maintaining patient medical data & Reliability depends upon the public cloud and requires extra cost \\ \hline
2019 [14] & Dividing network participants into clusters and maintaining one copy of the ledger per cluster & Healthcare data management & Data can be compromised if the cluster head is compromised \\ \hline
2019 [15] & Ethereum, IPFS, Smart Contract & Storing EHR & Lack of control over data because some data is off-chain \\ \hline
2020 [16] & Hyperledger Fabric, Hyperledger Composer & Preserving privacy of EHR & Lack of use cases; complex architecture \\ \hline
2018 [17] & Hyperledger Fabric & Healthcare & Complex architecture \\ \hline
2019 [18] & N/A & Supply chain management & Only a theoretical concept is given; no implementation details are available \\ \hline
2017 [19] & Distributed Ledger & Finance & Lack of adoption because of its decentralized nature \\ \hline
2020 [20] & Hyperledger Fabric & Healthcare & Complex architecture \\ \hline
2019 [21] & Smart Contract & EHR management & Patient-centric \\ \hline \end{tabular}
\end{table}
Table 1: Applications of Blockchain techniques in diverse areas
**3.3 Electronic Health Records:** Electronic Health Records (EHRs) are digital records that include information about a patient's medical history. A hospital or a clinician has the responsibility to maintain electronic medical records in a digital format over time. All relevant clinical data essential for the treatment of the patient is included, such as MRI reports, previous medical records, vaccinations, laboratory test results, and any patient allergies. This patient-specific
Fig. 4: No. of Individuals affected each year (2010-20).
Fig. 3: No of Data Breaches Year-wise (2010-20)
Fig. 2: Hierarchical structure for security and privacy measure in Cyber-Physical EHR
information is easily accessible to the patient or the doctor, and it is accessible only to authorized users. Sharing these records only with authorized caregivers in the healthcare sector also offers improvements for research.
**3.4 Blockchain:** A blockchain is a chain of blocks that are interlinked with each other to store records. An intriguing feature is that once information has been stored on a blockchain, it becomes practically impossible to alter. Each block stores its own hash, computed over the data in the block, together with the hash of the preceding block. For instance, consider the chain of three blocks represented in Figure 5: each block contains a hash and the hash of the previous block; the third block is connected to the second, and the second is connected to the first. The first one cannot point to an earlier block because it is the initial block, named the genesis block. Now suppose you modified the contents of the second block. Its hash would change, too. Since the following blocks can no longer reference a valid hash of their preceding block, they are all considered invalid: a change to a single block invalidates the blocks that follow it. But hashing alone is not enough to prevent manipulation.
In the world of cryptocurrency mining, computers have advanced computational power and can compute millions of hashes in one second, so an attacker could simply recalculate the hashes of all subsequent blocks and make a tampered chain look valid again. To avoid this, blockchain uses the proof-of-work approach. This technique slows down the rate at which new blocks are created. It
Fig. 5: Blockchain node with Genesis block and Hash values
Fig. 6: Process of adding the new block in Cyber-Physical networks
takes approximately 10 minutes to perform the proof-of-work calculation and include a new block into the blockchain. This design makes it difficult to cheat by changing a single block because it requires re-computing the proof-of-work for all subsequent blocks. So, the security of a blockchain derives from its combined use of hashing and proof-of-work. Blockchain utilizes a peer-to-peer network rather than a central authority to govern the chain. Every new member receives the whole blockchain after joining the network and uses it to check whether everything is still intact. The process of adding a new block can be seen in Figure 6: when a new record is created (called a transaction), it is broadcast to all the nodes in the network. Then more than 50% of the nodes must validate that it is a valid transaction. After that, a new block with the corresponding transactions is added to the existing blockchain ledger, and the response of the successful transaction is received.
#### 3.4.1 Cryptographic Hash function
It is a process that takes an input and produces an output in a one-way conversion [5]. Hashing is the process of turning an input of any length into a fixed-size string of text using a mathematical formula. To put it another way, to find the hash of a message is to employ a function called a hash function, and the hash value is the result of that process. A cryptographic hash function must possess certain characteristics in order to be worthwhile. The output must be unique to the input: it should be computationally infeasible to derive the same hash value from several different inputs, and, at the same time, a single message should always produce the same hash value.
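As a concrete illustration, the built-in crypto module of Node.js (the runtime used for our implementation, see Table 2) exposes such a hash function; the minimal sketch below shows the determinism of the digest and its sensitivity to any change in the input. The sample strings are illustrative only.

```js
const crypto = require('crypto');

// A cryptographic hash maps an input of any length to a fixed-size digest.
function sha256(message) {
  return crypto.createHash('sha256').update(message).digest('hex');
}

console.log(sha256('patient record v1')); // a 64-hex-character digest
console.log(sha256('patient record v1')); // identical to the line above (determinism)
console.log(sha256('patient record v2')); // a one-character change yields a completely different digest
```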
#### 3.4.2 Smart Contract
In fact, smart contracts are much like the standard contracts used in the "real world." There is only one difference between smart and standard contracts: in a smart contract, everything is digital. A smart contract is a piece of software that is kept on a blockchain.
**3.5 Proof of Work:**
#### 3.5.1 Preliminaries for the validation of transactions in Blockchain
It is the responsibility of the blockchain to authorize legitimate entities and validate the transaction data using protocols. Further, it also determines whether a new node is a fake node or a legitimate node. A node does not trust the information it receives, so it performs a few checks using its own validation protocol during the process.
\(\mathbb{P}_{r}\mathcal{V}:\mathbb{N}\times\mathcal{E}\rightarrow\{0,1\}\) (1)

\((\mathbb{n},\mathbb{T}_{r})\mapsto\mathbb{P}_{r}\mathcal{V}(\mathbb{n},\mathbb{T}_{r})\) (2)
where \(\mathbb{N}\) is the set of nodes, \(\mathbb{T}_{r}\) is a transaction, and \(\mathcal{E}\) represents the set of transactions. Suppose a transaction \(\mathbb{T}_{rx}\) enters the network at a node \(\mathbb{n}_{enter}\), which computes \(\mathbb{P}_{r}\mathcal{V}(\mathbb{n}_{enter},\mathbb{T}_{rx})\) for validation. If the value is 1, the transaction is valid and is broadcast to the nearby nodes; otherwise it is rejected. After entering the network, transaction \(\mathbb{T}_{rx}\) eventually reaches a complete node (\(\mathbb{n}_{c}\)) that verifies the identity of the sender. If \(\mathbb{T}_{rx}\) is valid, then \(\mathbb{n}_{c}\) appends it to its list of valid transactions using equation (3):
\(\mathbb{n}_{l}^{\mathbb{n}_{c}}.\mathrm{append}(\mathbb{T}_{rx})\) (3)
Eventually, a new block (representing a set of valid transactions) is created out of a subset of transactions in the local list of \(\mathbb{n}_{l}^{\mathbb{n}_{c}}\):
\(\mathbb{B}_{n}=(\mathbb{T}_{r}^{\,1},\mathbb{T}_{r}^{\,2},\ldots,\mathbb{T}_{r}^{\,N})\) (4)
where \(\mathbb{T}_{r}^{\,i}\in\mathbb{n}_{l}^{\mathbb{n}_{c}}\). Let \(\mathbb{B}_{last}^{\mathbb{n}_{c}}\) denote the last block in the local chain of \(\mathbb{n}_{c}\); according to the protocol, \(\mathbb{n}_{c}\) starts to solve the proof-of-work for \((\mathbb{B}_{last}^{\mathbb{n}_{c}},\mathbb{B}_{n})\). Many complete nodes \(\mathbb{n}_{c}\) do this in parallel, i.e., they compete with each other.
After a given time frame, the node generates a new block and starts solving for the next block. Suppose a new block \(\mathbb{B}_{new}\), along with its header, is created by a node and sent to the nearest nodes in the network. When \(\mathbb{B}_{new}\) is received by a node \(\mathbb{n}_{i}\), if any of the transactions \(\mathbb{T}_{r}\) in \(\mathbb{B}_{new}\) is found invalid, then the whole block is rejected. A complete node \(\mathbb{n}_{c}\) must decide whether to add \(\mathbb{B}_{new}\) to its local blockchain. \(\mathbb{B}_{new}\) is added to the local version of the blockchain only if the hash of the previous block (\(\mathbb{B}_{prev}\)) in the transmitted header Head(\(\mathbb{B}_{new}\)) and the hash of the last block in the local blockchain are equal. Once \(\mathbb{n}_{c}\) has accepted \(\mathbb{B}_{new}\), it removes the set of transactions present in this block from its local list. Suppose that at some instant all nodes of the network agree on the same version of the blockchain, say \(\mathcal{B}k_{0}\), and let \(\mathcal{B}_{1}\) and \(\mathcal{B}_{2}\) be two blocks broadcast by two different nodes at roughly the same later time, so that some nodes append \(\mathcal{B}_{1}\) and others append \(\mathcal{B}_{2}\) to their local chains. Now suppose a subsequent block \(\mathcal{B}_{3}\), whose previous block is \(\mathcal{B}_{1}\), is broadcast.
When \(\mathcal{B}_{3}\) reaches a node (say \(\mathbb{n}_{f}\)) with local blockchain \(\mathcal{B}k_{0}+\mathcal{B}_{2}\), there will be a problem. Since \(\mathcal{B}_{3}\) needs \(\mathcal{B}_{1}\) as the previous block, it cannot be added to the local chain of \(\mathbb{n}_{f}\) (dilemma condition). Since the dilemma is for the blocks after \(\mathcal{B}k_{0}\), there are two options: \(\mathcal{B}k_{0}+\mathcal{B}_{1}+\mathcal{B}_{3}\) and \(\mathcal{B}k_{0}+\mathcal{B}_{2}\). As per the protocol, the longest chain will be kept, i.e., \(\mathcal{B}k_{0}+\mathcal{B}_{1}+\mathcal{B}_{3}\), and the transactions of block \(\mathcal{B}_{2}\) are again taken into consideration for a new block. As the depth of the chain keeps increasing, the confidence in the deeper blocks keeps increasing across the whole blockchain network. Ultimately, all the nodes in the network achieve consensus.
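The hash-linking and proof-of-work checks described above can be sketched as follows. This is an illustrative simplification (fixed difficulty, JSON-serialized blocks, generic nonce search), not the consensus logic of Tendermint or BigchainDB used later in the paper.

```js
const crypto = require('crypto');

// Hash of a block, computed over its full JSON serialization.
const blockHash = (block) =>
  crypto.createHash('sha256').update(JSON.stringify(block)).digest('hex');

// Proof-of-work: search for a nonce so that the block hash starts with `difficulty` zeros.
// The block object is assumed to carry a numeric `nonce` field.
function mine(block, difficulty = 4) {
  while (!blockHash(block).startsWith('0'.repeat(difficulty))) block.nonce++;
  return blockHash(block);
}

// A chain is valid only if every block references the hash of its predecessor and
// satisfies the proof-of-work target; changing one block invalidates all that follow.
function isValidChain(chain, difficulty = 4) {
  return chain.every((block, i) => {
    const linked = i === 0 || block.prevHash === blockHash(chain[i - 1]);
    const worked = blockHash(block).startsWith('0'.repeat(difficulty));
    return linked && worked;
  });
}
```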
## 4 Proposed Model
The proposed blockchain-enabled cyber-physical EHR sharing system architecture uses different techniques and systems to manage the block transactions. The system's EHRs may be disseminated to blockchain network members via a shared symmetric key and public key. A simple stepwise process of the proposed system is shown in Figure 7, which involves several entities such as the patient, the doctor, the blockchain, and others. Internet of Things (IoT) devices or sensors are used to collect healthcare data from patients and send it to edge devices for processing without delay. The entire record of a patient's data is saved at the edge devices by the developed application.
The registered doctor sends a request for accessing that data; the request is granted only to legitimate doctors, who are then allowed to upload prescriptions for the patients based on the healthcare data. In addition, doctors cannot share the sensitive data of patients without permission. Here, blockchain technology plays an important role in securing the data over the internet during transmission. It is the responsibility of the blockchain to authorize legitimate entities and validate the transaction data using protocols. The basic functionalities of the different tools and technologies used in the implementation of the blockchain-based EHR system are discussed briefly below.
**A. Interplanetary File System (IPFS):** The IPFS file system stores and shares data by implementing a peer-to-peer (P2P) system. Every item of content that is uploaded to IPFS is associated with a unique hash. Even if you make only a single change, the hash becomes completely different. Instead of utilizing a domain name, IPFS uses the content of the file to determine its location, and the data's name is unchangeable. All data references are kept in a Kademlia-based DHT. Routing involves announcing fresh data to the network and also helps to find requested data. Small data values (about 1 KB in size) are directly integrated into the DHT. As data becomes large, the DHT stores references to the nodes that hold the block data.
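For illustration, the minimal sketch below uses the ipfs-http-client package against a locally running IPFS daemon; the API address (default port 5001) is an assumption of this sketch, not part of the deployed system description.

```js
const { create } = require('ipfs-http-client');

async function storeAndFetch(buffer) {
  // Connect to a locally running IPFS daemon (API address assumed for this sketch).
  const ipfs = create({ url: 'http://localhost:5001/api/v0' });

  // Adding content returns a CID derived from the content itself:
  // changing a single byte of the data produces a different CID.
  const { cid } = await ipfs.add(buffer);

  // The CID is all that is needed to retrieve the content from any peer holding it.
  const chunks = [];
  for await (const chunk of ipfs.cat(cid)) chunks.push(chunk);
  return { cid: cid.toString(), data: Buffer.concat(chunks) };
}
```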
**B. BigchainDB:** It is built for high throughput, low latency, powerful query capabilities, decentralized control, immutable data storage, and built-in asset support. The software is open-source and supports several programming languages (Java, Python, and JavaScript), as well as Docker. Two types of transactions are defined in this real-time database. The first, the CREATE transaction, enables users to create new records in the database. The second is the TRANSFER transaction, which
Figure 7: Securing process of health records in the Cyber-Physical blockchain environment
transfers ownership of a specified record to another user. Another important aspect of BigchainDB transaction blocks is that they consist of three major elements: the asset, the metadata, and the transaction ID.
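A minimal sketch of a CREATE transaction with the official JavaScript driver is given below. The asset and metadata fields shown (an IPFS hash and a record type) and the local root URL are illustrative assumptions of this sketch.

```js
const driver = require('bigchaindb-driver');

async function createRecord(ipfsHash, ownerKeypair) {
  const conn = new driver.Connection('http://localhost:9984/api/v1/'); // assumed local endpoint

  // Asset: the immutable payload of the CREATE transaction
  // (here, a pointer to the encrypted file stored on IPFS).
  const tx = driver.Transaction.makeCreateTransaction(
    { ipfsHash, type: 'EHR' },                        // asset (illustrative fields)
    { createdAt: new Date().toISOString() },          // metadata
    [driver.Transaction.makeOutput(
      driver.Transaction.makeEd25519Condition(ownerKeypair.publicKey))],
    ownerKeypair.publicKey
  );

  // Sign with the owner's private key and commit; the transaction id identifies the asset.
  const signed = driver.Transaction.signTransaction(tx, ownerKeypair.privateKey);
  return conn.postTransactionCommit(signed);
}
```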
**C. Tendermint:** It is a piece of software that may be used to reliably and securely replicate an application on many computers. Tendermint continues to operate even if up to one-third of the computers fail in arbitrary ways. The non-faulty machines see the same transaction log and compute the same state. Reliable and consistent replication is a fundamental problem in distributed systems; it occurs in many applications such as currencies, elections, and infrastructure orchestration, and it is essential for system fault tolerance. A system that can withstand such malfunctions, including machines turning malevolent, is called Byzantine fault tolerant (BFT). The block diagram of the Tendermint node is shown in Figure 8.
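As a small illustration of the replication layer, the sketch below queries the status endpoint of a local Tendermint RPC interface. The default RPC port 26657, the field names of the response, and the availability of the built-in fetch API (Node 18 or later) are assumptions of this sketch.

```js
// Quick health check of the local Tendermint RPC endpoint (port 26657 assumed).
async function tendermintStatus() {
  const res = await fetch('http://localhost:26657/status');
  const { result } = await res.json();
  return {
    nodeId: result.node_info.id,                          // identity of this validator/full node
    latestBlockHeight: result.sync_info.latest_block_height, // height of the last replicated block
  };
}
```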
**D. MongoDB:** A username and password may be kept in an authentication system, but in the case of blockchain or IPFS, using these methods in the user interface carries a cost burden. As a result, all additional local information is stored in a separate storage location, and each account is assigned a cryptographically secure public key. Thus, MongoDB was the logical selection.
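A minimal sketch of storing such auxiliary account information with the official Node.js MongoDB driver is shown below; the connection string, database, and collection names are illustrative assumptions.

```js
const { MongoClient } = require('mongodb');

async function registerUser(name, role, publicKey) {
  // Connection string assumed for a local MongoDB instance holding auxiliary (off-chain) data.
  const client = new MongoClient('mongodb://localhost:27017');
  await client.connect();
  try {
    // Only non-sensitive account data is kept here; the private key never leaves the user.
    const users = client.db('ehr').collection('users');
    await users.insertOne({ name, role, publicKey, createdAt: new Date() });
  } finally {
    await client.close();
  }
}
```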
**E. Cryptographic Algorithm:** To restrict access to the data and allow only specific people, it is necessary to use some kind of encryption. Symmetric encryption uses a key to encrypt documents and convert them into an unreadable form. In order to recover the original document, the decryption process is required using the same key. We have used the Advanced Encryption Standard (AES) as the symmetric algorithm for encryption and decryption to implement the blockchain-enabled proposed architecture for the healthcare system.
## 5 Blockchain-enabled Secure Architecture for Healthcare Systems
We have proposed a blockchain-enabled secure architecture for the healthcare system that is patient-centric and used for creating a permission-based EHR sharing system. The aim of the proposed architecture is to provide a platform that is free from modern types of attack. The proposed architecture shown in Figure 9 demonstrates the deployment of blockchain technology for exchanging EHRs in three levels. Different IoT sensors, such as heart rate, blood pressure, and motion sensors, are attached to the patient's body for various purposes, like real-time monitoring of heart rate and oxygen saturation level for enhanced treatment, and are used to collect health-related data. Some of these are implanted devices that have resource constraints and, in particular, are unable to perform computation. Hence, an emerging computing paradigm, edge computing, is used to send the data or process a limited amount of time-sensitive data. The data is stored, shared, and updated by the entities of healthcare personnel in the blockchain networks. This is a medical software application that allows doctors and patients to transmit and
Figure 8: Block diagram of data in Tendermint node
receive health information or services from distant locations using blockchain technology. The proposed blockchain-enabled architecture ensures the privacy and security of patient data.
## 6 Implementation details, Results, and Discussion
This section is divided into two parts. The first part describes the experimental setup for implementing the proposed blockchain approach, the AES algorithm, the storage of encrypted files, and details about the blockchain transactions. The second part discusses the registration of patients and doctors on the developed platform, adding patient records, collecting patient data, securing the healthcare data of patients, and the results in detail.
### Implementation Details:
Medical data may be safely kept in an open database like IPFS by using the aforementioned technologies. The massive data is not kept on the blockchain itself, but on the distributed network that uses IPFS. Healthcare 4.0, combined with the Tendermint blockchain and the Interplanetary File System (IPFS), can introduce transformative changes to the healthcare industry. Blockchain can play a significant role by providing a secure, transparent, and decentralized infrastructure for various healthcare applications. Let's discuss some technical aspects of how the Tendermint blockchain can be applied in Healthcare 4.0:
**Secure and Immutable Data Storage:** Blockchain, in conjunction with IPFS, can provide a secure and decentralized infrastructure for storing healthcare data. IPFS allows for the distributed storage of files, utilizing a content-addressable system where files are identified by their unique hash. This ensures data integrity and prevents tampering or unauthorized modifications. By storing the file hashes on the blockchain, the immutability and transparency of data storage can be ensured.
**Data Interoperability and Exchange:** One of the key challenges in healthcare is the fragmented nature of patient data across different healthcare providers and systems. Blockchain can facilitate data interoperability and exchange by creating a unified and standardized platform for securely storing and sharing patient health records. Using a decentralized blockchain, patients can have control over their data and grant permissions to healthcare providers or researchers, ensuring data privacy and consent management.
**Secure Medical Records:** Blockchain can be used to create an immutable and tamper-resistant ledger for storing electronic health records (EHRs) and medical information. Each transaction or update to the medical records can be recorded as a block on the blockchain, ensuring transparency, data integrity, and protection against unauthorized modifications. This secure and auditable storage mechanism enhances patient trust and enables seamless access to medical records across different healthcare providers.
**Consent Management and Privacy:** Patient consent management is critical in healthcare, particularly when sharing sensitive medical information for research or treatment purposes. With blockchain, consent management can be implemented using smart contracts. Patients can define granular consent rules and grant temporary access to their medical data for specific purposes or periods. The blockchain's transparent nature ensures that consent agreements are recorded, immutable, and enforceable, enhancing privacy and trust between patients and healthcare stakeholders.
**Clinical Trials and Research:** Blockchain can streamline and enhance the integrity of clinical trials and medical research. By recording the entire research process, including protocols, data collection, and analysis, on the blockchain, transparency and traceability are ensured. This enables verifiable and reproducible research results, reduces fraud, and improves the overall credibility of clinical trials. Additionally, blockchain-based incentive mechanisms, such as tokenization or rewards, can encourage patient participation in trials and data sharing.
**Telemedicine and Remote Patient Monitoring:** With the increasing adoption of telemedicine and remote patient monitoring, Blockchain can provide a secure and transparent infrastructure for managing patient data and ensuring the integrity of remote healthcare services. Blockchain-based smart contracts can facilitate automated payments, enforce service-level agreements, and maintain an auditable log of telemedicine interactions and patient monitoring data.
**Scalability and Performance:** Blockchain's consensus algorithm and IPFS's distributed file storage enable scalability and high-performance data handling. Tendermint's PBFT-based consensus algorithm allows for fast transaction finality and higher throughput, suitable for managing a large volume of healthcare data. IPFS's distributed storage architecture facilitates parallel retrieval and storage of files, ensuring efficient data access and retrieval.
It's important to note that while blockchain offers several advantages for healthcare applications, implementation considerations, regulatory compliance, scalability, and integration with existing systems are key challenges that need to be addressed. Realizing the full potential of healthcare 4.0 requires collaboration among healthcare providers, technology experts, regulators, and policymakers to design and deploy robust and interoperable blockchain solutions.
**6.1.1 Key management**: In Tendermint blockchain, keys play a crucial role in establishing the identity and permissions of network participants, particularly validators. Here's a detailed explanation of how keys are managed in Tendermint:
**Key Generation:** The process starts with the generation of cryptographic key pairs by network participants. The key pair consists of a public key and a corresponding private key. The private key must be securely stored and kept secret by the owner, while the public key is shared with the network.
**Validator Identity:** In Tendermint, validators are identified by their public keys. When joining the network, a validator's public key is registered and associated with their identity. This ensures that validators can be uniquely identified and authenticated during the consensus process.
**Consensus Signing:** Validators in Tendermint use their private keys to digitally sign messages during the consensus protocol. These signatures provide cryptographic proof of the validator's authenticity and ensure the integrity and validity of consensus-related messages exchanged among validators.
**Authentication:** Keys are used for authentication purposes within the blockchain network. Validators use their private keys to sign and verify messages, establishing their identity and proving that they are authorized participants in the consensus process. This authentication process helps prevent unauthorized actors from participating in the consensus and protects the network from malicious activity.
**Secure Key Storage:** As private keys are essential for signing messages and asserting identity, it is crucial to securely store them. Validators must safeguard their private keys to prevent unauthorized access and potential compromise. Common practices include storing private keys in hardware security modules (HSMs), encrypted key stores, or using secure offline storage solutions.
**Key Rotation:** To enhance security, regular key rotation is encouraged. Validators can periodically generate new key pairs and associate the new public keys with their identity. This process helps mitigate the risk of key compromise and strengthens the overall security of the network.
**Key Management Systems:** In real-world deployments, key management systems (KMS) may be employed to enhance key security and management. KMS solutions provide centralized control and protection of keys, allowing for key generation,
rotation, storage, and access control. They can integrate with the blockchain network to streamline key management processes and enforce best security practices.
Overall, key management is crucial in Tendermint blockchain to establish and verify the identities of network participants, ensure the integrity of consensus messages, and protect the network from unauthorized access. Proper key generation, storage, rotation, and management practices are essential to maintaining a secure and trusted blockchain network.
#### 6.1.2 Involvement of entities to improve the performance
Several entities are involved in Tendermint for performance improvement. Here are some key entities and their roles:
**Validators:** Validators are the network nodes responsible for participating in the consensus process and validating transactions. They propose new blocks, vote on the validity of proposed blocks, and participate in the consensus protocol to agree on the state of the blockchain. Validators play a crucial role in improving performance by efficiently processing and validating transactions.
**Consensus Algorithm:** It utilizes a consensus algorithm based on Practical Byzantine Fault Tolerance (PBFT). This algorithm enables validators to reach agreement on the order and validity of transactions in a distributed network. The consensus algorithm ensures that the network can achieve high throughput and low latency, leading to improved performance.
**Block Propagation:** It employs a gossip-based protocol for block propagation. When a validator proposes a new block, it disseminates the block to a subset of other validators, who then further propagate it throughout the network. This approach helps in reducing the propagation time of blocks, enhancing performance by minimizing the delay in reaching consensus.
**Block Time and Finality:** It allows for the configuration of block time, which determines how frequently new blocks are added to the blockchain. By adjusting the block time, network operators can optimize performance according to the requirements of their specific use case. Additionally, It provides fast finality, meaning that once a block is committed, it is considered finalized, reducing the need for lengthy confirmation times.
**Peer-to-Peer Networking:** It utilizes a peer-to-peer networking layer to facilitate communication among nodes in the network. It optimizes network performance by efficiently transmitting blocks and messages between validators. The networking layer employs various techniques, such as protocol buffers and efficient data serialization, to enhance the speed and reliability of data transmission.
**Parallel Processing:** It allows for parallel processing of transactions. This means that validators can process multiple transactions concurrently, improving overall throughput and performance.
By leveraging these entities and techniques, Tendermint aims to achieve high-performance blockchain networks with fast transaction processing, low latency, and scalability. These performance improvements are crucial for building decentralized applications and supporting many users and transactions on the blockchain.
#### 6.1.3 Security Scenario:
When a person's health data leaks, it can pose significant security risks. Here's a detailed explanation of the potential risks associated with health data breaches:
**Identity Theft:** If an individual's health data, including personally identifiable information (PII) such as name, address, social security number, or medical insurance details, is exposed, it becomes easier for malicious actors to commit identity theft. They can use the stolen information to create fraudulent accounts, make unauthorized transactions, or access other sensitive personal and financial data.
**Medical Identity Theft:** Health data breaches can lead to medical identity theft, where an unauthorized person uses someone else's health information for personal gain. This can result in fraudulent medical billing, obtaining prescription drugs illegally, or receiving medical treatment using the victim's identity, potentially leading to incorrect diagnoses or inappropriate medical care.
**Financial Consequences:** If health data breaches expose financial information, such as credit card details or banking information, individuals can face financial consequences. Cybercriminals may use the stolen data for fraudulent transactions, draining bank accounts, or making unauthorized purchases, leading to financial losses and potential damage to credit scores.
**Stigmatization and Discrimination:** Certain health conditions, such as mental health disorders or sexually transmitted diseases, may carry social stigma or lead to discrimination. If such sensitive health data is leaked, individuals may face discrimination in various areas of life, including employment, insurance coverage, or personal relationships.
**Medical Fraud and Insurance Abuse:** Compromised health data can be exploited for medical fraud or insurance abuse. Criminals may use the stolen information to submit false insurance claims, obtain prescription medications for illegal resale, or fraudulently bill for medical services not provided. This type of fraud can result in financial losses for insurance companies, healthcare providers, and individuals themselves.
**Reputational Damage:** When health data leaks, it can cause significant reputational damage to individuals and healthcare organizations. The loss of trust from patients or customers can have long-lasting consequences for healthcare providers, leading to a decline in patient loyalty, negative publicity, and potential legal actions.
**Targeted Attacks and Spear Phishing:** Health data breaches can provide attackers with valuable information to launch targeted attacks, such as spear phishing. With knowledge of an individual's health conditions or medical history, cybercriminals can craft convincing and personalized phishing emails, aiming to deceive the victims into providing more sensitive information, financial details, or login credentials.
**Research and Intellectual Property Theft:** If research data or intellectual property related to healthcare innovations or drug development is exposed, it can lead to intellectual property theft and compromise ongoing research efforts. This can result in financial losses for research organizations, setbacks in medical advancements, and potential harm to public health.
### Environment Initialization
* Start the BigchainDB server and connect it to MongoDB running on localhost.
* Configure the tendermint to connect with BigchainDB then run the tendermint core.
* Start Block Dashboard API on localhost:3000 to view the blockchain blocks
* Start the Node.Js server and run the app.
### Configuration of System:
The configuration of the system is shown in Table 2 for the implementation of the proposed blockchain-enabled healthcare architecture.
### Generate Private and Public Keys:
EdDSA generates the private and public keys for everyone in the network. The private key is securely kept, while the public key is available to all. Every participant publicly discloses the public key but keeps the corresponding private key secret [22].
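For illustration, the sketch below generates an Ed25519 (EdDSA) key pair and signs and verifies a sample payload using Node's built-in crypto module; the payload content is illustrative. The BigchainDB JavaScript driver additionally provides its own Ed25519Keypair helper with base58-encoded keys, which is what the transaction examples above assume.

```js
const { generateKeyPairSync, sign, verify } = require('crypto');

// Generate an Ed25519 (EdDSA) key pair; the private key stays with its owner,
// while the public key is shared with the network and identifies the participant.
const { publicKey, privateKey } = generateKeyPairSync('ed25519');

// Sign a payload and let anyone verify it with the public key alone.
const payload = Buffer.from(JSON.stringify({ action: 'CREATE', asset: 'EHR-pointer' }));
const signature = sign(null, payload, privateKey);          // algorithm is null for Ed25519
console.log(verify(null, payload, publicKey, signature));   // true
```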
### Data encryption using AES algorithm
We have used the AES algorithm, together with EdDSA key and signature generation, as shown in Algorithm 1. The data is encrypted using the AES-256 symmetric encryption method. The first step is to obtain a key that is created randomly. This random key is used to encrypt the report and produce an encrypted file; the original report can only be recovered with the same key. The encryption and decryption function is shown in Figure 10.
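A minimal sketch of this step with Node's built-in crypto module is given below. The choice of CBC mode and the way the key and IV are returned to the caller are assumptions of this sketch, not a drop-in copy of Algorithm 1.

```js
const crypto = require('crypto');

// Encrypt a report with a randomly generated 256-bit key (AES-256-CBC for this sketch).
function encryptReport(plaintext) {
  const key = crypto.randomBytes(32);            // random symmetric key, generated per record
  const iv = crypto.randomBytes(16);             // random initialization vector
  const cipher = crypto.createCipheriv('aes-256-cbc', key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return { ciphertext, key, iv };
}

// Decryption requires the very same key (and IV); without it the record is unreadable.
function decryptReport(ciphertext, key, iv) {
  const decipher = crypto.createDecipheriv('aes-256-cbc', key, iv);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]);
}
```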
**E. Store the file encrypted using AES on IPFS:** The data is stored on IPFS. Once a record is saved on IPFS, it produces a unique hash. The hash key grants access to the data. It is then made available on the blockchain, as shown in Figure 11.
\begin{table}
\begin{tabular}{|l|l|} \hline Resource & Specification \\ \hline OS & Ubuntu 21.04 \\ \hline Processor & Intel(R) Core(TM) i7-4500U CPU @ 1.80GHz \\ \hline RAM & 8.00 GB \\ \hline Programming Language and API & JavaScript (React library and Express.js) and Node.js \\ \hline \end{tabular}
\end{table}
Table 2: System Configurations
Figure 10: Encryption and decryption function using AES-256
**F. Commit the transaction on Blockchain:** The transaction is now committed in BigchainDB on the local node as well as on the peer nodes using Tendermint, as shown in Figure 12.
**G. Granting and Revoking access to the doctor:** As discussed earlier, this is a patient-centric application, so the patient has full control of their data. They can grant access to, and revoke access from, the doctor. We have implemented a function for this, with which patients grant access to a doctor and can later revoke it.
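One possible realization of such a function is sketched below with the BigchainDB JavaScript driver: each grant or revocation is recorded as a TRANSFER transaction whose metadata names the doctor's public key, giving an auditable on-chain trail. How the symmetric AES key itself is shared with the doctor is not shown here; the field names and flow are illustrative assumptions, not the exact application code.

```js
const driver = require('bigchaindb-driver');

// Sketch: record a grant/revoke decision on-chain as a TRANSFER transaction.
// `lastTx` is the latest (signed) transaction object for this asset, e.g. the CREATE transaction.
async function setAccess(conn, lastTx, patient, doctorPublicKey, granted) {
  const tx = driver.Transaction.makeTransferTransaction(
    [{ tx: lastTx, output_index: 0 }],                                   // spend the latest output
    [driver.Transaction.makeOutput(
      driver.Transaction.makeEd25519Condition(patient.publicKey))],      // ownership stays with the patient
    { doctor: doctorPublicKey, access: granted ? 'granted' : 'revoked' } // auditable access decision
  );
  const signed = driver.Transaction.signTransaction(tx, patient.privateKey);
  return conn.postTransactionCommit(signed);
}
```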
**H. Asset retrieval**: Figure 13 shows the complete flow to retrieve the assets stored in BigchainDB and IPFS. To retrieve the data, start the Node.js server, connect it to the IPFS network, and then connect to BigchainDB. After a successful connection, call the getAssetId function with the respective user's public key.
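A sketch approximating this retrieval path is shown below: the caller's outputs are looked up on BigchainDB, the asset holding the IPFS hash is read, and the encrypted file is fetched from IPFS. The local endpoint URLs and the asset layout (the ipfsHash field from the CREATE sketch above) are assumptions of this sketch rather than the exact getAssetId implementation.

```js
const driver = require('bigchaindb-driver');
const { create } = require('ipfs-http-client');

async function retrieveRecords(publicKey) {
  const conn = new driver.Connection('http://localhost:9984/api/v1/'); // assumed local BigchainDB endpoint
  const ipfs = create({ url: 'http://localhost:5001/api/v0' });        // assumed local IPFS daemon

  const outputs = await conn.listOutputs(publicKey);                   // outputs held by this public key
  const files = [];
  for (const { transaction_id } of outputs) {
    const tx = await conn.getTransaction(transaction_id);
    const ipfsHash = tx.asset && tx.asset.data && tx.asset.data.ipfsHash; // CREATE transactions carry asset.data
    if (!ipfsHash) continue;
    const chunks = [];
    for await (const chunk of ipfs.cat(ipfsHash)) chunks.push(chunk);
    files.push({ ipfsHash, encrypted: Buffer.concat(chunks) });        // still AES-encrypted; key needed to read
  }
  return files;
}
```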
**I. Front-end view of Doctor and Patient**: The patient's view and the doctor's view are shown in Figures 14 and 15, respectively.
We now describe the procedures a patient follows to keep healthcare data in MediChain, which medical personnel may request and access if permitted. Figure 16 illustrates a rough sketch of these processes.
* Registered patients can log into the portal.
Fig. 13: Backend view of asset retrieval
Fig. 14: Patient's view
Fig. 15: Doctor's view
* A patient can upload his/her past medical history and other health-related information.
* On the other hand, doctors and other healthcare professionals can log in to their respective portals.
* They can upload clinical data.
* Request access from patients to their health-related data.
* Patients can allow access to their data if they want.
* After getting access to the data, doctors can prescribe accordingly.
* Patients can also revoke access to their data at any time.
All large files, such as radiology scans and large lab reports, are pushed off-chain to IPFS, and the encrypted Content Identifiers (CIDs) are stored on the blockchain.
### Result and Discussion
The system is used for four main purposes: uploading health data, granting permission to the doctor, accessing the patient data, and revoking the access. Another benefit is that the system offers encryption- and blockchain-based security, and BigchainDB allows users to query their data. Current systems rely on cloud technology to retrieve information from a centralized database, whereas our system runs on a distributed ledger technology (DLT) and keeps operating even if a node is damaged. If someone tries to alter the information, we can identify and correct that erroneous information. Moreover, our system saves data in an immutable manner, so information can be recovered even if the system crashes and local data is lost.
User Registration: Users can register by providing the appropriate details. The doctor registration process has three steps (see Figure 17).
* Enter basic details: First Name, Last Name, Email Address, Password, and Phone No.
* Verify using OTP received at the email address provided earlier.
* Provide professional details of Hospital, Qualification, Specialization, Work Experience, and Current Workspace.
The patient registration process has two steps (See Figure 18).
* Enter basic details: First Name, Last Name, Email Address, Password, Gender, Date of Birth, Phone No. and Emergency Email.
* Verify using OTP received at the email address provided earlier.
**Patient Portal:** After successful registration, patients can log in and interact with the features provided by the developed app, such as adding previous medical records, lab records, radiology scans, etc.
Initially, the dashboards of doctors and other medical professionals are empty. Patients then grant access to their resources (see Figure 19). After getting access to patient data, doctors prescribe medicine and other treatment accordingly (Figure 20). The patient can later remove the doctor's access if they want (Figure 21).
## 7 Conclusion and Future Works
Healthcare has become one of the most active areas in computer science following the adoption of advanced technologies such as IoT, edge computing, and other AI-enabled systems, but existing healthcare systems face several challenges, including non-patient-centric design, lack of transparency, data breaches, and the privacy and security of patient data. In addition, current EHR systems suffer from significant shortcomings owing to potential data fragmentation, falsification of data, and system glitches that prevent access to critical information in the event of an emergency. Conventional EHR systems remain vulnerable to current threats despite the newer technologies that have been put in place to fight them. Thus, there is a dire need to overcome these issues in the healthcare domain. Hence, in this paper, a blockchain-assisted secure and reliable data exchange architecture for Cyber Physical Healthcare 4.0 is proposed. First, a system is proposed in which electronic health records are stored and shared on a decentralized platform, providing decentralized access using blockchain technology and replacing the existing centralized system. Second, the proposed blockchain-enabled architecture uses Tendermint and IPFS to resolve issues such as data fragmentation, data leakage, and illegal access to patient data that are prevalent in existing Electronic Health Record (EHR) systems. Finally, privacy and security are effectively preserved by employing the blockchain-enabled AES-256 algorithm for data security. Thus, the proposed work provides a secure cyber-physical blockchain proof-of-concept platform and a cost-effective solution. The proposed work may be extended to other applications by adopting more secure encryption techniques [37]. Moreover, it can be extended to transfer large amounts of data over the Cloud, monitored by intelligent technologies [38, 40]. In addition, ChatGPT and IoT can be combined with a cyber-physical healthcare system to expedite and improve patient care [39]; together they form an outstanding duo that is changing how people interact with technology and may greatly improve our lives in the decades to come [41].
## Declaration of Competing Interest Statement
The authors declare that they have no conflict of interest and no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
|
2306.04056 | Defrosting and Blast Freezing Dark Matter | We show that the present-day dark matter abundance can be produced through a
novel mechanism that involves a very rapid thermal freeze-out caused by
inhomogeneous heating and successive fast cooling of small fireballs in the
early Universe. The fireballs can be produced from energy deposited in small
scale structure growth induced by Yukawa interactions in certain particle
species. Yukawa interactions are known to cause growth of halos even during a
radiation dominated era, and the same interactions facilitate cooling and
collapse of the halos by the emission of scalars. Energy deposited in the
Standard Model plasma at the locations of the halo collapse can heat the
plasma, re-establishing thermal equilibrium. The subsequent expansion and
cooling of plasma fireballs leads to freeze-out of dark matter on timescales
much shorter than the Hubble time. This mechanism can produce the right
abundance of dark matter for masses and annihilation cross sections previously
thought to be ruled out. | Marcos M. Flores, Chris Kouvaris, Alexander Kusenko | 2023-06-06T23:14:17Z | http://arxiv.org/abs/2306.04056v2 | # Defrosting and Blast Freezing Dark Matter
###### Abstract
We show that the present-day dark matter abundance can be produced through a novel mechanism that involves a very rapid thermal freeze-out caused by inhomogeneous heating and successive fast cooling of small fireballs in the early Universe. The fireballs can be produced from energy deposited in small scale structure growth induced by Yukawa interactions in certain particle species. Yukawa interactions are known to cause growth of halos even during a radiation dominated era, and the same interactions facilitate cooling and collapse of the halos by the emission of scalars. Energy deposited in the Standard Model plasma at the locations of the halo collapse can heat the plasma, re-establishing thermal equilibrium. The subsequent expansion and cooling of plasma fireballs leads to freeze-out of dark matter on time scales much shorter than the Hubble time. This mechanism can produce the right abundance of dark matter for masses and annihilation cross sections previously thought to be ruled out.
+
Footnote †: preprint: IPMU23-0015
The weakly interacting massive particle (WIMP) is a well-motivated dark matter (DM) candidate. The most commonly assumed production scenario is based on freeze-out: the DM abundance is frozen at the temperature at which the WIMP annihilation rate becomes slower than the expansion rate of the Universe. Thus, it is the Hubble rate that determines the WIMP abundance.
However, there may be another relevant time scale affecting freeze-out. A recently discovered phenomenon of halo formation in some particle species during the radiation dominated era [1; 2; 3; 4; 5; 6] can create inhomogeneous heating of plasma, with subsequent cooling of fireballs, which introduces a new time scale, much shorter than the Hubble time scale. The WIMP freeze-out is then determined by this shorter time scale, rather than the Hubble rate, leading to a different dependence of DM abundance on the annihilation cross sections. In this _letter_ we will explore the implications of defrosting and blast-freezing plasma for WIMP abundance. We will show that this possibility opens a new range of WIMP parameters, which has important implications for direct and indirect DM searches.
The traditional DM formation scenario involves a heavy particle \(X\) which is weakly coupled to the Standard Model (SM) early in the evolution of the Universe. At high temperatures, the \(X\) population is initially in thermal equilibrium with the SM. As the Universe expands, the DM abundance is diluted until \(XX\leftrightarrow\) SM interactions occur slowly compared to the Hubble rate. Once interactions become rare, the comoving number density of \(X\) particles remains fixed to the present day. This "freeze-out" process is described by the Boltzmann equation,
\[\dot{n}_{X}+3Hn_{X}=-\langle\sigma_{\rm ann}v\rangle\left(n_{X}^{2}-(n_{X}^{ \rm eq})^{2}\right), \tag{1}\]
where \(\langle\sigma_{\rm ann}v\rangle\) is the thermally-averaged cross section times the relative particle velocity. The temperature at which the final DM abundance is frozen out, \(T_{\rm FO}^{X}\), can be approximated by solving
\[\Gamma_{\rm ann}(T_{\rm FO}^{X})\equiv\langle\sigma_{\rm ann}v\rangle n_{X}^ {\rm eq}(T_{\rm FO}^{X})=H(T_{\rm FO}^{X}). \tag{2}\]
The present day \(X\) abundance is given by [7],
\[\Omega_{X}\simeq 5.2\times 10^{-2}\frac{x_{\rm FO}^{X}}{\sqrt{g_{*}(M_{X})}} \left(\frac{10^{-8}\ {\rm GeV}^{-2}}{\langle\sigma_{\rm ann}v\rangle}\right), \tag{3}\]
where \(x_{\rm FO}^{X}=m_{X}/T_{\rm FO}^{X}\). The above result is insensitive to the mass of the heavy particle, and primarily determined by the cross section. The fact that the observed DM particle density is reproduced if
\[\sqrt{\langle\sigma_{\rm ann}v\rangle}\sim 0.1\sqrt{G_{F}}, \tag{4}\]
where \(G_{F}\) is the Fermi four-interaction strength, is referred to as the _WIMP miracle_.
Simultaneously, Eq. (3) illustrates the fact that larger cross sections can lead to an under-abundance of DM. The principal goal of this _letter_ is to introduce a scenario which can produce the proper DM abundance, particularly in the high annihilation cross section parameter space where the traditional freeze-out formalism produces too little DM, thus failing to generate the abundance observed today.
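For reference, Eq. (3) can be evaluated with a few lines of code; the sketch below is purely illustrative and the values of \(x_{\rm FO}^{X}\) and \(g_{*}\) are example assumptions.

```python
# Minimal numerical evaluation of Eq. (3); x_FO and g_* are illustrative inputs.
def omega_x(sigma_v, x_fo=20.0, g_star=90.0):
    """Standard freeze-out relic density Omega_X; sigma_v in GeV^-2."""
    return 5.2e-2 * x_fo / g_star**0.5 * (1e-8 / sigma_v)

# A weak-scale cross section ~1e-8 GeV^-2 gives Omega_X of order 0.1 (the WIMP
# miracle), whereas a hundred times larger cross section under-produces DM.
print(omega_x(1e-8), omega_x(1e-6))
```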
This scenario relies on the recently explored principle of early structure formation. The generation of overdensities of DM by long-range forces, like Yukawa interactions, has been examined in many contexts [1; 2; 3; 4; 5; 6]. Yukawa forces are generally stronger than gravity, thus allowing for the formation of structure during both the matter and radiation dominated eras. The growth of
overdensities through Yukawa forces can lead to bound states [8; 9; 10], or in the presence of radiative cooling via the same Yukawa interaction, collapse and formation of primordial black holes [3; 11]. Alternatively, an overdensity may evaporate due to annihilation of its constituent particles. However, if these particles couple to the SM, the formation, collapse and annihilation of an overdense region can locally heat the SM plasma. This inhomogeneous, local heating has proven useful when applied to either the matter anti-matter asymmetry of the Universe [12; 13] or the generation of primordial magnetic fields [14].
Following Ref. [3] we consider a dark (sub)sector with a heavy fermion \(\psi\) and a light scalar \(\chi\) interacting via Yukawa coupling:
\[\mathcal{L}\supset\frac{1}{2}m_{\chi}^{2}\chi^{2}+y\chi\bar{\psi}\psi+ \mathcal{L}_{\rm Y-SM}. \tag{5}\]
This sector is introduced in addition to the SM and the WIMP \(X\) (which may be accompanied by some additional new physics). The interactions in the Yukawa sector of (5) are designed to create the inhomogeneous heating. The Yukawa sector is weakly coupled to the SM via the cross terms in \(\mathcal{L}_{\rm Y-SM}\). We will parameterize the strength of this coupling below, when we discuss the energy transfer from the dark Yukawa fireballs to the SM plasma.
We require that the fermions \(\psi\) are either stable or have a total decay width \(\Gamma\ll m_{\psi}^{2}/M_{\rm Pl}\) where \(M_{\rm Pl}\approx 2\times 10^{18}\) GeV. This ensures there is a cosmological epoch where the \(\psi\) particles can become nonrelativistic, decoupled from equilibrium and interact via the long-range force mediated by the \(\chi\) field.
The strength of the Yukawa interaction is generally much larger than gravity. This is demonstrated by comparing the strength of each force, i.e. through \(\beta\equiv yM_{\rm Pl}/m_{\psi}\gg 1\). It should also be briefly noted that another key difference between Yukawa interactions and gravity is the fact that Yukawa interactions couple to the number density of \(\psi\) rather than its energy density.
The presence of additional long-range scalar forces can lead to the rapid development of structure, as the overdensities \(\Delta\equiv(n_{\psi}-\bar{n}_{\psi})/\bar{n}_{\psi}\) grow rapidly. In particular, it has been demonstrated that [1; 2; 3; 4; 5],
\[\Delta\propto a^{\beta},\quad\beta\gg 1, \tag{6}\]
even during radiation domination. For reference, matter perturbations under the influence of gravity grow as \(\delta\propto\ln a\) during radiation domination and \(\delta\propto a\) during matter domination. The rapid growth of structures is generally faster than the Hubble rate, implying that the overdensities become non-linear within a Hubble time. This is followed by the formation of virialized DM halos composed of \(\psi\) particles.
The details of this early structure formation have been explored both analytically and numerically in Ref. [6]. In this study, the quartic terms in the \(\chi\) potential are included. Both analytical results and \(N\)-body simulations point to the possibility of rapid structure formation in the presence of the background dynamics of the scalar field \(\chi\).
For the formation of overdensities to occur, we require that \(\bar{\psi}\psi\leftrightarrow\chi\chi\) interactions freeze out so that a fixed population of \(\psi\) particles can be captured into DM halos. To do so, we will use a similar framework as described above and define the \(\psi\) freeze-out temperature as the solution to the following equation,
\[\frac{\Gamma(T_{\rm FO}^{\psi})}{H(T_{\rm FO}^{\psi})}=1\ \text{for}\ \Gamma(T)\simeq\frac{y^{4}}{4\pi(T^{2}+m_{\psi}^{2})}n_{\psi}^{\rm eq}(T). \tag{7}\]
Once the temperature of the \(\psi\)-fluid reaches \(T_{\rm FO}^{\psi}\), scalar forces can begin to coalesce material into DM-\(\psi\) halos, as previously described.
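As an illustration, Eq. (7) can be solved numerically as sketched below, assuming for simplicity a radiation-dominated \(H(T)\) and a non-relativistic Maxwell-Boltzmann equilibrium density for \(\psi\); the default parameters echo the example values quoted later (\(y=0.75\), \(m_{\psi}\approx 10^{0.3}\) GeV) but the sketch is an illustration, not the full calculation.

```python
# Illustrative numerical solution of Eq. (7) for the psi freeze-out temperature,
# assuming a radiation-dominated H(T); units are GeV.
import numpy as np
from scipy.optimize import brentq

M_PL, G_STAR, G_PSI = 2.0e18, 100.0, 2.0

def n_eq(T, m):
    # non-relativistic Maxwell-Boltzmann equilibrium number density
    return G_PSI * (m * T / (2.0 * np.pi))**1.5 * np.exp(-m / T)

def hubble_rad(T):
    return np.sqrt(np.pi**2 / 90.0 * G_STAR) * T**2 / M_PL

def gamma_psi(T, m, y):
    # psi psi-bar <-> chi chi rate of Eq. (7)
    return y**4 / (4.0 * np.pi * (T**2 + m**2)) * n_eq(T, m)

def t_freeze_out_psi(m=2.0, y=0.75):
    f = lambda T: gamma_psi(T, m, y) / hubble_rad(T) - 1.0
    return brentq(f, m / 100.0, m)   # search below T = m_psi

print("T_FO^psi ~ %.3g GeV" % t_freeze_out_psi())
```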
Before calculating this temperature we must note that unlike with gravitational forces, the binding energy of the Yukawa interactions can contribute to the total energy budget in a nontrivial fashion. To accommodate for this possibility, we include an additional energy component into the equation describing the evolution of the Hubble parameter,
\[3M_{\rm Pl}^{2}H^{2}=\rho_{\rm rad}+\rho_{\psi}+\rho_{y}, \tag{8}\]
where \(\rho_{y}\) accounts for the Yukawa potential energy density. The length scale of the scalar force, \(m_{\chi}^{-1}\), requires that we consider two regimes, namely \(H^{-1}<m_{\chi}^{-1}\) and \(H^{-1}>m_{\chi}^{-1}\). When the horizon is smaller than the Compton wavelength of the mediator, the entire Hubble volume is subject to influence from the scalar interaction. Alternatively, when the horizon grows beyond \(m_{\chi}^{-1}\), only subhorizon regions can communicate via the Yukawa force. In this case the number of regions subject to scalar interactions within the horizon is \(N_{h}=(m_{\chi}/H)^{3}\). The relationship between the horizon size and the mediator mass gives two expressions for the Yukawa energy density,
\[\rho_{y}(T)=\frac{3y^{2}}{4\pi m_{\psi}^{2}H^{-3}}\begin{cases}M_{\rm hor}^{2} /H^{-1}&H^{-1}<m_{\chi}^{-1}\\ N_{h}M_{\rm hal}^{2}/m_{\chi}^{-1}&H^{-1}>m_{\chi}^{-1}\end{cases}, \tag{9}\]
where
\[\begin{pmatrix}M_{\rm hor}\\ M_{\rm hal}\end{pmatrix}=\frac{4\pi}{3}m_{\psi}n_{\psi}^{\rm eq}(T)\begin{pmatrix} H(T)^{-3}\\ m_{\chi}^{-3}\end{pmatrix}. \tag{10}\]
Depending on the selection of \(\{m_{\psi},y,m_{\chi}\}\) we have three relevant temperatures. The Universe originally begins in a radiation-dominated era which eventually transitions to Yukawa domination at \(T_{\rm eq}^{\rm RD\to YD}\). After this, the horizon size grows to exceed \(m_{\chi}^{-1}\) at \(T_{m_{\chi}=H}\). Later on, as the Universe keeps expanding, the number density of the \(\psi\)-fluid rapidly decreases as the temperature falls below \(m_{\psi}\). This allows for the re-establishment of radiation domination at \(T_{\rm eq}^{\rm YD\to RD}\). This leads to an evolution of
the Hubble parameter given by
\[H(T)^{2}=\begin{cases}\frac{\pi^{2}}{90}g_{*}\frac{T^{4}}{M_{\rm Pl}^{2}}&T\lesssim T_{\rm eq}^{\rm YD\to RD}\ \&\ T\gtrsim T_{\rm eq}^{\rm RD\to YD}\\ \\ \frac{2\pi^{1/2}}{3M_{\rm Pl}}\,y\,n_{\psi}^{\rm eq}(T)&T_{m_{\chi}=H}\lesssim T\lesssim T_{\rm eq}^{\rm RD\to YD}\\ \\ \frac{4\pi}{9M_{\rm Pl}^{2}}\frac{y^{2}n_{\psi}^{\rm eq}(T)^{2}}{m_{\chi}^{2}}&T_{\rm eq}^{\rm YD\to RD}\lesssim T\lesssim T_{m_{\chi}=H}.\end{cases} \tag{11}\]
Depending on the choice of parameters one could have a situation where \(T_{\rm eq}^{\rm YD\to RD}>T_{m_{\chi}=H}\). In this case the evolution of the Hubble parameter instead follows
\[H(T)^{2}=\begin{cases}\frac{\pi^{2}}{90}g_{*}\frac{T^{4}}{M_{\rm Pl}^{2}}&T\lesssim T_{\rm eq}^{\rm YD\to RD}\ \&\ T\gtrsim T_{\rm eq}^{\rm RD\to YD}\\ \\ \frac{2\pi^{1/2}}{3M_{\rm Pl}}\,y\,n_{\psi}^{\rm eq}(T)&T_{\rm eq}^{\rm YD\to RD}\lesssim T\lesssim T_{\rm eq}^{\rm RD\to YD}.\end{cases} \tag{12}\]
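For orientation, the piecewise expansion rate of Eqs. (11) and (12) can be encoded as in the following sketch; the transition temperatures and the equilibrium density \(n_{\psi}^{\rm eq}(T)\) are treated as precomputed inputs, which is an assumption of this illustration.

```python
# Hedged sketch of the piecewise Hubble rate of Eqs. (11)-(12); units are GeV.
import numpy as np

M_PL = 2.0e18

def hubble_squared(T, n_psi_eq, y, m_chi,
                   T_rd_to_yd, T_yd_to_rd, T_mchi_eq_H, g_star=100.0):
    if T >= T_rd_to_yd or T <= T_yd_to_rd:
        # radiation domination (first line of Eqs. (11) and (12))
        return np.pi**2 / 90.0 * g_star * T**4 / M_PL**2
    if T >= max(T_mchi_eq_H, T_yd_to_rd):
        # Yukawa domination while the horizon is inside the mediator range
        return 2.0 * np.sqrt(np.pi) / (3.0 * M_PL) * y * n_psi_eq(T)
    # Yukawa domination once the horizon exceeds 1/m_chi (last line of Eq. (11))
    return 4.0 * np.pi / (9.0 * M_PL**2) * y**2 * n_psi_eq(T)**2 / m_chi**2
```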
Having determined the Hubble rate in this general framework, we can now determine when the \(\bar{\psi}\psi\leftrightarrow\chi\chi\) interactions freeze out using (7). Without energy dissipation, the formation of virialized DM halos through Yukawa interactions would be the end of the story, with the newly formed halos either remaining stable or evaporating once the constituent particles decay. However, the same Yukawa interaction allows for the emission of scalar radiation, much in the same way as electromagnetic interactions between charged particles allow for energy dissipation. Initially the energy is carried away primarily through \(\psi\) pair interactions, i.e., through free-free or bremsstrahlung emission. As the halo continues collapsing, it becomes optically thick to the scalar mediator \(\chi\) and radiation becomes trapped. Radiation is then emitted only from the surface, which determines whether radiative cooling is efficient enough to facilitate collapse within a Hubble time. The characteristic time scale associated with the energy loss during the surface radiation stage is,
\[\tau_{\rm cool}\equiv\frac{E}{|dE/dt|}\sim R_{h}\ll H^{-1}, \tag{13}\]
thus implying that radiative collapse is rapid.
For our exploration of DM generation, we assume that the formation and collapse of the \(\psi\) halos occur quickly after \(\bar{\psi}\psi\leftrightarrow\chi\chi\) freeze-out. Specifically, we assume that formation and collapse occur rapidly enough that the change in background temperature is negligible. Therefore, we set the background temperature \(T_{\rm bg}\) equal to the \(\psi\) freeze-out temperature \(T_{\rm FO}^{\psi}\). At formation, we assume that the halos initially have radius \(R_{h}=m_{\chi}^{-1}\) and masses,
\[M_{h}=\frac{4\pi}{3}m_{\psi}n_{\psi}(T_{\rm bg})R_{h}^{3}. \tag{14}\]
In principle, the masses and radii of the halos should be derived from an underlying distribution such as that described in the Press-Schechter formalism. Existing \(N\)-body simulations have yet to determine the precise nature of the mass distribution of the \(\psi\) halos. Our assumption that the halos have a similar composition is motivated by both simplicity and the fact that the strength of Yukawa interactions will facilitate a rich merger history, as demonstrated in [6], which form halos of a maximal radius given by the Compton wavelength of the mediator \(\chi\).
Without an asymmetry in the \(\psi\) population, the halos will annihilate after the initial stage of collapse. Annihilations will begin when the average distance between particles within the halo is less than the Compton wavelength, i.e.
\[R_{\rm ann}\equiv\frac{1}{m_{\psi}}\left(\frac{3}{4\pi}\frac{M_{h}}{m_{\psi}} \right)^{1/3}. \tag{15}\]
The energy released through scalar quanta during the initial collapse is given by,
\[\Delta E_{\rm col}=\frac{y^{2}M_{h}^{2}}{m_{\psi}^{2}R_{\rm ann}}\left(1- \frac{R_{\rm ann}}{R_{h}}\right). \tag{16}\]
The annihilation will also release energy into the ambient plasma, \(\Delta E_{\rm ann}=\epsilon_{\rm ann}M_{h}\) where \(\epsilon_{\rm ann}\leq 1\) parameterizes the efficiency of annihilation. The total energy emitted through scalar particles is the sum, \(\Delta E\equiv\Delta E_{\rm col}+\Delta E_{\rm ann}\). We will assume that the dark sector, which contains \(\psi\) and \(\chi\), is weakly coupled to the SM. The sudden release of a large amount of energy from collapse and halo annihilation locally heats the SM plasma. We will assume that the collapsing halos become relativistic so that the initial temperature of the heated region is
\[T_{i}^{4}=\frac{90\xi_{s}\Delta E}{4\pi^{3}g_{*}(T_{i})R_{i}^{3}}, \tag{17}\]
where \(R_{i}\) is the initial radius of the heated region, \(g_{*}(T_{i})\) are the relativistic degrees of freedom at \(T_{i}\) and, crucially, \(\xi_{s}\) is the efficiency of energy transfer from the dark-sector \(\chi\) particles to the SM plasma. Once heated above the background temperature, the excess energy spreads out via both a shockwave and diffusion. In the first case the expanding shockwave travels through the SM plasma at the speed of sound. Using energy conservation, we determine the characteristic time scale associated with the explosion to be
\[\tau_{\rm exp}\equiv\frac{T}{|dT/dt|}=\frac{4R_{i}}{\sqrt{3}}\left(1+\frac{t- t_{i}}{\sqrt{3}R_{i}}\right). \tag{18}\]
Most of the energy released during the collapse occurs right before annihilation. Therefore, we approximate \(R_{i}\sim R_{\rm ann}\). The region may also cool through diffusion. The time scale associated with this process is approximately [12],
\[\tau_{\rm diff}\sim\frac{R_{i}^{2}}{4D}\left(\frac{T_{i}}{T}\right)^{8/3}, \tag{19}\]
where \(D\) is a diffusion constant. As in Ref. [12] we take \(D\sim 1/\gamma_{g}\) where \(\gamma_{g}\sim 0.3g_{s}^{2}T\) and \(g_{s}\) is the strong coupling [15]. The dissipation timescale is defined as
\(\min\{\tau_{\rm exp},\tau_{\rm diff}\}\). Generally, \(\tau_{\rm exp}\ll\tau_{\rm diff}\). This means that the expanding fireball is the most rapid method of energy transport. The collapse of the \(\psi\) halos reheats local regions above \(T=m_{X}\). Within these heated regions, thermal equilibrium is re-established, allowing DM production followed by a rapid inhomogeneous re-freeze-out. The evolution of the \(X\) number density in these heated regions is described by
\[\frac{dn_{X}(T)}{d\ln T}=-\frac{\Gamma_{X}(T)}{\tau_{\rm diss}^{-1}(T)}\left[n _{X}^{2}-(n_{X}^{\rm eq})^{2}\right]. \tag{20}\]
Unlike the standard paradigm, here DM freeze-out occurs at the temperature where \(\Gamma_{X}(T_{f})=\tau_{\rm diss}^{-1}(T_{f})\), where \(T_{f}\) is the freeze-out temperature within an expanding heated region found by solving the above equation. The fact that the fireballs expand at a much faster rate than the Hubble rate causes a rapid DM (re-)freeze-out, leaving a significant DM abundance even for annihilation cross sections much larger than the weak-scale one required in the standard freeze-out paradigm. The resultant DM energy density is thus given by,
\[\rho_{X}(T_{\rm bg})\simeq f\cdot m_{X}n_{X}^{\rm eq}(T_{f}), \tag{21}\]
where \(f\) is a volume filling factor defined as
\[f\equiv N_{h}H^{3}(T_{\rm bg})R_{i}^{3}\left(\frac{T_{i}}{T_{f}}\right)^{4}. \tag{22}\]
We restrict our parameter space such that \(f<1\), \(T_{i}<M_{\rm Pl}\), and \(T_{i}>m_{\psi}\), \(T_{\rm bg}\). Furthermore, to ensure that the DM produced is cold, i.e., non-relativistic, we require that \(T_{f}<m_{X}\). Lastly, we need to impose a condition on the produced DM: once the fireball stops expanding, i.e., once its temperature drops to the background value, we have to guarantee that the generated DM does not annihilate at a rate higher than the Hubble expansion rate, since in that case the DM population would once again be depleted. This condition reads
\[\frac{3\sqrt{10}}{\pi}g_{*}^{-1/2}\frac{T_{\rm bg}M_{\rm Pl}}{\tau_{\rm diss} (T_{f})}<T_{f}^{3}. \tag{23}\]
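To make the procedure concrete, the sketch below estimates the fireball freeze-out temperature from the condition \(\Gamma_{X}(T_{f})=\tau_{\rm diss}^{-1}(T_{f})\), using the expansion time scale of Eq. (18) rewritten as a function of temperature under the assumption \(R^{3}T^{4}\simeq\) const and a cross section \(\langle\sigma v\rangle\sim\alpha_{X}^{2}/m_{X}^{2}\) (as used below). It is an illustration with example assumptions rather than the full calculation behind Fig. 1.

```python
# Hedged sketch: fireball freeze-out temperature T_f and n_X^eq(T_f) (Eq. (21)).
# Units are GeV (GeV^-1 for lengths/times); the bracket assumes equilibrium is
# re-established at T_i, i.e. Gamma_X * tau_diss > 1 there.
import numpy as np
from scipy.optimize import brentq

def n_eq(T, m, g=2.0):
    """Non-relativistic Maxwell-Boltzmann equilibrium number density."""
    return g * (m * T / (2.0 * np.pi))**1.5 * np.exp(-m / T)

def tau_exp(T, T_i, R_i):
    """Eq. (18) with the fireball radius written as R(T) = R_i (T_i/T)^(4/3)."""
    return 4.0 / np.sqrt(3.0) * R_i * (T_i / T)**(4.0 / 3.0)

def fireball_freeze_out(m_X, alpha_X, T_i, R_i):
    sigma_v = alpha_X**2 / m_X**2
    f = lambda T: sigma_v * n_eq(T, m_X) * tau_exp(T, T_i, R_i) - 1.0
    T_f = brentq(f, 1e-4 * T_i, T_i)        # Gamma_X * tau_diss = 1
    return T_f, n_eq(T_f, m_X)              # n_X^eq(T_f) enters Eq. (21)
```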
In Fig. 1 we show the present-day DM abundance as a function of the DM mass, \(m_{X}\), and annihilation coupling, \(\alpha_{X}\). The black contours illustrate the abundance predicted by the traditional freeze-out scenario, where we have taken the thermally averaged cross section to be,
\[\langle\sigma v\rangle\sim\alpha_{X}^{2}/m_{X}^{2}. \tag{24}\]
The _red line_ denotes the region of parameter space within our setup where \(\Omega_{X}=\Omega_{\rm DM}\). The green region indicates parameter space which fails the depletion limit, (23). Fig. 1 illustrates that our scenario can generate the full abundance of DM in regions of parameter space which are ruled out in the standard freeze-out paradigm. In the presented example, we have used \(y=0.75\), \(m_{\psi}=10^{0.3}\) GeV, \(m_{\chi}=10^{-16}\) GeV and \(\xi_{s}=10^{-3}\).
As touched upon in [3; 13], the scalar \(\chi\) may act as an additional relativistic degree of freedom thus leading to a modification to \(\Delta N_{\rm eff}\). This is permissible, and may even help resolve the Hubble tension [16; 17; 18; 19; 20; 21; 22; 23; 24]. Crucially, the collapse of \(\psi\) halos is generically asymmetric, thus leading to a non-vanishing quadrupole moment. This naturally implies the generation of gravitational waves, and studies have shown that such a signal may be detectable by future gravitational wave observatories [5].
Here we demonstrate that early structure formation from Yukawa interactions can lead to the generation of DM. This occurs through the collapse of halos of heavy fermions via scalar radiation, which exchanges energy with the SM and thus allows for local heating of the background SM plasma. These inhomogeneous hot spots re-establish thermal equilibrium, allowing for the local formation of DM. Freeze-out, now governed by the time scale of the rapid expansion of the heated region, produces a non-negligible abundance of matter which may present itself as the DM observed today. This is particularly applicable to regions of parameter space which are ruled out in the traditional freeze-out paradigm.
###### Acknowledgements.
The work of A.K. and M.M.F was supported by the U.S. Department of Energy (DOE) Grant No. DE-SC0009937. M.M.F was also supported by the University of California, Office of the President Dissertation Year Fellowship and donors to the UCLA Department of
Figure 1: Predicted DM density for a variety of DM masses and couplings. The red line indicates values within the fireball scenario where the abundance of \(X\) is exactly the observed abundance of DM today. Black contours are the predicted DM density in the conventional freeze-out scenario i.e., (3). The green region is excluded by the depletion limit (23). Here \(y=0.75\), \(m_{\psi}=10^{0.3}\) GeV, \(m_{\chi}=10^{-16}\) GeV and \(\xi_{s}=10^{-3}\).
Physics & Astronomy. A.K. was also supported by World Premier International Research Center Initiative (WPI), MEXT, Japan, and by Japan Society for the Promotion of Science (JSPS) KAKENHI Grant No. JP20H05853. This work used computational and storage services associated with the Hoffman2 Shared Cluster provided by UCLA Institute for Digital Research and Education's Research Technology Group.
|
2310.19087 | Transport-of-Intensity Model for Single-Mask X-ray Differential Phase
Contrast Imaging | X-ray phase contrast imaging holds great promise for improving the visibility
of light-element materials such as soft tissues and tumors. Single-mask
differential phase contrastnimaging method stands out as a simple and effective
approach to yield differential phase contrast. In this work, we introduce a
novel model for a single-mask phase imaging system based on the
transport-of-intensity equation. Our model provides an accessible understanding
of signal and contrast formation in single-mask X-ray phase imaging, offering a
clear perspective on the image formation process, for example, the origin of
alternate bright and dark fringes in phase contrast intensity images. Aided by
our model, we present an efficient retrieval method that yields differential
phase contrast imagery in a single acquisition step. Our model gives insight
into the contrast generation and its dependence on the system geometry and
imaging parameters in both the initial intensity image as well as in retrieved
images. The model validity as well as the proposed retrieval method is
demonstrated via both experimental results on a system developed in-house as
well as with Monte Carlo simulations. In conclusion, our work not only provides
a model for an intuitive visualization of image formation but also offers a
method to optimize differential phase imaging setups, holding tremendous
promise for advancing medical diagnostics and other applications. | Jingcheng Yuan, Mini Das | 2023-10-29T17:21:58Z | http://arxiv.org/abs/2310.19087v2 | # Transport-of-Intensity Model for Single-Mask X-ray Differential Phase Contrast Imaging
###### Abstract
X-ray phase contrast imaging has emerged as a promising technique for enhancing contrast and visibility of light-element materials, including soft tissues and tumors. In this paper, we propose a novel model for a single-mask phase imaging system based on the transport-of-intensity equation. Our model offers an intuitive understanding of signal and contrast formation in single-mask phase imaging systems. We also demonstrate efficient retrieval of attenuation and differential phase contrast with just one intensity image without requiring spectral information or mask/detector movement. The model validity as well as the proposed retrieval method is demonstrated via both experimental results on a system developed in-house as well as with Monte Carlo simulations. Our proposed model overcomes the limitations of existing models by providing an intuitive visualization of the image formation process. It also allows optimizing differential phase imaging geometries for practical applications, further enhancing broader applicability. Furthermore, the general methodology described herein offers insight on deriving transport-of-intensity models for novel X-ray imaging systems with periodic structures in the beam path.
## 1 Introduction
Conventional X-ray imaging relies on the variations of X-ray attenuation properties among different tissue types. However, it has limited contrast for low atomic number materials such as organs, tumors, and other soft tissue [1][2][3][4]. In recent years, X-ray phase contrast imaging (PCI) has gained much attention for its potential to enhance this soft tissue contrast by utilizing relative phase changes with X-ray propagation through the object. Among the various techniques available, single-mask differential phase contrast imaging method stands out as a simple and effective approach yielding higher contrast than optic-free methods like propagation based phase imaging.
Propagation-based (PB) phase contrast imaging, which is one of the simplest PCI techniques, does not require any additional optics in the beam path, but only an increase in the object-to-detector distance and a partially coherent source [5][6]. At a longer propagation distance, the wavefront distortions caused by the object are recorded as intensity variations on the detector plane. These variations can be modeled by the approximated form of the transport-of-intensity equation (TIE) [7]:
\[I(z,\vec{r})=I(0,\vec{r})-\frac{z}{k}(\nabla_{\perp}I(0,\vec{r})\cdot\nabla_{ \perp}\phi(\vec{r})+I(0,\vec{r})\nabla_{\perp}^{2}\phi(\vec{r})) \tag{1}\]
Here \(I(z,\vec{r})\) and \(I(0,\vec{r})\) are the X-ray intensities at the detector plane and the object plane respectively, \(\phi(\vec{r})\) is the beam's phase shift caused by the object, \(z\) is the object-to-detector distance, \(k\) is the wave number, and \(\vec{r}\) is the coordinate in the x-y plane. In most applications of interest, with predominantly soft materials in the beam path, we can assume the intensity varies slowly in the x and y directions, so the term \(\nabla_{\perp}I(0,\vec{r})\cdot\nabla_{\perp}\phi(\vec{r})\) can be neglected [7][8]. Hence the equation becomes:
\[I(z,\vec{r})=I(0,\vec{r})-\frac{z}{k}I(0,\vec{r})\nabla_{\perp}^{2}\phi(\vec{ r}) \tag{2}\]
Thus in addition to the attenuation signal (\(I(0,\vec{r})\)), the intensity at each detector pixel is predominantly influenced by the Laplacian of X-ray phase shift caused by the object. This Laplacian phase signal manifests as bright and dark borders along the edges, leading to edge enhancement.
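As a simple illustration of Eq. (2), the sketch below evaluates the approximate TIE forward model for a uniform cylinder phase object; the refractive decrement, radius, and geometry values are illustrative assumptions, not those of our experiment.

```python
# Illustrative evaluation of the approximate TIE of Eq. (2) for a uniform
# cylinder; delta, radius, wavelength and distance are example values only.
import numpy as np

def pb_intensity(x, radius=1.5e-3, delta=4e-7, wavelength=5e-11, z=0.6, I0=1.0):
    k = 2.0 * np.pi / wavelength
    # projected thickness and phase shift of a cylinder of the given radius
    t = 2.0 * np.sqrt(np.clip(radius**2 - x**2, 0.0, None))
    phi = -k * delta * t
    lap_phi = np.gradient(np.gradient(phi, x), x)   # transverse Laplacian (1D)
    return I0 * (1.0 - z / k * lap_phi)

x = np.linspace(-3e-3, 3e-3, 2001)
I = pb_intensity(x)   # bright/dark bands appear near |x| = radius (edge enhancement)
```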
A single-mask phase imaging technique [9] is similar to PB phase imaging but with an added periodic X-ray absorption mask positioned between the source and the object, in close proximity to the object (Fig.1a). The mask creates X-ray beamlets by periodically blocking X-rays with thin strips of heavy-element materials like gold. The mask is aligned with respect to the detector such that the center of each thin, long beamlet is aligned to every other pixel boundary [9][10]. Hence, with proper alignment and in the absence of an object in the beam path, the signal intensity on each detector pixel column is uniform, showing no discernible patterns (Fig.1b). When the object is introduced, the heterogeneities within the object induce refraction effects that alter the original directions of the beamlets. Thus, intensity differences appear between neighboring pixels, resulting in the appearance of bright and dark fringes on the detector. Fig.1b shows the schematic and Fig.2b shows experimental results to be described in detail later. In terms of wave optics, this can also be explained as the modification of the Fresnel diffraction pattern of the periodic mask with the introduction of the object. These relative intensity variations allow disentangling differential phase information from attenuation-related intensity variations on the detector plane when the appropriate light-transport model is known.
We note that the single-mask PCI method is a significantly simplified version of the double-mask edge-illumination (EI) method developed earlier [11][12] and also avoids 'wasting' a large number of photons that have already transmitted through the object.
The formulation of single-mask PCI has been previously attempted using both refraction [9] and wave-optics [13] models. However, these existing models have limitations in terms of providing intuitive visualizations of signal and contrast formation in the images. In this paper, we present a new model based on the TIE and show how this model can be used for efficient retrieval of attenuation and differential phase. Furthermore, we show a single-shot (only one acquisition, with no movement of the object or optical components), low-dose phase imaging approach that yields multiple image features and contrast types. While our prior work has shown efficient phase retrieval methods with spectral data (using photon counting detectors) [14][15][16][17][18], the retrieval shown here does not require spectral data.
Figure 1: Schematic of the single mask phase imaging method. (a) Top view of the set up. The X-ray beam propagates in \(z\) direction, and the detector pixels are placed in the x-y plane. (b) Diagram of mask alignment with detector pixels. The mask strips are along y direction.
## 2 Methods
### Formulation
Our formulation for single-mask PCI starts with the TIE (Eqn.(1)). Unlike the propagation-based method, here we have a high-contrast periodic absorption mask, so the term \(\nabla_{\perp}I(0,\vec{r})\cdot\nabla_{\perp}\phi(\vec{r})\) can no longer be neglected. Here, the transmitted intensity at the object plane is \(I(0,\vec{r})=T(\vec{r})\cdot M(x)\), where \(T(\vec{r})\) and \(M(x)\) are the transmission functions of the object and the mask, respectively. Therefore:
\[\nabla_{\perp}I(0,\vec{r})=T\nabla_{\perp}M+M\nabla_{\perp}T\approx T\partial_ {x}M \tag{3}\]
Here we applied the approximation that \(\nabla_{\perp}I(0,\vec{r})\) is mainly contributed by the mask so \(M\nabla_{\perp}T\) can be neglected. After substituting Eqn.(3) into Eqn.(1), the x-ray intensity measured by each detector pixel can be calculated by integrating Eqn.(1) over the range of the corresponding pixel:
\[I_{n}=\int_{x_{n}}^{x_{n+1}}T\cdot M\,\mathrm{d}x-\frac{z}{k}\int_{x_{n}}^{x_{ n+1}}T\cdot\partial_{x}M\cdot\partial_{x}\phi\,\mathrm{d}x-\frac{z}{k}\int_{x_{n }}^{x_{n+1}}T\cdot M\cdot\nabla_{\perp}^{2}\phi\,\mathrm{d}x \tag{4}\]
where \(n\) is the pixel index in the horizontal direction when a mask with slits in the vertical direction is used, and \(x_{n}\) and \(x_{n+1}\) are the coordinates of the left and right boundaries of the corresponding pixel, respectively.
Here we assume that the attenuation, phase, and differential phase of the sample vary slowly within the range of a pixel. Then Eqn.(4) becomes:
\[I_{n} =T_{n}\int_{x_{n}}^{x_{n+1}}M(x)\,\mathrm{d}x-\frac{z}{k}T_{n} \partial_{x}\phi_{n}\int_{x_{n}}^{x_{n+1}}\partial_{x}M(x)\,\mathrm{d}x-\frac {z}{k}T_{n}\nabla_{\perp}^{2}\phi_{n}\int_{x_{n}}^{x_{n+1}}M(x)\,\mathrm{d}x \tag{5}\] \[=T_{n}(1-L_{n})\int_{np}^{(n+1)p}M(x)\,\mathrm{d}x-T_{n}D_{n} \int_{np}^{(n+1)p}\partial_{x}M(x)\,\mathrm{d}x\]
Here \(T_{n}=T(x_{n})\), which represents the object attenuation function averaged within each pixel; \(L_{n}=\frac{z}{k}\nabla_{\perp}^{2}\phi(x_{n})\), which is the Laplacian of phase shift caused by the object; \(D_{n}=\frac{z}{k}\partial_{x}\phi(x_{n})\), which is the gradient of phase shift, and is proportional to the x-ray refraction angle.
In the case of a perfect mask, as we have demonstrated in our previous paper [16][19], the mask transmission function \(M(x)\) can be expressed as a square wave. Considering the imperfection of the mask, a more general form of its transmission function can be expressed as a Fourier series:
\[M(x)=\sum_{m}C_{m}\cos\frac{2\pi mx}{(2p)} \tag{6}\]
Figure 2: PCI intensity images of a PMMA rod and their cross-section profile (blue curve) with (a) propagation-based method and (b) single-mask method.
where \(p\) is the detector pixel size, which means the period of the mask is two times the detector pixel size (see Fig.1a for reference on the mask vs detector period). Then we have:
\[\int_{np}^{(n+1)p}M(x)\,\mathrm{d}x=\int_{np}^{(n+1)p}\sum_{m}C_{m} \cos\frac{\pi mx}{p}\,\mathrm{d}x=C_{0}p \tag{7a}\] \[\int_{np}^{(n+1)p}\partial_{x}M(x)\,\mathrm{d}x=M((n+1)p)-M(np)=-2 \sum_{m}C_{2m+1}\cdot(-1)^{n} \tag{7b}\]
The results of Eqn.(7) depend only on the mask transmission function and not on the object properties. Thus, Eqn.(5) becomes:
\[I_{n}=w_{e}T_{n}(1-L_{n})-\alpha(-1)^{n}T_{n}D_{n} \tag{8}\]
One can see from this equation that the signal is a combination of two distinct effects. The first term in Eqn.(8), which we refer to as the propagation-based (PB) part, shares the same form as the propagation-based PCI (Eqn.(2)).
The second term, referred to as the differential phase contrast (DPC) term, gives rise to the characteristic bright and dark fringes within the image, as demonstrated in the example depicted in Fig.2b. This is because it contains the factor \((-1)^{n}\), where \(n\) denotes the pixel column index. The magnitude of these fringes is directly proportional to the DPC signal \(D_{n}\).
Also, the two parts of the signal are multiplied by two mask-related coefficients \(w_{e}\) and \(\alpha\) respectively. The coefficients' values are determined by combining Eqn.(5) and Eqn.(7).
\[w_{e} =C_{0}p \tag{9a}\] \[\alpha =2\sum_{m}C_{2m+1} \tag{9b}\]
According to Eqn.(7a), the coefficient \(w_{e}\) represents the integration of the mask transmission function within a pixel, which corresponds to the average transmission of the mask. Thus, it can be interpreted as the effective transparent width or aperture size. It is also similar to the \(w_{e}\) in our previous model for the double-mask method [19]. On the other hand, \(\alpha\) is a unitless coefficient that depends on the odd Fourier coefficients of the mask's transmission function. It can be understood as the contrast of the mask pattern.
The two coefficients can be interpreted as separate filters for the PB part and the DPC part independently. In comparison with the PB method, the intensity of the PB signal in the single-mask method is reduced by the coefficient \(w_{e}\). This implies that the mask selectively reduces the X-ray intensity that contributes to the PB part, thereby allowing for a reduction in X-ray radiation dose to the sample without affecting the signal intensity of the DPC part. The second coefficient, \(\alpha\), which represents the contrast of the mask, determines the efficiency of obtaining the DPC signal.
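As an illustration, \(w_{e}\) and \(\alpha\) can be estimated numerically from a sampled mask transmission profile as sketched below; the even-symmetric sampling over one period is an assumption of this example.

```python
# Numerical estimate of w_e (Eq. 9a) and alpha (Eq. 9b) from a sampled mask
# transmission profile over one period 2p; an even-symmetric profile is assumed
# so that the cosine-series coefficients come from the real part of the FFT.
import numpy as np

def mask_coefficients(M, p):
    """M: samples of M(x) over one period 2p; returns (w_e, alpha)."""
    N = len(M)
    X = np.fft.rfft(M).real
    C0 = X[0] / N                    # average transmission
    Cm = 2.0 * X[1:] / N             # C_m for m >= 1
    w_e = C0 * p                     # Eq. (9a)
    alpha = 2.0 * np.sum(Cm[0::2])   # Eq. (9b): odd harmonics m = 1, 3, 5, ...
    return w_e, alpha

# Example: an ideal binary mask with a 50% open fraction, opening centred on x = 0.
x = np.linspace(0.0, 2.0, 4000, endpoint=False)        # one period, 2p = 2
M_ideal = ((x < 0.5) | (x >= 1.5)).astype(float)
print(mask_coefficients(M_ideal, p=1.0))                # close to (0.5, 1.0)
```

For an ideal binary mask with a 50% duty cycle this yields \(w_{e}\approx p/2\) and \(\alpha\approx 1\), while blurring or imperfect absorption lowers \(\alpha\) and hence the efficiency of obtaining the DPC signal.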
### Retrieval Method
From the last section we can see that the attenuation, the Laplacian of the phase, and the differential phase all contribute to the measured intensity. Among them, the DPC part appears as high-frequency fringes in Fig.2b. A retrieval process is needed to separate the PB part and the DPC part.
In an experimental realization, a single image is taken with the object and the mask in the beam path. This image (represented as \(I_{n(M+S)}\)) can be compared with the image with mask only (flat field \(I_{n(M)}\)). The formula for the mask-and-sample image \(I_{n(M+S)}\) is shown in Eqn.(8); for the mask-only (or flat-field) image, \(I_{n(M)}=w_{e}\). After doing flat-field correction, we obtain:
\[\bar{I}_{n}=\frac{I_{n(M+S)}}{I_{n(M)}}=T_{n}(1-L_{n})-(-1)^{n}\frac{\alpha}{w_ {e}}T_{n}D_{n} \tag{10}\]
Thus we can write the corrected intensity for the \(n^{th}\) and \((n+1)^{th}\) pixels in the same row:
\[\begin{split}\bar{I}_{n}&=T_{n}(1-L_{n})-(-1)^{n} \frac{\alpha}{w_{e}}T_{n}D_{n}\\ \bar{I}_{n+1}&=T_{n+1}(1-L_{n+1})+(-1)^{n}\frac{ \alpha}{w_{e}}T_{n+1}D_{n+1}\end{split} \tag{11}\]
We can separate PB and DPC signals by adding and subtracting the intensity values on \(n^{th}\) and \((n+1)^{th}\) pixels:
\[\bar{I}_{n}+\bar{I}_{n+1} \approx 2T_{n}(1-L_{n}) \tag{12a}\] \[\bar{I}_{n+1}-\bar{I}_{n} \approx 2(-1)^{n}\frac{\alpha}{w_{e}}T_{n}D_{n} \tag{12b}\]
Eqn.(12a) directly yields the retrieval of the PB part (Eqn.(13a)). To retrieve \(D_{n}\), we assume that the Laplacian-of-phase term is weak compared with 1, so that \(1-L_{n}\approx 1\) when calculating the differential phase \(D_{n}\). This leads to the retrieval formulas for the propagation-based and differential-phase signals:
\[T_{n}(1-L_{n})\approx\frac{\bar{I}_{n}+\bar{I}_{n+1}}{2} \tag{13a}\] \[D_{n}\approx(-1)^{n}\frac{w_{e}}{\alpha}\frac{\bar{I}_{n+1}-\bar{I}_{n}}{\bar{I}_{n}+\bar{I}_{n+1}} \tag{13b}\]
where (13a) is the retrieved PB image and (13b) is the retrieved DPC image.
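For concreteness, the sketch below walks this retrieval through on synthetic data. It is a minimal numerical example rather than the processing used for Figs.3 and 4; the toy profiles \(T_{n}\), \(L_{n}\), \(D_{n}\), the row length and the value of \(\alpha/w_{e}\) are assumptions made purely for demonstration. One flat-field-corrected detector row is generated with Eqn.(10), and the PB and DPC signals are then recovered with Eqn.(13).

```python
import numpy as np

def forward_row(T, L, D, alpha_over_we):
    """Flat-field-corrected intensity of one detector row, Eqn. (10)."""
    n = np.arange(T.size)
    return T * (1.0 - L) - (-1.0) ** n * alpha_over_we * T * D

def retrieve_row(I_bar, alpha_over_we):
    """Single-shot retrieval of the PB and DPC signals, Eqn. (13).
    Each column n is paired with column n+1; np.roll wraps the final
    column around to the first, which is adequate for this toy row."""
    n = np.arange(I_bar.size)
    I_next = np.roll(I_bar, -1)
    s = I_bar + I_next                          # ~ 2 T_n (1 - L_n)
    d = I_next - I_bar                          # ~ 2 (-1)^n (alpha/w_e) T_n D_n
    pb = 0.5 * s                                # Eqn. (13a)
    dpc = (-1.0) ** n / alpha_over_we * d / s   # Eqn. (13b)
    return pb, dpc

# Toy object: smooth attenuation, weak Laplacian-of-phase term, slow DPC.
N = 64
x = np.linspace(-1.0, 1.0, N)
T = 1.0 - 0.3 * np.exp(-x**2 / 0.1)
L = 0.02 * np.gradient(np.gradient(T))
D = 0.05 * np.sin(np.pi * x)
alpha_over_we = 2.0                             # assumed mask parameter ratio

I_bar = forward_row(T, L, D, alpha_over_we)
pb, dpc = retrieve_row(I_bar, alpha_over_we)
print("max |PB  error|:", np.abs(pb - T * (1.0 - L)).max())
print("max |DPC error|:", np.abs(dpc - D).max())
```

The residual errors come only from the assumption that \(T\), \(L\) and \(D\) vary slowly between adjacent columns (and, for the DPC signal, from \(1-L_{n}\approx 1\)); on measured data the same pairwise sums and differences would be applied to each row of the flat-field-corrected image.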
### Experiment
We used a polychromatic micro-focus x-ray tube (Hamamatsu L8121-03) operating with a focal spot of 7 um and a tube voltage of 40 kV. The source-to-object and object-to-detector distances were both around 60 cm. The sample under consideration is a PMMA rod with a diameter of 3 mm. We used a mask with gold strips, approximately 52 um in periodicity, fabricated on a silicon substrate. The data was collected using a silicon photon-counting detector with a pixel size of 55 um [20], which was carefully calibrated and corrected [21, 22]. While spectral data is available with this detector, the methods presented here do not use this spectral information and treat it as an energy-integrating detector. The raw image is shown in Fig.2b and the retrieved PB and DPC images are shown in Fig.3. The results also include images of a dried wasp specimen shown in Fig.4.
## 3 Results and Discussion
The raw image (Fig.2b) obtained from the single-mask method reveals distinct signal components, including attenuation, the Laplacian of phase, and differential phase. The presence of attenuation results in darker regions in the middle of the cylinder. The Laplacian phase manifests as bright and dark borders along the edges. Additionally, the differential phase appears as bright and dark fringes specifically in regions with non-zero phase gradient. As the differential phase signal varies across the sample, it gives rise to variations in the intensity of the fringes. These observed signal components align well with the outcomes predicted by our newly proposed model (Eqn.(8)), validating its reliability in capturing and explaining the underlying physics of the single-mask phase imaging method.
Additionally, the retrieved PB image (Fig.3a) obtained from our model closely resembles the image captured using the propagation-based method (Fig.2a). The minor difference between the attenuation levels can be attributed to the shift of the spectrum induced by the mask's silicon substrate. Furthermore, the retrieved differential phase contrast (DPC) image (Fig.3b) exhibits excellent contrast and visibility. These results indicate that our proposed retrieval method, based on our formulated model, effectively separates and visualizes the different signal components from a single image.
The retrieved PB and DPC images of the dried wasp obtained with our model are shown in Fig.4. Both images, retrieved from a single-shot single-mask phase-contrast intensity image, show fine details of the specimen. Both exhibit high sensitivity to tissue boundaries within the sample, owing to the visualization of the Laplacian and the gradient of phase, respectively. Note that the retrieved PB image is a combination of the Laplacian of phase and the attenuation.
Also, discernible differences exist between the two images. The retrieved PB image captures edge information in every orientation in the 2D plane. Conversely, the retrieved DPC image accentuates features that align perpendicular to the mask strips.
In addition, it is interesting to note that the DPC image has higher sensitivity to slowly varying features, such as the bubbles within the adhesive used to affix the specimen, visible in
Figure 4: Retrieved images of a wasp specimen taken with the single-mask method. (a) Retrieved PB image; (b) retrieved DPC image.
Figure 3: Retrieved (a) PB image and (b) DPC image of a PMMA rod taken with the single-mask method, together with their average cross-section profiles (light blue curve).
the lower region of the image.
For further verification of our TIE model, we compared results obtained from the TIE model with Monte-Carlo simulation [23]. For the TIE model (Eqn.(10)), the calculation is based only on the known \(w_{e}\) and \(\alpha\), without using the full mask transmission function. In contrast, the Monte-Carlo simulation employs the mask's transmission function directly. Both methods model the same experimental geometry and sample. The results are shown in Fig.5. The TIE model calculations agree closely with the Monte-Carlo outcomes for both mask selections, demonstrating the overall accuracy of the new model.
According to our model, the final signal depends not on the specific mask transmission function but rather on two mask parameters: effective aperture size (\(w_{e}\)) and mask contrast (\(\alpha\)). The flat-field corrected raw image in Fig.5 reveals components corresponding to Eqn.(10), including attenuation, DPC, and Laplacian phase. Notably, smaller \(w_{e}\) values yield higher DPC signal contrast, due to increased filtration of photons contributing to the PB term. This enhances DPC signal proportion relative to the PB signal, thus improving X-ray dose efficiency. However, it is important to note that smaller \(w_{e}\) values may present challenges in mask manufacturing and potentially require longer exposure times to maintain image quality. Careful consideration of trade-offs between dose efficiency, mask fabrication feasibility, and exposure time is crucial in practical single-mask method applications.
## 4 Conclusion
We have presented a novel light-transport model for single-mask (SM) X-ray phase contrast imaging, which yields strong differential phase signatures from a simple system design. The measured X-ray intensity with the SM method combines attenuation, Laplacian phase, and differential phase effects. Our proposed model provides an intuitive understanding of the relative contributions of these effects to the detector pixel intensities, and shows how they depend on the design parameters of the imaging system. Aided by the new model, we demonstrate an effective retrieval method yielding a PB image (combining attenuation with Laplacian phase) and a differential phase contrast image in a single acquisition, thus providing images with two types of edge enhancement and shape-based contrast.
Our TIE model suggests that the mask transmission function can be characterized by two
Figure 5: Comparison of TIE model calculation and Monte-Carlo simulation with different mask transmission functions. (a) Plot of the transmission functions, where Mask 1 is a perfect square wave and Mask 2 is defined as \(M(x)=0.5+0.5\cos(\frac{\pi\,x}{p})\); (b)-(c) comparison of the flat-field-corrected raw images between the TIE model calculation and the Monte-Carlo simulation for Mask 1 and Mask 2.
parameters that have a significant influence on the final signal. Considering these two parameters provides flexibility and adaptability in mask design and performance optimization for practical applications. Our single-shot retrieval method, combined with the simple system design, yields multiple contrast mechanisms. This offers a pathway toward practical translation of PCI to a broad range of applications.
|
2304.05531 | Measurable functions on charge spaces | The concept of measurability of functions on a charge space is generalised
for functions taking values in a uniform space. Several existing forms of
measurability generalise naturally in this context, and new forms of
measurability are proposed. Conditions under which the various forms of
measurability are logically equivalent are identified. Applying these concepts
to real-valued functions, some recent characterisations of measurable functions
on a bounded charge space are generalised to the unbounded case. | Jonathan M. Keith | 2023-04-11T23:03:07Z | http://arxiv.org/abs/2304.05531v2 | # Measurable functions on charge spaces
###### Abstract
New forms of measurability are proposed for functions on a charge space, first for real-valued functions, then for functions taking values in a uniform space. Conditions under which the various new and existing forms of measurability are logically equivalent are identified.
keywords: \(T_{1}\)-measurable, \(T_{2}\)-measurable, ray measurable, base measurable, uniform space, finitely additive measure +
Footnote †: journal: Journal of Mathematical Analysis and Applications
## 1 Introduction
In (countably additive) measure and integration theory, measurable functions are usually defined with a domain that is a measurable space and a codomain that is a topological space.
**Definition 1.1**.: _Let \((X,\mathcal{F})\) be a measurable space and let \(Y\) be a topological space. A function \(f:X\to Y\) is measurable if \(f^{-1}(U)\in\mathcal{F}\) for every open set \(U\)._
For any measurable function thus defined, it follows that \(f^{-1}(B)\in\mathcal{F}\) for any Borel set \(B\), and this equivalent property can thus be taken as an alternative definition of measurability (as in Section 4.2 of [1] for example). A more general definition is sometimes used, with an arbitrary \(\sigma\)-field of subsets of \(Y\) used in place of the Borel sets (see Definition 2.1.3 of [2] for example). In that case, \(Y\) need not be a topological space. However, various pathologies can arise in this more general setting: for example, even a continuous function may not have \(f^{-1}(L)\in\mathcal{F}\) for every Lebesgue measurable set \(L\subseteq Y\) (Theorem 4.2.1 of [1]).
On a charge space, however, measurable functions are defined differently. A brief summary of elementary notations and definitions from the theory of charges is provided in Appendix A. Familiarity with these will be assumed throughout what follows.
**Definition 1.2**.: _A function \(f:X\to\mathbb{R}\) on a charge space \((X,\mathcal{F},\mu)\) is:_
1. \(T_{1}\)-measurable _if there is a sequence of simple functions_ \(\{f_{i}\}_{i=1}^{\infty}\) _converging_ _hazily to_ \(f\)_, and_
2. \(T_{2}\)-measurable _if for every_ \(\epsilon>0\) _there is a partition of_ \(X\) _into sets_ \(A_{0},A_{1},\ldots,A_{n}\in\mathcal{F}\) _such that_ \(\mu(A_{0})<\epsilon\) _and_ \(|f(x)-f(x^{\prime})|<\epsilon\) _for all_ \(x,x^{\prime}\in A_{i}\) _and_ \(i\in\{1,\ldots,n\}\)_._
It can be shown that \(f\) is \(T_{1}\)-measurable if and only if it is \(T_{2}\)-measurable (Theorem 4.4.7 of [3]). Moreover, \(T_{2}\)-measurable functions are necessarily _smooth_.
**Definition 1.3**.: _A function \(f:X\to\mathbb{R}\) is smooth if for every \(\epsilon>0\) there is \(k\in(0,\infty)\) such that \(\mu^{*}(\{x\in X:|f(x)|>k\})<\epsilon\)._
When \(\mathcal{F}\) is a \(\sigma\)-field and \(f\) is a real-valued function, Definition 1.1 also implies smoothness (see Proposition 4.2.17 of [3]).
Hazy convergence, \(T_{1}\)-measurability, \(T_{2}\)-measurability, and smoothness can be generalised in a straightforward way when the codomain of \(f\) is a real or complex Banach space, by replacing the absolute value with the norm of the Banach space. (See Chapter III of Dunford and Schwartz [4], where the term _totally measurable_ is used instead of \(T_{1}\)-measurable.) In this general context, \(T_{1}\)-measurability is equivalent to Definition 1.1 under certain conditions (see Theorem 3.6.10 of [4]). However, the equivalence may fail if, for example, \((X,\mathcal{F},\mu)\) is not a complete measure space. Other differences between Definitions 1.1 and 1.2 include: (1) the former places no restrictions on the codomain \(Y\), whereas the latter is specific to functions with codomain \(\mathbb{R}\) (or more generally a Banach space); (2) the former permits any topology on \(Y\), whereas the latter invokes the metric structure of the codomain; and (3) the former does not mention any specific measure on \(X\), whereas the latter involves a specific charge (for \(T_{1}\)-measurability, the charge is implicitly used in the definition of hazy convergence).
A naive way to generalise Definition 1.1 would be to relax the requirement that \(\mathcal{F}\) must be a \(\sigma\)-field, instead permitting it to be an arbitrary field. However, this introduces a number of problems. First, this modified condition
no longer entails that \(f\) is \(T_{2}\)-measurable, or even smooth, when \(Y\) is a Banach space. Second, simple functions composed from measurable sets can converge hazily to a function that would not be measurable according to this modified condition. To see this, consider a charge space \((X,\mathcal{F},\mu)\) that is not Peano-Jordan complete, so that there is some increasing sequence \(\{A_{n}\}_{n=1}^{\infty}\subseteq\mathcal{F}\) with \(A:=\cup_{n=1}^{\infty}A_{n}\in\overline{\mathcal{F}}\setminus\mathcal{F}\). Then \(I_{A_{n}}\xrightarrow{h}I_{A}\), but \(I_{A}^{-1}(a,\infty)=A\notin\mathcal{F}\) for all \(a\in[0,1)\).
These concerns naturally raise questions about how the different notions of measurability are related, and whether more general notions of measurability with respect to fields and charges might be useful. This paper introduces several new forms of measurability for functions on a charge space, one of which (base measurability) resembles Definition 1.1. These new forms of measurability are motivated by the characterisations of \(T_{1}\)-measurability identified in Theorem 3.4 of [5]. There are two main goals: 1) to describe useful new forms of measurability, in particular for functions on a charge space that are not real-valued; and 2) to explore logical relationships between the various new and existing forms of measurability and identify conditions under which they are equivalent.
With regard to functions that are not real-valued, this paper focusses on functions taking values in a _uniform space_. A summary of terms and notations pertaining to uniform spaces is provided in Appendix B. This focus on uniform spaces is in part motivated by a paper of Avallone and Basile [6], in which it is shown that key results in the approach to integration developed by Dunford and Schwarz [4], including dominated convergence, emerge from the interplay of two uniformities: one defined on the space of \(T_{1}\)-measurable functions in terms of hazy convergence, and the other defined on the smaller space of simple functions in terms of the pseudometric induced by the integral. Although this present paper does not develop any novel concepts in integration, the new forms of measurability proposed here are intended to prepare the way for future innovations in integration theory, in particular by relating the uniform structure of the codomain of functions to the uniform structure of \(L_{p}\) spaces.
The paper is structured as follows. Section 2 proposes several new forms of measurability for real-valued functions on a charge space, specifically _ray measurability_, _base measurability_, and a stronger form of \(T_{1}\)-measurability called _regular \(T_{1}\)-measurability_. The main theorem of this section (Theorem 2.7) identifies conditions under which the various new and existing
forms of measurability for real-valued functions are equivalent. Theorem 2.7 is a generalisation of Theorem 3.4 of [5], which applied only to bounded charges. Section 3 proposes generalisations of \(T_{1}\)- and \(T_{2}\)-measurability, and base measurability, for functions taking values in a uniform space. It identifies logical relationships between various notions of measurability, including Definition 1.1, in this context. A short final section considers "inheritance" of measurability with respect to a coarser or finer uniformity, with application to uniformities induced by pseudometrics, weak uniformities, and product uniformities.
## 2 Measurable real-valued functions
For real-valued functions on a measurable space \((X,\mathcal{F})\), Definition 1.1 holds iff \(f^{-1}(y,\infty)\in\mathcal{F}\) for all \(y\in\mathbb{R}\). (Some authors take this to be the definition of measurability for real-valued functions: see Definition 121C and Theorem 121E(f) of [7].) This condition can be generalised in various ways.
**Definition 2.1**.: _Consider a charge space \((X,\mathcal{F},\mu)\). A function \(f:X\to\mathbb{R}\) is_
1. right ray measurable _if_ \(f^{-1}(y,\infty)\in\overline{\mathcal{F}}\) _for_ \(y\in D\)_, where_ \(D\) _is a dense subset of_ \(\mathbb{R}\)_,_
2. left ray measurable _if_ \(f^{-1}(-\infty,y)\in\overline{\mathcal{F}}\) _for_ \(y\in D\)_, where_ \(D\) _is a dense subset of_ \(\mathbb{R}\)_,_
3. ray measurable _if both_ \(f^{-1}(y,\infty)\in\overline{\mathcal{F}}\) _and_ \(f^{-1}(-\infty,y)\in\overline{\mathcal{F}}\) _for_ \(y\in D\)_, where_ \(D\) _is a dense subset of_ \(\mathbb{R}\)_,_
4. base measurable _if for each_ \(y\in\mathbb{R}\) _there is a neighbourhood base_ \(\rho(y)\) _(for the usual topology on_ \(\mathbb{R}\)_) such that_ \(f^{-1}(U)\in\overline{\mathcal{F}}\) _for each_ \(U\in\rho(y)\)_,_
5. uniformly base measurable _if there is an entourage base_ \(\mathcal{B}\) _(for the usual uniformity on_ \(\mathbb{R}\)_) such that_ \(f^{-1}(E[y])\in\overline{\mathcal{F}}\) _for all_ \(E\in\mathcal{B}\) _and_ \(y\in\mathbb{R}\)_, and_
6. regularly \(T_{1}\)-measurable _if there are sequences of_ \(\overline{\mathcal{F}}\)_-simple functions_ \(\{s_{i}^{+}\}_{i=1}^{\infty}\) _and_ \(\{s_{i}^{-}\}_{i=1}^{\infty}\) _converging hazily to_ \(f^{+}\) _and_ \(f^{-}\) _respectively, where_ 1. \(s_{i}^{+}:=\sum_{k=1}^{n_{i}}y_{ik}I_{A_{ik}^{+}}\) _and_ \(s_{i}^{-}:=\sum_{k=1}^{n_{i}}y_{ik}I_{A_{ik}^{-}}\) _with_ \(n_{i}:=i2^{i}-1\)_,_ 2. \(y_{ik}:=k2^{-i}\delta\) _for each_ \(k\in\{1,\ldots,n_{i}+1\}\)_,_ \(i\in\mathbb{N}\) _and some_ \(\delta>0\)_, and_ 3. \(A_{ik}^{+}:=f^{-1}(y_{ik},y_{i,k+1}]\in\overline{\mathcal{F}}\) _and_ \(A_{ik}^{-}:=f^{-1}[-y_{i,k+1},-y_{ik})\in\overline{\mathcal{F}}\) _for each_ \(k\in\{1,\ldots,n_{i}\}\) _and_ \(i\in\mathbb{N}\)_._
In Part 6 of the above definition, and throughout this paper, \(f^{+}:=\max\{f,0\}\), \(f^{-}:=-\min\{f,0\}\), and \(\mathbb{N}\) represents the positive integers (excluding \(0\)).
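For instance, the indicator function \(I_{A}\) from Section 1, which fails the naive generalisation of Definition 1.1 because \(A\notin\mathcal{F}\), is nevertheless ray measurable: both \(I_{A}^{-1}(y,\infty)\) and \(I_{A}^{-1}(-\infty,y)\) always belong to \(\{\emptyset,A,A^{c},X\}\subseteq\overline{\mathcal{F}}\).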
Some logical relationships between these different forms of measurability can be stated immediately.
**Proposition 2.2**.: _Consider a charge space \((X,\mathcal{F},\mu)\) and a function \(f:X\to\mathbb{R}\)._
1. _If_ \(f\) _is ray measurable, then it is left ray measurable and right ray measurable._
2. \(f\) _is ray measurable iff both_ \(f^{+}\) _and_ \(f^{-}\) _are ray measurable._
3. _If_ \(\mathcal{F}\) _is a_ \(\sigma\)_-field and_ \(f\) _is measurable in the sense of Definition_ 1.1_, then_ \(f\) _is ray measurable._
4. _If_ \((X,\mathcal{F},\mu)\) _is a complete measure space and_ \(f\) _is ray measurable, then_ \(f\) _is measurable in the sense of Definition_ 1.1_._
5. _If_ \(f\) _is right ray measurable, then it is base measurable, with_ \[\rho(y):=\{(y_{1},y_{2}]:y_{1}<y<y_{2},f^{-1}(y_{1},\infty)\in\overline{\mathcal{F}}\text{ and }f^{-1}(y_{2},\infty)\in\overline{\mathcal{F}}\}\] _for each_ \(y\in\mathbb{R}\)_. Similarly, if_ \(f\) _is left ray measurable, then it is base measurable._
6. _If_ \(\zeta\) _is a subbase for the usual topology on_ \(\mathbb{R}\) _such that_ \(f^{-1}(E)\in\overline{\mathcal{F}}\) _for all_ \(E\in\zeta\)_, then_ \(f\) _is base measurable._
7. _If_ \(\mathcal{Z}\) _is an entourage subbase for the usual uniformity on_ \(\mathbb{R}\) _such that_ \(f^{-1}(E[y])\in\overline{\mathcal{F}}\) _for all_ \(E\in\mathcal{Z}\) _and_ \(y\in\mathbb{R}\)_, then_ \(f\) _is uniformly base measurable._
8. \(f\) _is uniformly base measurable iff it is base measurable._
Claims 1-7 are straightforward to prove, and the proofs are omitted. Proof of Claim 8 is deferred to the next section, where it is generalised for any function with a uniformly locally compact codomain (Theorem 3.2(4)).
Additional relationships between these various types of measurability hold under the conditions of Theorem 2.7 below. However, before these conditions can be stated, some new definitions, notations and lemmas are required.
**Definition 2.3**.: _Consider a charge space \((X,\mathcal{F},\mu)\) and a function \(f:X\to\mathbb{R}\). For each \(z\in\mathbb{R}\), define the boundary mass limit \(\phi_{f}:\mathbb{R}\to[0,\infty]\) to be the function:_
\[\phi_{f}(z):=\lim_{\delta\to 0}\mu_{*}(f^{-1}(z-\delta,z+\delta)).\]
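To fix intuition for this quantity, consider two simple cases. If \(X=\mathbb{R}\), \(\mathcal{F}\) is the Borel \(\sigma\)-field, \(\mu\) is Lebesgue measure and \(f\) is the identity function, then \(\mu_{*}(f^{-1}(z-\delta,z+\delta))=2\delta\), so \(\phi_{f}(z)=0\) for every \(z\in\mathbb{R}\), even though \(\mu\) is unbounded. If instead \(f\) is constant with value \(c\), then \(\phi_{f}(c)=\mu_{*}(X)=\infty\) while \(\phi_{f}(z)=0\) for every \(z\neq c\), so \(\phi_{f}^{-1}(\infty)\) is a single point.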
**Lemma 2.4**.: _The function \(\phi_{f}\) has the following properties._
1. _For any_ \(z\in\mathbb{R}\) _with_ \(\phi_{f}(z)=0\)_, and any_ \(\epsilon>0\)_, there is_ \(\delta>0\) _with_ \(\phi_{f}(w)<\epsilon\) _for all_ \(w\in(z-\delta,z+\delta)\)_._
2. _For any_ \(z\in\phi_{f}^{-1}(0,\infty)\)_, there is_ \(\delta>0\) _such that_ \(\phi_{f}(z)>\sum_{i=1}^{n}\phi_{f}(w_{i})\) _for any_ \(w_{1},\ldots,w_{n}\in(z-\delta,z+\delta)\setminus\{z\}\)_._
3. _The set_ \(\phi_{f}^{-1}(0,\infty)\) _is countable._
Proof.: Property 1 can be shown by contradiction. Suppose it is false, so that there is \(z\in\mathbb{R}\) with \(\phi_{f}(z)=0\) and there is \(\epsilon>0\) such that for any \(\delta>0\) there is \(w\in(z-\delta,z+\delta)\) with \(\phi_{f}(w)\geq\epsilon\). Consider any such \(\delta\) and \(w\), and choose \(\gamma>0\) so that \((w-\gamma,w+\gamma)\subset(z-\delta,z+\delta)\). But then
\[\mu_{*}(f^{-1}(z-\delta,z+\delta))\geq\mu_{*}(f^{-1}(w-\gamma,w+\gamma))\geq\epsilon.\]
Letting \(\delta\to 0\) gives \(\phi_{f}(z)\geq\epsilon\), a contradiction.
Property 2 can similarly be shown by contradiction. Suppose it is false, implying there is \(z\in\phi_{f}^{-1}(0,\infty)\) such that for any \(\delta>0\) there are \(w_{1},\ldots,w_{n}\in(z-\delta,z+\delta)\setminus\{z\}\) with \(\phi_{f}(z)\leq\sum_{i=1}^{n}\phi_{f}(w_{i})\). Choose \(\delta^{(1)}>0\) and obtain \(w_{1}^{(1)},\ldots,w_{n_{1}}^{(1)}\in(z-\delta^{(1)},z+\delta^{(1)})\setminus\{z\}\) with \(\phi_{f}(z)\leq\sum_{i=1}^{n}\phi_{f}(w_{i}^{(1)})\). Choose \(\delta^{(2)}\in(0,\delta^{(1)})\) such that the intervals \(\{(w_{i}^{(1)}-\delta^{(2)},w_{i}^{(1)}+\delta^{(2)})\}_{i=1}^{n_{1}}\) and the interval \((z-\delta^{(2)},z+\delta^{(2)})\) are pairwise disjoint subintervals of \((z-\delta^{(1)},z+\delta^{(1)})\). But then there are \(w_{1}^{(2)},\ldots,w_{n_{2}}^{(2)}\in(z-\delta^{(2)},z+\delta^{(2)})\setminus\{z\}\) with \(\phi_{f}(z)\leq\sum_{i=1}^{n_{2}}\phi_{f}(w_{i}^{(2)})\). This process can be iterated for all \(k\in\mathbb{N}\) to obtain \(\delta^{(k)}\) and \(w_{1}^{(k)},\ldots,w_{n_{k}}^{(k)}\in(z-\delta^{(k)},z+\delta^{(k)})\setminus\{z\}\).
Now \((z-\delta^{(1)},z+\delta^{(1)})\) contains all the pairwise disjoint intervals
\[\{(w_{i}^{(k)}-\delta^{(k+1)},w_{i}^{(k)}+\delta^{(k+1)}):i\in\{1,\ldots,n_{k} \},k\in\mathbb{N}\}.\]
Hence for any \(K\in\mathbb{N}\),
\[\mu_{*}(f^{-1}(z-\delta^{(1)},z+\delta^{(1)})) \geq \sum_{k=1}^{K}\sum_{i=1}^{n_{k}}\mu_{*}(f^{-1}(w_{i}^{(k)}-\delta ^{(k+1)},w_{i}^{(k)}+\delta^{(k+1)}))\] \[\geq \sum_{k=1}^{K}\sum_{i=1}^{n_{k}}\phi_{f}(w_{i}^{(k)})\] \[\geq K\phi_{f}(z).\]
Thus \(\mu_{*}(f^{-1}(z-\delta^{(1)},z+\delta^{(1)}))=\infty\), since \(\phi_{f}(z)>0\). But this holds for any \(\delta^{(1)}>0\), hence \(\phi_{f}(z)=\infty\), a contradiction.
To show Property 3, consider \(z\in\phi_{f}^{-1}(0,\infty)\) and choose \(\delta_{z}>0\) with the property specified in 2. Then \(\phi_{f}^{-1}(1/n,\infty)\cap(z-\delta_{z},z+\delta_{z})\) must be a finite set for each \(n\in\mathbb{N}\), hence \(\phi_{f}^{-1}(0,\infty)\cap(z-\delta_{z},z+\delta_{z})\) is countable. The sets
\[\{\phi_{f}^{-1}(0,\infty)\cap(z-\delta_{z},z+\delta_{z}):z\in\phi_{f}^{-1}(0, \infty)\}\]
form an open cover of \(\phi_{f}^{-1}(0,\infty)\) in the subspace topology (note \(\phi_{f}^{-1}(0,\infty)\) need not be open). Hence there is a countable subcover, since \(\mathbb{R}\) is hereditarily Lindelof. Thus \(\phi_{f}^{-1}(0,\infty)\) is a countable union of countable sets, and is therefore countable.
**Corollary 2.5**.: _Let \(I\) be any interval (bounded or unbounded) in \(\mathbb{R}\). The following statements are logically equivalent._
1. \(\phi_{f}^{-1}(\infty)\) _is nowhere dense in_ \(I\)_._
2. \(\phi_{f}^{-1}(0)\) _is comeagre in_ \(I\)_._
3. \(\phi_{f}^{-1}(0)\) _is dense in_ \(I\)_._
4. \(\phi_{f}^{-1}[0,\infty)\) _is dense in_ \(I\)_._
Proof.: \(1\implies 2\) because if \(\phi_{f}^{-1}(\infty)\) is nowhere dense in \(I\), then \(\phi_{f}^{-1}(0,\infty]\) is meagre in \(I\) by Lemma 2.4(3). \(2\implies 3\) because \(I\) is a Baire space, hence a comeagre set is dense. \(3\implies 4\) because \(\phi_{f}^{-1}(0)\subseteq\phi_{f}^{-1}[0,\infty)\). \(4\implies 1\) because if \(\phi_{f}^{-1}(\infty)\) is dense in some sub-interval of \(I\), Lemma 2.4(1 and 2) imply that sub-interval does not intersect \(\phi_{f}^{-1}[0,\infty)\), contradicting 4.
The main reason boundary mass limits are introduced here is that they relate to \(T_{2}\)-measurability (and hence also \(T_{1}\)-measurability) in the manner described in the following lemma.
**Lemma 2.6**.: _Consider a charge space \((X,\mathcal{F},\mu)\) and a function \(f:X\to\mathbb{R}\). Suppose \(\phi_{f}(z)=0\) for some \(z\in\mathbb{R}\). If at least one of the following conditions holds:_
1. \(f\) _is_ \(T_{2}\)_-measurable, or_
2. \(f\) _is uniformly base measurable and there exists some_ \(a\in\mathbb{R}\) _with_ \(f^{-1}(-\infty,a)\in\overline{\mathcal{F}}\) _or_ \(f^{-1}(a,\infty)\in\overline{\mathcal{F}}\)_, or_
3. \(f\) _is left or right ray measurable,_
_then \(f^{-1}(-\infty,z)\in\overline{\mathcal{F}}\) and \(f^{-1}(z)\in\overline{\mathcal{F}}\) with \(\overline{\mu}(f^{-1}(z))=0\)._
Proof.: Given \(\epsilon>0\), choose \(\delta>0\) such that \(\mu_{*}(f^{-1}(z-\delta,z+\delta))<\epsilon/2\).
Suppose \(f\) is \(T_{2}\)-measurable, so that there is a partition of \(X\) into sets \(A_{0},\ldots,A_{n}\in\mathcal{F}\) such that \(\mu(A_{0})<\epsilon/2\) and \(|f(x)-f(x^{\prime})|<\delta\) for all \(x,x^{\prime}\in A_{i}\) and all \(i\in\{1,\ldots,n\}\). Define \(B\) to be the union of those elements of \(\{A_{1},\ldots,A_{n}\}\) that are contained in \(f^{-1}(-\infty,z)\), or the empty set if there are no such elements. Define \(E\) to be the union of those elements of \(\{A_{1},\ldots,A_{n}\}\) that intersect both \(f^{-1}(-\infty,z]\) and \(f^{-1}[z,\infty)\), or the empty set if there are no such elements. Then \(E\subseteq f^{-1}(z-\delta,z+\delta)\). Define \(C:=B\cup A_{0}\cup E\), and note
\[\mu(C\setminus B)=\mu(A_{0}\cup E)\leq\mu(A_{0})+\mu_{*}(f^{-1}(z-\delta,z+ \delta))<\epsilon.\]
Now \(B,C\in\mathcal{F}\) with \(B\subseteq f^{-1}(-\infty,z)\subseteq C\), hence \(f^{-1}(-\infty,z)\in\overline{\mathcal{F}}\). Similarly, \(f^{-1}(z)\subseteq C\setminus B\), hence \(f^{-1}(z)\in\overline{\mathcal{F}}\) with \(\overline{\mu}(f^{-1}(z))=0\).
Alternatively, suppose Condition 2 of the lemma holds. Uniform base measurability implies there is an entourage \(E\) such that \(E[y]\subset(y-\delta,y+\delta)\) with \(f^{-1}(E[y])\in\overline{\mathcal{F}}\), for each \(y\in\mathbb{R}\). Since \(E\) is an entourage, there is also some \(\gamma\in(0,\delta)\) such that \((y-\gamma,y+\gamma)\subset E[y]\) for each \(y\in\mathbb{R}\). Consider the case there exists \(a\leq z\) with \(f^{-1}(-\infty,a)\in\overline{\mathcal{F}}\). Since \([a,z]\) is totally bounded, there is some finite set \(F\subset[a,z]\) such that \([a,z]\subset\cup_{y^{\prime}\in F}(y^{\prime}-\gamma,y^{\prime}+\gamma)\subset \cup_{y^{\prime}\in F}E[y^{\prime}]\). Define
\[B:=f^{-1}(-\infty,a)\cup\bigcup\{f^{-1}(E[y^{\prime}]):y^{\prime}\in F,E[y^{ \prime}]\subseteq(-\infty,z)\},\]
or the empty set if there are no elements in that union. Define
\[C:=f^{-1}(-\infty,a)\cup\bigcup\{f^{-1}(E[y^{\prime}]):y^{\prime}\in F\}\]
or the empty set if there are no elements in that union. Then \(B,C\in\overline{\mathcal{F}}\) with \(B\subseteq f^{-1}(-\infty,z)\subseteq C\) and \(z\in C\setminus B\subseteq f^{-1}(z-\delta,z+\delta)\). The latter implies \(\overline{\mu}(C\setminus B)\leq\epsilon/2\), hence \(f^{-1}(-\infty,z)\in\overline{\mathcal{F}}\) and \(f^{-1}(z)\in\overline{\mathcal{F}}\) with \(\overline{\mu}(f^{-1}(z))=0\). A similar argument applies if \(f^{-1}(a,\infty)\in\overline{\mathcal{F}}\). In the case \(a>z\), the above argument can be applied to \(-f\), and the conclusion then follows for \(f\).
Finally, suppose \(f\) is left or right ray measurable. Proposition 2.2(5) gives that \(f\) is base measurable, and Proposition 2.2(8) gives that \(f\) is uniformly base measurable. Hence Condition 2 of the lemma holds, and the conclusion follows as above.
Armed with these new concepts, the following relationships between the various forms of measurability proposed in Definition 2.1 can now be proved for a certain class of real-valued functions.
**Theorem 2.7**.: _Consider a charge space \((X,\mathcal{F},\mu)\) and a function \(f:X\to\mathbb{R}\). Suppose \(\phi_{f}^{-1}[0,\infty)\) is dense in \(\mathbb{R}\)._
1. _The following statements are logically equivalent._ 1. \(f\) _is left ray measurable._ 2. \(f\) _is right ray measurable._ 3. \(f\) _is ray measurable._ 4. \(f\) _is uniformly base measurable and_ \(\exists y\in\mathbb{R}\) _with_ \(f^{-1}(y,\infty)\in\overline{\mathcal{F}}\)_._ 5. \(f^{+}\) _and_ \(f^{-}\) _are uniformly base measurable._
2. _The following statements are logically equivalent._ 1. \(f\) _is regularly_ \(T_{1}\)_-measurable._ 2. \(f\) _is_ \(T_{1}\)_-measurable._ 3. \(f\) _is_ \(T_{2}\)_-measurable._ 4. \(f\) _is ray measurable and smooth._ 5. \(f\) _is uniformly base measurable and smooth._
Proof.: First note \(D:=\phi_{f}^{-1}(0)\) is dense in \(\mathbb{R}\), by Corollary 2.5.
\((1a\iff 1b\iff 1c)\) If \(f\) is ray measurable, then it is trivially left ray measurable and right ray measurable. So suppose \(f\) is left ray measurable. For any \(y\in D\) and \(\epsilon>0\), choose \(\delta>0\) so that \(\mu_{*}(f^{-1}(y-\delta,y+\delta))<\epsilon\). Since \(f\) is left ray measurable, there is \(u\in(y-\delta,y)\) such that \(B:=f^{-1}(-\infty,u)\in\overline{\mathcal{F}}\) and \(v\in(y,y+\delta)\) such that \(C:=f^{-1}(-\infty,v)\in\overline{\mathcal{F}}\). But then \(B\subseteq f^{-1}(-\infty,y)\subseteq f^{-1}(-\infty,y]\subseteq C\) with \(\overline{\mu}(C\setminus B)\leq\mu_{*}(f^{-1}(y-\delta,y+\delta))<\epsilon\), implying \(f^{-1}(-\infty,y)\in\overline{\mathcal{F}}\) and \(f^{-1}(-\infty,y]\in\overline{\mathcal{F}}\), hence also \(f^{-1}(y,\infty)\in\overline{\mathcal{F}}\). Thus \(f\) is ray measurable. A similar argument gives that if \(f\) is right ray measurable then it is ray measurable.
\((1c\iff 1d)\) The forward implication follows from Proposition 2.2(5 and 8). For the converse, Lemma 2.6 (second condition) implies \(f^{-1}(-\infty,z)\in\overline{\mathcal{F}}\) and \(f^{-1}(z,\infty)\in\overline{\mathcal{F}}\) for all \(z\in D\). That is, \(f\) is ray measurable.
\((1c\iff 1e)\)\(f\) is ray measurable iff both \(f^{+}\) and \(f^{-}\) are ray measurable. This in turn occurs iff \(f^{+}\) and \(f^{-}\) are uniformly base measurable, since \(1c\iff 1d\). Note the extra condition of 1d is superfluous because \((f^{+})^{-1}(-1,\infty)=(f^{-})^{-1}(-1,\infty)=X\in\overline{\mathcal{F}}\).
\((2a\implies 2b\iff 2c)\) If \(f\) is regularly \(T_{1}\)-measurable, then it is \(T_{1}\)-measurable with respect to the charge space \((X,\overline{\mathcal{F}},\overline{\mu})\), hence also \(T_{1}\)-measurable with respect to \((X,\mathcal{F},\mu)\) by Proposition 1.8(c) of [8]. Theorem 4.4.7 of [3] gives \(2b\iff 2c\).
\((2c\implies 2d\implies 2e\implies 2c)\) If \(f\) is \(T_{2}\)-measurable, then \(f\) is smooth and Lemma 2.6 implies \(f\) is ray measurable, since \(D\) is dense in \(\mathbb{R}\). But then
\(f\) is uniformly base measurable, since \(1c\implies 1d\). \(2e\implies 2c\) is a special case of Theorem 3.2(5), which is proved in Section 3, with \(Y=\mathbb{R}\).
\((2d\implies 2a)\) By Corollary 2.5, \(D\) is comeagre in \(\mathbb{R}\). But then the set
\[\{\delta\in(0,\infty):k2^{-i}\delta\in D^{c}\text{ for some }i\in\mathbb{N},k\in \mathbb{Z}\}\]
is meagre in \((0,\infty)\), since it is a countable union of meagre sets. Since \((0,\infty)\) is not meagre in itself, there is some \(\delta\in(0,\infty)\) such that \(k2^{-i}\delta\in D\) for all \(i\in\mathbb{N}\) and \(k\in\mathbb{Z}\). Now, if \(f\) is ray measurable, then by Lemma 2.6 (Condition 3), \(f^{-1}(k2^{-i}\delta,\infty)\in\overline{\mathcal{F}}\) and \(f^{-1}(-\infty,k2^{-i}\delta)\in\overline{\mathcal{F}}\) for all \(i\in\mathbb{N}\) and \(k\in\mathbb{Z}\). One can therefore define sequences of simple functions \(\{s_{i}^{+}\}_{i=1}^{\infty}\) and \(\{s_{i}^{-}\}_{i=1}^{\infty}\) as in Definition 2.1(6). If \(f\) is smooth, then for any \(\epsilon,M>0\) there exists some \(k\in\mathbb{N}\) such that \(\mu^{*}(\{x\in X:|f(x)|>k\})<\epsilon\) and \(2^{-k}<M\). But then for \(i\geq k\),
\[\mu^{*}(\{x\in X:|s_{i}^{+}(x)-f^{+}(x)|>M\})\leq\mu^{*}(\{x\in X:|f(x)|>k\})<\epsilon.\]
Thus \(s_{i}^{+}\to f^{+}\) and similarly \(s_{i}^{-}\to f^{-}\), implying \(f\) is regularly \(T_{1}\)-measurable.
The requirement in Theorem 2.7 that \(\phi_{f}^{-1}[0,\infty)\) be dense in \(\mathbb{R}\) could be replaced by any of the equivalent conditions mentioned in Corollary 2.5. This requirement trivially holds if \(\mu\) is bounded, since then \(\phi_{f}^{-1}(\infty)\) is empty. Another sufficient condition is that for each \(A\in\mathcal{F}\), either \(\mu(A)<\infty\) or \(\mu(A^{c})<\infty\), since this implies \(\phi_{f}^{-1}(\infty)\) may contain at most one element. Yet another sufficient condition is that \(f\) is integrable (in the sense of Definition 4.4.11 of [3]), since then \(\phi_{f}^{-1}(\infty)\) may only contain zero.
## 3 Measurable functions with values in a uniform space
Several of the new and existing forms of measurability discussed above can be generalised in a natural way for functions with a codomain that is a uniform space.
**Definition 3.1**.: _Consider a charge space \((X,\mathcal{F},\mu)\) and a uniform space \(Y\). A function \(f:X\to Y\) is:_
1. \(T_{1}\)-measurable _if for every_ \(\epsilon>0\) _and every entourage_ \(E\) _there is a simple function_ \(s\) _such that_ \[\mu^{*}(\{x\in X:(s(x),f(x))\notin E\})<\epsilon.\]
2. \(T_{2}\)-measurable _if for every \(\epsilon>0\) and every entourage \(E\) there is a partition of \(X\) into sets \(A_{0},A_{1},\ldots,A_{n}\in\mathcal{F}\) such that \(\mu(A_{0})<\epsilon\) and \(f(A_{i})\times f(A_{i})\subseteq E\) for each \(i\in\{1,\ldots,n\}\), and_
3. smooth _if for every \(\epsilon>0\) and every entourage \(E\) there exists a finite collection of sets \(B_{1},\ldots,B_{n}\in\mathcal{P}(Y)\) such that \(B_{i}\times B_{i}\subseteq E\) for each \(i\in\{1,\ldots,n\}\), and_ \[\mu^{*}(\{x\in X:f(x)\notin\cup_{i=1}^{n}B_{i}\})<\epsilon.\]
4. base measurable _with respect to a topology \(\tau\) if each \(y\in Y\) has a neighbourhood base \(\rho(y)\) such that \(f^{-1}(U)\in\overline{\mathcal{F}}\) for all \(U\in\rho(y)\)._
5. uniformly base measurable _with respect to a uniformity \(\mathcal{U}\) if there is an entourage base \(\mathcal{B}\subseteq\mathcal{U}\) such that \(f^{-1}(E[y])\in\overline{\mathcal{F}}\) for all \(E\in\mathcal{B}\) and \(y\in Y\)._
The notation \(\mathcal{P}(Y)\) used in the definition of smoothness denotes the power set of \(Y\), here and throughout the paper.
The definition of smoothness calls to mind total boundedness. However, smoothness does not in general entail that for any \(\epsilon>0\) there is a totally bounded set \(K\subseteq Y\) with \(\mu^{*}(f^{-1}(K^{c}))<\epsilon\). The latter condition straightforwardly implies smoothness. A sufficient condition for the converse is that there exists an entourage \(E^{*}\) such that \(E^{*}[y]\) is totally bounded for all \(y\in Y\). (To see this, apply the definition of smoothness to a function \(f\), with \(E=E^{*}\) and a given \(\epsilon>0\), thus obtaining \(B_{1},\ldots,B_{n}\subseteq Y\). Note \(B_{i}\) is totally bounded for \(i\in\{1,\ldots,n\}\), since \(B_{i}\times B_{i}\subseteq E^{*}\). Define \(K:=\cup_{i=1}^{n}B_{i}\). Then \(K\) is totally bounded, and \(\mu^{*}(f^{-1}(K^{c}))<\epsilon\).)
The definition of base measurability makes no mention of the uniform structure of \(Y\), and in principle requires only that \(Y\) must be a topological space. In this respect, it resembles Definition 1.1. However, the rest of this paper considers base measurability only in the context of a codomain \(Y\) with uniform structure.
Logical relationships between these generalised forms of measurability are identified in the two theorems contained in this section. The second theorem is separated from the first because some new definitions and notations must first be introduced.
**Theorem 3.2**.: _Consider a charge space \((X,\mathcal{F},\mu)\), a uniform space \((Y,\mathcal{U})\), and a function \(f:X\to Y\)._
1. \(f\) _is_ \(T_{1}\)_-measurable iff_ \(f\) _is_ \(T_{2}\)_-measurable. If either holds, then_ \(f\) _is smooth._
2. _If_ \(\mathcal{F}\) _is a_ \(\sigma\)_-field and_ \(f\) _is measurable in the sense of Definition_ 1.1_, then_ \(f\) _is base measurable._
3. _If_ \((X,\mathcal{F},\mu)\) _is a complete measure space,_ \(Y\) _is hereditarily Lindelof, and_ \(f\) _is base measurable, then_ \(f\) _is measurable in the sense of Definition_ 1.1_._
4. _If_ \(f\) _is uniformly base measurable then_ \(f\) _is base measurable. The converse holds if_ \(Y\) _is uniformly locally compact._
5. _If_ \(f\) _is uniformly base measurable and smooth, then_ \(f\) _is_ \(T_{2}\)_-measurable._
Proof.: The proof of 1 somewhat resembles the proof of Theorem 4.4.7 in [3]. Suppose \(f\) is \(T_{1}\)-measurable and choose \(\epsilon>0\) and an entourage \(E\). Then there is a symmetric entourage \(U\) such that \(U\circ U\subseteq E\). Moreover, there is some simple function \(s:=\sum_{i=1}^{n}b_{i}I_{B_{i}}\), where \(b_{1},\ldots,b_{n}\in Y\) are distinct and \(\{B_{1},\ldots,B_{n}\}\) is a partition of \(X\) into elements of \(\mathcal{F}\), such that \(\mu^{*}(B_{0})<\epsilon\), where
\[B_{0}:=\{x\in X:(s(x),f(x))\notin U\}.\]
Hence there is some \(A_{0}\in\mathcal{F}\) with \(\mu(A_{0})<\epsilon\) such that \(B_{0}\subseteq A_{0}\). For each \(i\in\{1,\ldots,n\}\), define \(A_{i}:=B_{i}\setminus A_{0}\in\mathcal{F}\). Then for any \(i\in\{1,\ldots,n\}\) and \(x,x^{\prime}\in A_{i}\), one must have \((f(x),b_{i})=(f(x),s(x))\in U\) and similarly \((f(x^{\prime}),b_{i})=(f(x^{\prime}),s(x^{\prime}))\in U\). Hence \((f(x),f(x^{\prime}))\in U\circ U\subseteq E\), implying \(f\) is \(T_{2}\)-measurable.
Conversely, suppose \(f\) is \(T_{2}\)-measurable and choose \(\epsilon>0\) and an entourage \(E\). Then there is a partition of \(X\) into sets \(A_{0},\ldots,A_{n}\in\mathcal{F}\) such that \(\mu(A_{0})<\epsilon\) and \(f(A_{i})\times f(A_{i})\subseteq E\) for each \(i\in\{1,\ldots,n\}\). For each \(i\in\{1,\ldots,n\}\), choose any \(a_{i}\in f(A_{i})\), and define \(s:=\sum_{i=1}^{n}a_{i}I_{A_{i}}\). Then \((s(x),f(x))=(a_{i},f(x))\in f(A_{i})\times f(A_{i})\subseteq E\) for \(x\in A_{i}\). It follows that
\[\mu^{*}(\{x\in X:(s(x),f(x))\notin E\})\leq\mu(A_{0})<\epsilon,\]
hence \(f\) is \(T_{1}\)-measurable.
Regarding smoothness, consider \(\epsilon>0\) and an entourage \(E\), and let \(A_{0},\ldots,A_{n}\in\mathcal{F}\) be as in the definition of \(T_{2}\)-measurability. For each \(i\in\{1,\ldots,n\}\), define \(B_{i}:=f(A_{i})\), so that \(B_{i}\times B_{i}\subseteq E\) and
\[\mu^{*}(\{x\in X:f(x)\notin\cup_{i=1}^{n}B_{i}\})\leq\mu(A_{0})<\epsilon,\]
which implies \(f\) is smooth.
For 2, suppose \(\mathcal{F}\) is a \(\sigma\)-field and \(f\) is measurable in the sense of Definition 1.1. Then \(f\) is immediately base measurable with a neighbourhood base \(\rho(y)\) comprised of all open sets containing \(y\) for each \(y\in Y\), remembering \(\mathcal{F}\subseteq\overline{\mathcal{F}}\).
For 3, suppose \(f\) is base measurable, and for any \(y\in Y\) let \(\rho(y)\) be a neighbourhood base at \(y\) such that \(f^{-1}(E)\in\overline{\mathcal{F}}\) for all \(E\in\rho(y)\). Consider any open set \(U\subseteq Y\). For any \(y\in U\), there is \(E_{y}\in\rho(y)\) and an open set \(U_{y}\) with \(y\in U_{y}\subseteq E_{y}\subseteq U\). The sets \(\{U_{y}:y\in U\}\) cover \(U\), so if \(Y\) is hereditarily Lindelof, there is a countable set \(C\subseteq U\) such that \(\{U_{y}:y\in C\}\) covers \(U\). But then \(\{E_{y}:y\in C\}\) also covers \(U\), hence if \(\overline{\mathcal{F}}\) is a \(\sigma\)-field, then \(f^{-1}(U)=\cup_{y\in C}f^{-1}(E_{y})\in\overline{\mathcal{F}}\). If \((X,\mathcal{F},\mu)\) is complete, \(\overline{\mathcal{F}}=\mathcal{F}\) and thus \(f\) is measurable in the sense of Definition 1.1.
To show 4, note that if \(f\) is uniformly base measurable with entourage base \(\mathcal{B}\) as in Definition 3.1(5), then \(\rho(y):=\{E[y]:E\in\mathcal{B}\}\) is a neighbourhood base at \(y\) for any \(y\in Y\), and \(f^{-1}(B)\in\overline{\mathcal{F}}\) for all \(B\in\rho(y)\). Hence \(f\) is base measurable. For the partial converse, consider \(E\in\mathcal{U}\). If \(Y\) is uniformly locally compact, there exists \(D\in\mathcal{U}\) such that \(D\subseteq E\) and \(D[y]\) is compact for all \(y\in Y\). If \(f\) is base measurable, for any \(z\in D[y]\) there is a neighbourhood \(U_{z}\) of \(z\) such that \(U_{z}\subseteq E[y]\) and \(f^{-1}(U_{z})\in\overline{\mathcal{F}}\). The interiors \(\{U_{z}^{\circ}:z\in D[y]\}\) form an open cover of \(D[y]\), hence there is a finite set \(F\subseteq D[y]\) such that \(\{U_{z}^{\circ}:z\in F\}\) covers \(D[y]\). Define \(V_{y}:=\cup_{z\in F}U_{z}\), then \(D[y]\subseteq V_{y}\subseteq E[y]\) and \(V_{y}\in\overline{\mathcal{F}}\). But then \(V:=\cup_{y\in Y}\{y\}\times V_{y}\) is an entourage with \(V[y]=V_{y}\in\overline{\mathcal{F}}\) for all \(y\in Y\), implying \(f\) is uniformly base measurable.
To show 5, choose any \(\epsilon>0\) and any entourage \(E\). Then there is some symmetric entourage \(D\) with \(D\circ D\subseteq E\). If \(f\) is uniformly base measurable, there is some entourage \(C\subseteq D\) such that \(f^{-1}(C[y])\in\overline{\mathcal{F}}\) for all \(y\in Y\). Since \(f\) is smooth, there are sets \(G_{1},\ldots,G_{n}\in\mathcal{P}(Y)\) such that \(G_{i}\times G_{i}\subseteq C\) for \(i\in\{1,\ldots,n\}\) and \(\mu^{*}(f^{-1}((\cup_{i=1}^{n}G_{i})^{c}))<\epsilon\). For each \(i\in\{1,\ldots,n\}\), choose \(y_{i}\in G_{i}\) and note \(G_{i}\subseteq C[y_{i}]\) and \(C[y_{i}]\times C[y_{i}]\subseteq D[y_{i}]\times D[y_{i}]\subseteq D\circ D \subseteq E\). Form a partition of \(Y\) as follows: define \(B_{1}:=C[y_{1}]\) and \(B_{i}:=C[y_{i}]\setminus\cup_{j=1}^{i-1}B_{j}\) for \(i\in\{2,\ldots,n\}\). Then define \(B_{0}:=Y\setminus\cup_{i=1}^{n}B_{i}\subseteq(\cup_{i=1}^{n}G_{i})^{c}\). Set \(A_{i}:=f^{-1}(B_{i})\) for \(i\in\{0,\ldots,n\}\); then \(A_{0},A_{1},\ldots,A_{n}\) form a partition of \(X\) with the properties of Definition 3.1(2), hence \(f\) is \(T_{2}\)-measurable.
The second theorem in this section identifies a partial converse to Theorem 3.2(5). The conditions of this theorem require the following definitions and notations. Let \(\Xi\) denote the usual uniformity on \(\mathbb{R}\). Given a uniform
space \((Y,\mathcal{U})\), a function \(g:Y\to\mathbb{R}\), and \(E\subseteq\mathbb{R}\times\mathbb{R}\), define
\[g^{-1}(E):=\{(y,y^{\prime})\in Y\times Y:(g(y),g(y^{\prime}))\in E\}\]
and note \(g^{-1}(E)\in\mathcal{U}\) whenever \(g\) is uniformly continuous and \(E\in\Xi\). Also define \(g^{-1}(\Xi):=\{g^{-1}(E):E\in\Xi\}\).
**Definition 3.3**.: _Consider a charge space \((X,\mathcal{F},\mu)\), a uniform space \((Y,\mathcal{U})\), and a function \(f:X\to Y\). Let \(\mathcal{S}\) be a family of uniformly continuous real-valued functions on \(Y\), and for each \(g\in\mathcal{S}\) define_
\[\eta_{g}:=\{g^{-1}(-\infty,z),g^{-1}(z,\infty):\phi_{g\circ f}(z)<\infty,z\in \mathbb{R}\}\]
_and_
\[\zeta_{g}:=\{g^{-1}(-\infty,z),g^{-1}(z,\infty):\phi_{g\circ f}(z)=0,z\in \mathbb{R}\}.\]
_Also define_
\[\mathcal{E}_{g}:=\{E\in g^{-1}(\Xi):E[y]\in\eta_{g}\text{ for all }y\in Y\}\]
_and_
\[\mathcal{Z}_{g}:=\{E\in g^{-1}(\Xi):E[y]\in\zeta_{g}\text{ for all }y\in Y\}.\]
_Define \(\eta:=\cup_{g\in\mathcal{S}}\eta_{g}\), \(\zeta:=\cup_{g\in\mathcal{S}}\zeta_{g}\), \(\mathcal{E}:=\cup_{g\in\mathcal{S}}\mathcal{E}_{g}\), and \(\mathcal{Z}:=\cup_{g\in\mathcal{S}}\mathcal{Z}_{g}\)._
Thus \(\eta\) is the collection of all inverse images of open rays with finite boundary mass limits, and \(\mathcal{E}\) is a collection of entourages formed from the elements of \(\eta\). Similarly, \(\zeta\) is the collection of all inverse images of open rays with zero boundary mass limits, and \(\mathcal{Z}\) is a collection of entourages formed from the elements of \(\zeta\).
Armed with the above notations, a partial converse to Theorem 3.2(5) can now be stated.
**Theorem 3.4**.: _Consider a charge space \((X,\mathcal{F},\mu)\), a uniform space \((Y,\mathcal{U})\), and a \(T_{2}\)-measurable function \(f:X\to Y\). Let \(\mathcal{S}\), \(\eta\) and \(\mathcal{E}\) be as in Definition 3.3._
1. _If_ \(\eta\) _is a subbase for the topology induced by_ \(\mathcal{U}\)_, then_ \(f\) _is base measurable._
2. _If_ \(\mathcal{E}\) _is an entourage subbase for_ \(\mathcal{U}\)_, then_ \(f\) _is uniformly base measurable._
The proof requires a lemma that generalises Lemma 2.6.
**Lemma 3.5**.: _Consider a charge space \((X,\mathcal{F},\mu)\), a uniform space \(Y\), and a function \(f:X\to Y\). Suppose \(g:Y\to\mathbb{R}\) is a uniformly continuous function with \(\phi_{g\circ f}(z)=0\) for some \(z\in\mathbb{R}\). Define \(D:=g^{-1}(-\infty,z)\), and suppose at least one of the following conditions holds:_
1. \(f\) _is_ \(T_{2}\)_-measurable, or_
2. \(f\) _is uniformly base measurable and_ \(D\) _is totally bounded._
_Then \(f^{-1}(D)\in\overline{\mathcal{F}}\) and \(f^{-1}(\partial D)\in\overline{\mathcal{F}}\) with \(\overline{\mu}(f^{-1}(\partial D))=0\)._
Proof.: If \(f\) is \(T_{2}\)-measurable, then \(g\circ f\) is \(T_{2}\)-measurable (this follows straightforwardly from the definitions of \(T_{2}\)-measurability and uniform continuity). But then Lemma 2.6 gives \(f^{-1}(D)=(g\circ f)^{-1}(-\infty,z)\in\overline{\mathcal{F}}\) and \(f^{-1}(\partial D)=(g\circ f)^{-1}(z)\in\overline{\mathcal{F}}\) with \(\overline{\mu}(f^{-1}(\partial D))=0\).
Alternatively, suppose \(f\) is uniformly base measurable and \(D\) is totally bounded. Given \(\epsilon>0\), choose \(\delta>0\) such that \(\mu_{*}((g\circ f)^{-1}(z-\delta,z+\delta))<\epsilon\). Since \(g\) is uniformly continuous, the set \(\{(y,y^{\prime}):|g(y)-g(y^{\prime})|<\delta\}\) is an entourage, and thus contains an entourage \(E\) with \(f^{-1}(E[y])\in\overline{\mathcal{F}}\) for all \(y\in Y\). Total boundedness of \(D\) then implies there is some finite set \(F\subseteq Y\) such that \(D\subseteq\cup_{y\in F}E[y]\). Define \(B:=\bigcup\{f^{-1}(E[y]):y\in F,E[y]\subseteq D\}\), or the empty set if there are no elements in that union. Define \(C:=\cup_{y\in F}f^{-1}(E[y])\). Then \(B,C\in\overline{\mathcal{F}}\) with \(B\subseteq f^{-1}(D)\subseteq C\) and \(C\setminus B\subseteq f^{-1}(g^{-1}(z-\delta,z+\delta))\). The latter implies \(\overline{\mu}(C\setminus B)\leq\epsilon\), hence \(f^{-1}(D)\in\overline{\mathcal{F}}\) and \(f^{-1}(\partial D)\in\overline{\mathcal{F}}\) with \(\overline{\mu}(f^{-1}(\partial D))=0\).
Theorem 3.4 can now be proved.
Proof.: **(of Theorem 3.4)** To show 1, it will suffice to show that for any \(E\in\eta\) and \(y\in E\), there is \(D\in\zeta\) with \(y\in D\subseteq E\), for then \(\zeta\) is also a subbase for the topology induced by \(\mathcal{U}\). Moreover, Lemma 3.5 implies \(f^{-1}(D)\in\overline{\mathcal{F}}\) for all \(D\in\zeta\), hence \(f\) is base measurable. So suppose \(y\in E\in\eta\). Then either \(E=g^{-1}(-\infty,z)\) or \(E=g^{-1}(z,\infty)\) for some \(g\in\mathcal{S}\) and \(z\in\mathbb{R}\). Suppose \(E=g^{-1}(-\infty,z)\) (the proof proceeds similarly for \(E=g^{-1}(z,\infty)\)). Since \(\phi_{g\circ f}(z)<\infty\), there is \(\delta>0\) such that \(\phi_{g\circ f}(w)<\infty\) for any \(w\in(z-\delta,z+\delta)\), by Lemma 2.4(2). One can also choose \(\delta\) so that \(g(y)<z-\delta\). But then Lemma 2.4(3) implies there is \(w\in(z-\delta,z)\) with \(\phi_{g\circ f}(w)=0\). Hence \(y\in D:=g^{-1}(-\infty,w)\subseteq E\), as required.
To show 2, it will suffice to show any entourage \(E\in\mathcal{E}\) contains an entourage \(D\in\mathcal{Z}\), for then \(\mathcal{Z}\) also forms an entourage subbase for \(\mathcal{U}\). Moreover, Lemma 3.5 implies \(f^{-1}(D[y])\in\overline{\mathcal{F}}\) for all \(D\in\mathcal{Z}\) and \(y\in Y\), hence
\(f\) is uniformly base measurable. So consider \(E\in\mathcal{E}_{g}\) for some \(g\in\mathcal{S}\). Then \(E=g^{-1}(U)\) for some \(U\in\Xi\), and moreover there is \(\gamma>0\) such that \(U\supset\{(z,z^{\prime}):|z-z^{\prime}|\leq\gamma\}\). Now, for each \(y\in Y\), either \(E[y]=g^{-1}(-\infty,z_{y})\) or \(E[y]=g^{-1}(z_{y},\infty)\), for some \(z_{y}\in\mathbb{R}\). Suppose \(E[y]=g^{-1}(-\infty,z_{y})\). Reasoning as in the proof of 1, there is \(w_{y}\in(g(y)+\gamma,z_{y})\) with \(\phi_{g\circ f}(w_{y})=0\). Define \(U^{\prime}_{g(y)}:=(-\infty,w_{y})\). Alternatively, if \(E[y]=g^{-1}(z_{y},\infty)\), there is \(w_{y}\in(z_{y},g(y)-\gamma)\) with \(\phi_{g\circ f}(w_{y})=0\). In that case, define \(U^{\prime}_{g(y)}:=(w_{y},\infty)\). For each \(v\in\mathbb{R}\setminus g(Y)\), define \(U^{\prime}_{v}:=U[v]\). Then \(U^{\prime}:=\cup_{v\in\mathbb{R}}\{v\}\times U^{\prime}_{v}\in\Xi\) because \(U^{\prime}\supset\{(z,z^{\prime}):|z-z^{\prime}|\leq\gamma\}\). Hence \(D:=g^{-1}(U^{\prime})\in\mathcal{Z}\) with \(D\subseteq E\), as required.
The condition of Theorem 2.7 - that \(\phi_{f}^{-1}[0,\infty)\) is dense in \(\mathbb{R}\) - is a special case of the condition of Theorem 3.4(1) - that \(\eta\) is a topological subbase. To see this, suppose \(Y=\mathbb{R}\) and \(\mathcal{S}\) contains only the identity function \(\iota(y)=y\) in Definition 3.3. Then \(\eta=\{(-\infty,z),(z,\infty):z\in\phi_{f}^{-1}[0,\infty)\}\), which is a sub-base for the usual topology on \(\mathbb{R}\) if \(\phi_{f}^{-1}[0,\infty)\) is dense in \(\mathbb{R}\). In fact, the condition of Theorem 2.7 also implies the condition of Theorem 3.4(2) - that \(\mathcal{E}\) is an entourage subbase (proof omitted).
Like the condition of Theorem 2.7, the conditions of Theorem 3.4(1 and 2) trivially hold if \(\mu\) is bounded. An alternative sufficient condition for them to hold is that, for each \(A\in\mathcal{F}\), either \(\mu(A)<\infty\) or \(\mu(A^{c})<\infty\). If \(Y\) is a Banach space, yet another sufficient condition is that \(f\) is integrable, in the sense of Definition III.2.17 of [4].
## 4 Inheritance of measurability
This final section identifies conditions under which measurability of a function \(f:X\to Y\) is "inherited" by a coarser or finer uniformity. The three main results are as follows.
**Theorem 4.1**.: _Consider a charge space \((X,\mathcal{F},\mu)\) and a function \(f:X\to Y\)._
1. _If_ \(f\) _is_ \(T_{2}\)_-measurable or smooth with respect to a uniformity_ \(\mathcal{U}_{1}\) _on_ \(Y\)_, then_ \(f\) _is respectively_ \(T_{2}\)_-measurable or smooth with respect to any coarser uniformity_ \(\mathcal{U}_{2}\subseteq\mathcal{U}_{1}\)_._
2. _If_ \(f\) _is base measurable wrt each topology on_ \(Y\) _in a collection_ \(\mathcal{C}\)_, then_ \(f\) _is base measurable wrt_ \(\bigvee\mathcal{C}\)_._
3. _If_ \(f\) _is uniformly base measurable wrt each uniformity on_ \(Y\) _in a collection_ \(\mathcal{C}\)_, then_ \(f\) _is uniformly base measurable wrt_ \(\bigvee\mathcal{C}\)
In Claim 2 of this theorem, the _least upper bound_\(\bigvee\mathcal{C}\) of a collection of topologies \(\mathcal{C}\) is the smallest topology that contains \(\bigcup\mathcal{C}\). Similarly, in Claim 3, the _least upper bound_\(\bigvee\mathcal{C}\) of a collection of uniformities \(\mathcal{C}\) is the smallest uniformity that contains \(\bigcup\mathcal{C}\).
Claim 1 is immediate from the definitions of \(T_{2}\)-measurability and smoothness. Claim 2 follows from the fact that a base for \(\bigvee\mathcal{C}\) can be obtained by taking finite intersections of elements drawn from a union of bases for each of the topologies in \(\mathcal{C}\). Statement 3 follows in a similar manner. Thus none of the claims requires detailed proof. Nevertheless, the theorem has some useful corollaries.
**Corollary 4.2**.: _Consider a charge space \((X,\mathcal{F},\mu)\), a uniform space \(Y\) with uniformity \(\mathcal{U}\) induced by a family of pseudometrics \(\mathcal{S}\), and a function \(f:X\to Y\)._
1. _If_ \(f\) _is_ \(T_{2}\)_-measurable with respect to_ \(\mathcal{U}\)_, then_ \(f\) _is_ \(T_{2}\)_-measurable with respect to the uniformities induced by each_ \(p\in\mathcal{S}\)_._
2. _If_ \(f\) _is uniformly base measurable wrt the uniformities induced by each_ \(p\in\mathcal{S}\)_, then_ \(f\) _is uniformly base measurable wrt_ \(\mathcal{U}\)_._
**Corollary 4.3**.: _Consider a charge space \((X,\mathcal{F},\mu)\), a uniform space \((Y,\mathcal{U})\) with the weak uniformity \(\mathcal{U}\) induced by a family of functions \(\{h_{\alpha}:\alpha\in\Omega\}\), where \(h_{\alpha}:Y\to Y_{\alpha}\) and \((Y_{\alpha},\mathcal{U}_{\alpha})\) is a uniform space for each \(\alpha\in\Omega\), and a function \(f:X\to Y\)._
1. _If_ \(f\) _is_ \(T_{2}\)_-measurable with respect to_ \(\mathcal{U}\)_, then_ \(h_{\alpha}\circ f\) _is_ \(T_{2}\)_-measurable with respect to_ \(\mathcal{U}_{\alpha}\) _for each_ \(\alpha\in\Omega\)_._
2. _If_ \(h_{\alpha}\circ f\) _is uniformly base measurable with respect to_ \(\mathcal{U}_{\alpha}\) _for each_ \(\alpha\in\Omega\)_, then_ \(f\) _is uniformly base measurable wrt_ \(\mathcal{U}\)_._
In particular, Corollary 4.3 applies when \(Y:=\prod_{\alpha\in\Omega}Y_{\alpha}\) with the product uniformity, and \(h_{\alpha}\) is the coordinate projection of \(Y\) onto \(Y_{\alpha}\) for each \(\alpha\in\Omega\).
## Appendix A Charge theory
A _charge_ is an alternative name for a finitely additive set function, sometimes also called a _content_. Concise reviews of the theory of charges can be found in [5] and [8]. The standard reference on charge theory is [3]. Here only a few elementary definitions are required.
A _field of sets_ (or more concisely a _field_) is a collection \({\cal F}\) of subsets of a _sample space_\(X\), such that \({\cal F}\) contains the empty set \(\emptyset\) and is closed under complements and pairwise unions.
A _charge_ is a finitely additive set function \(\mu:{\cal F}\to[-\infty,\infty]\), the domain of which is a field. Here _finitely additive_ means \(\mu(A\cup B)=\mu(A)+\mu(B)\) for disjoint \(A,B\in{\cal F}\), which also implies \(\mu(\emptyset)=0\). To avoid difficulties arising from the fact that \(\infty+(-\infty)\) is undefined, a charge may take at most one of the values \(\{-\infty,\infty\}\) on \({\cal F}\). A _charge space_ is a triple \((X,{\cal F},\mu)\) consisting of a _sample space_\(X\), a field of sets \({\cal F}\subseteq{\cal P}(X)\), and a charge \(\mu\).
A _\(\sigma\)-field_ (also called a _\(\sigma\)-algebra_) is a field that is closed under countable unions. A _measure_ is a countably additive charge defined on a \(\sigma\)-field. By _countably additive_ is meant that \(\mu(\cup{\cal C})=\sum_{C\in{\cal C}}\mu(C)\) for a countable collection \({\cal C}\subseteq{\cal F}\) of pairwise disjoint sets. A _measurable space_ is a pair \((X,{\cal F})\) consisting of a sample space \(X\) and a \(\sigma\)-field \({\cal F}\subseteq{\cal P}(X)\). A _measure space_ is a charge space \((X,{\cal F},\mu)\) in which \({\cal F}\) is a \(\sigma\)-field and \(\mu\) is a measure.
A charge is said to be _positive_ if \(\mu(A)\geq 0\) for all \(A\in{\cal F}\). Throughout the body of this paper, \(\mu\) always denotes a positive charge.
For a positive charge \(\mu\), the _outer charge_ and _inner charge_ derived from \(\mu\) are:
\[\mu^{*}(A)=\inf\{\mu(B):B\in{\cal F},B\supseteq A\}\]
and
\[\mu_{*}(A)=\sup\{\mu(B):B\in{\cal F},B\subseteq A\}\]
respectively, for any \(A\in{\cal P}(X)\). Note \(\mu^{*}\) is _sub-additive_, that is, \(\mu^{*}(A\cup B)\leq\mu^{*}(A)+\mu^{*}(B)\) for any \(A,B\in{\cal P}(X)\), whereas \(\mu_{*}\) is _super-additive_, that is, \(\mu_{*}(A\cup B)\geq\mu_{*}(A)+\mu_{*}(B)\) for _disjoint_\(A,B\in{\cal P}(X)\). Both \(\mu^{*}\) and \(\mu_{*}\) are monotonic, that is, \(\mu^{*}(A)\leq\mu^{*}(B)\) for \(A\subseteq B\), and similarly for \(\mu_{*}\).
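For a toy illustration (not drawn from the references), take \(X=\{1,2,3\}\), \({\cal F}=\{\emptyset,\{1\},\{2,3\},X\}\) and \(\mu\) the counting charge on \({\cal F}\). Then
\[\mu^{*}(\{2\})=\mu(\{2,3\})=2,\qquad\mu_{*}(\{2\})=\mu(\emptyset)=0,\]
and sub-additivity of \(\mu^{*}\) can be seen from \(\mu^{*}(\{2\}\cup\{3\})=\mu^{*}(\{2,3\})=2\leq\mu^{*}(\{2\})+\mu^{*}(\{3\})=4\).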
A charge \(\mu\) is said to be _bounded_ or _finite_ if \(|\mu(A)|<\infty\) for all \(A\in{\cal F}\). Otherwise, \(\mu\) is said to be _unbounded_.
A _simple function_ on a charge space \((X,{\cal F},\mu)\) is a real-valued function of the form \(\sum_{k=1}^{n}a_{k}I_{A_{k}}\), where \(a_{1},\ldots,a_{n}\in{\mathbb{R}}\) are distinct and \(\{A_{1},\ldots,A_{n}\}\) is a partition of \(X\) into elements of \({\cal F}\). Here \(I_{A}\) represents the indicator function for the set \(A\). The notion of simple function generalises naturally for a function \(f:X\to Y\) with arbitrary codomain \(Y\): the notation \(\sum_{k=1}^{n}a_{k}I_{A_{k}}\) can still be used to denote a function with finite range \(\{a_{1},\ldots,a_{n}\}\subseteq Y\), even if no addition operation is defined on \(Y\). (This generalised notion of a simple function is used in Section 3.)
A sequence of functions \(f_{i}:X\to\mathbb{R}\) (\(i\in\mathbb{N}\)) is said to _converge hazily_ to \(f:X\to\mathbb{R}\) if, for every \(\epsilon>0\),
\[\mu^{*}(\{x\in X:|f_{i}(x)-f(x)|>\epsilon\})\to 0\text{ as }i\to\infty.\]
Hazy convergence is denoted by \(f_{i}\xrightarrow{h}f\).
The following definition is modified from [8]. The _Peano-Jordan completion_ of a charge space \((X,\mathcal{F},\mu)\) is the charge space \((X,\overline{\mathcal{F}},\overline{\mu})\) where
\[\overline{\mathcal{F}}:=\{A\subseteq X:\forall\epsilon>0,\exists B,C\in \mathcal{F}\text{ such that }B\subseteq A\subseteq C\text{ and }\mu(C\setminus B)<\epsilon\}\]
and
\[\overline{\mu}(A):=\sup\{\mu(B):B\subseteq A,B\in\mathcal{F}\}=\inf\{\mu(C):A \subseteq C,C\in\mathcal{F}\}.\]
A charge space is said to be _Peano-Jordan complete_ if it is equal to its Peano-Jordan completion.
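As a standard illustration (added here for concreteness, not part of the formal development): let \({\cal F}\) be the field of finite unions of half-open subintervals of \([0,1)\) and \(\mu\) the total length. A singleton \(\{y\}\) belongs to \(\overline{\cal F}\): take \(B=\emptyset\) and \(C=[y,y+\epsilon)\cap[0,1)\), so that \(\mu(C\setminus B)<\epsilon\), giving \(\overline{\mu}(\{y\})=0\). More generally, the condition defining \(\overline{\cal F}\) is exactly the classical notion of Jordan measurability, which motivates the name of the completion.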
In this paper, an overline placed over a field always indicates the Peano-Jordan completion of the field. This should not be confused with the topological closure of a set, which is also commonly denoted using an overline. The latter usage is not required in this paper.
## Appendix B Uniform spaces
A _uniform space_ is a generalisation of a metric space, introduced by Andre Weil in 1937 [9]. Concise introductions to the theory of uniform spaces, with varying levels of detail, can be found in [10, 11, 12, 13, 14]. A more thorough introduction is [15].
Whereas a metric space has a notion of absolute distance determined by a metric \(d:Y\times Y\to[0,\infty)\), a uniform space has a concept of relative distance determined by a collection \(\mathcal{U}\) of _entourages_ called a _uniformity_. Each entourage \(E\in\mathcal{U}\) is a subset of \(Y\times Y\) and, loosely speaking, represents pairs of points that can be regarded as being within a certain degree of closeness: they are _\(E\)-close_.
More precisely, a _uniformity_, also called a _uniform structure_, on a non-empty set \(Y\), is a collection \(\mathcal{U}\) of subsets of \(Y\times Y\), called _entourages_, satisfying the following axioms:
1. If \(E\in\mathcal{U}\), then \(E\supseteq\Delta\),
2. If \(E\in\mathcal{U}\), then \(E^{-1}\in\mathcal{U}\),
3. If \(E\in{\cal U}\), then there exists \(D\in{\cal U}\) such that \(D\circ D\subseteq E\),
4. If \(D,E\in{\cal U}\), then \(D\cap E\in{\cal U}\), and
5. If \(E\supseteq D\) for some \(D\in{\cal U}\) and \(E\subseteq Y\times Y\), then \(E\in{\cal U}\).
Axiom 1 refers to the _diagonal_ of \(Y\), defined as \(\Delta:=\{(y,y):y\in Y\}\). Axiom 2 refers to the _inverse_ of an entourage, defined as \(E^{-1}:=\{(y^{\prime},y):(y,y^{\prime})\in E\}\). Axiom 3 refers to the _composition_ of two entourages \(D,E\in{\cal U}\), defined as
\[D\circ E:=\{(y,y^{\prime}):(y,z)\in E\mbox{ and }(z,y^{\prime})\in D\mbox{ for some }z\in Y\}.\]
An entourage \(E\) is said to be _symmetric_ if \(E^{-1}=E\). It can be shown that any entourage contains a symmetric entourage.
A _uniform space_ is a pair \((Y,{\cal U})\) comprised of a non-empty set \(Y\) and a uniformity \({\cal U}\) on \(Y\). It is sometimes convenient to refer to \(Y\) itself as a uniform space, when the uniformity \({\cal U}\) is either understood from the context, or does not require an explicit symbol.
If \({\cal U}_{1}\) and \({\cal U}_{2}\) are two uniformities on a common set \(Y\) such that \({\cal U}_{1}\subseteq{\cal U}_{2}\), then \({\cal U}_{1}\) is said to be _coarser_ than \({\cal U}_{2}\), whereas \({\cal U}_{2}\) is said to be _finer_ than \({\cal U}_{1}\).
An _entourage base_, also called a _fundamental system of entourages_, is a subset \({\cal B}\) of a uniformity \({\cal U}\) such that every element of \({\cal U}\) contains an element of \({\cal B}\). An _entourage subbase_ is a subset \({\cal E}\) of a uniformity \({\cal U}\) such that the collection of all finite intersections of elements of \({\cal E}\) forms an entourage base for \({\cal U}\). Any collection of subsets of \(Y\times Y\) satisfying Conditions 1, 2 and 3 above is an entourage subbase for some uniformity. Specifically, \({\cal E}\) is a subbase for the unique smallest uniformity containing \({\cal E}\), called the uniformity _induced by \({\cal E}\)_.
Given a uniform space \((Y,{\cal U})\), define \(E[y]:=\{y^{\prime}\in Y:(y,y^{\prime})\in E\}\) for each \(E\in{\cal U}\) and \(y\in Y\). A uniformity \({\cal U}\) on \(Y\) induces a topology \(\tau({\cal U})\) on \(Y\) such that \(\{E[y]:E\in{\cal U}\}\) is a neighbourhood base at each \(y\in Y\). Different uniformities on a common set \(Y\) can induce the same topology.
Given a family \({\cal S}\) of pseudometrics on a nonempty set \(Y\), one can construct sets of the form
\[E_{p,a}:=\{(y,y^{\prime}):p(y,y^{\prime})\leq a\}\]
for any \(p\in{\cal S}\) and \(a\in(0,\infty)\). The collection \({\cal E}:=\{E_{p,a}:p\in{\cal S},a\in(0,\infty)\}\) satisfies Conditions 1, 2, and 3 above, and thus forms a subbase for a uniformity \({\cal U}\), said to be the uniformity _induced by \({\cal S}\)_. The topology of \(Y\) induced by \({\cal U}\) is the topology induced by the family of pseudometrics \({\cal S}\).
Every uniformity can be induced by some family of pseudometrics in this manner. However, two different families of pseudometrics may induce the same uniformity on \(Y\).
The _usual uniformity_ on \(\mathbb{R}\) is the uniformity induced by the natural (pseudo)metric formed from the absolute value function. That is, the usual uniformity on \(\mathbb{R}\) is the smallest uniformity containing all sets of the form \(\{(y,y^{\prime}):|y-y^{\prime}|\leq a\}\), for each \(a\in(0,\infty)\).
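As an illustrative check (added here for concreteness), write \(E_{a}:=\{(y,y^{\prime}):|y-y^{\prime}|\leq a\}\). The composition axiom holds with \(D=E_{a/2}\), since \(|y-z|\leq a/2\) and \(|z-y^{\prime}|\leq a/2\) imply \(|y-y^{\prime}|\leq a\), so that
\[E_{a/2}\circ E_{a/2}\subseteq E_{a}.\]
Moreover \(E_{a}[y]=[y-a,\,y+a]\), so the topology \(\tau({\cal U})\) induced by the usual uniformity is the usual topology of \(\mathbb{R}\).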
Another way to generate a uniformity on \(Y\) is to take a family of functions \(f_{\alpha}:Y\to Y_{\alpha}\), where \((Y_{\alpha},\mathcal{U}_{\alpha})\) is a uniform space for each \(\alpha\) in some index set \(\Omega\). Define \(F_{\alpha}(y,y^{\prime}):=(f_{\alpha}(y),f_{\alpha}(y^{\prime}))\) for each \(\alpha\in\Omega\) and \(y,y^{\prime}\in Y\). Then the _weak uniformity_ induced by \(\{f_{\alpha}\}_{\alpha\in\Omega}\) is the uniformity \(\mathcal{U}\) induced by the subbase
\[\{F_{\alpha}^{-1}(E):\alpha\in\Omega,E\in\mathcal{U}_{\alpha}\}.\]
The topology induced by \(\mathcal{U}\) is precisely the weak topology induced by \(\{f_{\alpha}\}_{\alpha\in\Omega}\).
An important example of a weak uniformity is the _product uniformity_. Suppose \(Y:=\prod_{\alpha\in\Omega}Y_{\alpha}\), that is, the Cartesian product of the sets \(\{Y_{\alpha}\}_{\alpha\in\Omega}\), and \(f_{\alpha}:Y\to Y_{\alpha}\) is the projection of \(Y\) onto coordinate \(\alpha\), for each \(\alpha\in\Omega\). Then the product uniformity on \(Y\) is the weak uniformity induced by \(\{f_{\alpha}\}_{\alpha\in\Omega}\).
It is _not_ true that every uniformity \(\mathcal{U}\) is a weak uniformity for some family of functions \(\{f_{\alpha}\}_{\alpha\in\Omega}\). Given a family of pseudometrics \(\mathcal{S}\) on \(Y\), one can form a collection of functions \(p_{y}:Y\to[0,\infty)\) defined as \(p_{y}(y^{\prime}):=p(y,y^{\prime})\) for each \(p\in\mathcal{S}\) and \(y\in Y\), and while it is true that the weak topology induced by the family of functions \(\{p_{y}:p\in\mathcal{S},y\in Y\}\) is the same as the topology induced by \(\mathcal{U}\), the weak uniformity induced by these functions is in general coarser than \(\mathcal{U}\).
A function \(f:Y_{1}\to Y_{2}\) mapping a uniform space \((Y_{1},\mathcal{U}_{1})\) to a uniform space \((Y_{2},\mathcal{U}_{2})\) is _uniformly continuous_ if for every \(E\in\mathcal{U}_{2}\) there is \(D\in\mathcal{U}_{1}\) such that \((y,y^{\prime})\in D\implies(f(y),f(y^{\prime}))\in E\).
It can be shown that the weak uniformity induced by a family of functions \(\{f_{\alpha}\}_{\alpha\in\Omega}\) is the coarsest uniformity making \(f_{\alpha}\) uniformly continuous for each \(\alpha\in\Omega\) (see Theorem 37.8 of [14], for example). This provides an alternative way to define the weak uniformity induced by a family of functions.
A uniform space \((Y,\mathcal{U})\) is _uniformly locally compact_ if there is an entourage \(E\in\mathcal{U}\) such that \(E[y]\) is compact in the induced topology \(\tau(\mathcal{U})\) for all \(y\in Y\). Note this definition does not require the uniformity \(\mathcal{U}\) to be separated. (A uniformity \(\mathcal{U}\) is _separated_ if \(\bigcap\{E:E\in\mathcal{U}\}=\Delta\).) Some authors define
a uniformly locally compact space in this way [10; 15]; others additionally require \(\mathcal{U}\) to be separated [9; 16]. By either definition, a uniformly locally compact space has an entourage base \(\mathcal{B}\subseteq\mathcal{U}\) such that \(E[y]\) is closed and compact for all \(E\in\mathcal{B}\) and \(y\in Y\).
A subset \(K\) of a uniform space \((Y,\mathcal{U})\) is _totally bounded_ if for every entourage \(E\in\mathcal{U}\) there exists a finite collection of sets \(B_{1},\ldots,B_{n}\in\mathcal{P}(Y)\) such that \(B_{k}\times B_{k}\subseteq E\) for each \(k\in\{1,\ldots,n\}\) and \(K\subseteq\cup_{k=1}^{n}B_{k}\). (Note [10] and others first define a totally bounded uniformity, then define \(K\subseteq Y\) to be totally bounded if it has a totally bounded relative uniformity, where the _relative uniformity_ for \(K\) is obtained by intersecting each of the members of \(\mathcal{U}\) with \(K\times K\). The definition given here is equivalent and more direct.)
|
2304.06038 | Knowledge-Distilled Graph Neural Networks for Personalized Epileptic
Seizure Detection | Wearable devices for seizure monitoring detection could significantly improve
the quality of life of epileptic patients. However, existing solutions that
mostly rely on full electrode set of electroencephalogram (EEG) measurements
could be inconvenient for every day use. In this paper, we propose a novel
knowledge distillation approach to transfer the knowledge from a sophisticated
seizure detector (called the teacher) trained on data from the full set of
electrodes to learn new detectors (called the student). They are both providing
lightweight implementations and significantly reducing the number of electrodes
needed for recording the EEG. We consider the case where the teacher and the
student seizure detectors are graph neural networks (GNN), since these
architectures actively use the connectivity information. We consider two cases
(a) when a single student is learnt for all the patients using preselected
channels; and (b) when personalized students are learnt for every individual
patient, with personalized channel selection using a Gumbel-softmax approach.
Our experiments on the publicly available Temple University Hospital EEG
Seizure Data Corpus (TUSZ) show that both knowledge-distillation and
personalization play significant roles in improving performance of seizure
detection, particularly for patients with scarce EEG data. We observe that
using as few as two channels, we are able to obtain competitive seizure
detection performance. This, in turn, shows the potential of our approach in
more realistic scenario of wearable devices for personalized monitoring of
seizures, even with few recordings. | Qinyue Zheng, Arun Venkitaraman, Simona Petravic, Pascal Frossard | 2023-04-03T15:37:40Z | http://arxiv.org/abs/2304.06038v1 | # Knowledge-Distilled Graph Neural Networks for Personalized Epileptic Seizure Detection +
###### Abstract
Wearable devices for seizure monitoring detection could significantly improve the quality of life of epileptic patients. However, existing solutions that mostly rely on full electrode set of electroencephalogram (EEG) measurements could be inconvenient for every day use. In this paper, we propose a novel knowledge distillation approach to transfer the knowledge from a sophisticated seizure detector (called the teacher) trained on data from the full set of electrodes to learn new detectors (called the student). They are both providing lightweight implementations and significantly reducing the number of electrodes needed for recording the EEG. We consider the case where the teacher and the student seizure detectors are graph neural networks (GNN), since these architectures actively use the connectivity information. We consider two cases (a) when a single student is learnt for all the patients using pre-selected channels; and (b) when personalized students are learnt for every individual patient, with personalized channel selection using a Gumbel-softmax approach. Our experiments on the publicly available Temple University Hospital EEG Seizure Data Corpus (TUSZ) show that both knowledge-distillation and personalization play significant roles in improving performance of seizure detection, particularly for patients with scarce EEG data. We observe that using as few as two channels, we are able to obtain competitive seizure detection performance. This, in turn, shows the potential of our approach in more realistic scenario of wearable devices for personalized monitoring of seizures, even with few recordings.
Keywords: Personalized seizure detection, Graph neural networks, Knowledge distillation.
## 1 Introduction
Epilepsy is a neurological disorder that is characterized by recurring, unprovoked seizures caused by surges of electrical activity in the brain and affects
nearly three million people [26]. About one third of the patients do not respond to treatment by drugs [17]. Hence, real-time seizure monitoring is crucial for improving the patients' quality of life, for example, by alerting caregivers that their assistance is needed once a seizure occurs. A continuous monitoring of the electroencephalogram (EEG) is useful in identifying and even predicting seizures in critically ill patients [19], particularly with the use of deep-learning approaches [27, 21, 12, 1, 23] The monitoring is usually performed in a hospital environment over the course of several days, which makes it infeasible to monitor patients long-term in non-ambulatory settings. Wearable devices could overcome the need of specialised intrusive medical equipment and hospital environment and enable real-time seizure monitoring on a daily basis. Existing measurement devices [3] that use EEG head caps with over 20 wired electrodes are however uncomfortable and difficult to wear over prolonged intervals and lighter and more discrete wearables are desirable for patients. Previous studies have attempted to reduce the number of EEG electrodes needed for seizure detection [8, 28, 9] with promising results. However, these solutions typically involve training detection systems from scratch for the new setting, and fail to incorporate the already existing historical EEG data of the patient recorded with many electrodes. Due to the nature of the disorder itself, seizure data is sparse in the number of available seizures and difficult to collect, and it is thus important to meaningfully use previous data. Further, it is known that the signals from the different regions of the brain (captured through the EEG electrodes) are not independent and exhibit strong inter-channel dependencies that could be viewed as a brain graph or a network. Hence, we ask the question:
_How to transfer information gained from a full set of channels/graph to settings with a reduced number of channels/subgraph while actively using the connectivity information?_
In this paper, we address this question by developing a novel approach for knowledge distillation (KD) with graph neural networks (GNNs) applied to seizure detection. Our motivation for the use of GNNs comes from the observation that they have been used extensively in applications with graph-structured data, and more recently have shown to result in promising seizure detection performance [22, 30]. More specifically, we propose a seizure detection model that consists of three interconnected blocks. Firstly, we have the knowledge distillation block, whereby we transfer the knowledge from a pre-trained seizure detection model to obtain a model that is light-weight and uses only a reduced set of input channels and the corresponding subgraph. Secondly, a channel selection block, which takes the full multi-channel input and retains the signal only on a reduced set of channels that are either pre-selected or learnt in a fully data-driven manner. Lastly, we have the GNN based seizure detection model that classifies the input in the form of the multi-channel signal from a reduced set of channels/electrodes and the corresponding subgraph, into seizure or non-seizure segments.
Our goal is to also investigate the influence of two important aspects in seizure detection performance with reduced channels: (i) prior knowledge (through the
use of the teacher model), and (ii) personalization/ patient-specific detection. The specific contributions of our paper are as follows:
* We propose new GNN models for epileptic seizure detection that build on knowledge distillation to generate models that are both light-weight and work on subgraphs of reduced nodes/channels. To the best of our knowledge, this is the first KD approach dedicated to obtaining subgraph GNNs with reduced channels.
* We propose two different models for seizure detection with reduced channels, namely one with pre-selected (clinically motivated) channels and one with data-driven channels obtained from Gumbel softmax channel selection.
* By applying our approach on pre-trained GNN that uses a full electrode set, we obtain personalized (patient-specific) and global (non patient-specific) GNN models that are both lightweight (using only \(\approx 3\%\) of the parameters of the teacher) and requires only a reduced subset of electrodes (requiring as low as only \(10\%\) of the original electrodes)
* We demonstrate the results of our approach on the TUH Seizure Corpus, which is one of the most popular and diverse datasets for epileptic seizures.
* We show empirically that the combination of personalization and KD could significantly improve seizure detection in cases of very scarce data, and in cases when the measurements are made from the relatively 'non-informative' electrodes.
Finally, it could be noted that epilepsy seizure detection is a very active research problem. In particular, there has been a steady increase in the number of graph-based approaches, and particularly GNNs applied to the problem of seizure detection and classification [30, 22, 5]. However, to the best of our knowledge no prior works exist that tackle the problem of channel reduction with GNNs and KD, particularly for seizure detection. While KD has been used in multiple settings related to GNNs [6, 7, 15, 4, 31, 32, 33], it has not been employed to the task of data-driven subgraph identification, which is the main objective in this paper.
## 2 Preliminaries
We now briefly review some of the basic concepts from GNNs and KD.
**Graph Neural Networks** Graph Neural Networks (GNNs) refer to a class of deep learning models designed for graph-structured data [24]. GNNs learn the representations of the nodes/channels in a graph and predict the labels or properties of nodes/edges by actively using the underlying graph structure. Due to the graph structure, GNNs naturally provide an aspect of interpretability or explainability. GNNs have been shown to significantly outperform the use of CNNs or other non-graph approaches in many applications. While the study and development of GNNs is an active research area, we consider the specific case of Graph convolutional networks (GCNs) in our work, since they form one of the simplest and most popular GNNs that directly generalize the convolution
operation from CNNs to a graph setting [16]. A multi-layer GCN has the layer-wise propagation rule in the hidden layers:
\[H^{(l+1)}=\sigma(AH^{(l)}\Theta^{(l)}) \tag{1}\]
where \(H^{(l)}\in\mathbb{R}^{N\times D}\) denotes the hidden node features at the \(l\)-th layer, with \(H^{(0)}\) denoting the input, \(\sigma\) a non-linear activation function such as ReLU or sigmoid, \(A\) the adjacency matrix, and \(\Theta^{(l)}\) the weight matrix in the \(l\)-th layer that is learnt from the data for a given task. Put simply, the graph convolution operation takes the weighted sum of the features of the neighbors of a node and applies a non-linear activation function to produce the updated features for the node. This operation is repeated for each layer, allowing the model to learn more complex representations of the graph structure and node features. The final output of a GCN is typically obtained by applying a linear layer to the features of the nodes in the final layer. Finally, the parameters of the GNN are learned by minimizing a loss function appropriate to the task at hand, whether regression or classification.
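As an illustration, the following is a minimal NumPy sketch of the propagation rule in (1) for a two-layer GCN; the toy graph, feature dimensions, and weights are placeholders chosen only for illustration and are not taken from the paper.

```python
import numpy as np

def gcn_layer(A, H, Theta):
    # One step of Eq. (1): neighbourhood aggregation, linear mixing, ReLU.
    return np.maximum(A @ H @ Theta, 0.0)

rng = np.random.default_rng(0)
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)   # toy adjacency with self-loops
H0 = rng.normal(size=(4, 3))                # input features: 4 nodes, 3 features
Theta1 = rng.normal(size=(3, 8))            # layer-1 weights
Theta2 = rng.normal(size=(8, 2))            # layer-2 weights (2 logits per node)

H1 = gcn_layer(A, H0, Theta1)
H2 = gcn_layer(A, H1, Theta2)
print(H2.shape)                             # (4, 2)
```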
**Knowledge Distillation** Knowledge distillation (KD) [11] refers to transferring knowledge from a large/sophisticated pre-trained neural network (known as the _teacher_ network) to a smaller network (known as the _student_ network). The student represents a light-weight model derived from the teacher while enforcing the performance to be similar to that of the teacher. A distillation loss is used during training to guide the student to replicate the teacher's behavior as closely as possible. Different types of knowledge can be transferred, but the most straightforward one is response-based KD, which refers to the response of the output layer of the teacher. A widely used example of this is the class probabilities known as _soft targets_, defined using a softmax function as
\[p(z_{i},T)=\exp(z_{i}/T)/\sum_{j}\,\exp(z_{j}/T), \tag{2}\]
where \(p(z_{i},T)\) is the probability of belonging to class \(i\), and \(z\) is the vector of logits (outputs of the last layer of the teacher for a given input). The temperature \(T\) controls the contribution of each soft target to the knowledge. When \(T\) is equal to 1, we get the standard softmax function, but as \(T\) increases, the probability distribution is softened. The distillation loss can be seen as comparing the class probabilities obtained from the teacher and the student. It enforces the distribution of the outputs produced by the student to be close to that of the teacher. The Kullback-Leibler (KL) divergence is therefore often used as the distillation loss function, and minimizing this loss during training makes the logits of the student get closer to the logits of the teacher [10]. Let \(z_{t}\) and \(z_{s}\) denote the representations produced by the teacher and student models, respectively, for the same input. Then, the final loss function used to train the student is a weighted average of the two terms and is defined as
\[L_{S}=(1-\delta)L_{D}(p(z_{t},T),p(z_{s},T))+\,\delta L_{CE}(y,p(z_{s},1)), \tag{3}\]
where \(L_{D}\) is the distillation loss function, \(p(z_{t},T)\) are the teacher soft targets, \(p(z_{s},T)\) are the student soft targets, \(L_{CE}\) is the cross entropy loss function,
\(y\) are the ground truth labels, and \(\delta\) is the weighting factor. The parameter \(\delta\) represents the relative weight given to the new training data over the teacher's knowledge in the training of the student: the higher \(\delta\), the less the model relies on the teacher for the training of the student. We shall consider KD as part of our approach later in Section 3.
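For concreteness, a minimal sketch of the soft targets in (2) and the weighted loss in (3) is given below, written with NumPy for a single sample; the logits, label, hyperparameter values, and the forward KL direction are illustrative assumptions, not the exact training code used later in the paper.

```python
import numpy as np

def soft_targets(z, T):
    # Eq. (2): temperature-scaled softmax of the logits z.
    e = np.exp((z - z.max()) / T)
    return e / e.sum()

def kd_student_loss(z_teacher, z_student, y_onehot, T=5.0, delta=0.5):
    # Eq. (3): (1 - delta) * KL(teacher soft targets || student soft targets)
    #          + delta * cross-entropy on the hard labels.
    p_t, p_s = soft_targets(z_teacher, T), soft_targets(z_student, T)
    distill = np.sum(p_t * np.log(p_t / p_s))
    hard = -np.sum(y_onehot * np.log(soft_targets(z_student, 1.0)))
    return (1.0 - delta) * distill + delta * hard

z_t = np.array([2.0, -1.0])    # teacher logits (e.g., seizure vs. non-seizure)
z_s = np.array([1.2, -0.4])    # student logits
y = np.array([1.0, 0.0])       # ground-truth one-hot label
print(kd_student_loss(z_t, z_s, y, T=5.0, delta=0.8))
```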
## 3 KD with GNNs for Seizure Detection
### Proposed Model
We first propose our approach to design a global seizure detection student GNN that works on data with reduced nodes/channels and the corresponding subgraph, obtained using KD from a teacher GNN that operates on the complete node set. Let \(D\) denote the number of nodes/channels in the full measurement. Let \(A\) denote the adjacency matrix of the graph describing the connections between the different channels. The adjacency matrix could be obtained in different ways like a correlation matrix, functional connectivity, or simply the matrix that captures the physical proximity of the electrodes on the scalp. In our paper, we use the latter.
Let \(\mathbf{x}\in\mathbb{R}^{D\times T}\) denote the input signal consisting of the recordings/measurements from all the \(D\) channels for \(T\) time samples. Let us consider a GNN with parameters \(\theta\) and let \(z_{\theta}(\mathbf{x},A)\) denote the output of the last layer, or the logits, learnt by the GNN, where \(A\in\mathbb{R}^{D\times D}\) denotes the graph between the channels. Further, let us use subscripts \(t\) and \(s\) for the teacher and student GNNs, respectively: \(z_{\theta_{t}}(\cdot,A)\) and \(z_{\theta_{s}}(\cdot,A)\) denote the output layers from the teacher and student GNNs, respectively. The teacher network is learnt by minimizing the following binary cross-entropy function \(BCE(\cdot,\cdot)\) between the class label \(y\) and the model prediction \(z_{\theta_{t}}(\mathbf{x},A)\):
\[\mathcal{L}_{CE}(\theta_{t})=\mathbb{E}_{\mathbf{x}}\left(BCE(y,z_{\theta_{t }}(\mathbf{x},A))\right), \tag{4}\]
with respect to \(\theta_{t}\), where \(\mathbb{E}\) denotes the expected value obtained by averaging over all training samples \(\mathbf{x}\). We use the BCE function since we consider here only the seizure versus non-seizure classification problem. In order to train the student GNN from the pre-trained teacher, we minimize a regularized BCE cost, where the regularization term is given by the distillation loss that minimizes the KL divergence between the soft-output of the teacher and student GNNs:
\[\mathcal{L}_{D}(\theta_{t}^{*},\theta_{s})=\mathbb{E}_{\mathbf{x}}\left(KL\big(p(z_{\theta_{t}^{*}}(\mathbf{x},A),T),\,p(z_{\theta_{s}}(\mathbf{x},A),T)\big)\right), \tag{5}\]
where \(\theta_{t}^{*}\) denotes the parameters of the pre-trained teacher. Then, the student network is trained by minimizing the total loss function:
\[L_{S}(\theta_{s})\triangleq(1-\delta)\mathcal{L}_{D}(\theta_{t}^{*},\theta_{s})+\delta\,\mathcal{L}_{CE}(\theta_{s}). \tag{6}\]
Our formulation so far uses the same input for both the student and teacher, and hence, the same number of input channels. This is because the KD formulation
assumes that the inputs to both the student and the teacher are of the same kind, as we discussed in the Preliminaries. However, our ultimate goal is to transfer knowledge to a student that uses the measurements \(\mathbf{x}_{d}\) from a reduced set of \(d<D\) nodes/channels, and not \(\mathbf{x}\). In other words, we wish to train a student model that works on a subgraph \(A^{\prime}\) of the original graph \(A\). We achieve this by modifying the graph used by the student, deleting edges from the full graph with adjacency matrix \(A\) as follows:
\[A^{\prime}=W^{\top}\,A\,W, \tag{7}\]
where \(W\in\mathbb{R}^{D\times d}\) denotes the selection matrix, namely a row permutation of the matrix obtained by stacking the \(d\times d\) identity matrix on top of the \((D-d)\times d\) all-zero matrix; it retains only the subgraph on the chosen subset of \(d\) channels.3 The input \(\mathbf{x}_{d}\) is then given by \(\mathbf{x}_{d}=W^{\top}\mathbf{x}\in\mathbb{R}^{d\times T}\), corresponding to the nodes of the subgraph defined by \(W\). This in turn means that we must use \(z_{\theta_{s}}(\mathbf{x}_{d},A^{\prime})\) and not \(z_{\theta_{s}}(\mathbf{x},A)\) in the total loss function in (6). Further, so that the hidden nodes corresponding to the deleted channels are not pooled in the GNN, we also multiply the output of each hidden layer of the GNN by \(W\). In practice, this means that the student GNN defined on \(D\) nodes can be fed zeroes on the discarded channels at test time, corresponding to having only the reduced set of measurement channels as input for seizure detection. We note that, while the specific application setting used in this work is that of scalp EEG channels, our proposed approach can be applied also to other multi-channel settings such as fMRI, where there is knowledge of connectivity across channels/measurements. The use of GNNs also makes our approach inherently interpretable in terms of connectivity of the brain regions.
Footnote 3: In general, \(A^{\prime}\) may not necessarily be a connected graph, unless specifically regularized to be so.
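A small sketch of how a selection matrix \(W\) as in Eq. (7) can be built from a list of retained channel indices and used to form the reduced adjacency matrix and input is given below; the 19-channel setting, the window length, and the particular indices are assumptions chosen only to illustrate the shapes involved.

```python
import numpy as np

def selection_matrix(D, keep):
    # One column per retained channel; a single 1 marks that channel's index.
    W = np.zeros((D, len(keep)))
    for col, idx in enumerate(keep):
        W[idx, col] = 1.0
    return W

D, T = 19, 1280                     # channels x time samples (illustrative)
A = np.random.rand(D, D)
A = (A + A.T) / 2                   # placeholder symmetric adjacency
x = np.random.randn(D, T)           # full multi-channel input segment

keep = [13, 14, 15, 16]             # hypothetical indices of the retained channels
W = selection_matrix(D, keep)
A_sub = W.T @ A @ W                 # reduced subgraph adjacency, Eq. (7)
x_sub = W.T @ x                     # signal restricted to the retained channels
print(A_sub.shape, x_sub.shape)     # (4, 4) (4, 1280)
```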
We consider three different instances of our model in this work: (a) **G**lobal **S**tudent GNN with **P**re-**S**elected channel reduction (GS-PS) model, (b) **G**lobal **S**tudent GNN with **D**ata-**D**riven channel reduction (GS-DD) model, and (c) **P**ersonalized **S**tudent with **D**ata-**D**riven channel reduction (PS-DD) model. We describe them next.
### GS-PS Model
We first consider the case when the reduced electrodes are preselected, or known already. In particular, we chose the four channels T3, T4, T5, and T6 out of the 19 channels of the 10-20 montage [14] as the reduced electrode set. This is motivated by input from neuroscientists who say that these temporal channels can be relatively more indicative of seizures in general [9]. In this case, the \(W\) matrix from Eq. (7) corresponds to a diagonal matrix with ones only at the indices corresponding to T3, T4, T5, and T6. We also validate the choice of these channels through the following experiment. We conduct an experiment where a new model with the same architecture as the teacher (keeping the full electrode channels) is trained to learn relevance weights \(w\) for each electrode:
this was simply achieved by applying a learnable diagonal matrix \(M\in\mathbb{R}^{D\times D}\) to the input before the GNN such that the effective input to the GNN was defined as \(\mathbf{x}_{M}^{\prime}=M\cdot\mathbf{x}\in\mathbb{R}^{D\times T}\). We notice that the weights assigned to the temporal and some of the occipital electrodes were the highest; in particular, T2, T3, T4, and T5 were given large weights. A more practical reason for the choice of temporal channels is the development of wearable sensors: many state-of-the-art wearable sensors are of the behind-the-ear type, corresponding to these four temporal channels [9, 28]. We apply the proposed GS-PS model for seizure detection by training it on the data from the training patients and applying it to detect seizures on new test patients. In this case, the subgraph is pre-determined.
### GS-DD Model
We next consider the case of learning a student with channel reduction achieved in a completely data-driven manner. We propose to use a Gumbel-softmax (GS) channel selection block akin to the approach pursued in [29]. Our proposed GS-DD model consists of two connected blocks: first, the GS block that selects the subset of channels/electrodes, followed by the GNN block that produces a label, as shown in Figure 1. The details of the GS block are given next.
The Gumbel-Softmax EEG channel selection block was first proposed by Strypsteen and Bertrand [29], where channel selection was achieved through a learnable layer in the Deep Neural Network (DNN) in an end-to-end differentiable manner. The Gumbel-softmax layer represents a relaxation of the discrete selection operation that allows for differentiation [29, 13, 18]. Let \(x_{n}\) indicate the feature vector derived from channel \(n\), and \(x_{new_{i}}\) indicate the \(i\)th channel in the reduced set of channels. During training, the output of each selection neuron \(k\) is given by \(x_{new_{k}}=w_{k}^{T}X\), with \(w_{k}\) sampled from the concrete distribution given by [18]:
\[w_{nk}=\frac{\exp((\log\alpha_{nk}+G_{nk})/\beta)}{\sum_{j=1}^{N}\exp((\log\alpha_{jk}+G_{jk})/\beta)}, \tag{8}\]
with \(G_{nk}\) independent and identically distributed samples from the Gumbel distribution and \(\beta\in(0,+\infty)\) the temperature parameter of the concrete distribution. The effective subset of input node features is computed as \(X_{new}=w^{T}X\). The temperature parameter \(\beta\) controls the extent of this relaxation from the one-hot selection: as \(\beta\) approaches 0, the distribution becomes more discrete and the sampled weights converge to one-hot vectors. The continuous relaxation allows \(w\) to be jointly optimized with the model parameters, and to match the channel selection to the target model. The most pertinent EEG channels are thereby selected without prior expert knowledge or the need for manual feature selection. The learnable parameters \(\alpha\) of this distribution are jointly optimized with the other network weights. At the end of training, the selection layer is made to select discrete channels by hard-thresholding the entries of \(w_{k}\) so that they select only \(K\) channels as \(w_{nk}=\begin{cases}1&\text{if }n=\arg\max_{j}\alpha_{jk}^{*}\\ 0&\text{otherwise},\end{cases}\) where \(\alpha^{*}\) is the learned matrix after training. We note that during test time, the GS block takes the
form of a fixed linear matrix multiplication \(W\) that acts to select the electrode channels. We also note that unlike the pre-selected case presented in Section 3.2, GS-DD model learns a _data-driven subgraph_.
In order to obtain a data-driven channel selection, we use the Gumbel-softmax channel selection block as part of our GNN pipeline shown in Figure 1. In particular, we apply the GNN on the reduced subgraph obtained by selecting only a subset of input EEG channel signals \(X_{new}\), using the adjacency matrix \(A_{new}\) corresponding to the selected channels. As discussed above, the GS block is parameterized by a learnable matrix \(\alpha\in\mathbb{R}^{N\times K}\), where \(N\) is the total number of electrodes, and \(K\) is the number of electrodes we wish to keep after reduction. When fed a sample \(X\), the selection block samples a weight matrix \(W\in\mathbb{R}^{N\times K}\) from the concrete distribution following Equation (8). This can be viewed as a softmax operation, which produces a weight matrix whose columns sum to one and act as continuous relaxations of one-hot vectors. In our experiments, we use a similar method as in [29]. During training, we set \(\beta(t)=\beta_{s}(\beta_{e}/\beta_{s})^{t/B}\), decreasing in an exponential manner, where \(B\) is the total number of training epochs, \(\beta(t)\) is the temperature parameter at epoch \(t\), and \(\beta_{s}\) and \(\beta_{e}\) are respectively the starting and ending values of \(\beta\). In our settings, \(\beta_{s}=100\) and \(\beta_{e}=0.001\). As we noted before, while the complete set of electrodes is indeed used during training of the student GNN, this is not the case during test time as the \(W\) matrix will be set to ones and zeros, thereby not requiring any measurements from the non-selected electrodes.
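A sketch of the concrete-distribution sampling of Eq. (8) together with the exponential temperature schedule described above is given below; the schedule constants follow the stated values, while the logits initialisation and the rest of the code are assumptions made only for illustration.

```python
import numpy as np

def concrete_weights(log_alpha, beta, rng):
    # Eq. (8): Gumbel-softmax relaxation; each column is a relaxed one-hot selector.
    gumbel = -np.log(-np.log(rng.uniform(1e-9, 1.0, size=log_alpha.shape)))
    z = (log_alpha + gumbel) / beta
    z -= z.max(axis=0, keepdims=True)           # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=0, keepdims=True)

def beta_schedule(t, B, beta_start=100.0, beta_end=1e-3):
    # Exponential decay of the temperature over the B training epochs.
    return beta_start * (beta_end / beta_start) ** (t / B)

rng = np.random.default_rng(0)
N, K, B = 19, 4, 100                            # channels, retained channels, epochs
log_alpha = rng.normal(size=(N, K))             # learnable selection logits
for t in (0, 50, 99):
    w = concrete_weights(log_alpha, beta_schedule(t, B), rng)
    print(t, round(w[:, 0].max(), 3))           # columns sharpen towards one-hot
```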
**Channel consolidation** We note that, though we force the weight matrix to select a reduced set of channels, it is possible that a given channel is chosen multiple times since we have not actively enforced that there is no duplication. In order to discourage duplicate channels, we minimize the total loss regularized with the penalty given by [29]: \(\Omega(P)=\lambda\sum_{n=1}^{N}\text{ReLU}(\sum_{k=1}^{K}p_{nk}-\tau)\), where \(\text{ReLU}(\cdot)\) is the rectified linear unit, \(\lambda\) is the weight of the regularization loss, and \(\tau\) the threshold parameter. During training, we set \(\tau(t)=\tau_{s}(\tau_{e}/\tau_{s})^{t/B}\), decreasing in an exponential manner. In our settings, \(\tau_{s}=3\) and \(\tau_{e}=1.1\), and \(\lambda\) is set to 5 to control the strength of the regularization.
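The duplicate-selection penalty \(\Omega\) above can be sketched as follows; the function and variable names and the toy selection probabilities are assumptions.

```python
import numpy as np

def consolidation_penalty(P, tau, lam=5.0):
    # Penalise channels whose total selection probability across the K
    # selection neurons exceeds the (annealed) threshold tau.
    per_channel = P.sum(axis=1)
    return lam * np.sum(np.maximum(per_channel - tau, 0.0))

def tau_schedule(t, B, tau_start=3.0, tau_end=1.1):
    return tau_start * (tau_end / tau_start) ** (t / B)

P = np.full((19, 4), 0.25)                            # toy probabilities, no duplicates
print(consolidation_penalty(P, tau_schedule(0, 100)))  # 0.0: below threshold
```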
Then, we learn the GS-DD model with the regularized student loss, to obtain a seizure detection model that is global and applicable to any patient.
Figure 1: Proposed approach
### PS-DD model
Epileptic seizures vary significantly between individuals, and personalized models could be beneficial in taking into account their unique patterns and characteristics. This motivates us to extend our previous model to a personalized setting for simultaneous electrode reduction and seizure detection for every single patient. As with the GS-PS and GS-DD models proposed in Sections 3.2 and 3.3, our aim here is to arrive at light-weight models for seizure detection that use only a subset of electrode channels using KD, but personalized to the patient. As with the GS-DD model, we let the channels be selected in a data-driven manner. Our hypothesis is that _both knowledge-distillation and personalized models have an important role_ to play in improving the seizure detection performance, _particularly in the cases when the available data is scarce_. The PS-DD model is in its essence the same as the GS-DD model in the architecture, with the crucial difference that the model is now trained in a patient-specific manner. This means that the PS-DD model also learns a _data-driven subgraph for every patient_.
## 4 Numerical Experiments
### Settings
**Dataset** We apply our models for the task of seizure detection on data from the Temple University Hospital EEG Seizure Data Corpus (TUSZ) [20], which is one of the most widely used and diverse datasets, consisting of over 100 patients across a wide range of ages (8-80) with different seizure types, e.g., focal seizure, tonic seizure, generalized non-specific seizure, and myoclonic seizure, recorded over long durations. The data is in the form of 19-channel EEG recordings in the 10-20 electrode placement system. As our work deals with the problem of seizure detection, no distinction is made between seizure types and all seizures were grouped into one class, resulting in a binary classification problem. The selected seizure (ictal) segments ranged between 5 and 60 seconds in length. Corresponding interictal segments of the same length were selected that ended one minute before seizure onset, following the methodology pursued in [9]. This resulted in a balanced dataset of 50% seizure and 50% non-seizure segments. The segments are taken sequentially without overlap, and every selected segment is then divided into windows of 5 seconds for both classes. The TUH dataset has two separate sets of recordings, train and dev, which correspond to disjoint sets of patients for training and test, respectively. Similarly to the literature, we use only the patients from train for training models, and the test patients from dev for testing the learnt models, on which the performance is reported. Finally, we have a total of 14382 samples for training and 4529 samples for testing, each sample being a 5-second window of multi-channel EEG signal.
**Data preprocessing** As customary in EEG signal processing, each sample is then filtered with a Butterworth bandpass filter of order 5 between 0.5 and 50 Hz to remove the artifacts and noise. Similarly to [25], the features were
calculated for each EEG channel: energy of the signal filtered in frequency bands (from 0.5 Hz to 30 Hz with a bandwidth of 3 Hz and from 30 Hz to 50 Hz with a bandwidth of 10 Hz), Hjorth complexity and mobility, decorrelation time, L2-norm of the approximation and detail coefficients obtained from a six-level wavelet decomposition using a Daubechies db4 wavelet, log amplitudes of the non-negative frequency components. This results in 647 features in total for each sample/window. The features are then normalized component-wise and taken as input \(\mathbf{x}\) to the GNN along with the distance based adjacency matrix.
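As an illustration of this preprocessing, a minimal SciPy sketch of the band-pass filtering and the band-energy features is given below; the sampling rate is an assumption, and the remaining features listed above (Hjorth parameters, decorrelation time, wavelet-coefficient norms, log spectral amplitudes) are not reproduced here.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(x, low, high, fs, order=5):
    # Zero-phase Butterworth band-pass filter applied along the time axis.
    sos = butter(order, [low, high], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x, axis=-1)

def band_energies(x, fs, bands):
    # Energy of the filtered signal in each frequency band, per channel.
    return np.stack([np.sum(bandpass(x, lo, hi, fs) ** 2, axis=-1)
                     for lo, hi in bands], axis=-1)

fs = 256                                    # assumed sampling rate (Hz)
x = np.random.randn(19, 5 * fs)             # one 5-second, 19-channel window
x = bandpass(x, 0.5, 50.0, fs)              # artefact/noise removal, 0.5-50 Hz
bands = [(f, f + 3) for f in np.arange(0.5, 30, 3)] + [(30, 40), (40, 50)]
features = band_energies(x, fs, bands)
print(features.shape)                       # (19, number_of_bands)
```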
**Training data** In order to train the teacher, no distinction is made between patients or segments and the entire training data is used to train the teacher. All the samples from all the test patients are used as test data. For the training the global models of GS-PS, and GS-DD, we use the data of all training patients during training and data from all test patients for testing. On the other hand, since the PS-DD model is trained for each patient separately, the training and test data segments are obtained by splitting the _segments_ of the given patient randomly. Further, in order to understand the effect of personalization, we divide the patients into three bands based on the amount of data segments they possess as shown in Table 1.
**Model Training** We use a two-layer GCN network with 32 hidden nodes in each hidden layer as the teacher model. It is trained with a batch size of 64 and a learning rate of \(10^{-5}\). The student network in all three cases of GS-PS, GS-DD, and PS-DD is a light-weight model consisting of just a one-layer GCN with a single hidden node. We note that the number of parameters to learn in the student is _just_ 3% of that of the teacher. Each of the three models is trained and tested both with and without KD in order to determine the contribution of the teacher knowledge. As described in Equation (6), the KL divergence loss is used as the distillation loss function and the binary cross-entropy loss is used for the student loss function. The hyperparameters in the total loss are obtained by performing 5-fold cross-validation. We set \(T=5\), and the \(\delta\) values are set to \(0.1, 0.5, 0.8\). For GS-DD, we consider the case of \(K=4\) channels to compare the performance with that of GS-PS using the four temporal channels. For the PS-DD model, we use \(K=2\) electrodes for every patient.
**Evaluation metrics** Following [5, 2], we evaluate the performance of the three models using two standard metrics: f1-score and the Area Under the curve of the Receiver Operating Characteristic (AUROC), which are used regularly in evaluating seizure detection. In all the cases, the performance is averaged over the different test patients.
| **Data Bands** | **Number of Segments \(N\)** | **Batch Size** | **Epochs** |
| --- | --- | --- | --- |
| Rare Data | \(4\leq N<20\) | 2 | 20 |
| Mid Data | \(20\leq N<100\) | 16 | 100 |
| Rich Data | \(N\geq 100\) | 64 | 100 |

Table 1: Three bands of patients.
### Detection Performance results
We now report the performance of the different approaches.
**GS-PS model** The performance of the teacher and the global student with the pre-selected temporal channels is presented in Table 2. For the pre-selected student, we observe that KD significantly improves the performance in terms of the f1-score, which becomes comparable to that of the teacher.
**GS-DD model** Unlike in the temporal channel pre-selection case, we see that the performance remains relatively constant across the different levels of KD. This is probably because the GS selection already results in a high performance, and the teacher does not offer notable improvement.
**PS-DD model** In the case of a personalized student GNN with only two electrodes (that we call PS-DD 2), we observe that the performance improves as \(\delta\) is increased, meaning more emphasis is given to the patient's data over the teacher's knowledge, with the highest performance obtained at \(\delta=0.8\). On the other hand, we also observe that completely relying on the patient's data and not using the teacher (\(\delta=1\)) reduces the performance. Further, we note that the performance of the student even without teacher's knowledge (\(\delta=1\)) is generally much better than that of the teacher or the global student. This in turn supports our intuition and hypothesis that personalization also plays a significant role in improving seizure detection performance. In the two plots in Figure 2, we depict the distributions of test f1 and AUROC of all test patients in the circumstances with or without KD, respectively for the PS-DD model. The averaged performances are indicated in numbers in the figures. The dashed red/green lines show the general performances of models without personalization. When trained on the general population, we obtain the test f1 of models with and without KD as 0.7 and 0.4, respectively. Whereas after personalization, the average test f1 are improved by 16% and 50% to around 0.8, corresponding to with and without KD, respectively. This shows that by tackling the diversity in EEG seizure data on a large population, personalization has the potential to improve seizure detection. The average test AUROC is improved by 8% to above 0.8. The detailed results are reported in Table 2. However, the average performance with KD is only slightly higher than the average performance without KD in both the metrics. This in turn motivated us to look into the performance in the three data bands individually next.
### Performance analysis
To better understand the effectiveness of our models, we do a detailed performance analysis by further dividing patients into three bands based on the number of seizure segments (rare-data band, mid-data band, rich-data band) and delve into the performances, respectively as shown in Table 1. In Table 3, we report the seizure detection results when the model training relies differently on the new patient data to different levels given by \(\delta=0.1\), \(\delta=0.5\) and \(\delta=.8\), respectively, in (6). The setting of \(\delta=0.8\) corresponds to the case where the student training relies more heavily on unseen patient-specific data than the teacher. Figure 3 shows the differences in the percentages of cases in each band where KD boosted
the model performance (in terms of test f1 and test AUROC). Overall, KD helps 72% (47 out of 65) of the patients in the rare-data band improve their model testing performance, but only 49% (26 out of 53) of the patients in the mid-data band and 54% (18 out of 33) of the patients in the rich-data band benefit from the teacher. In general, we observe the tendency that patients with scarce data benefit the most from KD. This gives us the motivation to further delve into the rare-data band case.
In the rare-data band, we notice that we constantly encounter four patients with the lowest performance that bias the overall performance significantly. It turns out that these four cases correspond to the patients with the least training data. We refer to these cases as the four "extremes" in our experiments. Since the TUSZ dataset is rather diverse and we wish to see the averaged performance without a strong bias, we chose to exclude the extremes and recompute the performance metrics. We notice from Table 4 that the performance improves overall by excluding the extremes, and the best performance is obtained when \(\delta=0.8\). This indicates that the effectiveness of KD in personalized settings varies widely with the amount of data each patient possesses, and potentially across the patient types (since the dataset includes different types of seizures
| Model | Personalization | w/o KD f1 | w/o KD AUROC | \(\delta=0.1\) f1 | \(\delta=0.1\) AUROC | \(\delta=0.5\) f1 | \(\delta=0.5\) AUROC | \(\delta=0.8\) f1 | \(\delta=0.8\) AUROC |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Teacher | – | 0.689 | 0.781 | – | – | – | – | – | – |
| GS-PS | \(\times\) | 0.401 | 0.755 | 0.683 | 0.766 | – | – | – | – |
| GS-DD | \(\times\) | 0.690 | 0.763 | 0.695 | 0.761 | 0.697 | 0.763 | 0.693 | 0.764 |
| PS-DD | ✓ | 0.788 | 0.814 | 0.755 | 0.777 | 0.784 | 0.829 | **0.795** | **0.829** |

Table 2: Test Results with Different Models
that we do not currently account for) and also varies with the change of the weight of the student loss \(\delta\). In our experiments, \(\delta=0.8\) gave the best scores on average. A more exhaustive approach would be to compute personalized models with personalized \(\delta\), but that is beyond the scope of the current work.
**Effectiveness of Knowledge Distillation when lacking informative channels/signals** To further test the effectiveness of both personalization and KD, we arbitrarily select to keep only signals from channels FP1 and FP2, which belong to the frontal region, suggested to be among the less informative regions for epileptic seizure detection. The Gumbel-Softmax channel selection block is not involved in this section. The experiment is conducted on the rare-data band, with the hypothesis that the combination of personalization and KD can help compensate for the adverse situation brought by a) lack of data, and b) lack of informative channels. With only personalization but no KD, the test f1 and AUROC scores of 53.8% (35 out of 65) of the patients still exceed 0.65, yielding fairly good performances. In the remaining cases where personalization alone is not sufficient, 90% (27 out of 30 patients) benefit from the teacher. Thus, even with the allegedly least informative channels, we obtain rather promising results for 53.8% of the cases, and for the rest of the cases the integrated application of personalization and KD is observed to be effective for detecting epileptic seizures. We thus see that the combination leverages the strengths of both techniques to provide highly accurate results in scarce-data scenarios.
| **Data Bands** | **Personalization** | w/o KD f1 | w/o KD AUROC | \(\delta=0.1\) f1 | \(\delta=0.1\) AUROC | \(\delta=0.5\) f1 | \(\delta=0.5\) AUROC | \(\delta=0.8\) f1 | \(\delta=0.8\) AUROC |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Rare Data | ✓ | 0.786 | 0.791 | 0.783 | 0.783 | 0.790\* | 0.816\* | **0.798**\* | **0.827**\* |
| Mid Data | ✓ | **0.791** | **0.837** | 0.726 | 0.756 | 0.774 | 0.819 | 0.786 | 0.833 |
| Rich Data | ✓ | 0.790 | 0.821 | 0.749 | 0.800 | 0.786 | **0.829** | **0.801**\* | 0.828\* |

Table 3: PS-DD Test Results on Different Bands
Figure 4: PS-DD Test Results (\(\delta=0.8\)) on Rare-Data Band. The right column shows the results with 4 patients with extremely sparse data and poor performances excluded (Patient ID: ”00005672”, ”00008706”, ”000006535”, ”00004596”)
**Hierarchical Clustering of Patients** We now investigate whether the patients naturally form clusters when they are grouped according to the learnt electrode channels. We use hierarchical clustering on the learnt selection matrices \(W\). Hierarchical clustering is a method of cluster analysis that builds a hierarchy of clusters by successively splitting or merging smaller clusters based on the Euclidean distance between them. The resulting hierarchy is shown as a dendrogram in Figure 5. We observe that no clearly significant clusters emerge except for one large cluster and a few outliers, which could be because the patients and seizure signals in the TUSZ dataset are quite diverse. We also note that we have made no distinction between seizure types (about 6 types, with widely varying numbers of samples per type) in our analysis, which might explain the single big cluster. While some of the outliers corresponded to patients with a rare disease (Rasmussen's syndrome), it is unclear if the outliers show specific signature characteristics that separate them clinically from the main cluster. Further, we see that the main cluster diameter is relatively large, indicating that there is significant variability in the selected channels across the different patients. In future work, we plan to pursue alternative clustering strategies and features, and also to mitigate the diversity by, for example, filtering out only the focal seizure signals.
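A sketch of this clustering analysis using SciPy is given below; the random matrices stand in for the learnt per-patient selection matrices \(W\), and the linkage choice is an assumption.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
num_patients, N, K = 30, 19, 2
Ws = rng.random((num_patients, N, K))       # placeholder per-patient W matrices
X = Ws.reshape(num_patients, -1)            # one flattened selection pattern per patient

Z = linkage(X, method="ward", metric="euclidean")
labels = fcluster(Z, t=4, criterion="maxclust")   # cut the tree into at most 4 clusters
print(labels)
# scipy.cluster.hierarchy.dendrogram(Z) would produce a plot like Figure 5.
```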
| \(\delta\) | Personalization | Extremes excluded (\(\times\)): f1 | Extremes excluded (\(\times\)): AUROC | Extremes included (✓): f1 | Extremes included (✓): AUROC |
| --- | --- | --- | --- | --- | --- |
| 0.1 | ✓ | 0.794 | 0.806 | 0.783 | 0.783 |
| 0.5 | ✓ | 0.813 | 0.860 | 0.790 | 0.816 |
| 0.8 | ✓ | **0.832** | **0.871** | 0.798 | 0.827 |
| 1 (no KD) | ✓ | 0.816 | 0.833 | 0.786 | 0.791 |

Table 4: PS-DD Test Results on Rare-Data Bands
Figure 5: Hierarchical clustering of patients based on their Gumbel-Softmax channel selection patterns
## 5 Conclusions and future work
We proposed an approach to transfer the knowledge from a pre-trained GNN-based seizure detector to the case when the number of measurement electrodes is reduced. We showed that it is possible to obtain models that are (i) light-weight (requiring just 3% of the parameters of the sophisticated teacher network) and (ii) work on reduced electrodes (requiring as low as only 10% of the original electrodes), yet offer superior performance in seizure detection, particularly in the personalized setting. The approach resulted in a patient-specific choice of the reduced set of electrodes. Our experiments demonstrated the merit of both knowledge distillation and personalization, particularly when dealing with patients with scarce data. We observe that there is a trade-off between the use of prior information (teacher) and patient-specific data: although teacher knowledge is necessary, the relative importance should be higher on the patient-specific data for maximum performance. We believe that these results show that our approach can provide meaningful insights and guidelines in the practical setting where there is a need to move from full scalp electrode measurements to reduced form-factor measurements, such as personalized wearable devices. We have currently restricted our analysis to a relatively simple GNN teacher model and used the graph given by the physical placement of electrodes. The quality of the teacher and the graph used both translate into the quality of the student model, and hence, we believe that a more sophisticated GNN could be employed to further improve overall performance. In the future, it would also be interesting to look into multi-class seizure classification and identify the different types of seizures. |
2305.09022 | It Takes Two to Tango: Navigating Conceptualizations of NLP Tasks and
Measurements of Performance | Progress in NLP is increasingly measured through benchmarks; hence,
contextualizing progress requires understanding when and why practitioners may
disagree about the validity of benchmarks. We develop a taxonomy of
disagreement, drawing on tools from measurement modeling, and distinguish
between two types of disagreement: 1) how tasks are conceptualized and 2) how
measurements of model performance are operationalized. To provide evidence for
our taxonomy, we conduct a meta-analysis of relevant literature to understand
how NLP tasks are conceptualized, as well as a survey of practitioners about
their impressions of different factors that affect benchmark validity. Our
meta-analysis and survey across eight tasks, ranging from coreference
resolution to question answering, uncover that tasks are generally not clearly
and consistently conceptualized and benchmarks suffer from operationalization
disagreements. These findings support our proposed taxonomy of disagreement.
Finally, based on our taxonomy, we present a framework for constructing
benchmarks and documenting their limitations. | Arjun Subramonian, Xingdi Yuan, Hal Daumé III, Su Lin Blodgett | 2023-05-15T21:12:07Z | http://arxiv.org/abs/2305.09022v1 | # It Takes Two to Tango: Navigating Conceptualizations of NLP Tasks and Measurements of Performance
###### Abstract
Progress in NLP is increasingly measured through benchmarks; hence, contextualizing progress requires understanding when and why practitioners may disagree about the validity of benchmarks. We develop a taxonomy of disagreement, drawing on tools from measurement modeling, and distinguish between two types of disagreement: 1) how tasks are conceptualized and 2) how measurements of model performance are operationalized. To provide evidence for our taxonomy, we conduct a meta-analysis of relevant literature to understand how NLP tasks are conceptualized, as well as a survey of practitioners about their impressions of different factors that affect benchmark validity. Our meta-analysis and survey across eight tasks, ranging from coreference resolution to question answering, uncover that tasks are generally not clearly and consistently conceptualized and benchmarks suffer from operationalization disagreements. These findings support our proposed taxonomy of disagreement. Finally, based on our taxonomy, we present a framework for constructing benchmarks and documenting their limitations.
## 1 Introduction
Claims of progress in NLP are often premised on how models perform on benchmarks for various NLP tasks1 (e.g., coreference resolution, question answering) (Wang et al., 2018, 2019; Hu et al., 2020; Gehrmann et al., 2021). Benchmarks instantiate a task with a specific format, dataset of correct input-output pairs, and an evaluation metric (Bowman and Dahl, 2021), and they are intended to serve as measurement models for performance on the task. On the one hand, benchmarks allow for performance results to be easily compared across a rapidly-rising number of NLP models (Schlangen, 2021; Ruder, 2021). Additionally, many NLP benchmarks are easily accessible via open-source platforms (Lhoest et al., 2021), which reduces the need of practitioners to construct new evaluation datasets and metrics from scratch. However, prior research has identified numerous threats to the validity of benchmarks (i.e., how well benchmarks assess the ability of models to correctly perform tasks). These threats include spurious correlations and poorly-aligned metrics (refer to Table 4 in the appendix).
Footnote 1: We disambiguate “benchmarks” and “tasks” in Appendix A.
However, little literature has surfaced sources of _disagreement_ among NLP practitioners about benchmark validity, which is paramount to contextualize progress in the field. Hence, we develop a taxonomy of disagreement based on measurement modeling (from the social sciences (Adcock and Collier, 2001; Jacobs and Wallach, 2021)). Our taxonomy critically distinguishes between disagreement in how tasks are conceptualized and how measurements of model performance are operationalized (Blodgett et al., 2021). It thereby goes beyond prior examinations of NLP benchmarking methodology, which assume that tasks are generally clearly and consistently understood from person to person (Schlangen, 2021; Bowman and Dahl, 2021). This is important because our taxonomy captures that practitioners may perceive a benchmark for a task to have poor validity because they conceptualize the task differently than the benchmark creators do, and not simply because of the creators' oversights or mechanistic failures when constructing the benchmark. (We validate this hypothesis empirically in § 5.1.) Furthermore, our taxonomy addresses that benchmarks can shape practitioners' conceptualization of a task.
Ultimately, our taxonomy equips practitioners with a language to structure their thinking around and communicate their perceptions of benchmark validity. To provide evidence for our taxonomy, we conduct a survey of practitioners (\(N\) = 46) about
their opinions on different factors that affect benchmark validity: how contested tasks are and the quality of common benchmark formats, datasets, and metrics for tasks. We further conduct a meta-analysis of relevant literature to understand how tasks are conceptualized. Our meta-analysis and survey across eight tasks, ranging from coreference resolution to question answering, uncover that tasks are generally not clearly and consistently conceptualized and benchmarks suffer from operationalization disagreements. These findings support our taxonomy of disagreement. Finally, based on our taxonomy, we present a framework for constructing benchmarks and documenting their limitations.
## 2 Related Work
**Community surveys** Researchers have conducted community surveys of NLP evaluation practices, often to surface perceptions that are not stated in related literature. Michael et al. (2022) survey NLP practitioners to "elicit opinions on controversial issues" around benchmarking. Zhou et al. (2022) survey NLG practitioners to uncover "goals, community practices, assumptions, and constraints that shape NLG evaluations." Dev et al. (2021) survey non-binary individuals to understand how they are not included in NLP model bias evaluations. We survey NLP practitioners to excavate perceptions of how contested tasks are and how well benchmarks measure model performance on tasks.
**Benchmark validity** A few previous works have studied benchmark validity through a measurement modeling lens Jacobs and Wallach (2021). Blodgett et al. (2021) analyze NLP bias evaluation benchmarks to inventory conceptualization and operationalization disagreements that threaten their validity as measurement models for stereotyping. Liao et al. (2021) review papers from various machine learning subfields to characterize benchmarks from the angles of internal and external validity. Raji et al. (2021) argue that benchmarks cannot measure "progress towards general ability on vague tasks such as [...] 'language understanding'," and hence lack construct validity. We draw from measurement modeling to navigate how perceptions of validity issues with NLP benchmarks arise.
## 3 Taxonomy of Disagreement
We present our taxonomy of disagreement about the validity of NLP benchmarks (displayed in Figure 1). Drawing from measurement modeling Jacobs and Wallach (2021), our taxonomy critically distinguishes between disagreement in: 1) how a task \(\tau\) is conceptualized, and 2) how a benchmark \(B_{\tau}\) operationalizes measurements of model performance on \(\tau\). We provide evidence for our taxonomy in § 5, via our survey results and a meta-analysis of relevant literature.
### Task Conceptualization
\(\tau\) is _contested_ when it lacks consistency or clarity in how it is conceptualized. In this case, because \(B_{\tau}\) operationalizes measurements for model performance on \(B_{\tau}\)'s creators'2 conceptualization of \(\tau\), there will necessarily be disagreement about the content validity of \(B_{\tau}\)Jacobs and Wallach (2021). Disagreement in \(\tau\)'s conceptualization can stem from the following constructs with which \(\tau\) is inextricably entangled:
Footnote 2: By “creators,” we refer to all individuals involved in the construction of \(B_{\tau}\), including crowdworkers. We do not claim that all the creators of \(B_{\tau}\) necessarily have nor does \(B_{\tau}\) necessarily encode a consistent conceptualization of \(\tau\). For example, the Universal Dependencies Treebank attempts to consolidate different conceptualizations of dependency parsing (Nivre et al., 2016); hence, it likely fails to exactly match any individual linguist’s conceptualization of syntax.
Figure 1: Bird’s eye view of our taxonomy comprising conceptualization and operationalization disagreements.

* **Model capabilities:** Practitioners may disagree or lack clarity on the set of model capabilities \(C_{\tau}\) that they assume \(\tau\) involves Gardner et al. (2019); Ribeiro et al. (2020); Schlangen (2021). Our conceptualization of \(C_{\tau}\) is broader than "cognitive capabilities" Paullada et al. (2021), encompassing e.g., handling various genres of text. However, \(C_{\tau}\) can also include the coarse-grained capability of performing \(\tau\) correctly. In contrast to Schlangen (2021), we argue that practitioners may determine \(C_{\tau}\) in a top-down or bottom-up manner. They may first conceptualize \(\tau\) as a specific real-world application and identify \(C_{\tau}\) required to meet the needs of application users Cao et al. (2022). Alternatively, practitioners may first identify \(C_{\tau}\) that they believe to be linguistically interesting or crucial to general-purpose language systems, and subsequently devise \(\tau\) such that \(C_{\tau}\) is necessary to perform \(\tau\) correctly Pericliev (1984); Schlangen (2021); Mahowald et al. (2023). In both cases, we gauge the extent to which a model possesses \(C_{\tau}\) by proxy, by attempting to measure its performance on \(\tau\).
* **Performance correctness:** Practitioners may disagree or lack clarity on what constitutes performing \(\tau\) correctly Jamison and Gurevych (2015); Baan et al. (2022); Plank (2022). This could include different perspectives on correct outputs \(y_{\tau}\), as well as acceptable methods \(M_{\tau}\) and unacceptable methods \(\neg M_{\tau}\) for performing \(\tau\) correctly Teney et al. (2022).
* **Essentially contested constructs:** Practitioners often disagree or lack clarity on essentially contested constructs \(E_{\tau}\) entangled with \(\tau\). A construct is essentially contested when its significance is generally understood, but there is frequent disagreement on what it looks like (e.g., language understanding, justice) Gallie (1955). Developing criteria for whether a construct is essentially contested has been a subject of philosophical study for decades. For instance, Gallie (1955) posited that essentially contested constructs must have "reciprocal recognition of their contested character among contending parties" and "an original exemplar that anchors conceptual meaning," among other characteristics Collier et al. (2006).
Model capabilities, performance correctness, and essentially contested constructs are mutually-building. \(C_{\tau}\) (capabilities assumed to be involved to perform \(\tau\) correctly) rely on a particular understanding of \(y_{\tau}\). Similarly, \(M_{\tau}\) (acceptable methods for performing \(\tau\) correctly) may overlap with \(C_{\tau}\).
### Perceptions of Benchmark Validity
Our taxonomy connects disagreement in how \(\tau\) is conceptualized to impressions of the validity of \(B_{\tau}\) (i.e., how well \(B_{\tau}\) operationalizes measurements of model performance on \(\tau\)). In particular, there are two reasons for perceptions of poor benchmark validity: disagreements in how the task is conceptualized, and operationalization disagreements. We delve into these reasons, with examples, in § 5.
* **Conceptualization disagreements:** Disagreements in how practitioners fundamentally conceptualize an aspect of \(\tau\) (e.g., \(C_{\tau}\), \(y_{\tau}\), \(M_{\tau}\), \(\neg M_{\tau}\), \(E_{\tau}\)) necessarily yields disagreements about the content validity of \(B_{\tau}\). For example, Williams et al. (2018) construct MNLI because they conceptualize natural language inference as requiring models to handle various text genres, which they perceive SNLI "falls short of providing a sufficient testing ground for" because "sentences in SNLI are derived from only a single text genre." Additionally, practitioners' conceptualizations of tasks can evolve over time, and even be influenced by the benchmarks with which they work. For instance, SQuAD arguably radically shifted practitioners' conceptualizations of QA from open-ended information retrieval to reading comprehension-style questions Rajpurkar et al. (2016). As such, constructing valid benchmarks for a task can be a game with a shifting goalpost.
* **Operationalization disagreements:** Consider a set \(P_{B_{\tau}}\) of practitioner(s) whose conceptualization of an aspect of \(\tau\) aligns with that of the creators of \(B_{\tau}\). Operationalization disagreements are choices made by the creators of \(B_{\tau}\) (with respect to task format, dataset, and metric) that, even within \(P_{B_{\tau}}\), engender divergent perceptions of \(B_{\tau}\)'s validity. As an example, consider practitioners \(P_{B_{\tau}}\) who believe that metrics for machine translation quality should "yield judgments that correlate highly with human judgments" (Pillutla et al., 2021). Pillutla et al. (2021), motivated by their impression that popular automatic evaluation metrics in NLG (e.g., BLEU, ROUGE) "weakly" operationalize how humans judge machine translations, propose a new metric, MAUVE.
We provide an extended discussion of conceptualization and operationalization disagreements in Appendix C.
## 4 Survey Methodology
With our taxonomy in mind, we conduct a survey of NLP practitioners3 (\(N=46\)) to surface and understand for various NLP tasks, practitioners' perceptions of: **(1)** the extent to which the tasks appear to have a clear and consistent conceptualization and **(2)** the quality of benchmarks (with respect to task format, dataset, and metric). We ultimately chose to include the following tasks in our survey: Sentiment Analysis (Sent), Natural Language Inference (NLI), Question Answering (QA), Summarization (Sum), Machine Translation (MT), Named-Entity Recognition (NER), Coreference Resolution (Coref), and Dependency Parsing (Dep). We detail our task selection protocol in Appendix D.
Footnote 3: Following Zhou et al. (2022), by “practitioners,” we refer to academic and industry researchers, applied scientists, and engineers who have experience with NLP tasks or evaluating NLP models or systems.
**Survey topics** In our survey, we begin by asking participants about their background (i.e., occupation and experience with NLP) to understand the demographics of our sample. We then inquire into participants' initial impressions of how current state-of-the-art NLP models perform on various NLP tasks; we do this prior to asking participants to engage more critically with task definitions and benchmarks, so as not to sway their responses. Subsequently, for each task, we ask participants about their familiarity with the task, and if they are familiar, their perceptions of the **(a)** clarity and consistency of the task's definition or conceptualization, **(b)** extent to which common task formats capture the underlying language-related skill, **(c)** quality of benchmark datasets and metrics, and **(d)** progress on the task. We utilize perceptions of **(a)** as a proxy for how contested tasks are across practitioners. We do this because it is not feasible to collect and compare participants' raw task conceptualizations in a quantifiable manner. Furthermore, we collect perceptions of **(b)** and **(c)** to capture conceptualization and operationalization disagreements across benchmarks generally. We do not inquire into participants' impressions of **(b)** and **(c)** for specific benchmarks in order to keep the survey to a reasonable length and obtain a sufficient sample size.
For all survey questions that ask participants to rate their perception, we provide them with a scale that ranges from 1 to 6 with articulations of what 1 and 6 mean in the context of the question. We include the entirety of our survey and survey results in Appendix H, and discuss participant guidance in Appendix E.
**Survey recruitment and quality control** As seen in Table 1, our sample is heavily skewed towards academic researchers; we detail our participant recruitment protocol and IRB approval in Appendix G. We additionally document our quality control measures in Appendix F.
## 5 Results
### Task Conceptualization
Figure 2 shows how survey participants perceive the clarity and consistency with which various NLP tasks are conceptualized. We observe that:
* **Tasks are not perfectly clearly or consistently conceptualized.** No task in Figure 2 received a score of 6 from all participants.
* **Tasks are conceptualized with varying levels of clarity and consistency.** The tasks in Figure 2 exhibit a range of average and median conceptualization scores. NLI and Sent appear to have objectives that are less clearly and consistently understood by practitioners, while Coref and MT seem to be better defined.
* **Practitioners diverge in their impressions of how clearly and consistently the NLP community conceptualizes a task.** Many tasks in Figure 2 have a large interquartile range, and for NLI and Sent, scores span from 2 to 6.
To further provide evidence for these observations, we leverage our taxonomy (in particular, the sources of disagreement in task conceptualization described in SS 3.1) and relevant literature.
**Model capabilities** In order to understand disagreement about involved capabilities \(C_{\tau}\) for the tasks, we meta-analyze benchmarks that survey participants mention. Specifically, for each task, we first select the 2-4 most frequently mentioned benchmarks; we then perform light open coding4
| Role | # |
| --- | --- |
| Works on deployed systems | 6 |
| Industry practitioner (not researcher) | 7 |
| Industry researcher | 10 |
| Academic researcher | 32 |

Table 1: Demographics of survey participants. Some participants identified with more than one role.
on the papers that initially proposed these benchmarks in order to identify model capabilities5 that the authors claim the benchmark assesses.
Footnote 5: We restrict our attention to stated capabilities that lie below the surface of the capability of performing the task correctly (Schlangen, 2021). Some annotated datasets when proposed, were not intended for model evaluation, but were later repurposed as benchmarks (e.g., Penn Treebank (Marcus et al., 1993)).
We find that for each task, stated capabilities overlap but often vary across benchmarks, suggesting disagreement in task conceptualization. For instance, for Sum, the authors of XSum claim that the benchmark assesses whether models can generate novel language, handle linguistic phenomena, and handle various domains (Narayan et al., 2018), while the authors of CNN/Daily Mail claim that this benchmark gauges whether models possess benchmark-external knowledge (Nallapati et al., 2016); however, authors of both benchmarks intend to test language understanding. We present all our meta-analysis results in Appendix J.
**Performance correctness** Correct outputs \(y_{\tau}\) for a task may be inherently disagreed upon or unclear. For instance, in MT, the adequacy of translations in \(y_{\tau}\) is subjective (White and O'Connell, 1993); further, it can be unclear how to translate lexical and syntactic ambiguity in the source language (Pericliev, 1984; Baker et al., 1994), or translate from a language without grammatical gender into one with it (Gonen and Webster, 2020). We present additional examples in Table 2.
Practitioners can also disagree about acceptable methods \(M_{\tau}\) and unacceptable methods \(\neg M_{\tau}\) for performing a task correctly. For example, Sugawara et al. (2020) expect models to take certain actions when performing reading comprehension, e.g., {recognize word order, resolve pronoun coreferences} \(\subset M_{\tau}\). On the other hand, numerous works have raised concerns about models exploiting annotation artifacts in NLI (Gururangan et al., 2018; Poliak et al., 2018) and QA benchmarks (Si et al., 2019; Kavumba et al., 2019; Chen and Durrett, 2019), which suggests that they view exploiting artifacts as part of \(\neg M_{\tau}\).
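For readers unfamiliar with such artifact probes, the sketch below (ours, not from the paper) shows the usual diagnostic: train a simple classifier on hypotheses alone and check whether it beats chance, which would suggest that labels can be predicted without the premise. The inline examples and the scikit-learn pipeline are illustrative stand-ins; in practice one would load a real NLI benchmark such as SNLI or MNLI.

```python
# Hedged sketch of a hypothesis-only artifact probe for NLI (illustrative data only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Toy stand-in for (premise, hypothesis, label) triples; replace with SNLI/MNLI in practice.
examples = [
    ("A man plays guitar.", "Someone is making music.", "entailment"),
    ("A dog runs in a park.", "The dog is sleeping.", "contradiction"),
    ("Kids are at a table.", "The kids are outside.", "contradiction"),
    ("A woman reads a book.", "A person is reading.", "entailment"),
    ("Two people are talking.", "They are discussing sports.", "neutral"),
    ("A chef cooks pasta.", "Food is being prepared.", "entailment"),
    ("A cat sits on a couch.", "Nobody is home.", "neutral"),
    ("A boy kicks a ball.", "The boy is not moving.", "contradiction"),
]
hypotheses = [h for _, h, _ in examples]
labels = [y for _, _, y in examples]

X_train, X_test, y_train, y_test = train_test_split(
    hypotheses, labels, test_size=0.25, random_state=0
)
# The premise is never shown to the model: any accuracy well above chance
# points to annotation artifacts in the hypotheses.
probe = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
probe.fit(X_train, y_train)
accuracy = accuracy_score(y_test, probe.predict(X_test))
print(f"hypothesis-only accuracy: {accuracy:.2f} (chance is about 0.33 for 3-way NLI)")
```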
**Essentially contested constructs \(E_{\tau}\)** pose an issue when practitioners incorrectly presuppose that \(E_{\tau}\) have clear and consistent conceptualizations, thus failing to communicate how they personally understand \(E_{\tau}\). We present examples of essentially contested constructs \(E_{\tau}\) entangled with various tasks in Table 2, elaborating on a few in this section. Sent presupposes that the essentially contested construct "sentiment:" 1) has a clear and consistent definition (e.g., falls on a spectrum between "positive" and "negative"); 2) can be gleaned from text alone; and 3) admits expressions that are universally or predominantly interpreted the same way from person to person. However, there exists "divergences of sentiments about different concepts" across cultures (Heise, 2014), and hence "sentiment" \(\in E_{\tau}\). Furthermore, Coref, in asking if two expressions refer to the same entity, presupposes that the essentially contested construct "identity" is clearly and consistently understood, and thus "identity is never adequately defined" (Recasens et al., 2010).
Often, \(C_{\tau}\) and \(E_{\tau}\) overlap. As revealed by our meta-analysis of model capabilities, practitioners may believe that performing certain tasks involves:
* **Possessing benchmark-external knowledge:** But, what constitutes benchmark-external knowledge is left ambiguous. For example, in QA, questions may involve "commonsense knowledge" (Talmor et al., 2019; Schlegel et al., 2020), whose constitution is essentially unclear and inconsistently understood (Mueller, 2015). It is also unclear how much external knowledge and context NER requires to disambiguate entities (Ratinov and Roth, 2009).
* **Being on par with humans:** However, practitioners often do not specify the humans (e.g., crowdworkers, trained syntacticians) with whom they would like models to be on par, or use vague or problematic language in their specifications (e.g., "normally-abled adults whose first language is English" Levesque et al. (2011)).

Figure 2: Perceived clarity and consistency of task definition. Orange lines indicate median score, while dashed lines indicate average score.
We discuss additional examples of essentially contested constructs in Appendix K.
### Perceptions of Benchmark Validity
Figure 3 depicts, for various tasks, how survey participants' perceptions of the quality of common benchmark task instantiations, datasets, and metrics (which are central to benchmark validity) vary in relation to their perceptions of the clarity and consistency of how the task is defined. These plots show that there is generally a positive association between perceptions of benchmark validity and perceptions of how clearly and consistently the task is conceptualized. This observation indicates that benchmarks suffer from conceptualization disagreements. However, it could also reflect that NLP practitioners collapse task contestedness onto their perceptions of benchmark validity.
The plots also demonstrate that the association (especially between perceptions of metric quality and task contestedness) is weak, with seemingly well-defined tasks like MT facing impressions of low-quality metrics. This association weakness suggests that benchmarks suffer from operationalization disagreements. To provide evidence for our findings, we leverage relevant literature.
**Conceptualization disagreements** We describe some disagreements in the conceptualization of NLP tasks and provide examples of resultant conceptualization disagreements in Table 2.
**Operationalization disagreements** Operationalization disagreements can be attributed to various factors. Measurement modeling naturally provides us with a language to categorize and discuss these factors, and in the process, theorize about the real world. Hence, we taxonomize operationalization disagreements through the lens of different threats to validity in the measurement modeling literature.
* **Face validity:** Benchmarks can have surface characteristics (e.g., incorrect or incomplete annotations) that affect perceptions of their quality. For instance, QA, Coref, and NER benchmarks often contain incorrect or incomplete annotations Jie et al. (2019); Schlegel et al. (2020); Blodgett et al. (2021). Many Sum benchmarks have unfaithful reference summaries Zhang et al. (2022); Tang et al. (2022); Goyal and Durrett (2021). MT benchmarks often contain incorrect reference translations Castilho et al. (2017).
* **Substantive validity:** A benchmark may not exhaustively assess a model capability Schlangen (2021). For example, practitioners may conceptualize a task as involving the capability to handle phenomena in real-world data, but benchmark datasets (e.g., from "constrained social media platforms") can fail to "reflect broader real-world phenomena" Olteanu et al. (2016); Hupkes et al. (2022). For example, despite having saturated SST-2 Wang et al. (2019), NLP models struggle with domain shift, bi-polar words, negation Hussein (2018); Hossain et al. (2022). Furthermore, QA benchmarks are often restricted to a single format (e.g., multiple-choice reading comprehension, story-cloze queries Schlegel et al. (2020)), which does not substantively instantiate QA. Moreover, the format of MT benchmarks (e.g., of WMT shared tasks) often precludes sufficient intersentential context for substantively assessing translations Toral (2020).
* **Discriminant validity:** Benchmarks may inadvertently assess undesired model capabilities or "unacceptable" methods of performing a task (e.g., picking up on spurious cues) Jacobs and Wallach (2021). For instance, despite having saturated SuperGLUE NLI benchmarks (Wang et al., 2019), NLP models fail on a controlled evaluation set where it is not possible to rely on syntactic heuristics (McCoy et al., 2019).
* **Convergent validity:** Benchmarks may not "match other accepted measurements" of performance on a task. For example, practitioners may consistently conceptualize Sum and MT as involving "being on par with humans"; however, automatic evaluation metrics like ROUGE and BLEU are poorly aligned with human judgments of summarization Deutsch and Roth (2021); Deutsch et al. (2022) and translation Reiter (2018); Toral (2020); Marie et al. (2021); Amrhein et al. (2022) quality, respectively. This is reflected in Figure 3c, which shows that Sum and MT noticeably deviate from the positive trend; in particular, although these tasks are more consistently and clearly conceptualized, practitioners perceive their metrics to be low-quality (i.e., Sum and MT benchmarks have poor convergent validity). A minimal sketch of such a metric-human correlation check appears after this list.
* **Consequential validity:** Practitioners may be concerned that the use of a benchmark has societal harms. For example, Sent benchmarks can reinforce hegemonic conceptions of emotion and be culturally discriminatory Crawford (2021).
Benchmark issues may threaten more than a single aspect of validity.
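To make the convergent-validity point above concrete, the following sketch (ours, not from the paper) computes the correlation between an automatic metric and human quality judgments over a set of system outputs. All numbers are invented for illustration; in practice one would use segment- or system-level ratings from a human evaluation campaign.

```python
# Hedged sketch of a convergent-validity check: does an automatic metric
# track human judgments? All scores below are invented for illustration.
from scipy.stats import pearsonr, spearmanr

human_ratings = [4.5, 2.0, 3.5, 1.0, 4.0, 2.5, 5.0, 3.0]         # e.g., adequacy on a 1-5 scale
metric_scores = [0.62, 0.35, 0.41, 0.30, 0.58, 0.44, 0.70, 0.39]  # e.g., BLEU or ROUGE per output

pearson_r, _ = pearsonr(human_ratings, metric_scores)
spearman_rho, _ = spearmanr(human_ratings, metric_scores)
print(f"Pearson r = {pearson_r:.2f}, Spearman rho = {spearman_rho:.2f}")
# Low correlations would suggest the metric poorly operationalizes human judgment.
```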
### Progress
Figure 4 (and Figure 5 in the appendix) suggest that perceptions of better task conceptualization and benchmark validity are associated with perceptions of stronger progress on the task. In reality, impressions of progress in NLP (especially for non-practitioners) may be disconnected from the validity of the benchmarks used to make claims about progress Bender et al. (2021). This is important because claims of progress shape the social and academic capital of NLP, and are implicitly embedded in every research artifact, including scientific publications. Thus, towards proper science and accountability, NLP practitioners ought to make realistic and tenable claims about progress, and not overhype NLP models. Furthermore, progress is neither monolithic nor does it increase monotonically; it is critical to be transparent about benchmark validity issues and their implications for claims of progress.
| Task | Disagreement in conceptualization? | Disagreement examples |
| --- | --- | --- |
| NLI | \(C_{\tau}\): yes (Table 6); \(y_{\tau}\): yes, inherent disagreement in the validity of natural language inferences (Pavlick and Kwiatkowski, 2019) | SNLI and MNLI datasets operationalize the validity of natural language inferences with a single gold label (Bowman et al., 2015; Williams et al., 2018). |
| QA | \(C_{\tau}\): yes (Table 7); \(y_{\tau}\): yes, the appropriate adequacy of answers is subjective (Schlegel et al., 2020); \(E_{\tau}\): {understand language, reason over a context, possess benchmark-external knowledge, be on par with humans} | HotpotQA, ReCoRD, and MultiRC datasets operationalize reference answers with arbitrary precision (Schlegel et al., 2020). |
| Coref | yes | The OntoNotes dataset does not capture near-identity coreferences (Recasens et al., 2010; Zeldes, 2022). |
| Sum | \(C_{\tau}\): yes (Table 9); \(y_{\tau}\): yes, the "goodness" and adequacy of summaries are subjective (Nallapati et al., 2016; Li et al., 2021; Ter Hove et al., 2022); \(E_{\tau}\): {understand language, possess benchmark-external knowledge} | Benchmark datasets contain single gold summaries with varying levels of adequacy (Kano et al., 2021). |

Table 2: Disagreements in the conceptualization of NLP tasks and relevant examples.
Figure 4: Perceived quality of common task instantiations, benchmark datasets, and benchmark metrics vs. perceived current progress on task.
We must simultaneously re-imagine "progress" in NLP to encapsulate measuring, alleviating, and communicating benchmark validity issues.
## 6 A Framework for NLP Benchmarks
Towards better documenting benchmarks' conceptualization and operationalization, we encourage benchmark creators to answer the questions in Table 3 in their future directions or limitations section when they propose a new benchmark \(B_{\tau}\) for a task \(\tau\). This framework is not a post-hoc intervention. We intend for benchmark creators to answer these questions before, during, and after they construct benchmarks; this framework should be grounded in care for and facilitating collective progress in NLP. Furthermore, creators should share their answers to these questions, so that this framework becomes normalized and shapes people's thinking about their own contributions. Moreover, this framework is intended to supplement processes like Datasheets for Datasets and Data Statements for NLP (Gebru et al., 2021; Bender and Friedman, 2018), which enable comprehensive documentation for benchmarks, but do not ask benchmark creators to reflect in a way that distinguishes between: 1) how they conceptualize a task (and how others may disagree with their conceptualization), and 2) how well the benchmark operationalizes a measurement model for model performance on their conceptualization of the task. This framework is also complementary to technical solutions (e.g., human-in-the-loop approaches) to resolving task ambiguity (Tamkin et al., 2022).
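As one possible way to operationalize this documentation, the sketch below (our illustration, not part of the proposed framework) records answers to the questions of Table 3 as a simple machine-readable "benchmark card" for a hypothetical QA benchmark; every field value is invented.

```python
# Hedged sketch: recording answers to the Table 3 questions as a simple
# machine-readable "benchmark card". All content is hypothetical.
benchmark_card = {
    "task": "question answering",
    "conceptualization": {
        "model_capabilities": [
            "multi-hop reasoning over a provided context",
            "handling news-domain text",
        ],
        "performance_correctness": "short extractive answers scored by exact match; "
                                   "adequacy of free-form paraphrases remains contested",
        "essentially_contested_constructs": ["'commonsense knowledge' is assumed but not defined"],
    },
    "operationalization": {
        "format": "multiple-choice reading comprehension",
        "known_validity_threats": [
            "annotation artifacts may allow answering without the full context",
            "the metric ignores answer adequacy beyond string overlap",
        ],
    },
}
for threat in benchmark_card["operationalization"]["known_validity_threats"]:
    print("documented threat:", threat)
```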
We hope that this reflection will benefit the NLP community in the following ways:
* **Reduce overhyping:** By being transparent about and defining the model capabilities that benchmarks are intended to assess, as well as documenting benchmark validity issues, benchmark creators will: 1) not misrepresent model capabilities, and 2) remind people to be careful about extrapolating benchmark performance results.
* **Encourage reflexivity and engagement with the politics of benchmarks:** By clarifying how they conceptualize tasks and considering how others may disagree with their conceptualization, benchmark creators will: 1) assess how their social context and power influence task conceptualization and benchmark construction (Collins, 2017), 2) reflect on which groups of people benchmarks represent, and 3) include people from diverse communities during benchmark construction towards alleviating disagreement. Towards considering historical and social context, we urge practitioners to not neutralize disagreements in conceptualization by valuing all "sides" equally, as this inevitably invalidates marginalized people's lived experiences and perpetuates the power relations in which benchmark construction participates (Collins, 2017; Denton et al., 2021). In particular, the widespread adoption, presumed validity, and inertia of benchmarks influence the direction of NLP, shaping funding landscapes and the domains in which NLP systems are deployed (Blili-Hamelin and Hancox-Li, 2022; Bommasani, 2022). As such, we encourage practitioners to prioritize the perspectives of marginalized people.
* **Provide actionable insights to address benchmark validity issues:** Distinguishing between conceptualization and operationalization disagreements in a benchmark will better enable the creators of the benchmark, as well as creators of future benchmarks, to address benchmark validity issues. For example, to address perceptions that a benchmark does not exhaustively assess whether models can "handle real-world phenomena," benchmark creators can decide if this is a conceptualization disagreement (e.g., "real-world" is too open-ended, in which case creators should clearly explain which domains they foreground in their conceptualization of "real-world") or an operationalization disagreement (e.g., acquiring real-world data is difficult).

Figure 5: Quality of common task instantiations, benchmark datasets, and benchmark metrics vs. perceived current progress on task among all responding practitioners.
## 7 Conclusion
We develop a taxonomy of disagreement (based on measurement modeling) which distinguishes between how tasks are conceptualized and how measurements of model performance are operationalized. To provide evidence for our taxonomy, we conduct a survey of practitioners and meta-analysis of relevant literature. Based on our taxonomy, we propose a framework for the creation of benchmarks and the documentation of their limitations. Future work includes studying task conceptualization via benchmark inter-annotator disagreement.
**Conceptualization questions**

* **Model capabilities:** Which \(C_{\tau}\) do you believe \(\tau\) involves and why? (e.g., Table 1 in Ribeiro et al. (2020)) How does \(C_{\tau}\) differ from the capabilities that other benchmarks for \(\tau\) are intended to assess?
* **Performance correctness:** How may \(y_{\tau},M_{\tau},\neg M_{\tau}\) be contested? How did you involve relevant communities to co-create \(B_{\tau}\)? How would you accurately characterize "solving" \(\tau\)?
* **Essentially contested constructs:** Do you define any \(E_{\tau}\) (e.g., model capabilities) entangled with \(\tau\)? (e.g., "universality" in Bhatt et al. (2021)) How did you come up with the name of \(\tau\) and \(B_{\tau}\)? Do you avoid employing overloaded or overclaiming terminology in \(\tau\)'s name (Shanahan, 2022)?
* **Overarching questions:** How may \(B_{\tau}\) limit "progress" to only working on one conceptualization of \(\tau\)? Do you hold space for others to propose alternatives?

**Operationalization questions**

* **Validity:** How well does \(B_{\tau}\) operationalize a measurement model for model performance on your conceptualization of \(\tau\)? What kinds of validity may \(B_{\tau}\) lack and why? If \(B_{\tau}\) were to indicate that a model performs exceptionally well on it, what can the NLP community conclude?

Table 3: Documentation questions to facilitate the creation of NLP benchmarks.
### Limitations
**Survey limitations** Our survey sample overrepresents English-speaking NLP practitioners, and likely practitioners from the United States. While we would like to study the demographic skews in our sample (e.g., seniority) and their implications for the results in our paper, we could not collect demographic data due to privacy concerns. Nevertheless, our results still highlight that even within skewed samples, there exists weak agreement on how tasks are conceptualized. Additionally, we assume that survey participants do not base their perceptions of task conceptualization on surface characteristics of tasks, or task ethos (e.g., task longevity, task popularity, rhetoric associated with the task). Furthermore, while we provide some justification for the 6-point scale in Appendix E, the scale is not optimal, as not many participant judgments are below 4; we had not run a similar survey previously, nor did our pilot responses indicate that many judgments would be \(\geq 4\). Finally, while we would like to provide a qualitative analysis of participants' free responses, the majority of participants did not answer the "Additional Thoughts" questions.
**Meta-analysis limitations** We largely focus on static textual single-task English-language benchmarks. Furthermore, we assume that the capabilities stated by authors generally represent the primary capabilities that they believe the task involves; however, authors may refrain from including particular information due to space limits or reviewing incentives.
**Framework limitations** While our proposed framework for creating benchmarks has not been explicitly tested, we have confidence in its efficacy as it was borne out of our systematic analysis of NLP practitioners, literature, and benchmarks. We ultimately wish to implement the framework, but doing so is beyond the scope of this paper (whose primary focus is a systematic perspective on disagreements on evaluative practices in NLP), and we leave it to future work.
## Ethics and Broader Impact
We obtained informed consent from all survey participants, and the survey was IRB-approved. In administering the survey, we did not collect any personally identifiable information that could be traced back to participants' responses, and we transparently communicated our data privacy, usage, and retention policies (refer to Appendix H.1). Furthermore, we shared our survey with artificial intelligence affinity groups to increase the diversity of our sample. We detail our participant recruitment protocol and IRB approval in Appendix G. Additionally, in our paper, we discuss our taxonomy and benchmark documentation guidelines in the context of scientific accountability, power relations, and path dependence in NLP.
## Acknowledgements
We thank Alexander M. Hoyle and Eve Fleisig for their feedback on early versions of our survey. We further thank Li Lucy, Kai-Wei Chang, Jieyu Zhao, Swabha Swayamdipta, and the reviewers for their feedback on the writing of this paper. |
2303.04850 | Quantum computing with and for many-body physics | Quantum computing technologies are making steady progress. This has opened
new opportunities for tackling problems whose complexity prevents their
description on classical computers. A prototypical example of these complex
problems are interacting quantum many-body systems: on the one hand, these
systems are known to become rapidly prohibitive to describe using classical
computers when their size increases. On the other hand, these systems are
precisely those which are used in the laboratory to build quantum computing
platforms. This arguably makes them one of the most promising early use cases
of quantum computing. In this review, we explain how quantum many-body systems
are used to build quantum processors, and how, in turn, current and future
quantum processors can be used to describe large many-body systems of fermions
such as electrons and nucleons. The review includes an introduction to analog
and digital quantum devices, the mapping of Fermi systems and their
Hamiltonians onto qubit registers, as well as an overview of methods to access
their static and dynamical properties. We also highlight some aspects related
to entanglement, and touch on the description, influence and processing of
decoherence in quantum devices. | Thomas Ayral, Pauline Besserve, Denis Lacroix, Edgar Andres Ruiz Guzman | 2023-03-08T19:34:55Z | http://arxiv.org/abs/2303.04850v2 | # Quantum computing with and for many-body physics
###### Abstract
Quantum computing technologies are making steady progress. This has opened new opportunities for tackling problems whose complexity prevents their description on classical computers. A prototypical example of these complex problems are interacting quantum many-body systems: on the one hand, these systems are known to become rapidly prohibitive to describe using classical computers when their size increases. On the other hand, these systems are precisely those which are used in the laboratory to build quantum computing platforms. This arguably makes them one of the most promising early use cases of quantum computing.
In this review, we explain how quantum many-body systems are used to build quantum processors, and how, in turn, current and future quantum processors can be used to describe large many-body systems of fermions such as electrons and nucleons. The review includes an introduction to analog and digital quantum devices, the mapping of Fermi systems and their Hamiltonians onto qubit registers, as well as an overview of methods to access their static and dynamical properties. We also highlight some aspects related to entanglement, and touch on the description, influence and processing of decoherence in quantum devices.
Footnote †: journal: Eur. Phys. J. A
1. Atos Quantum Lab, 78340 Les Clayes-sous-Bois, France
2. Université Paris-Saclay, CNRS/IN2P3, IJCLab, 91405 Orsay, France
3. Centre de Physique Théorique, 91120 Palaiseau, France
## 1 Introduction
For decades since they were envisioned by Richard Feynman in the 1980s [1], quantum computers have been imagined as futuristic objects overcoming the limitations of classical devices. Today, with significant progress in the manipulations of various quantum systems [2], the possibility of using them as a computational unit is becoming a reality [3; 4]. The race towards proving "quantum advantage" is now underway [5; 6; 7]. The ultimate challenge of this race is to provide one or several reliable quantum processing units (QPUs) with unprecedented capabilities in terms of hard memory storage or the ability to solve specific complex problems in record time [8; 9].
Quantum computing is now at a turning point in its practical development thanks to the growing availability of quantum machines [10]. Yet, these machines have a limited quality: The noise that degrades each operation of a quantum circuit strongly constrains the complexity and types of algorithms that can be used today, and requires specific denoising methods [11; 12]. This has led to the notion of _Noisy Intermediate Scale Quantum_ (NISQ) [13; 14; 15] processors to describe this intermediate stage of development. Despite the limitations of the NISQ era, the possibility of experimenting with actual quantum devices has spurred intense scientific activity to test the capacity of current computers, propose new quantum algorithms, and ultimately prepare for the coming second quantum revolution and surpass the limitations of classical algorithms.
Quantum many-body systems--formed by a set of particles interacting with one another--appear as natural test benches for quantum platforms [16; 17; 18; 19; 20; 21]. This class of problems is characterized by a Hilbert space size that
increases steeply when the number of one-body degrees of freedom (the number of particles or accessible single-particle space or both) increases. This large size leads to severe restrictions in the class of many-body systems that one can solve exactly on classical computers.
This increase in complexity is common to all fields of physics or chemistry. In particular, it applies to quantum devices themselves: assemblies of quantum bits or "qubits" are also characterized by an exponential growth of the spanned Hilbert space, as it is of size \(2^{n_{q}}\) where \(n_{q}\) is the number of qubits. What is a hindrance for the classical description of many-body systems may thus become a blessing: the fact that they contain an exponential complexity a priori makes quantum computers suitable for tackling many-body (and thus also exponentially complex) systems, provided such systems can be accurately encoded and probed via a precise manipulation of qubits. This is facilitated by the fact that particles treated in the formalism of second quantization share many formal aspects with qubits (as will be described in section 4). Many-body systems such as those encountered in quantum chemistry [17; 18; 19; 21; 22; 23], condensed-matter physics [24], atomic physics [25; 26; 27; 28], astrophysics [29; 30; 31; 32; 33], or nuclear physics [34; 35; 36; 37; 38], have thus become key application domains of quantum computing.
The primary goal of the present article is to introduce quantum computing from a _double_ many-body physics perspective: quantum computers are many-body systems that can help understand (among others) many-body problems. We start by highlighting some specific aspects of selected many-body systems one might find in nature and underline their common features (section 2) with a focus on the complexity of treating them using classical computers. Section 3 introduces different types of quantum computing devices, namely, analog and digital, to the nonexpert reader. This section is not only a discussion of the basic concepts in quantum computation but also an opportunity to underline the fact that quantum computers are built upon many-body interacting systems. In section 4, we discuss various aspects related to the solution of quantum many-body problems with quantum computers. We introduce selected quantum algorithms to solve these problems, be it with post-NISQ (section 5) or NISQ (section 6) processors. In section 7, we briefly discuss how entanglement in many-body systems can be described on a quantum computer. Finally, in section 8, we discuss the modeling of the noise impacting quantum hardware, how its effect can be mitigated on current machines as well as the main principles of error correction, which could bring about fault tolerance in the long term.
## 2 Quantum Many-body problems
In this section, we discuss various types of many-body systems where quantum computing could be of help. The reader interested mainly in quantum computing aspects can skip this section and directly go to section 3.
Many-body systems are encountered in many fields of physics. They are ruled in all generality by a Hamiltonian that reads, in a second-quantized form:
\[H = H_{1-\text{body}}+H_{2-\text{body}}+H_{3-\text{body}}+\cdots = \sum_{\alpha\beta}h_{\alpha\beta}c^{\dagger}_{\alpha}c_{\beta}+\frac{1}{2}\sum_{\alpha\beta\gamma\delta}v_{\alpha\beta\gamma\delta}c^{\dagger}_{\alpha}c^{\dagger}_{\beta}c_{\gamma}c_{\delta}+\cdots, \tag{2.1}\]
where \(\alpha,\beta,\gamma,\delta\) are multi-indices whose components depend on the specific many-body system at hand. As we will see, depending on the system, one may not need terms involving more than two particles.
The defining feature of many-body problems is that they cannot be solved within the mean-field approximation. This approximation is obtained, at the level of the Hamiltonian (up to constant terms), by replacing all operators but one by their average values in each term. In other words, in a many-body system, the correlations between its key constituents--be they electrons, nucleons, or spins--must be handled with sophisticated methods, hence the other name of many-body systems: _strongly correlated systems_.
In the following subsections, we introduce a few fields--illustrated in Fig. 1--where many-body problems are found, with a focus on the form of the Hamiltonians
Figure 1: Illustration of some of the many-body systems where quantum computing is now being explored as a disruptive technique compared to classical computing: quantum chemistry, condensed matter, nuclear and neutrino physics.
that describe them, and the typical classical methods that are used to investigate their properties.
### The electronic structure problem: electrons in solids and quantum chemistry
Electrons in solids or molecules are interacting particles. In many solids, the Coulomb interaction can be dealt with in an averaged fashion because electrons do not come too close to each other due to the Pauli principle. There, electrons merely become quasi-particles without radically changing their behavior. This observation is at the heart of mean-field methods; two prominent representatives of them are the Hartree-Fock (HF) method and Density Functional Theory (DFT) [39].
However, in some solids and most molecules, such an averaged description of Coulomb interactions leads to wrong predictions. For instance, in solids where valence electrons are of \(d\) or \(f\) character (namely very localized), neither HF nor DFT can predict the transition from a metallic _Fermi liquid_ to a _Mott_ insulator. In such insulators, Coulomb interactions freeze the charge degree of freedom. This type of solids has received much attention in the past forty years since the high-temperature superconductors discovered in the mid-1980s are believed to be doped Mott insulators. Despite prolonged efforts to crack this problem, a complete theoretical account of the origin of high-Tc superconductivity--and numerous other phenomena, especially when driving these systems out of equilibrium--is still missing. This lack of explanation is due to the complexity of dealing with such systems beyond mean field.
The simplest model describing such strongly-correlated solid-state systems is the so-called (Fermi) Hubbard model, which, in its single-band version, reads:
\[H=-t\sum_{\langle ij\rangle}c^{\dagger}_{i\sigma}c_{j\sigma}+U\sum_{i}n_{i \uparrow}n_{i\downarrow}-\mu\sum_{i\sigma}n_{i\sigma}, \tag{2}\]
where \(i,j=1\dots N\) denote lattice sites (\(N\) is typically infinite in a solid; \(\langle ij\rangle\) denote nearest neighbors on the lattice), \(t\) denotes a tunneling or hopping term, \(U\) the on-site Coulomb interaction and \(\mu\) the chemical potential. The fermionic creation and annihilation operators \(c^{\dagger}_{i\sigma}\) and \(c_{i\sigma}\) typically create and annihilate electrons in localized orbitals \(\phi_{i}(r)\) (sometimes called _Wannier orbitals_). \(n_{i\sigma}\) is a density operator which reads \(n_{i\sigma}=c^{\dagger}_{i\sigma}c_{i\sigma}\). \(\mu=U/2\) corresponds to half-filling (undoped case), namely, one electron per site. Despite its apparent simplicity, this "spherical cow" of strongly-correlated electronic systems is difficult to solve on a classical computer in physically "interesting" regimes, like the doped, low-temperature regime in two spatial dimensions and the thermodynamical limit (\(N\to\infty\)).
The Hubbard model can either be tackled "directly" via exact diagonalization (ED), quantum Monte-Carlo (MC) or tensor-network methods, or "indirectly" via embedding methods, which map the model to a smaller many-body problem as will be described in subsection 4.2. The exponentially growing size of the Hilbert space with \(N\) quickly makes ED prohibitive, although advanced versions like Krylov or Lanczos methods can help reach relatively large sizes [40]. An exponential _sign problem_ typically plagues MC methods; namely, the statistical error bar of MC methods grows exponentially with system size and decreasing temperature, requiring an exponential number of samples and hence exponential run time. Tensor-network methods (like Matrix Product States, MPS) [41; 42; 43] can also be used but usually require a space complexity (memory usage) that scales exponentially with the so-called _entanglement entropy_\(S\) (see section 7 for a more in-depth discussion). These methods are potent when \(S\) is constrained to small values, as happens e.g in one dimension. However, in two and more dimensions, \(S\) usually grows in a way that makes these methods difficult to apply.
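To make the "direct" route concrete, the sketch below (ours, not from the review) builds the single-band Hubbard Hamiltonian of Eq. (2.2) for a two-site chain using a Jordan-Wigner encoding of the fermionic operators and diagonalizes it exactly with numpy. The parameter values \(t\), \(U\), and \(\mu\) are arbitrary illustrative choices, and this brute-force construction is only viable for a handful of sites.

```python
# Hedged sketch: exact diagonalization of the two-site Hubbard model of
# Eq. (2.2) via a Jordan-Wigner construction (illustrative parameters).
import numpy as np
from functools import reduce

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
a = np.array([[0.0, 1.0], [0.0, 0.0]])      # single-mode annihilation: a|1> = |0>

def annihilation(p, n_modes):
    """Jordan-Wigner matrix for c_p acting on n_modes fermionic modes."""
    return reduce(np.kron, [Z] * p + [a] + [I2] * (n_modes - p - 1))

t, U, mu = 1.0, 4.0, 2.0                     # mu = U/2 targets half filling
n_sites, n_modes = 2, 4                      # modes ordered as (site, spin)
mode = lambda site, spin: 2 * site + spin    # spin: 0 = up, 1 = down

c = [annihilation(p, n_modes) for p in range(n_modes)]
cdag = [op.conj().T for op in c]
num = [cdag[p] @ c[p] for p in range(n_modes)]

H = np.zeros((2 ** n_modes, 2 ** n_modes))
for spin in (0, 1):                          # hopping, both directions of the single bond
    H += -t * (cdag[mode(0, spin)] @ c[mode(1, spin)] + cdag[mode(1, spin)] @ c[mode(0, spin)])
for site in range(n_sites):
    H += U * num[mode(site, 0)] @ num[mode(site, 1)]        # on-site repulsion
    H += -mu * (num[mode(site, 0)] + num[mode(site, 1)])    # chemical potential

print("ground-state energy:", np.linalg.eigvalsh(H)[0])
```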
The Hubbard model proves insufficient to describe molecules in quantum chemistry, as chemical systems are less prone to screening and thus cannot be described by on-site interactions only. Thus, one typically deals with the following more general Hamiltonian:
\[H=\sum_{pq,\sigma}h_{pq}c^{\dagger}_{p\sigma}c_{q\sigma}+\frac{1}{2}\sum_{pqrs }\sum_{\sigma\sigma^{\prime}}v_{pqrs}c^{\dagger}_{p\sigma}c^{\dagger}_{q \sigma^{\prime}}c_{r\sigma^{\prime}}c_{s\sigma} \tag{3}\]
where \(p,q,r,s=1\dots N\) denote orbitals \(\phi_{q}(r)\) (typically molecular orbitals obtained after a first Hartree-Fock computation). The one- and two-body matrix elements \(h_{pq}\) and \(v_{pqrs}\) are computed as integrals over the molecular orbitals. Typically, \(N\) is of the order of \(10-100\). The exact diagonalization of this Hamiltonian (ED, usually called _Full Configuration Interaction_ - FCI - in a quantum chemical context) is limited to a small number \(N\) of orbitals. Variational methods based on a perturbative expansion on the HF wave function are less costly than FCI. Typically, the _Coupled Cluster_ (CC) method, based on the variational state (here limited to single and double excitations)
\[\left|\Psi(\vec{\theta})\right\rangle=\exp\left(\sum_{ia,\sigma}\theta^{a}_{i}c^{\dagger}_{i\sigma}c_{a\sigma}+\sum_{\begin{subarray}{c}ijab\\ \sigma\sigma^{\prime}\end{subarray}}\theta^{ab}_{ij}c^{\dagger}_{i\sigma}c^{\dagger}_{j\sigma^{\prime}}c_{a\sigma^{\prime}}c_{b\sigma}\right)\left|\Psi_{\text{HF}}\right\rangle, \tag{2.4}\]
with \(i,j\) (resp. \(a,b\)) empty (resp. occupied) orbitals, is considered to be one of the most advanced methods in
chemistry (for systems with dynamical correlations, as opposed to systems with large static correlations, where methods based on Matrix Product States [see section 7] are among the most advanced [44]). It can be combined with active-space methods, whose goal is to reduce the number of relevant (or _correlated_ or _active_) degrees of freedom, similar to embedding methods for solids. This selection of degrees of freedom will be further discussed in subsection 4.2.
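As an illustration of where the matrix elements of Eq. (2.3) come from in practice, the sketch below (ours, not from the review) uses the open-source PySCF package to run a Hartree-Fock calculation on H2 in a minimal basis and to transform the one- and two-electron integrals to the molecular-orbital basis. The molecule, basis set, and bond length are arbitrary choices, and note that PySCF returns the two-electron integrals in chemists' notation, so mapping them onto the exact \(v_{pqrs}\) convention of Eq. (2.3) still requires an index permutation that depends on the chosen operator ordering.

```python
# Hedged sketch: one- and two-body integrals of Eq. (2.3) for H2/STO-3G with PySCF.
import numpy as np
from pyscf import gto, scf, ao2mo

mol = gto.M(atom="H 0 0 0; H 0 0 0.74", basis="sto-3g")
mf = scf.RHF(mol).run()                 # Hartree-Fock reference

C = mf.mo_coeff                         # molecular-orbital coefficients
n_orb = C.shape[1]

# One-body integrals h_pq in the molecular-orbital basis.
h_pq = C.T @ mf.get_hcore() @ C

# Two-body integrals; PySCF stores them in chemists' notation (pq|rs).
eri_chem = ao2mo.restore(1, ao2mo.kernel(mol, C), n_orb)

print("number of orbitals:", n_orb)
print("h_pq:\n", np.round(h_pq, 6))
print("(pq|rs) tensor shape:", eri_chem.shape)
```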
### Nuclear physics
Atomic nuclei are self-bound, strongly interacting systems with a wide range of numbers of particles, from very few (2 for the deuteron) to several hundreds for the heaviest nuclear systems existing in nature. Nucleons organize themselves to form quantum droplets with a large variety of static and dynamical physical phenomena [45]. These phenomena can be observed in the laboratory through the use of accelerators. The many-body treatment of nuclei is particularly complex due to the non-perturbative nature of the two-body interaction with a strong repulsion at short distances between particles.
The nuclear Hamiltonian is of the form (2.1) where the multi-indices \((\alpha,\beta,\gamma,\delta)=1,...,N\) label single-particle states characterized by the usual quantum numbers \(n,l,m,\sigma\), as well as an isospin component \(\tau\). These states can for instance be 3-dimensional harmonic oscillator (HO) states with \(\phi_{\alpha}(\mathbf{r})=\phi_{nlm\sigma\tau}(\mathbf{r})\) (\(\tau=-1/2\) and \(+1/2\) for neutrons and protons respectively), where the strength of the HO is optimized to reproduce nuclei sizes [46]. Due to the presence of spin-orbit coupling, it is usual to introduce the angular momentum \(\vec{j}=\vec{l}+\vec{s}\), and relabel the state with \((nljm\sigma\tau)\). Since there is no external field, the one-body term only contains the kinetic component. The two-body interaction contains nuclear (short-range) and Coulomb (long-range) interactions. The Coulomb part acts only on protons, while the nuclear part, which depends on the spin of the particles, acts on all nucleons. Note that the latter is almost the same for all particles, a property known as isospin symmetry of the nuclear force. Altogether, \(v_{\alpha\beta\gamma\delta}\) is both spin and isospin-dependent. In addition, the existence of 3-body and, more generally, multi-body interactions was recognized only recently with advances in Effective-Field-Theory to construct nuclear interactions [47; 48; 49; 50; 51]. The presence of multi-body interactions is an extra complication compared with the electronic structure Hamiltonian (2.3), and even in the most advanced many-body techniques, such interactions are usually treated only approximately.
Despite this complexity, a variety of simplified Hamiltonians have been proposed to understand specific properties of nuclei. A typical example is the Lipkin-Meshkov-Glick model [52], which is often used to understand the concept of spontaneous symmetry breaking in finite systems. Another example is the pairing, also called Richardson [53], Hamiltonian that is often used to understand superconducting effects. This Hamiltonian can be justified by (i) assuming that only a set of particles, typically close to the Fermi energy are active, (ii) the interaction between them is constant, and (iii) it is non-negligible only when pairs of time-reversed states, denoted by \((i,\bar{i})\), are involved. These states are often taken as opposite spin particles or as particles with spin projection \(j_{z}=m\) and \(-m\) when a single \(j\)-shell is considered as active. The Hamiltonian then reduces to:
\[H=\sum_{i}\varepsilon_{i}(c_{i}^{\dagger}c_{i}+c_{\bar{i}}^{\dagger}c_{\bar{i}})-g\sum_{ij}c_{i}^{\dagger}c_{\bar{i}}^{\dagger}c_{\bar{j}}c_{j}. \tag{2.5}\]
These simple, schematic Hamiltonians are studied today on quantum computers as first steps towards future applications.
An overview of the microscopic approaches used to describe atomic nuclei can be found in Ref. [54]. The only approach able to describe the large variety of phenomena ranging from static (nuclear structure), dynamical (nuclear dynamics), and thermodynamical properties is nuclear Density Functional Theory, often referred to as Energy Density Functional (EDF) theory [55; 56; 57]. Another powerful approach, restricted to studying nuclear structure properties, consists of performing a direct CI method in a restricted subspace of single-particle states forming the valence space [58; 59]. In this approach, often referred to as the _Shell Model_, the effective interaction is fine-tuned to account for the truncation of the model space. One of the main difficulties that forces the restriction to a set of active valence particles is the size of the many-body Hilbert space when the number of single-particle states increases. The current scope of restricted CI approaches is the treatment of eigenvalue problems in spaces with \(10^{11}-10^{12}\) states. These values are still far from the requirements to treat the whole nuclear chart with all single-particle active states.
Another breakthrough in the most recent description of interactions lies in the possibility of getting rid of the hard core and using _soft interactions_ for low-energy nuclear physics problems [60]. Such interactions have opened the way to the so-called _ab-initio method_ that aims at treating the nuclear many-body problem directly, starting from the bare Hamiltonian (2.1). This advance has led to a significant boost in applying several many-body techniques, some already mentioned in section 2.1. One can, in particular, mention the
use of the full CI technique--known, in this context, as the no-core shell-model [61; 62], the Green's Function Monte-Carlo method [63; 64; 65], the Self-Consistent Green Function method [66; 67], the Coupled-Cluster method [68; 69], or Many-Body Perturbation Theory [70], among others. One specific aspect of atomic nuclei is the necessity to generalize some of these theories to allow for possible spontaneous symmetry breaking like particle number or rotational symmetries (see, for instance, a few examples of extensions in Refs. [71; 72; 73; 74; 75]). This generalization is fundamental to describing open-shell nuclei.
Although significant progress has been made in recent years, ab-initio methods are still applied to study nuclear structure effects in a limited region of the nuclear chart or to nuclear reactions involving only very light systems. This limitation is due to the increase in the complexity of the problem when the number of particles increases. For now, very few quantum computing pilot applications on real devices have been made, and these have been limited to rather simplistic nuclear Hamiltonians [76; 77; 78]. However, the use of quantum computers for nuclear many-body problems has recently gained momentum [35; 36; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 90; 91; 92; 93; 94].
### Common difficulties
A defining property of many-body systems is that they are exponentially difficult to solve on classical computers. As we will see in this section, this exponential difficulty may trivially arise from the size of the Hilbert space. However, most classical methods strive to circumvent this difficulty by either using the structure of the problem to merely reduce the number of relevant degrees of freedom or by adopting another representation--shifting the exponential difficulty from the size of the Hilbert space to other parameters, like, for instance, the severity of the Monte-Carlo sign problem or the internal dimension of a tensor network.
One standard strategy to circumvent the exponential complexity is to use parameterized variational wave function ansatze. Another strategy--based on so-called _reduced density matrices_--relies on the assumption that some degrees of freedom (DoFs) contain more information than others. A typical starting point is to assume that one-body degrees of freedom are the most relevant. The information on them is contained in the one-body reduced density matrix (1-RDM). Reducing the information to these DoFs leads to Hartree-Fock (HF) or mean-field theory. Usually, such simplified approaches miss significant correlation effects, and approximations beyond the mean field are necessary to describe many-body systems accurately. For instance, this can be done by truncating the Bogolyubov-Born-Green-Kirkwood-Yvon (BBGKY) hierarchy and treating 2-body or higher DoFs explicitly through the two-body reduced density matrix (2-RDM) or higher-order reduced density matrices [95] (see also the discussion of embedding or active-space methods in section 4.2).
Despite the dimensionality reduction that the aforementioned approximate methods afford, they always, at some point, reach a computational limit on classical computers. Quantum computers--provided fermionic problems can be turned into spin or qubit problems (see section 4.1)--can a priori overcome these limitations provided enough qubits can be efficiently manipulated. Interestingly, quantum computers do not start from a blank page: many quantum algorithms are strongly guided by the accumulated expertise and methods gained on classical devices. An illustration of that will be discussed in section 4.
Besides the above general considerations on many-body systems, each physical system has specificities that will render its encoding on quantum computers more or less difficult. For instance, electrons can have two spin components (spin up and down), while nucleons can have both spin and isospin (neutron and proton components). Some physical systems, such as solids or atomic ones, can be suitably described on a lattice, sometimes with only nearest neighbor two-body interaction. This case is advantageous when encoding such a problem on a set of qubits with limited connectivity. Some other systems might be more complex, like atomic nuclei, with long-range two-body or, more generally, multi-body interactions or the necessity to describe unbound states. A prerequisite to the success of future applications is the efficient transposition of a given many-body problem with good mapping of its characteristics into an analog or a digital quantum computer (see section 4). The complexity of this transposition will strongly depend on the problem itself, but many-body problems will undoubtedly be among the first stringent benchmarks for current and future quantum technologies.
The exponential size of the Hilbert space is not a sufficient condition for making a problem computationally hard for classical computers. For instance, the ground state of quadratic fermionic Hamiltonians--namely Hamiltonians that are bilinear in the creation and annihilation operators--can be found in polynomial time on a classical computer (see, e.g., [96]). The problem becomes complicated only when terms beyond the quadratic ones are added. This result explains why certain classes of time evolutions--quantum circuits
(see section 3.1.2 below for a definition) stemming from a quadratic Hamiltonian, with so-called _matchgates_--are also simulatable classically in polynomial time. Interestingly, other circuits, this time unrelated to "uncorrelated systems", are efficiently simulatable. For instance, Clifford circuits, with gates belonging only to the Clifford group [97], are simulatable with time and space complexity \(O(n^{2})\) (with \(n\) the number of qubits), a result known as the Gottesman-Knill theorem [98; 99]. This result holds even though these circuits can generate highly entangled states. The key idea behind the efficiency of the simulation of such circuits is that the states generated by Clifford circuits can be represented in a compact (polynomial) fashion.
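To make the first statement concrete, here is a minimal sketch (with an illustrative chain length and hopping amplitude) showing why quadratic fermionic Hamiltonians are classically easy: the ground-state energy of a hopping chain follows from diagonalizing an \(n\times n\) matrix rather than working in the \(2^{n}\)-dimensional Fock space.

```python
import numpy as np

# Sketch: for H = sum_ij h_ij c_i^dag c_j, the ground state follows from the
# n x n single-particle matrix h (an O(n^3) operation), not from the
# 2^n-dimensional Fock space. Chain length and hopping amplitude are illustrative.
n, t = 100, 1.0
h = np.zeros((n, n))
for i in range(n - 1):                          # open 1D chain, nearest-neighbor hopping -t
    h[i, i + 1] = h[i + 1, i] = -t

modes = np.linalg.eigvalsh(h)
print("ground-state energy:", modes[modes < 0].sum())   # fill all negative-energy orbitals
```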
Leveraging a compact representation is also very common in many-body classical methods. For instance, some Monte-Carlo methods decouple quartic (interaction) terms in the Hamiltonian to eliminate this term at the expense of an additional auxiliary field [100]. Then, the exponential difficulty is shifted from the size of the Hilbert space to the Monte-Carlo sign problem. In tensor network methods, like Matrix Product States [41], the accuracy of the representation is tuned by the size of the internal indices (often called the _bond dimension_), which is related to the degree of entanglement of the state at stake (see section 7.3).
### A few quantum complexity considerations
In this section, we comment on the formal expectations regarding speedups that quantum computers can bring when solving many-body problems. For general reviews on the topic, we refer the reader to e.g [9; 101].
Classical computational problems with a yes/no answer (decision problems) are classified using complexity classes. For instance, finding the ground state energy of a Hamiltonian \(H\) acting on \(n\) qubits (fermions) can be formulated as a decision problem by picking numbers \(a\) and \(b\) such that \(\epsilon\equiv b-a>1/\text{poly}(n)\) defines a region in which the ground state energy is promised not to lie, and asking whether \(E_{0}<a\) or \(E_{0}>b\) (see, e.g., [102] and references therein).
The two main classes of classical complexity are P and NP, which contain, respectively, problems that can be solved in polynomial time and problems whose solution can be verified in polynomial time. NP-hard problems are problems at least as hard as any problem in NP, and NP-complete problems are the NP-hard problems that belong to NP. NP-complete problems are very likely (that is, unless P=NP) not to have a polynomial-time solution, i.e., they are colloquially referred to as _exponentially hard_ problems.
One central question is whether quantum computers can reduce the complexity of solving many-body problems [103]. To answer this question systematically, two quantum complexity classes, BQP and QMA, have been designed as quantum counterparts to P and NP, respectively: the computer that solves the problem or, respectively, certifies the solution of the problem is a quantum computer. Fig 2.1 illustrates these classes and the connection between quantum and classical complexity classifications.
A significant result is that NP-complete problems are very likely not in BQP. In other words, it is improbable that quantum computers can solve exponentially hard classical problems in polynomial time. The fact that the factoring problem can be solved using Shor's algorithm [104] with an exponential speedup is possible because factoring is not believed to be an NP-complete problem. Importantly, this also does not mean that quantum computers are not helpful for NP-complete problems: quantum heuristics (polynomial algorithms with no quality guarantee) can still reach better solutions than classical heuristics.
A natural way to know what to expect from quantum computers for many-body problems is to ask to which complexity class these problems belong. The ground state estimation problem is QMA-complete for \(k\)-local (spin) Hamiltonians (Hamiltonians whose terms act on at most \(k\) qubits, see Eq. (5.4) below) as long as \(k\geq 2\) [105]. QMA-completeness is retained for the ground state estimation problem of geometrically 2-local Hamiltonians--that is to say, 2-local Hamiltonians only involving pairs
Figure 2.1: Computational complexity classes and selected problems. The three crosses indicate three examples of complex problems. The "factoring" and "traveling salesman" problems are common problems often quoted in complexity theories. The "k-local Hamiltonian ground state estimation" is a common problem in many-body systems (see text).
of adjacent qubits - in a square lattice qubit layout (aka a quantum spin glass) [106]. These statements hold for general values of the couplings of the spin Hamiltonians mentioned above. Restrictions on the coefficients can lead to a reduction in computational complexity. For instance, the transverse Ising model with negative transverse field and ferromagnetic interactions (i.e., Eq. (3.4) below with \(\Omega<0\) and \(C<0\) [which is not the case for Rydberg atoms, where \(C>0\)]) is a so-called _stoquastic_ Hamiltonian. Its ground state energy can be approximated polynomially in the system size \(n\) and \(1/\epsilon\)[107].
As for fermionic and bosonic models, [108] showed that the Fermi-Hubbard model with local magnetic fields (with an additional \(\sum_{i}\sigma_{\bf i}\cdot{\bf B}_{\bf i}\) term in Eq. (2.2)) is QMA-complete, while [109] showed that the Bose-Hubbard model is QMA-complete.
These formal considerations call for two comments. First, despite these hardness results, the ground state energy of the models cited above can be determined with great accuracy on classical computers in special regimes, i.e., for certain classes of parameters. For instance, this is the case of the Fermi-Hubbard model on a bipartite lattice, whose solution with quantum Monte-Carlo does not suffer from a sign problem at half-filling (see, e.g., [110]). The existence of these special regimes means that one must be careful to look for truly hard classical computational regimes to identify a useful application of quantum computers. In other words, classical methods make the most of any symmetry or structure of the problem to overcome the underlying exponential complexity of the many-body problem so that truly hard regimes are hard to come by. In these regimes, quantum computers ought to also leverage these symmetries and structures in order to outperform classical computers.
Second, even in those regimes where classical computers fail to reach an accurate enough result, the QMA-completeness of the problem is a strong indication that the problem will also be hard to solve on a quantum computer. In other words, classical and quantum algorithms likely end up running into an exponential wall (see, e.g., [111] for a concrete example). This limitation is not necessarily a showstopper: what matters is whether quantum computers can reach regimes inaccessible to classical computers before they run into said wall.
Lastly, complexity theory can also be used to appraise, at least formally, the feasibility of hybrid quantum-classical algorithms like the _Variational Quantum Eigensolver_ (see section 6.1.1 below). For instance, the classical optimization procedure of the energy \(E(\theta)\) in VQE is generically a challenging computational problem [112]. In practice, this does not exclude the existence of heuristics for finding accurate enough variational parameters for concrete (as opposed to generic) problems. What is more, the classical counterpart--an entirely classical variational algorithm--also suffers from the same problems. Ultimately, what matters is whether quantum processors can accelerate parts of the computation _relative to_ the best classical algorithm.
## 3 Quantum computers: artificial many-body systems
This section explains how to investigate the many-body problems mentioned above by building an artificial many-body system with a similar Hamiltonian (aka analog quantum computers or quantum simulators) or, still starting from a many-body system, but with individual control over the particles/degrees of freedom, by building a gate-based (digital) quantum computer (subsection 3.1). We then explain the basic building blocks and rules of ideal quantum computers.
### From analog to digital
Quantum computers are essentially synthetic many-body systems whose state can be manipulated according to some predefined plan--_aka_ a quantum program--and measured to learn something.
Depending on the level of control of this quantum system, one speaks of quantum _simulators_ (or analog quantum computers) or quantum _computers_ (or digital quantum computers). While (analog) simulators offer only a limited and specific set of controls, (digital) computers offer controls--usually called _gates_--that are universal. This universality allows them to reach, in principle, any state of the Hilbert space by performing any unitary operation.
#### 3.1.1 Analog quantum computers (aka quantum simulators)
The term _analog quantum computer_ refers to any synthetic many-body system with a certain amount of control over its degrees of freedom. Each analog computer is characterized (in the absence of defects) by a many-body Hamiltonian \(H(t)\) whose time-dependence is "programmed" more or less at will, depending on the experimental constraints. This Hamiltonian is usually chosen as close as possible to the "real-life" Hamiltonians introduced in the previous section. Thus, by measuring
the properties of the analog simulator, one hopes to get insights into the physics of real-life systems. We highlight below some illustrations of physical systems used as analog simulators.
_Ultracold atoms_ can be described as implementing a Fermi- or Bose-Hubbard model, depending on the atomic isotopes used [113]. For instance, the Bose-Hubbard model reads:
\[H(t)=\frac{U(t)}{2}\sum_{i}n_{i}(n_{i}-1)-J(t)\sum_{\langle ij\rangle}b_{i}^{\dagger}b_{j}. \tag{3.1}\]
Here, the creation and annihilation operators \(b_{i}^{\dagger}\) and \(b_{i}\) create and annihilate (bosonic) atoms in orbitals \(\phi_{i}(r)\) and \(n_{i}=b_{i}^{\dagger}b_{i}\); \(J(t)\) is the tunneling between two neighboring "sites" \(\langle ij\rangle\) of the optical lattice, and \(U(t)\) is the on-site repulsion between two atoms. Both \(U\) and \(J\) can be temporally modulated by changing the amplitudes of the lasers creating the lattice. In addition, the interaction \(U(t)\) can also be tuned by changing the background magnetic field using a phenomenon known as the Feshbach resonance [113].
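For very small systems, the Bose-Hubbard Hamiltonian above can also be built and diagonalized explicitly on a classical computer, which provides a benchmark for the analog simulator. The sketch below does this for two sites, with illustrative values of \(U\) and \(J\) and a per-site boson cutoff.

```python
import numpy as np
from itertools import product

# Sketch: two-site Bose-Hubbard Hamiltonian in the occupation-number basis.
# U, J, and the per-site cutoff n_max are illustrative values.
U, J, n_max = 4.0, 1.0, 3
basis = list(product(range(n_max + 1), repeat=2))        # states |n_0, n_1>
index = {s: k for k, s in enumerate(basis)}

H = np.zeros((len(basis), len(basis)))
for s in basis:
    k = index[s]
    H[k, k] = 0.5 * U * sum(n * (n - 1) for n in s)      # on-site repulsion
    for i, j in ((0, 1), (1, 0)):                        # hopping term b_i^dag b_j
        if s[j] > 0 and s[i] < n_max:
            t = list(s); t[j] -= 1; t[i] += 1
            H[index[tuple(t)], k] = -J * np.sqrt(s[j] * (s[i] + 1))

print("lowest eigenvalue:", np.linalg.eigvalsh(H)[0])
```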
_Spin qubits_, which are essentially electrons trapped in quantum dots, can also be described by a Fermi-Hubbard model or, when neglecting charge fluctuations, by a Heisenberg model [114]:
\[H(t)=J_{\rm ex}(t)\sum_{\langle ij\rangle}\left(X_{i}X_{j}+Y_{i}Y_{j}+Z_{i}Z_{j}\right)+\sum_{i}H_{\rm loc}^{(i)}, \tag{3.2}\]
with \((X_{i},Y_{i},Z_{i})\) denoting the Pauli matrices acting on the \(i^{\rm th}\) spin, and with
\[H_{\rm loc}^{(i)}=\frac{\omega_{0}^{(i)}-\delta_{i}(t)}{2}Z_{i}+\Omega_{i}(t)\cos\left(\omega_{c}^{(i)}t+\phi_{i}(t)\right)X_{i}. \tag{3.3}\]
The exchange constant \(J_{\rm ex}\sim 4J(t)^{2}/U\) can be turned on and off via the tuning of the tunneling term \(J(t)\) between two dots using a gate voltage. The local term \(H_{\rm loc}\) can, for instance, come from a magnetic field with a static (\(Z_{i}\) term) and a rotating (\(X_{i}\) term) component.
Depending on which atomic levels they target, _platforms of Rydberg atoms_ (see, e.g., [115; 116]) may implement an Ising Hamiltonian:
\[H(t)=\sum_{ij,i\neq j}\frac{C}{|r_{i}-r_{j}|^{6}}n_{i}n_{j}+\frac{\Omega(t)}{2}\sum_{i}X_{i}-\delta(t)\sum_{i}Z_{i}, \tag{3.4}\]
with \(n_{i}=(1-Z_{i})/2\), or an \(XY\) Hamiltonian:
\[H(t)=2\sum_{ij,i\neq j}\frac{C}{|r_{i}-r_{j}|^{3}}\left(X_{i}X_{j}+Y_{i}Y_{j}\right)+\Omega(t)\sum_{i}X_{i}-\frac{\delta(t)}{2}\sum_{i}Z_{i}. \tag{3.5}\]
_Superconducting qubits_, which are usually thought of as (digital) computers, can also be seen as analog computers realizing a Bose-Hubbard model (see Eq. (3.1)). There, the creation and annihilation operators refer to bosonic excitations relative to the charge and flux variables inside Josephson junctions.
All these Hamiltonians are of many-body nature owing to the coupling terms (first term of each equation). Thanks to their closeness to the many-body Hamiltonians encountered when studying quantum matter (see previous section, 2), these Hamiltonians have long been used as proxies (or _simulators_) for the many-body systems one wants to understand.
By nature, quantum simulators are very specific in that (i) they implement (or "simulate") only one class of Hamiltonians, and (ii) they usually implement partial (often only global) control of their degrees of freedom. This limitation is both temporal and spatial: e.g., in Eq. (3.4), the coupling term cannot be made time-dependent as it corresponds to a van der Waals interaction that cannot be switched off or decreased, except by moving the atoms, a very slow operation. Also, the second (Rabi) and third (detuning) terms can most often not be controlled at an individual site level (i.e., \(\Omega\) and \(\delta\) are the same for all atoms).
Analog platforms have advantages and drawbacks. Their primary advantage is that the limited degree of control usually allows them to work with significantly more degrees of freedom (atoms, spins, ions, junctions, and other building blocks). The major drawback is that they are not "universal" or "all-purpose".
Experimental platforms must reach a reasonably good temporal and spatial degree of control to become "universal" (in a sense that will be made more explicit later). This performance level is required to have gate-based or digital computers.
#### 3.1.2 Digital (aka gate-based) quantum computers
Digital, or gate-based quantum computers, refer to physical setups (i) whose description can be narrowed to an assembly of interacting two-level quantum systems and (ii) whose Hamiltonian can be controlled at a local level.
_Two-level quantum systems: qubits_ Let us focus on criterion (i). For instance, among the systems cited above, Rydberg atoms or spin qubits are already naturally described as two-level systems: two atomic levels for Rydberg atoms and two spin levels for spin qubits. In photonic platforms, the photon's two polarizations can play the role of the two levels. Superconducting platforms, which are naturally described with bosonic variables, can be restricted to a two-level subspace by tuning their parameters so that "leakage" out of the two lowest levels--called _computational subspace_--is very improbable. The two levels of the computational subspace are usually denoted as \(\left|0\right\rangle\) and \(\left|1\right\rangle\). Hence, the wavefunction of a single two-level system, or _qubit_, is, in general, the superposition:
\[\left|\psi\right\rangle=a_{0}\left|0\right\rangle+a_{1}\left|1\right\rangle, \tag{3.6}\]
with \(a_{i}\in\mathbb{C}\) and \(\left|a_{0}\right|^{2}+\left|a_{1}\right|^{2}=1\). More generally, a \(n\)-qubit wavefunction \(\left|\Psi\right\rangle\) is the superposition of \(2^{n}\)_computational basis states_\(\left|00\ldots 0\right\rangle,\left|00\ldots 01\right\rangle,\ldots\left|11 \ldots 1\right\rangle\). We see that all states can be written as \(\left|b_{n-1},\cdots,b_{0}\right\rangle=\bigotimes_{i=0}^{n-1}\left|b_{i}\right\rangle\) where \(\left|b_{i}\right\rangle=\left|0_{i}\right\rangle\) or \(\left|1_{i}\right\rangle\) refers to the state of the \(i^{\text{th}}\) qubit. Below, we will use the same convention as in Eq. (3.4), and operators that act on this qubit will be labeled by \(i\) like, for instance, the Pauli operators \((X_{i},Y_{i},Z_{i})\). Each state \(\left|b_{n-1},\cdots,b_{0}\right\rangle\) can also be labeled by a single integer \(k=\sum_{i}b_{i}2^{i}\).
_Manipulating qubits: gates_ Criterion (ii) ensures that one can reach any state of this Hilbert space using operations called quantum gates. Mathematically, these gates are unitary operations \(U\) acting on the wavefunction \(\left|\Psi\right\rangle\): \(\left|\Psi^{\prime}\right\rangle=U\left|\Psi\right\rangle\). Such operations are performed by letting the system evolve under a given Hamiltonian. For instance, let us consider an \(n\)-qubit system described by the (non-interacting) Hamiltonian
\[H=\sum_{i=1}^{n}H_{\text{loc}}^{(i)}, \tag{3.7}\]
with \(H_{\text{loc}}^{(i)}\) defined in Eq. (3.3). \(\omega_{0}^{(i)}\) is the \(i^{\text{th}}\) qubit's frequency (energy difference between the two levels), and \(\omega_{c}^{(i)}\) is the drive frequency. \(\delta_{i}(t)\), \(\Omega_{i}(t)\) and \(\phi_{i}(t)\) in Eq. (3.3) are controllable fields. If one switches off all but the \(i^{\text{th}}\) qubit's field, goes to the frame rotating at frequency \(\omega_{0}^{(i)}\), drives at resonance (namely \(\omega_{c}^{(i)}=\omega_{0}^{(i)}\)), and neglects terms oscillating at \(2\omega_{0}^{(i)}\) (the so-called _rotating wave approximation_), the Hamiltonian reads, up to terms acting on the other qubits, as
\[H_{\omega_{0}^{(i)}}=-\frac{\delta_{i}(t)}{2}Z_{i}+\frac{\Omega_{i}(t)}{2} \left[\cos(\phi_{i})X_{i}+\sin(\phi_{i})Y_{i}\right]. \tag{3.8}\]
If we turn off the Rabi term \(\Omega_{i}\), solving the Schrödinger equation yields a wavefunction \(\left|\psi(t)\right\rangle=R_{z}^{(i)}(\theta(t))\left|\psi(0)\right\rangle\) with \(R_{z}^{(i)}(\theta)\equiv e^{-i\frac{\theta}{2}Z_{i}}\) and \(\theta(t)=-\int_{0}^{t}\delta_{i}(\tau)d\tau\). Thus, we have operated a rotation of angle \(\theta(t)\) around the \(z\) axis for the \(i^{\text{th}}\) qubit. Similarly, if we turn off the "detuning" term \(\delta_{i}\), we effect a rotation around the \(x\) axis (\(\phi=0\)) or the \(y\) axis (\(\phi=\pi/2\)).
Such a time evolution is illustrated in Fig. 3.2, using the standard Bloch sphere representation [9]. We show the evolution of a one-qubit state under a Rabi drive \(\Omega(t)\) and a detuning drive \(\delta(t)\) (see Fig. 3.1), with \(\phi=0\) (green trajectory). The area under the Rabi curve \(-\Omega(t)\) is chosen to effect a \(-\pi/2\) rotation, but as a consequence of the detuning being not strictly zero, a small \(z\)-axis rotation is effected in addition to the \(x\)-rotation. We will explain what happens when the qubit is affected by decoherence (red trajectory) in a later section (section 8).
To produce entanglement, we need a Hamiltonian with interacting spins. For instance, if we can switch on a term \(JZ_{i}Z_{j}\) in Hamiltonian (3.7) (a term similar to the van der Waals term in Eq. (3.4)), we can perform operations of the type \(e^{-i\frac{\theta}{2}Z_{i}Z_{j}}\). Such operations can create entanglement between qubits \(i\) and \(j\).
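Both ingredients, single-qubit rotations generated by the detuning term and entanglement generated by a \(Z_{i}Z_{j}\) coupling, can be verified with a few lines of linear algebra. The values of \(\delta\), \(t\), and \(\theta\) below are illustrative, and the sketch is a toy check rather than a simulation of any specific platform.

```python
import numpy as np
from scipy.linalg import expm

Z = np.diag([1.0, -1.0]).astype(complex)

# 1) Free evolution under H = -(delta/2) Z for a time t equals R_z(theta)
#    with theta = -delta * t (delta and t are illustrative values).
delta, t = 0.7, 1.3
U_free = expm(-1j * (-delta / 2) * Z * t)
R_z = expm(-1j * (-delta * t) / 2 * Z)
print(np.allclose(U_free, R_z))                          # True

# 2) Evolving |+>|+> under a Z_i Z_j coupling for theta = pi/2 produces a
#    maximally entangled state: the reduced state of one qubit has purity 1/2.
plus = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
psi = expm(-1j * (np.pi / 2) / 2 * np.kron(Z, Z)) @ np.kron(plus, plus)
M = psi.reshape(2, 2)                                    # amplitudes psi[a, b]
rho_A = M @ M.conj().T                                   # reduced density matrix of one qubit
print(np.trace(rho_A @ rho_A).real)                      # 0.5 -> maximal entanglement
```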
_Universal quantum gates_ It turns out that, with the one-qubit rotations \(R_{x}\left(\theta\right)\), \(R_{y}\left(\theta\right)\), and \(R_{z}\left(\theta\right)\) presented in Table 1, together with, for instance, a two-qubit gate called "CNOT" presented in Table 2, one can achieve any unitary operation \(U\) acting on \(n\) qubits as a finite sequence of these gates (a result known as the Solovay-Kitaev theorem [9; 119]). Therefore, one calls this gate set a _universal gate set_. The Clifford group (see section 2.3) can become universal if a \(T\) gate is added (with \(T=P\left(\pi/4\right)\), see Table 1). Below, we briefly discuss the fundamentals of digital quantum computation; for more advanced considerations, we refer to different textbooks [9; 120; 121; 122].
Figure 3.1: Analog computation (a) vs. digital computation (b). In analog computation, one directly specifies the (analog) parameters (here \(\Omega(t)\) and \(\delta(t)\), see, e.g., (3.3), (3.4), or (3.5)). These controls are not necessarily local (as in (3.4) or (3.5)). In digital computations, the user discretely describes the sought-after evolution with quantum gates, which are usually local, i.e., act only on a few qubits (lines in the diagram). Internally, each gate is performed using an analog description.
_Quantum circuits_ A standard digital quantum computation is a sequence of simple manipulations of the system's Hamiltonian, each described as a quantum gate. The computation usually starts from an initial state corresponding to all qubits in state \(|0\rangle\). In other words, the "quantum register" is in state \(|0\rangle^{\otimes n}\). One then applies gates \(U_{1},U_{2},\ldots,U_{m}\). This sequence of gates is usually represented as a so-called _quantum circuit_, where each line stands for a qubit (time flowing from left to right) and each gate is pictured by a symbol that acts only on a subset of these lines. Tables 1 and 2 respectively give examples of the most common gates acting on one or two qubits.
_Quantum measurements_ After applying the gates, the register is in its final state \(|\Psi\rangle=U_{m}U_{m-1}\cdots U_{1}|0\rangle^{\otimes n}\) and one can measure some observable. In most platforms, one can only measure the observable \(Z_{i}\) (or a tensor product \(Z_{i_{1}}\otimes\cdots\otimes Z_{i_{k}}\) if one "measures" \(k\) qubits). One can translate a measurement in the \(X\) or \(Y\) basis into a measurement in the \(Z\) basis using the insertion of one or two gates before the measurement, as pictured in Table 3.
The outcome of a measurement of \(Z_{i_{1}}\otimes\cdots\otimes Z_{i_{k}}\) is a bitstring \(b_{i_{1}}\ldots b_{i_{k}}\), with \(b_{i}\in\{0,1\}\). This bitstring is obtained with a probability given by Born's rule,
\[p(b_{i_{1}}\ldots b_{i_{k}})=\langle\Psi|P_{b_{i_{1}}\ldots b_{i_{k}}}|\Psi\rangle, \tag{3.9}\]
with \(P_{b_{i_{1}}\ldots b_{i_{k}}}=|b_{i_{1}}\rangle\langle b_{i_{1}}|\otimes \cdots\otimes|b_{i_{k}}\rangle\langle b_{i_{k}}|\) (we do not explicitly write identities for qubits that are not being measured). The measurement projects the register to the state \(P_{b_{i_{1}}\ldots b_{i_{k}}}|\Psi\rangle/\sqrt{p(b_{i_{1}}\ldots b_{i_{k}})}\), so that if
\begin{table}
\begin{tabular}{|c c|c c|c c|} \hline
**Name** & **Matrix** & **Name** & **Matrix** & **Name** & **Matrix** \\ \hline
X & \(\left[\begin{matrix}0&1\\ 1&0\end{matrix}\right]\) & Y & \(\left[\begin{matrix}0&-i\\ i&0\end{matrix}\right]\) & Z & \(\left[\begin{matrix}1&0\\ 0&-1\end{matrix}\right]\) \\ \hline
Hadamard \(H\) & \(\frac{1}{\sqrt{2}}\left[\begin{matrix}1&1\\ 1&-1\end{matrix}\right]\) & Phase \(P\left(\varphi\right)\) & \(\left[\begin{matrix}1&0\\ 0&e^{i\varphi}\end{matrix}\right]\) & Universal \(U\left(\theta,\phi,\lambda\right)\) & \(\left[\begin{matrix}\cos\left(\frac{\theta}{2}\right)&-e^{i\lambda}\sin\left(\frac{\theta}{2}\right)\\ e^{i\phi}\sin\left(\frac{\theta}{2}\right)&e^{i\left(\phi+\lambda\right)}\cos\left(\frac{\theta}{2}\right)\end{matrix}\right]\) \\ \hline
X-rotation \(R_{x}\left(\theta\right)\) & \(\left[\begin{matrix}\cos\left(\frac{\theta}{2}\right)&-i\sin\left(\frac{\theta}{2}\right)\\ -i\sin\left(\frac{\theta}{2}\right)&\cos\left(\frac{\theta}{2}\right)\end{matrix}\right]\) & Y-rotation \(R_{y}\left(\theta\right)\) & \(\left[\begin{matrix}\cos\left(\frac{\theta}{2}\right)&-\sin\left(\frac{\theta}{2}\right)\\ \sin\left(\frac{\theta}{2}\right)&\cos\left(\frac{\theta}{2}\right)\end{matrix}\right]\) & Z-rotation \(R_{z}\left(\theta\right)\) & \(\left[\begin{matrix}e^{-i\frac{\theta}{2}}&0\\ 0&e^{i\frac{\theta}{2}}\end{matrix}\right]\) \\ \hline
\end{tabular}
\end{table}
Table 1: Summary of some standard single-qubit quantum gates. The Clifford group mentioned in section 2.3 can be generated using the \(H\), \(S=P\left(\pi/2\right)\) and CNOT gates (see Table 2); the phase gate \(P\left(\varphi\right)\) is Clifford only for \(\varphi=\pm\pi/2\). Table adapted from [117].
\begin{table}
\begin{tabular}{|c|c|} \hline
**Name** & **Matrix** \\ \hline CNOT & \(\left[\begin{matrix}1&0&0&0\\ 0&1&0&0\\ 0&0&0&1\\ 0&0&1&0\end{matrix}\right]\) \\ \hline
\end{tabular}
\end{table}
Table 2: Example of a standard two-qubit gate: the CNOT (controlled-NOT) gate, written in the basis \(\{|00\rangle,|01\rangle,|10\rangle,|11\rangle\}\) with the first qubit as control.
one wants to measure another observable that does not commute with \(Z_{i_{1}}\otimes\cdots\otimes Z_{i_{k}}\), one needs to rerun the circuit.
The estimation of the expectation value of an observable (hermitian operator), like \(\langle O\rangle=\langle\Psi|\,O\,|\Psi\rangle\), is typically done by measuring the given observable a number \(n_{\text{shots}}\) of times, resulting in values \(\{o_{k}\}_{k=1,n_{\text{shots}}}\) that can be averaged to yield the estimator
\[\overline{O}=\frac{1}{n_{\text{shots}}}\sum_{k=1}^{n_{\text{shots}}}o_{k}. \tag{3.10}\]
In the limit \(n_{\text{shots}}\rightarrow\infty\), this estimate converges to \(\langle O\rangle\). Due to the central limit theorem, \(\mathcal{O}(1/\varepsilon^{2})\) samples are needed to reach an accuracy \(\varepsilon\).
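The \(\mathcal{O}(1/\varepsilon^{2})\) sampling cost can be illustrated by simulating projective \(Z\) measurements on a single-qubit state; the angle below is an arbitrary example value.

```python
import numpy as np

# Sketch: the statistical error on the estimator of <Z> decreases as 1/sqrt(n_shots),
# i.e., O(1/eps^2) shots are needed for an accuracy eps. The angle a is arbitrary.
rng = np.random.default_rng(0)
a = 0.4
p0 = np.cos(a) ** 2                       # Born probability of outcome 0 for cos(a)|0> + sin(a)|1>
exact = 2 * p0 - 1                        # <Z> = p(0) - p(1)
for n_shots in (10**2, 10**4, 10**6):
    outcomes = rng.random(n_shots) < p0   # True whenever |0> is measured
    estimate = 2 * outcomes.mean() - 1
    print(n_shots, abs(estimate - exact))
```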
Quantum circuits can also be used to compute the average value of any unitary operator, as shown in Table 4. This table also shows how to measure time-dependent correlation functions of the form \(\langle P_{k}(t)P_{l}(t^{\prime})\rangle\) (using fermion-spin transforms, this type of circuit can be used to compute fermionic correlation functions).
_DiVincenzo criteria_ The principles introduced above have been gathered in a list of five criteria known as the DiVincenzo criteria [123]:
1. The ability to work with a scalable number of two-level systems (qubits) without "leakage" out of the computational subspace.
2. The capacity to initialize and reset qubits in a reliable (usually fast enough) fashion.
3. A long coherence time (compared to the typical time scales of gates, measurements, and resets, i.e., compared to the "clock time" of the processor).
4. A universal set of gates (with the possibility to parallelize operations on disjoint sets of qubits).
5. Reliable qubit-wise measurements.
Current quantum processors are strongly impacted by decoherence effects, as will be made explicit in section 8. As a result, they do not meet all five criteria. However, they help develop and test algorithms in real-life conditions.
Before reviewing these algorithms, we describe how many-body problems can be translated to forms amenable to quantum computing.
## 4 Mapping a many-body problem to a quantum computer
This section explains how to go from the many-body problem at hand to the one that the quantum computer models. In particular, if we focus on digital (gate-based) quantum platforms, such devices usually have constraints: (i) they have qubits (two-level systems), not fermions/bosons (section 4.1), (ii) they have a limited number of qubits (section 4.2), and (iii) they have a limited coherence (section 8.1). Thus, one needs to transform, reduce, and/or map the original problem so that the quantum computer can give insights into the properties of the original many-body problem.
### From fermions to qubits
The treatment of fermions on a quantum computer can be done starting either from first or from second quantization [17]. Here we focus on the latter. To perform any Hamiltonian simulation written in second-quantized form on a quantum computer, one must
Figure 3.2: Bloch sphere with the North and South pole corresponding to \(\ket{0}\) and \(\ket{1}\), respectively. The other poles shown in the figure are decomposed in terms of \(\ket{0}\) and \(\ket{1}\) as \(\ket{+}=\frac{1}{\sqrt{2}}\left(\ket{0}+\ket{1}\right)\) and \(\ket{i+}=\frac{1}{\sqrt{2}}\left(\ket{0}+i\ket{1}\right)\). The green and red trajectories represent the evolution of a one-qubit state, starting from \(\ket{0}\), when subject to a \(R_{x}(-\pi/2)\) rotation. Green curve: noiseless (pure state) evolution. This evolution approximately realizes a \(R_{x}(-\pi/2)\) rotation (it would be exactly such a rotation if the detuning drive \(\delta(t)\) were rigorously 0). Red curve: noisy evolution under dephasing and relaxation noise, see section 8.
\begin{table}
\begin{tabular}{|c|c|} \hline
**Measurement** & **Conversion to measurement in the Z-basis** \\ \hline \(X\)-basis & apply \(H\), then measure \(Z\) \\ \hline \(Y\)-basis & apply \(S^{\dagger}\) followed by \(H\), then measure \(Z\) \\ \hline
\end{tabular}
\end{table}
Table 3: Conversion of measurements in \(X\) or \(Y\) bases into measurements in the \(Z\) basis.
map the fermion Hamiltonian to a spin Hamiltonian. This mapping is not unique. The most standard mapping techniques are the Jordan-Wigner (JW) transformation [124], the Bravyi-Kitaev (BK) transformation [125], and the parity mapping [126] (for a comprehensive discussion see [17; 127; 128]). We illustrate the JW case, which is often retained for many-body applications due to its relative simplicity.
Let us consider a set of fermions associated with the creation/annihilation operators \((a_{p}^{\dagger},a_{p})\) where \(p\) labels a complete basis of single-particle states \(\phi_{p}(r)\). These operators act on the many-body vacuum by changing the occupation of orbital \(p\). In the JW fermion-to-qubit mapping, the occupation (resp. vacancy) of a state is usually encoded as the state \(|1\rangle_{p}\) (resp. \(|0\rangle_{p}\)). Then, the operator \(Q_{p}^{+}=|1\rangle_{p}\langle 0|_{p}=\frac{1}{2}\left(X_{p}-iY_{p}\right)\) and its hermitian conjugate \(Q_{p}^{-}\) can be seen as the qubit equivalents of the creation/annihilation operators. The difficulty is that these operators commute with each other for different qubits, while fermionic operators anticommute. One solution to this issue is to choose a specific ordering for the one-to-one correspondence between the single-particle states and the qubits and use the following prescription:
\[a_{p}^{\dagger}\longleftrightarrow\bigotimes_{k=1}^{p-1}Z_{k}\otimes Q_{p}^{ +},\ a_{p}\longleftrightarrow\bigotimes_{k=1}^{p-1}Z_{k}\otimes Q_{p}^{-}. \tag{4.1}\]
In this transformation, the fermionic sign (which comes from the anticommutation rules of fermions) is kept track of via the string \(\bigotimes_{k=1}^{p-1}Z_{k}\) of Pauli-\(Z\) operators. At the circuit level, this means that operations that are one-body at the fermionic level (like \(\exp(-i\{a_{0}^{\dagger}a_{1}+\text{h.c.}\})\)) might become multi-qubit operations. For instance, \(a_{3}^{\dagger}a_{1}\) leads to a term \(Q_{3}^{+}Z_{2}Q_{1}^{-}\) that acts on the three qubits \((1,2,3)\).
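The prescription (4.1) can be checked numerically for a small number of modes by building the qubit operators as explicit matrices and verifying that the fermionic anticommutation relations are recovered. The sketch below uses a zero-based mode ordering with qubit 0 on the leftmost tensor factor (an arbitrary convention chosen for illustration).

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

def jw_creation(p, n):
    """Jordan-Wigner image of a_p^dagger on n modes (qubit 0 on the leftmost factor)."""
    Qp = (X - 1j * Y) / 2                          # |1><0| acting on qubit p
    return reduce(np.kron, [Z] * p + [Qp] + [I2] * (n - p - 1))

n = 4
a1 = jw_creation(1, n).conj().T                    # annihilation operator a_1
a3 = jw_creation(3, n).conj().T                    # annihilation operator a_3
print(np.allclose(a1 @ a3 + a3 @ a1, 0))           # {a_1, a_3} = 0
print(np.allclose(a1 @ a1.conj().T + a1.conj().T @ a1, np.eye(2**n)))  # {a_1, a_1^dag} = 1
```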
There is a large number of qubit representations \(\tilde{H}\) corresponding to one fermionic representation \(H\). The number \(\tilde{n}\) of qubits may not be identical to the number \(n\) of fermionic modes. Furthermore, the locality \(d\) (a \(d\)-local Hamiltonian can be expressed as the sum of Hamiltonian terms acting upon at most \(d\) qubits) is usually not conserved upon encoding. For instance, in the Jordan-Wigner mapping, \(\tilde{n}=n\) and \(\tilde{d}=O(\tilde{n})\), while another transform called the Bravyi-Kitaev transformation, which also has \(\tilde{n}=n\), achieves a better locality, namely \(\tilde{d}=O(\log_{2}\tilde{n})\). Another example is the so-called superfast fermionic encoding [129], which requires \(\tilde{n}=O(nd)\) qubits and achieves \(\tilde{d}=O(d)\).
### Reducing the number of degrees of freedom
NISQ devices come with a limited number of qubits and limited coherence. These constraints limit the number of degrees of freedom (typically orbitals) of the system one wants to study. Nevertheless, many-body condensed matter, quantum chemistry, or nuclear problems typically comprise tens or hundreds of orbitals. Directly tackling these large model spaces with a quantum processor appears unfeasible, if not ill-advised. Indeed, over the last century, numerous classical many-body methods have been devised to reduce the number of truly correlated--sometimes called "ultra quantum"--degrees of freedom. These classical methods then rely on advanced algorithms to solve the "reduced model". Despite its reduced complexity, this model is usually
\begin{table}
\begin{tabular}{|c|c|} \hline
**Circuit \(\mathcal{C}\)** & **Calculated overlap** \\ \hline Hadamard-test circuit (ancillary qubit with \(H\) and \(P\left(\varphi\right)\), controlled-\(U\)) & \(\langle\Psi|U|\Psi\rangle=\langle Z_{0}\rangle_{\mathcal{C}\left(\varphi=0\right)}+i\langle Z_{0}\rangle_{\mathcal{C}\left(\varphi=\pi/2\right)}\) \\ \hline Two-time interferometry circuit & \(\langle\Psi|U^{\dagger}\left(t,0\right)P_{k}U\left(t,t^{\prime}\right)P_{l}U\left(t^{\prime},0\right)|\Psi\rangle=\langle Z_{0}\rangle_{\mathcal{C}}+i\langle Y_{0}\rangle_{\mathcal{C}}\) \\ \hline
\end{tabular}
\end{table}
Table 4: Here we present some standard interferometry circuits which allow computing overlaps by making measurements on an ancillary qubit. The first circuit evaluates quantities of the form \(\langle\Psi|U|\Psi\rangle\) where \(UU^{\dagger}=\mathbb{I}\), e.g., the overlap between a state and its time-evolved counterpart. The second circuit enables to retrieve two-time correlators of the form \(\langle\Psi|A(t)B(t^{\prime})|\Psi\rangle\), with \(A(t)=U^{\dagger}(t,0)P_{k}U(t,0)\) and \(B(t^{\prime})=U^{\dagger}(t^{\prime},0)P_{l}U(t^{\prime},0)\) (where \(P_{i}\) denotes a Pauli operator). On the right, \(\langle O_{0}\rangle_{\mathcal{C}}\) indicates that the ancillary qubit (labeled by convention as the ”0th” qubit) is to be measured in the \(O\in\{X,Y,Z\}\) basis after the execution of the circuit \(\mathcal{C}\) drawn on the left.
hard to tackle in some physically relevant regimes due to its strongly-correlated character. This is where quantum coprocessors could be used to extend the power of classical algorithms. We highlight below a few classical-inspired methods that were used to reduce the number of qubits for many-body systems treated on quantum computers. These methods are illustrated in Fig. 4.1.
#### Embedding methods
Typical condensed-matter problems are formulated on a lattice of atomic sites (e.g., the Hubbard model introduced in Eq. (2.2)). The number of sites needed to observe collective phenomena like phase transitions typically exceeds the capacity of exact diagonalization or Monte-Carlo methods. Classical methods collected under the term _embedding methods_ have been developed to overcome this limitation. They draw inspiration from mean-field methods in that they self-consistently map the original, extended problem onto a smaller, more local many-body problem (sometimes called _fragment_) "embedded" in a (usually) non-interacting environment (also called _bath_). One can then leverage the fact that this embedded problem has fewer correlated degrees of freedom to tackle it with classical or, if need be, quantum methods [127]. An illustration of these methods is provided in Figure 4.1 (a).
Examples of embedding methods include, but are not limited to, Dynamical Mean Field Theory (DMFT)[130], the Gutzwiller or Rotationally-Invariant Slave Boson (RISB)[131] method, and the Density-Matrix Embedding Theory (DMET)[132] method.
Generically, the embedded problem has the form:
\[H =\sum_{i=1}^{N_{c}}Un_{i\uparrow}^{c}n_{i\downarrow}^{c}-\mu\sum_ {i=1}^{N_{c}}\sum_{\sigma}n_{i\sigma}^{c} \tag{4.2}\] \[+\sum_{p=1}^{N_{b}}\sum_{i=1}^{N_{c}}\sum_{\sigma}\left(V_{p}c_{i \sigma}^{\dagger}a_{p\sigma}+\mathrm{h.c}\right)+\sum_{p=1}^{N_{b}}\sum_{ \sigma}\varepsilon_{p}a_{p\sigma}^{\dagger}a_{p\sigma}, \tag{4.3}\]
where \(a_{p\sigma}^{\dagger}\) creates electrons in the bath (of size \(N_{b}\)), while \(c_{i\sigma}^{\dagger}\) creates electrons in the correlated orbitals (\(n_{i\sigma}^{c}=c_{i\sigma}^{\dagger}c_{i\sigma}\)). Compared to the Hubbard model, Eq. (4.3) has fewer (\(N_{c}\)) interacting sites (the \(N_{b}\) bath sites are uncorrelated). However, it is still a complicated many-body problem. Typically, \(N_{c}\) is adapted to the spatial resolution one wants. For regimes with considerable correlation lengths, it can exceed the reach of advanced Monte-Carlo methods.
The embedding methods mentioned above differ by the number of bath sites, the observables that need to be computed (generally, Green's functions on the impurity or reduced density matrices), and the way the self-consistent parameters (\(\varepsilon_{p}\) and \(V_{p}\)) are updated. For instance, within RISB and DMET, \(N_{b}=N_{c}\); this results in the embedded model being much smaller than the original Hubbard model.
Recent works have used quantum processors to tackle the embedded model within an embedding method [133; 134; 135; 136; 137; 138]. They are limited so far to small sizes (\(N_{c}\leq 2\)) due to NISQ limitations. Until now, classical methods still outperform quantum methods in solving these problems.
#### Active space methods
In quantum chemistry, like in condensed matter or nuclear physics problems, the number of degrees of freedom can be reduced to the genuinely complicated degrees of freedom. These are usually called active orbitals. Instead of handling all orbitals at the same level of theory, orbitals are divided into active ones--which require an advanced many-body method--and inactive ones--for which mean-field (Hartree-Fock) methods will be sufficient. The active space selection can be based on the occupation level of molecular orbitals (the orbitals resulting from a Hartree-Fock optimization). Empty and occupied orbitals (with occupation numbers close to 0 or 1, respectively) are inactive, while partially-filled orbitals are considered active. The active space size is adjusted according to the sought-after
Figure 4.1: Embedding methods (a) and active-space selection (b). In embedding methods, an extended (lattice) model is self-consistently mapped to a local (impurity or embedded) model. In active-space methods, a subset of orbitals (usually partially filled ones) is selected to construct the active space Hamiltonian, while the other orbitals are treated at the mean-field (Hartree-Fock) level. Due to their smaller size, the embedded or active-space models are better suited for a solution with today’s quantum computers.
accuracy, available computational capacity, or both. The so-obtained active space Hamiltonian has the same form as the original Hamiltonian, which is given by Eq. (2.3). However, it usually has a much smaller number of orbitals: \(N\) is reduced to \(N_{a}\). Then, with a classical computer, one can tackle this reduced problem with advanced methods like FCI (if \(N_{a}\) is very small) or CC otherwise. With a quantum coprocessor, the reduction from \(N\) to \(N_{a}\) orbitals directly translates, via fermion-spin transforms (see section 4.1), to a reduced number of required qubits (namely \(N_{a}\)). An illustration of these methods is provided in Figure 4.1 (b).
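As a toy illustration of this selection rule, the sketch below flags as active those orbitals whose (hypothetical) occupation numbers are neither close to 0 nor close to 1; the occupation values and the threshold are purely illustrative.

```python
import numpy as np

# Sketch: select active orbitals from (hypothetical) occupation numbers.
# Orbitals with occupation close to 0 or 1 are frozen; the rest are active.
occupations = np.array([1.00, 0.99, 0.97, 0.62, 0.41, 0.03, 0.01, 0.00])
threshold = 0.05                                   # illustrative cutoff
active = np.where((occupations > threshold) & (occupations < 1 - threshold))[0]
print("active orbitals:", active.tolist(),
      "-> qubits after a fermion-to-qubit mapping:", len(active))
```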
Many recent works use this active space selection to reduce the number of required qubits, see, e.g., [139], or to explore the resource requirements on future quantum computers [140; 141].
## 5 Ideal algorithms
Here, we discuss some textbook methods to simulate a quantum system and solve the eigenvalue problems on a quantum computer [9].
### Quantum Phase Estimation for the eigenvalue problem
_Description_ Quantum Phase Estimation (QPE), also called Phase Estimation Algorithm, is a generic method to shed light on the spectrum of a unitary operator with a quantum computer [9; 122]. It is already well documented, and we only give here the key ingredients of the approach.
Suppose that the operator \(U\) has a set of eigenvalues \(\{e^{2\pi i\phi^{\alpha}}\}\). We assume that for all \(\alpha\), we have \(0\leq\phi^{\alpha}<1\) and denote by \(\{|\alpha\rangle\}\) the corresponding eigenvectors. The system's initial state can be decomposed as:
\[|\Psi\rangle=\sum_{\alpha}c_{\alpha}|\alpha\rangle. \tag{5.1}\]
The QPE method consists of the following schematic sequence:
\[|\Psi\rangle\xrightarrow{\mathrm{QPE}}\sum_{\alpha}c_{\alpha}|\alpha\rangle \otimes|\widetilde{\phi}^{\alpha}\rangle\xrightarrow{\mathrm{Measure}}| \alpha\rangle\otimes|\widetilde{\phi}^{\alpha}\rangle, \tag{5.2}\]
where the bitstring \(\widetilde{\phi}^{\alpha}=\phi_{n_{a}-1}^{\alpha}\ldots\phi_{0}^{\alpha}\) (\(\phi_{i}^{\alpha}\in\{0,1\}\) being the result of the measurement of the ancillary qubit \(i\)) encodes an estimation of the phase \(\phi^{\alpha}\) as the binary fraction \(0.\phi_{0}^{\alpha}\cdots\phi_{n_{a}-1}^{\alpha}=\sum\limits_{j=0}^{n_{a}-1}\frac{\phi_{j}^{\alpha}}{2^{j+1}}\). We will use the notation \(0.\widetilde{\phi}^{\alpha}\) for this number, although the order of the bits is reversed. A schematic view of the QPE circuit is shown in Fig. 5.1.
The effect of QPE is twofold: the initial state is projected into one eigenstate (or a set of degenerate states) having non-vanishing overlap with the initial state, and the associated eigenvalue is retrieved with a precision \(|\phi^{\alpha}-0.\widetilde{\phi}^{\alpha}|\leq 1/2^{n_{a}}\), which improves exponentially with \(n_{a}\). Note that the projection is only approximate unless the binary fraction of \(\phi^{\alpha}\) is finite and the number \(n_{a}\) is sufficient to have \(\phi^{\alpha}=0.\widetilde{\phi}^{\alpha}\). This projection effect was used recently in a many-body system to restore broken symmetries (see [80; 89]).
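The \(1/2^{n_{a}}\) precision of the binary-fraction readout can be checked directly; the phase value below is an arbitrary example.

```python
import numpy as np

# Sketch: an n_a-bit binary fraction 0.phi_0 phi_1 ... approximates a phase phi
# in [0, 1) with an error bounded by 1/2^{n_a}. The phase value is arbitrary.
phi = 0.3141592653589793
for n_a in (4, 6, 8):
    k = int(np.floor(phi * 2**n_a))                       # best n_a-bit approximation from below
    bits = [int(b) for b in np.binary_repr(k, width=n_a)]
    approx = sum(b / 2**(j + 1) for j, b in enumerate(bits))
    print(n_a, phi - approx, "<=", 1 / 2**n_a)
```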
For many-body problems, QPE can be seen as a gold standard to solve the eigenvalue problem for the Hamiltonian \(H\) in a large Hilbert space. In this case, the operator \(U\) can be chosen as the propagator itself, with
\[U(\tau)=e^{-2\pi i\tau(H-E_{0})}, \tag{5.3}\]
where \(\tau\) and \(E_{0}\) are parameters chosen to map the spectrum of \(H\) into elements of \([0,1[\). The circuit to prepare the unitary \(U\) starting from a given \(H\) is obtained via trotterization, a method summarized in section 5.2.
_Discussion: strengths and weaknesses_ A key feature of QPE is that with a circuit of depth \(2^{n_{a}}\), one finds the phase associated with the eigenstate \(|\alpha\rangle\) with accuracy \(1/2^{n_{a}}\) with probability \(O(|\langle\Psi|\alpha\rangle|^{2})\). In other words, to reach an accuracy \(\epsilon\), the computational cost with QPE scales as \(1/\epsilon\,\mathrm{poly}(1/|\langle\Psi|\alpha\rangle|)\).
This scaling illustrates the advantages and shortcomings of "perfect" QPE (i.e., performed on a fault-tolerant quantum computer). On the flip side, it means that QPE requires an input state that reasonably overlaps the actual eigenstate. One could, for instance, resort to _adiabatic state preparation_[142] to rotate some simple initial state into a state exhibiting a significant
Figure 5.1: Illustration of the QPE circuit used to get the eigenvalues of an arbitrary unitary operator \(U\). The QPE circuit uses the inverse Quantum Fourier Transform (QFT\({}^{-1}\)) [9]. The QPE requires \(n_{a}\) additional ancillary qubits. The precision of the method will directly depend on \(n_{a}\).
overlap with the eigenstate of interest. This requirement over the overlap becomes problematic in high dimension for many-body problems due to the _orthogonality catastrophe_ (see, e.g., [111]). Note that this initial state problem can be cleverly handled in some cases. A case in point is Shor's factoring algorithm [104], which uses QPE as the main ingredient after a clever state preparation, providing an exponential speedup over the classical version of the factoring algorithm.
On the bright side, the \(1/\epsilon\) scaling is much better than the \(1/\epsilon^{2}\) typical of classical Monte-Carlo methods. This advantage is used in "quantum Monte-Carlo" methods. These techniques use QPE as a critical building block within another algorithm called quantum amplitude estimation [143]. We point out that these techniques should not be confused with the many-body quantum Monte-Carlo methods, which refer to purely classical methods to solve quantum many-body problems. We also note in passing that recent proposals have been made to hybridize "classical" quantum Monte-Carlo methods such as Auxiliary-Field Quantum Monte-Carlo (AFQMC) or Full-Configuration-Interaction-Quantum-Monte-Carlo (FCIQMC) with quantum algorithms [144; 145; 146].
On NISQ processors, QPE is hardly applicable owing to the number of operations required (whether the previously mentioned \(2^{n_{a}}\) circuit depth or the operations of the QFT). Estimates for chemical problems yield tremendous gate counts [147], incompatible with the number of qubits and error rates of current and near-term machines.
Today, intensive efforts are being made to provide less costly methods for Hamiltonian eigenvalue problems. Some of them will be further reviewed in section 6.
### Trotterization
_Description_ Trotterization is a technique to implement a time evolution \(U=e^{-iHt}\) on digital quantum hardware [148], i.e., as a sequence of few-qubit gates.
Most many-body Hamiltonians are \(k\)-local, meaning they can be decomposed as a sum of terms acting on at most \(k\) qubits:
\[H=\sum_{j=1}^{m}\lambda_{j}P_{j}, \tag{5.4}\]
with \(P_{j}\) a product of at most \(k\) Pauli operators.
Trotterization approximates the exponential of a sum \(e^{-iHt}\) as the product of the individual terms. The so-called first-order _Trotter-Suzuki_ formula [148] reads
\[e^{-iHt}=\left(\prod_{j=1}^{m}e^{-i\lambda_{j}P_{j}\frac{t}{n_{t}}}\right)^{n_ {t}}+O\left(\frac{m^{2}t^{2}}{n_{t}}\right) \tag{5.5}\]
where \(n_{t}\) is the number of _Trotter steps_. The rationale behind formula (5.5) is that the whole Hamiltonian evolution is carried out in the form of repeated sequences of step-wise evolutions \(e^{-i\lambda_{j}P_{j}\frac{t}{n_{t}}}\). Each such evolution can be simplified as a sequence of one- and two-qubit gates.
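The first-order Trotter error can be observed on the smallest possible example, two non-commuting single-qubit terms; the coefficients and evolution time below are illustrative.

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

# Sketch: first-order Trotter error for H = X + Z (unit coefficients, t = 1,
# both illustrative). The error with respect to the exact propagator shrinks as 1/n_t.
H, t = X + Z, 1.0
exact = expm(-1j * H * t)
for n_t in (1, 10, 100):
    step = expm(-1j * X * t / n_t) @ expm(-1j * Z * t / n_t)
    print(n_t, np.linalg.norm(np.linalg.matrix_power(step, n_t) - exact))
```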
Notably, the number \(n_{t}\) of Trotter steps must be increased when \(t\) increases: the circuit structure is not fixed with \(t\); instead, its complexity grows linearly with \(t\). In some cases, a sublinear scaling can be achieved, but this is not generally the case because of the _no fast-forwarding theorem_ [149]. In the QPE described in the previous section, this implies that a unitary operator of the form \(U^{2^{n}}=\exp(-iH2^{n}t)\) has depth \(O(2^{n})\), explaining the exponential scaling of QPE.
_Beyond standard trotterization_ The possibility of reducing the unfavorable scaling associated with the Trotter-Suzuki methods is an active field of research. Several alternative methods have been proposed: the Variational Fast-Forwarding [150; 151], Incremental Structure Learning [152], the Adaptive Product Formula [153], and the Variational Time Dependent Phase Estimation [154], to name a few.
## 6 NISQ Algorithms
Quantum algorithms like the QPE presented above require, in general, a large number of qubits or gates, or
Figure 5.2: Illustration of the QPE algorithm applied to the Hartree-Fock state \(|\psi\rangle=|00001111\rangle\) of a pairing Hamiltonian, Eq. (2.5), with one-body energies at levels \(j\), \(\varepsilon_{j}=j\Delta e\), \(g=0.5\Delta e\) and \(\Delta e=1\) [89]. \(p\) is the probability of measuring the energy value \(E\) in the ancillary register. The number of ancillary qubits \(n_{a}\) used for the QPE was 4 (a), 6 (b), and 8 (c). The vertical green and black lines correspond to the ground and excited energies of the Hamiltonian, respectively. The figure has been adapted from [89]. Note that panel (c) is shown in linear-log scale.
both. Because of the limitations of current quantum platforms in qubit and gate counts, these algorithms cannot be used in the presence of noise. Specific algorithms have been designed to circumvent those limitations and study the applicability of today's quantum processors to concrete problems. These algorithms are designed, for instance, to reduce the circuit depth by allocating only a specific task to the quantum computer or, as when using variational methods, to allow better control of the prepared states.
### Variational algorithms
Variational methods are standard tools for many-body physicists using classical computers [155]. In recent years, they have emerged as a tool of choice for applications on quantum platforms, and are an important part of the broader class of hybrid quantum-classical methods [14; 15].
#### 6.1.1 Variational Quantum Eigensolver (VQE) (and VQS [156])
The VQE algorithm, first introduced in [157], aims at finding the approximate ground-state energy and wavefunction of a Hamiltonian \(H\) by minimizing the energy over a parameterized trial space (also called _ansatz_), i.e.:
\[E_{\text{VQE}}=\min_{\boldsymbol{\theta}}\left[\left\langle\Psi\left( \boldsymbol{\theta}\right)|H|\Psi\left(\boldsymbol{\theta}\right)\right\rangle \right]\equiv\min_{\boldsymbol{\theta}}\left[E(\boldsymbol{\theta})\right], \tag{6.1}\]
where \(\boldsymbol{\theta}\equiv\{\theta_{p}\}\) is a set of parameters that defines the trial state \(|\Psi(\boldsymbol{\theta})\rangle\). In most applications, the preparation of the trial state vector is made using a unitary transformation \(U(\boldsymbol{\theta})\) of the qubit vacuum, denoted hereafter by \(|\boldsymbol{0}\rangle\equiv|0,\ldots,0\rangle\) with:
\[|\Psi(\boldsymbol{\theta})\rangle=U(\boldsymbol{\theta})|\boldsymbol{0}\rangle. \tag{6.2}\]
Some illustrative examples of trial state vectors and associated unitary transformation will be given in section 6.2.
The parameterized observable \(E(\boldsymbol{\theta})\) is further decomposed into observables that are directly measurable on the quantum device. To do so, given a fermion-to-qubit mapping (see section 4.1), one can write \(H\) as a qubit operator
\[H=\sum_{k}\alpha_{k}P_{k}, \tag{6.3}\]
where each \(P_{k}\) corresponds to a string of Pauli operators. The expectation value of each operator over the trial state can be evaluated via sampling in the computational basis (\(Z\) measurements) with negligible gate overhead (as illustrated on Table 3). \(E(\boldsymbol{\theta})\) is then obtained by classically aggregating the expectation value of each of the Pauli strings:
\[E(\boldsymbol{\theta})=\sum_{k}\alpha_{k}\langle P_{k}\rangle_{\boldsymbol{ \theta}} \tag{6.4}\]
where we use the shorthand notation \(\langle\cdot\rangle_{\boldsymbol{\theta}}\equiv\left\langle\Psi\left( \boldsymbol{\theta}\right)|.|\Psi\left(\boldsymbol{\theta}\right)\right\rangle\).
The expectation value \(\langle P_{k}\rangle\) is computed by sampling many instances of the parameterized quantum circuit to gather enough statistics to curb statistical or "shot" noise. Numerous strategies to limit the sampling overhead incurred by VQE have been put forward (like term-grouping strategies, see, e.g., [158]). Note that the associated time burden of the algorithm can also be partly alleviated by running these circuits in parallel on several quantum devices or parts of a large chip.
In VQE, the optimization of parameters is delegated to classical computers using standard optimization methods, either gradient-free (like the Nelder-Mead or COBYLA methods) or gradient-based (like gradient descent). To obtain gradients in gradient-based approaches, one can either use finite-difference methods (with the generating function of Eq. (6.28)) or compute "analytical" gradients. The computation of the first derivative needed in those methods can be done either with a Hadamard-test-like circuit or with the parameter-shift rule [159; 160; 161]. The latter is more suitable for noisy devices as it requires fewer qubits and fewer multi-qubit gates. Some optimization methods are particularly well suited in that they are more robust to the shot noise mentioned above, like the SPSA method or the "rotosolve" method [162].
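The parameter-shift rule mentioned above can be verified on a deliberately minimal example, a single \(R_{y}(\theta)\) rotation followed by a \(Z\) measurement, for which the exact derivative is known analytically.

```python
import numpy as np
from scipy.linalg import expm

# Sketch: for a gate exp(-i theta/2 P) with P^2 = 1, the parameter-shift rule gives
# dE/dtheta = [E(theta + pi/2) - E(theta - pi/2)] / 2. Here E(theta) = <Z> = cos(theta).
Z = np.diag([1.0, -1.0]).astype(complex)
Y = np.array([[0.0, -1j], [1j, 0.0]])

def energy(theta):
    psi = expm(-1j * theta / 2 * Y) @ np.array([1.0, 0.0], dtype=complex)
    return np.real(psi.conj() @ Z @ psi)

theta = 0.8
shifted = (energy(theta + np.pi / 2) - energy(theta - np.pi / 2)) / 2
print(shifted, -np.sin(theta))            # both equal the derivative -sin(theta)
```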
A schematic view of the VQE hybrid method is given in Fig. 6.1.
The implementation of the VQE algorithm on current noisy devices faces several challenges. To mention some important ones: the preparation of the initial state (in the presence of noise), the accurate estimation of expectation values (Eq. (6.1)) (despite shot noise), and the (classical) optimization of the set of parameters \(\boldsymbol{\theta}\) to find the minimum of \(E_{\text{VQE}}\). VQE's main advantage is the flexibility in the choice of the unitary transformation.
Variational methods like VQE are not limited to digital quantum processors. Elementary operations \(\{U_{R_{j}}(\theta_{j})\}\) of the "resource" Hamiltonian of an analog quantum computer can also be assembled into a parameterized ansatz circuit \(U_{R_{1}}(\theta_{1})U_{R_{2}}(\theta_{2})\cdots U_{R_{k}}(\theta_{k})\) in order to minimize the energy of the target Hamiltonian in the
final state. This method is generically called _Variational Quantum Simulation_ (VQS, [156]). It has been applied, e.g., to the Schwinger model on trapped-ion processors [156] or to simple molecules on Rydberg processors [163].
Some refinements of VQE are highlighted below.
#### 6.1.2 Advanced VQE schemes
Various refined algorithmic schemes have been proposed to either extend the scope of or overcome some limitations of plain-vanilla VQE.
_Penalty methods_ States other than the ground state, such as excited states, can also be valuable to prepare. VQE schemes with alternative cost functions have thus been proposed to enforce the exploration of specific subspaces of interest. For instance, leveraging the orthogonality of the eigenvectors of the Hamiltonian, one can prepare a sequence of eigenstates by supplementing the cost function with penalty terms proportional to the squared overlaps between the trial state \(\Psi(\mathbf{\theta})\) and the eigenstates already prepared \(\{|\Psi(\mathbf{\theta}_{j}^{*})\rangle\}\): \(\mathcal{C}(\mathbf{\theta})=E(\mathbf{\theta})+\sum_{j}\lambda_{j}|\langle\Psi(\mathbf{\theta}_{j}^{*})|\Psi(\mathbf{\theta})\rangle|^{2}\). This approach was dubbed the _Variational Quantum Deflation_ scheme [164]. More generally, suitably chosen penalty terms can focus the variational search on, e.g., a specific spin sector [165].
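A minimal numerical sketch of the deflation idea follows, assuming a toy single-qubit Hamiltonian and a real-valued \(R_{y}\) trial state (both illustrative): once the ground state is known, the penalized cost steers the optimization towards the first excited state.

```python
# Deflation sketch: C(theta) = E(theta) + lambda |<Psi_0|Psi(theta)>|^2.
# Toy single-qubit H; psi0 plays the role of the previously converged ground state.
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
H = Z + 0.5 * X
psi0 = np.linalg.eigh(H)[1][:, 0]              # "already prepared" ground state

def trial(theta):                               # |Psi(theta)> = R_y(theta)|0>
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def deflated_cost(theta, lam=10.0):
    psi = trial(theta)
    return psi @ H @ psi + lam * abs(psi0 @ psi) ** 2

thetas = np.linspace(0.0, 2 * np.pi, 721)
best = min(thetas, key=deflated_cost)
print(trial(best) @ H @ trial(best), np.linalg.eigvalsh(H)[1])  # ~ first excited energy
```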
_Subspace-search VQE_ A significant inconvenience of the penalty methods listed above is the need to evaluate inner products. Hence the proposal in [166] to leverage the preservation of inner products under unitary transformations and look for the circuit that best maps a set of orthogonal states to the Hamiltonian's eigenstates.
_ctrl-VQE_ VQE can also be applied at a more fundamental level by optimizing the control pulses that underlie quantum operations (see Fig. 3.1) [167]. This low-level optimization helps address the limited coherence time of NISQ devices.
_Orbital-rotating VQE schemes_ To use shorter circuits, it is advantageous to work in a one-particle orbital basis suited to the target state. Some VQE schemes were proposed to tailor the basis to the problem. We can distinguish two different approaches: (i) a "classical dressing" of the Hamiltonian observable with a general orbital rotation whose parameters must be determined along with the circuit's parameters (Orbital Optimized-VQE [168; 169]), and (ii) iterative basis updates: a converged variational state in the current basis (starting from the usual site-spin basis in lattice models, for instance) is leveraged to extract ground state features, setting forth advantageous updates to the single-particle basis. The terms of the Hamiltonian are transformed accordingly before a new VQE optimization is run. The latter strategy was applied in two different settings. The first one--dubbed permVQE [170]--consists in mere single-particle basis permutations guided by the form of the current converged state's _mutual information matrix_. This matrix measures the information shared by pairs of qubits. For a nearest-neighbor qubit topology, one aims at concentrating its high-magnitude coefficients around the diagonal so as to lower the count of entangling gates required. The second one--called NOization [137]--performs general basis updates aimed at iteratively reaching a specific, state-dependent basis known as the Natural Orbitals basis. This basis is associated with a compact state representation, namely the one encompassing the lowest number of computational basis states (Slater determinants). In this case, the quantity being monitored is the 1-RDM, whose eigenbasis provides the new basis for the subsequent VQE run.
_Projective Quantum Eigensolver (PQE)_ The PQE method [171] minimizes residuals (which measure the non-orthogonality of excited states to the ground-state manifold) instead of the energy, yielding accuracies on a par with VQE's while using fewer resources and with less size-dependence.
Figure 6.1: Schematic illustration of the VQE approach. A set of expectation values of Pauli strings \(\{P_{k}\}\) is obtained upon measurements on a quantum processor, while the cost function reconstruction and the parameter optimization are made through classical processing. The unitary \(U_{k}(\mathbf{\theta})\) comprises both the ansatz circuit instance \(U(\mathbf{\theta})\) and the additional basis rotation gates necessary to access the expectation value of operator \(P_{k}\), as described in Table 3.
#### 6.1.3 On the use of variational principles in quantum computers
Although it is slightly out of the scope of the present review, we want to mention broader applications of variational principles in quantum computing.
One can, for instance, use McLachlan's variational principle (MVP) [172] to obtain approximate quantum systems' unitary evolutions. In that case, the MVP takes the form:
\[\delta\left\|\left(i\hbar\partial/\partial t-H\right)\left|\Psi(t)\right\rangle \right\|=0, \tag{6.5}\]
where \(\|\,|\Psi\rangle\,\|\equiv\sqrt{\langle\Psi|\Psi\rangle}\). This variational principle can be interpreted as a cost function which, given a parametric form for the trial state, minimizes the deviation of the approximate evolution \(i\hbar\partial_{t}|\Psi(t)\rangle\) from the true evolution \(H|\Psi\rangle\). This principle can be connected with other variational principles generally used in many-body problems [155]. A complete discussion of the MVP and its manipulation is out of the scope of the present article but can be found in Ref. [173] in the quantum computing context. Still, it is interesting to mention that the MVP is not restricted to real-time unitary evolution but can be adapted to other problems involving non-unitary motion or mixed-state evolution.
_Mixed state evolution:_ The MVP can also be used to obtain the approximate evolution of a density matrix, which need not correspond to a pure state:
\[\delta\left\|i\hbar d\rho/dt-\mathcal{L}(\rho)\right\|^{2}=0, \tag{6.6}\]
where \(\rho\) is the density matrix (see section 8.1 for a definition), and the Liouvillian \(\mathcal{L}(\rho)\) is a general functional of the density. This variational principle can be used to find an approximation of \(i\hbar\dot{\rho}=\mathcal{L}(\rho)\). The pure state Hamiltonian evolution can be recovered by setting \(\rho=\left|\Psi\right\rangle\!\left\langle\Psi\right|\) and \(\mathcal{L}(\rho)=[H,\rho]\). Besides this case, it can also be used to simulate dissipative processes using a Lindblad-type equation (see section 8) for \(\mathcal{L}(\rho)\)[174].
_Imaginary-time propagation_ When the real-time evolution is replaced by an imaginary-time evolution (\(t\to-i\tau\)), an initial state with a good enough overlap with the exact ground state is projected onto the exact ground state by the evolution operator \(U(\tau)=e^{-\tau H}\). This method is a standard practical way to obtain the lowest energies and associated eigenstates of the Hamiltonian on a classical computer. The operator \(U(\tau)\) is non-unitary and cannot a priori be directly implemented on a quantum computer. This problem was overcome in Refs. [175; 17] using the following variant of the MVP:
\[\delta\left\|\left(\partial_{\tau}+[H-\left\langle H\right\rangle_{\tau}] \right)\middle|\Psi(\tau)\right\rangle\right\|=0. \tag{6.7}\]
This approach is called Quantum Imaginary-Time Evolution (QITE). It can be used with a trial state \(|\Psi\rangle\) parameterized by variational parameters, or in a non-variational form [176].
It is interesting to mention that variational techniques, which are fundamental tools in many-body systems, have also been exported to general learning problems. Such techniques are a growing field of interest today [159; 180; 181; 182].

### Trial states and ansatze

#### 6.2.1 Adiabatic state preparation and derived ansatze

A standard route to prepare the ground state \(|\psi_{0}^{(1)}\rangle\) of a target Hamiltonian \(H_{1}\) is to start from the ground state \(|\psi_{0}^{(0)}\rangle\) of a simple Hamiltonian \(H_{0}\) and to evolve the system under the interpolating Hamiltonian

\[H(t)=\left[1-s(t)\right]H_{0}+s(t)H_{1}, \tag{6.8}\]

where the schedule \(s(t)\) increases from \(s(0)=0\) to \(s(T)=1\) over the total evolution time \(T\).
The adiabatic theorem guarantees that provided one proceeds slowly enough (adiabatically) with regards to the spectral gap along the path, the system remains in the ground state of the instantaneous Hamiltonian \(H(t)\). This property is linked to the so-called Gell-Mann and Low theorem [183; 184]. Upon trotterizing the evolution under perturbed Hamiltonians, one thus has a general--but costly--recipe for ground state preparation: adiabatic state preparation (ASP) [142]. This method was used in, e.g., [127], which furthermore makes use of QPE (see paragraph 5.1 for a discussion on the limitations of QPE for state preparation) to pin the state into the ground state of the perturbed Hamiltonian.
By construction, ASP gives rise to long time evolutions and hence deep circuits, which is a problem for NISQ processors. It can nevertheless be used as a formal inspiration to design variational states to be used in the variational methods described above: these can be regarded as a way to find unitary "shortcuts" (sometimes called diabatic evolution) to go from a simple initial state \(|\psi_{0}^{(0)}\rangle\) to a target ground state \(|\psi_{0}^{(1)}\rangle\), as pictured on Fig. 6.2.
The general strategy to borrow from ASP to design VQE states is the following: Hamiltonian (6.8) induces a unitary evolution that can be trotterized as \(\prod_{k=1}^{n_{t}}e^{-i(1-s(t_{k}))H_{0}\delta t}e^{-is(t_{k})H_{1}\delta t}\), with \(\delta t=T/n_{t}\) and \(t_{k}=k\,\delta t\). This unitary evolution can be transformed into a variational circuit
\[U(\mathbf{\theta})=\prod_{k}e^{-i\theta_{2k}H_{0}}e^{-i\theta_{2k+1}H_{1}}. \tag{6.9}\]
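The sketch below, assuming toy two-qubit terms \(H_{0}\) and \(H_{1}\) and dense matrix exponentials (illustrative only), shows how the trotterized schedule of Eq. (6.9) becomes a parameterized circuit acting on the qubit vacuum.

```python
# ASP-inspired ansatz of Eq. (6.9): alternate exp(-i theta_{2k} H0) and
# exp(-i theta_{2k+1} H1) layers applied to |00>. Dense emulation with scipy.
import numpy as np
from scipy.linalg import expm

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
H0 = np.kron(X, I) + np.kron(I, X)     # simple "easy" Hamiltonian
H1 = np.kron(Z, Z)                     # target interaction term

def asp_ansatz(thetas):
    psi = np.zeros(4, dtype=complex)
    psi[0] = 1.0                       # qubit vacuum |00>
    for k in range(0, len(thetas), 2):
        psi = expm(-1j * thetas[k] * H0) @ psi
        psi = expm(-1j * thetas[k + 1] * H1) @ psi
    return psi

psi = asp_ansatz([0.3, 0.7, 0.2, 0.5])             # two layers, four parameters
print(np.real(np.vdot(psi, (H0 + H1) @ psi)))      # variational energy of H = H0 + H1
```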
In the following subsections, we consider ansatze of increasing complexity, starting from rather academic methodologies (Hartree-Fock approach, Bogolyubov transformation) and then presenting recent efforts to craft more minimal ansatze that avoid the usual limitations of VQE (limited coherence time of NISQ devices, or the barren plateaus encountered when optimizing ansatze with many parameters [185]).
#### 6.2.2 Uncorrelated ansatze
_Hartree-Fock theory and Slater determinants_ In many-body problems, a complete basis of the Fock space is given by the set of Slater determinants:
\[|\delta_{n_{q}-1},\ldots,\delta_{0}\rangle=\prod_{k}\left[a_{k}^{\dagger} \right]^{\delta_{k}}|\mathbf{0}\rangle, \tag{6.10}\]
where \(\{a_{k}^{\dagger}\}_{k=0,n_{q}-1}\) correspond to the creation operators associated with a complete set of single-particle states \(\{|k\rangle\}\). Here it is assumed that the one-body Hilbert space is finite with dimension \(n_{q}\). For a set of \(A\) particles, the Hartree-Fock procedure consists in estimating the ground state energy of a (possibly correlated) Hamiltonian \(H\) as \(E_{\rm HF}=\langle\Psi_{\rm HF}|H|\Psi_{\rm HF}\rangle\) assuming that the trial wave-function is a Slater determinant given by
\[|\Psi_{\rm HF}\rangle=\prod_{\alpha}\left[b_{\alpha}^{\dagger}\right]^{\gamma_ {\alpha}}|\mathbf{0}\rangle. \tag{6.11}\]
In this equation, only \(A\) coefficients \(\gamma_{\alpha}\) are equal to 1 (corresponding to particles below the Fermi level) while \(n_{q}-A\) of them are equal to zero (particle states). The creation operators \(\{b_{\alpha}^{\dagger}\}\) are associated with a complete basis \(\{|\alpha\rangle\}\), with the relationship:
\[|\alpha\rangle=\sum_{k}|k\rangle\langle k|\alpha\rangle\ \longrightarrow b_{\alpha}^{ \dagger}=\sum_{k}a_{k}^{\dagger}U_{k\alpha}, \tag{6.12}\]
with \(U_{k\alpha}=\langle k|\alpha\rangle\) a unitary transformation to be variationally determined. In other words, the HF procedure consists in variationally finding the best Slater determinant approximation to the target ground state. It is a very simple mean-field approach to the original problem: it replaces the many-body problem with a set of particles influencing one another through a self-consistent, average one-body potential (or mean field), instead of actual interactions. Practical aspects of finding the HF solution, which consists in minimizing the energy with respect to the variations of unitary matrix \(U\), are standard.
Below, we focus on the implementation of HF on a quantum computer because the preparation of arbitrary Slater determinants is the starting point of many more advanced methods (indeed, implementing HF on a quantum computer is per se not useful as HF can be implemented efficiently--i.e in polynomial time--on a classical computer).
When the Jordan-Wigner transformation is used to map the Fock space to qubit space (see section 4.1), the set of Slater determinants, given by Eq. (6.10), directly identifies with the computational basis \(\{|\delta_{n_{q}-1},\ldots,\delta_{0}\rangle\}\) with \(\{\delta_{k}=0,1\}_{k=0,n_{q}-1}\), that we can rewrite as:
\[|\delta_{n_{q}-1},\ldots,\delta_{0}\rangle=\prod_{k}\left[X_{k}\right]^{\delta _{k}}|\mathbf{0}\rangle. \tag{6.13}\]
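Under the Jordan-Wigner mapping, Eq. (6.13) amounts to flipping the qubits of the occupied modes to \(|1\rangle\) with \(X\) gates. The following toy sketch (illustrative occupations, dense state vectors) simply builds the corresponding computational-basis state.

```python
# Slater determinant in the computational basis, Eq. (6.13): each occupied mode
# contributes a qubit flipped to |1>. Toy 4-mode example.
import numpy as np

def computational_slater(occupations):
    """Return |delta_{n-1}, ..., delta_0> for the given occupation numbers."""
    state = np.array([1.0])
    for occ in occupations:            # listed from qubit n-1 down to qubit 0
        qubit = np.array([0.0, 1.0]) if occ else np.array([1.0, 0.0])
        state = np.kron(state, qubit)
    return state

psi = computational_slater([0, 1, 0, 1])       # modes 2 and 0 occupied
print(np.nonzero(psi)[0])                      # single basis index 0b0101 = 5
```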
The key ingredients for implementing HF theory on a quantum computer are the ability to realize a general unitary transformation \(U\) of the fermionic modes, as defined in Equation (6.12), using a parametric circuit, and the ability to prepare a state equivalent to (6.11) with exactly \(A\) particles. The technique that has been employed, for instance, in Refs. [76; 186], is based on the
use of the Thouless theorem [187] (see also Appendix E of Ref. [46]). Starting from one of the Slater determinants given in (6.10) and denoted generically as \(|\Psi_{0}\rangle\), one can generate an ensemble of new Slater determinants given by
\[|\Psi(Z)\rangle\equiv e^{i\sum_{ij}Z_{ij}a_{i}^{\dagger}a_{j}}|\Psi_{0}\rangle, \tag{6.14}\]
where \(Z\) is supposed to be hermitian. These states identify with the form (6.11) with \(b_{\alpha}^{\dagger}(Z)=\sum_{k}(e^{iZ})_{\alpha k}a_{k}^{\dagger}\), i.e. \(U^{*}=e^{iZ}\). The circuit that performs the Thouless transformation starting from a product state as given by Eq. (6.13) is discussed in detail in Ref. [186]. Additional discussion on the application of HF can be found in Refs. [17; 147].
_General quasi-particle vacuum_ The transformation (6.12) and the Thouless method can be generalized to a larger class of trial states known as quasi-particle vacua (also known as Gaussian states in other contexts). Below, such vacuum states are written generically as \(|\Psi_{\beta}\rangle\propto\prod_{\alpha}\beta_{\alpha}|\mathbf{0}\rangle\), where \(\{\beta_{\alpha},\beta_{\alpha}^{\dagger}\}\) denotes a complete set of quasi-particle creation/annihilation operators. These operators can be connected through a generalization of Eq. (6.12) given by [155; 46]:
\[\beta_{\alpha}^{\dagger}=\sum_{k}\left[a_{k}^{\dagger}U_{k\alpha}+a_{k}V_{k\alpha}\right]. \tag{6.15}\]
Using general quasi-particle vacua instead of restricting to Slater determinants leads to the Hartree-Fock-Bogolyubov (HFB) theory, where the \(U(1)\) symmetry, associated with particle number conservation, is broken. The advantage of breaking this symmetry is the possibility to describe superfluid systems [188].
The quantum state preparation of a general quasi-particle vacuum using the Thouless transformation, as done at the HF level in [186], was addressed in, e.g., [189] and relies on two main arguments. First, the mapping between Thouless' transformation \(\mathcal{U}(Z)=e^{i\sum_{ij}Z_{ij}\gamma_{i}\gamma_{j}}\) (where the \(\gamma_{i}\) denote Majorana modes, i.e., \(\gamma_{2k}=a_{k}+a_{k}^{\dagger}\) and \(\gamma_{2k+1}=-i(a_{k}-a_{k}^{\dagger})\)) and quantum gates can be found by leveraging the decomposition of \(R=e^{iZ}\) as a product of elementary Givens rotations [190]. Let \(M\) be the number of fermionic modes (and hence, qubits); then \(R=\prod_{k}r_{k}(\theta_{k})\), where the Givens rotations consist of \(M\) local phase rotations, which can be implemented as \(R_{z}\) gates, and \(2M(M-1)\) \(SO(4)\) rotations acting non-trivially only on two modes. The \(SO(4)\) rotations can be rendered in a quantum circuit by means of matchgates [191; 192] and SWAP gates. All in all, Thouless' transformation can be implemented as a nearest-neighbour matchgate circuit with depth \(O(M)\).
Several applications have also been explored in quantum computers using a Bardeen-Cooper-Schrieffer (BCS)-like ansatz given by:
\[|\Psi\rangle=\prod_{k}\left(u_{k}+v_{k}a_{k}^{\dagger}a_{\bar{k}}^{\dagger}\right)|\mathbf{0}\rangle, \tag{6.16}\]
where \((k,\bar{k})\) refers to two single-particle states forming a pair of time-reversed states, and where \(u_{k}^{2}+v_{k}^{2}=1\). The encoding of such a state on a qubit register is not unique. If the brute force JWT is used to make a direct mapping between single-particle states and qubits, the trial state can be written as:
\[|\Psi(\mathbf{\theta})\rangle=\bigotimes_{k}\left[\sin(\theta_{k})|00\rangle_{k}+ \cos(\theta_{k})|11\rangle_{k}\right], \tag{6.17}\]
where we made the identification \(u_{k}=\sin(\theta_{k})\) and \(v_{k}=\cos(\theta_{k})\). To map Eq. (6.16) into Eq. (6.17), time-reversed states are assumed to be represented by adjacent qubits, and \(|.\rangle_{k}\) denotes the two qubits associated with these states. We recognize a generalized Bell state that can be obtained by performing a \(R_{y}(\theta_{k})\) rotation on one of the qubits followed by a CNOT operation with the second qubit. Such encoding was used, for instance, in [193; 194; 195; 196; 80].
This encoding is general and allows treatment of systems where one or several pairs are broken (usually referred to as nonzero seniority [188]) as illustrated in Ref. [80] for instance, to treat odd systems. If we restrict to the situation with seniority 0, i.e., when no pairs are broken, one can reduce the number of qubits by directly encoding the occupation of the two adjacent time-reversed states onto one qubit. In this case, the state \(|1\rangle_{k}\) or \(|0\rangle_{k}\) represents the simultaneous occupation or not of the two time-reversed particles \((k,\bar{k})\). This technique, used in Refs. [89; 196; 86], has the advantage of reducing by a factor of 2 the required number of qubits compared to the case where one particle is encoded on one qubit. It also avoids the use of controlled operations since we have, for this encoding scheme:
\[|\Psi(\mathbf{\theta})\rangle =\bigotimes_{k}\left[\sin\theta_{k}|0\rangle_{k}+\cos\theta_{k}|1 \rangle_{k}\right]\] \[=\prod_{k}R_{y}^{(k)}\left(\pi-2\theta_{k}\right)|\mathbf{0}\rangle. \tag{6.18}\]
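A hedged numpy sketch of the seniority-zero encoding of Eq. (6.18), with one qubit per time-reversed pair; the rotation angles are illustrative values.

```python
# Seniority-zero BCS-like ansatz, Eq. (6.18): a product of single-qubit states
# sin(theta_k)|0> + cos(theta_k)|1>, i.e. one R_y rotation per pair (k, kbar).
import numpy as np

def bcs_seniority_zero(thetas):
    state = np.array([1.0])
    for t in thetas:
        state = np.kron(state, np.array([np.sin(t), np.cos(t)]))
    return state

thetas = [0.2, 0.9, 1.3]                           # one angle per pair (toy values)
psi = bcs_seniority_zero(thetas)
print(np.vdot(psi, psi).real)                      # the state is normalized
print([np.cos(t) ** 2 for t in thetas])            # pair occupation probabilities v_k^2
```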
Quasiparticle-like states have been extensively explored on quantum computers [198; 199] (see also [189]), as has their experimental implementation [186].
We focused here on relatively standard quasi-particle vacuum states that play a particular role in many-body systems and lead to the HF and HFB frameworks. Below is a selection of other ansatze that are widely discussed today in the literature.
#### 6.2.3 Hamiltonian Variational Ansatz
Inspired both by adiabatic state preparation (ASP, see Fig. 6.2) and by the Quantum Approximate Optimization Algorithm circuit (QAOA [200]), which is ubiquitous in quantum combinatorial optimization, the Hamiltonian Variational Ansatz (HVA [201]) state reads
\[|\Psi(\mathbf{\theta})\rangle=\prod_{l=1}^{L}\left(\prod_{k}e^{-i\theta_{k}^{l}H_{k }}\right)|\Psi_{0}\rangle\,, \tag{6.19}\]
where the terms \(H_{k}\) come from decomposing \(H\) as \(H=\sum_{k}H_{k}\), with \([H_{k},H_{k^{\prime}}]\neq 0\) for \(k\neq k^{\prime}\). The dimension \(L\) of index \(l\) is referred to as the _depth_ of the ansatz. The initial state \(|\Psi_{0}\rangle\) is the ground state of one of the terms, \(H_{k_{0}}\), which is therefore not applied first (acting with \(e^{-i\theta H_{k_{0}}}\) on its own ground state would only add a global phase). Optimizing the HVA parameters amounts to optimizing the Hamiltonian schedule \(s\) of ASP (as introduced in Eq. (6.8)). HVA was applied, _e.g._, to the study of the 1-D Hubbard model in [202].
#### 6.2.4 Hardware-Efficient Ansatz (HEA)
Today's quantum computers are not uniformly accurate in performing different operations. Knowing the strength or weaknesses of a given platform, one might adapt the ansatz to the operations most efficiently realized. The HEA technique consists in writing the trial state from a set of operations that are "native" in the quantum processor. This heuristic approach can optimize the trial state construction with respect to the specific hardware, but also restricts the type of trial states that can be constructed [203; 204]. One issue with the HEA is that the classical optimization of the variational parameters can become difficult due to gradients that vanish exponentially with the number of qubits, a phenomenon dubbed the _barren plateau problem_[205].
#### 6.2.5 Unitary coupled cluster (UCC)
The UCC offers a framework that naturally extends the HF method based on the Thouless approach described in section 6.2.2 (see also section 2.1). The trial wavefunction is written in a generalized form [185]:
\[|\Psi_{\rm UCC}(\mathbf{\theta})\rangle=e^{T(\mathbf{\theta})-T^{\dagger}(\mathbf{\theta} )}|\Psi_{0}\rangle\equiv U(\mathbf{\theta})|\Psi_{0}\rangle, \tag{6.20}\]
where \(T\) can be expanded as a set of operators of increasing complexity with \(T=T_{1}+T_{2}+\ldots\). Here, \(T_{1}\), \(T_{2}\),... stand for single, double,... particle-hole excitation operators with respect to the state \(|\Psi_{0}\rangle\), with
\[T_{1} = \sum_{i,j}T_{ij}^{(1)}a_{i}^{\dagger}a_{j}, \tag{6.21}\] \[T_{2} = \sum_{i,j}T_{ij,kl}^{(2)}a_{i}^{\dagger}a_{j}^{\dagger}a_{l}a_{k},\] (6.22) \[\ldots\]
We see, in particular, that HF is recovered by using a Slater determinant and restricting \(T\) to single excitations. After truncation, the state prepared using Eq. (6.20) is used as a trial state in the VQE approach discussed in section 6.1.1. This technique is currently widely applied in quantum chemistry (see the recent review [206] and references therein). It was also used in most applications to atomic nuclei on real quantum platforms [76; 78]. Finally, the possibility of combining such an approach with the \(U(1)\) symmetry that is relevant for strongly interacting systems like nuclei is currently of interest [73; 207] and is being explored on quantum computers too [196; 197].
#### 6.2.6 The Low-Depth Circuit Ansatz (LDCA)
Elaborating on the general quasiparticle vacua preparation routine reviewed in paragraph 6.2.2, the LDCA circuit [189] (standing for _Low-Depth Circuit Ansatz_) possibly allows reaching any state thanks to the insertion of \(R_{zz}\) rotation gates into the Hartree-Fock-Bogolyubov circuit. It also allows the replication of similar layers in the ansatz to increase its representability systematically. Intuitively, \(R_{zz}(\theta)\equiv e^{-i(\theta/2)Z_{p}Z_{q}}\) gates generate correlated states as these gates correspond to density-density interactions in a Jordan-Wigner encoding: the density operator of orbital \(p\), \(n_{p}\equiv a_{p}^{\dagger}a_{p}\), maps to the qubit operator \(\frac{I-Z_{p}}{2}\); therefore \(n_{q}n_{p}\) interactions translate into \(Z_{p}Z_{q}\) terms.
The main drawback of LDCA is that despite its gentle scaling, it still incurs a prohibitive gate count with respect to NISQ capacities, e.g., [137].
#### 6.2.7 Projected Ansatze
An important cornerstone for future applications, especially in nuclear systems, is the possibility of performing, for instance, symmetry restoration after symmetry breaking. The particle-number symmetry was discussed in section 6.2.2. Assume, for instance, that a state \(|\Psi(\mathbf{\theta})\rangle\) can be prepared on a quantum computer and that such a state does not respect a symmetry of the physical problem that is encoded in the Hamiltonian
\(H\). Instead of using \(|\Psi(\mathbf{\theta})\rangle\) in the variational principle, one can use the projected wavefunction:
\[|\Psi^{\prime}_{\text{P}}(\mathbf{\theta})\rangle=\frac{1}{\sqrt{\langle\Psi(\mathbf{ \theta})|P_{S}|\Psi(\mathbf{\theta})\rangle}}P_{S}|\Psi(\mathbf{\theta})\rangle. \tag{6.23}\]
Here, \(P_{S}\) is a projector onto the subspace of the Hilbert space containing states with the desired property (for instance, the proper symmetries). Such a strategy is used in many-body physics to grasp specific correlations between particles or strongly entangled states that are hard to describe otherwise [46; 155]. One difficulty is that the projector is a non-unitary operation and cannot be directly implemented on a quantum computer. Several methods have been proposed recently to construct projected states [80; 208]. These methods have been combined with VQE in Ref. [89], leading to the Quantum-Variation After Projection (Q-VAP) framework.
#### 6.2.8 Adapt-VQE
Plain-vanilla VQE takes a fixed variational ansatz as an input to the computation. This incurs the risk of over-fitting the target state if the variational manifold is too large. The ADAPT-VQE method instead constructs the ansatz iteratively. It relies on a predefined operator pool from which operators are drawn adaptively along the optimization procedure. Typically, one selects the operator maximizing the gradient at the current step so that its addition to the circuit has the most significant effect on the variational energy. The aim is to reduce the number of parameters of the ansatz at the expense of an increased measurement overhead due to the gradients. In the initial proposal (ADAPT-VQE [209]), the operators in the pool were fermionic, but the large gate overheads resulting from long Jordan-Wigner strings can be avoided by directly using qubit operators instead (qubit-ADAPT VQE [210]). This method can reach chemical accuracy at a relatively low gate count, at least with noiseless computers (see, e.g., [211] for an example with large molecules).
#### 6.2.9 Tensor-network-inspired quantum circuits
Tensor-network states refer to widely used representations of the wave function \(|\Psi\rangle\) of a many-body problem. Instead of storing the information contained in an \(n\)-qubit wavefunction in a multi-array \(a_{b_{1},b_{2},\ldots b_{n}}\) (with a storage cost of \(2^{n}\) complex floating-point numbers), one assumes a particular factorization of this multi-array, with the hope that storing the different factors will be less costly. The Matrix Product State (MPS) class is a widespread subclass of tensor networks. It consists in factorizing the multi-array as a product of matrices \([A^{(k)}]^{b_{k}}\):
\[a_{b_{1},b_{2},\ldots b_{n}}=\sum_{\alpha_{1},\ldots\alpha_{n-1}}[A^{(1)}]^{b_ {1}}_{\alpha_{1}}[A^{(2)}]^{b_{2}}_{\alpha_{1},\alpha_{2}}\cdots[A^{(n)}]^{b_ {n}}_{\alpha_{n-1}} \tag{6.24}\]
The internal indices \(\alpha_{k}\) have a dimension called the bond dimension, which is usually denoted as \(\chi\). As we will see in section 7.3, this parameter is closely connected to the degree of entanglement one can access with such a state.
MPS can be used as an inspiration or starting point for quantum computations. Specifically, methods have been proposed to convert a given MPS to a quantum circuit [42; 212]. Paragraph 7.3.3 presents an outline of the quantum circuitry involved. An important outcome is that an MPS with a given bond dimension requires a circuit whose depth is logarithmic in \(\chi\), resulting in a complexity gain and thus the possibility to use a quantum computer to generate states with an entanglement level inaccessible to classical computers due to too large a bond dimension. The conversion from MPS to a circuit can also be used to warm-start a variational quantum computation [213].
Other tensor networks can also be used as a comparison point or inspiration to quantum circuit design, like Tree Tensor Networks [214], the Multiscale Entanglement Renormalization Ansatz [215], or Projected Entangled Pair States [43].
### Beyond variational methods
As described in section 6.1.1, variational methods usually target the search for approximate ground states. Here, we discuss several methods that can give access to excited states too. Previously (section 5.1), we saw that QPE could be a tool of choice to obtain energy eigenvalues and associated eigenvectors. Unfortunately, it cannot be used on current devices, and it will probably take some time before the fidelity of quantum machines becomes sufficiently high to apply it. The search for alternative methods, less costly in terms of quantum resources, is therefore an intensive domain of activity today. We report below some of the methods that have been proposed to access excited-state properties. Notably, these methods sometimes only give access to the energies, not the associated states.
#### 6.3.1 Quantum Subspace Expansion methods
A common strategy used in classical computers to obtain the approximate solution of a diagonalization problem when a complete CI solution is prohibitive consists of iteratively constructing subspaces of the total Hilbert space \(\mathcal{H}\) of increasing complexity. In many cases, at a given level \(M\) of complexity, a subspace is generated by a set of states \(\{|\Psi_{0}\rangle,\ldots,|\Psi_{M-1}\rangle\}\) that spans a subspace denoted as \(\mathcal{H}_{M}\). The method to obtain the states is usually iterative in the sense that \(|\Psi_{k+1}\rangle\) is constructed from \(|\Psi_{k}\rangle\) using specific operations. Arnoldi or Lanczos methods are famous examples widely used on classical computers [40]. The generic strategy of using an increasing number of states to form a subspace of the Hilbert space will be called hereafter Quantum Subspace Expansion (QSE) [216]. A schematic view of the QSE strategy is shown in Fig. 6.3. This strategy's success depends on its capacity to construct the relevant subspace for a given problem. Key ingredients are the seed state \(|\Psi_{0}\rangle\) and the rules for the iterative generation of states.
The generated states are usually not orthogonal with one another. Any state belonging to the reduced space \(\mathcal{H}_{M}\) can be written as \(|\Psi\rangle=\sum\limits_{K=0}^{M-1}c_{K}|\Psi_{K}\rangle\). An approximate eigenvalue \(E\) in this space can be obtained by solving the generalized eigenvalue problem written as a set of \(K=0,M-1\) equations given by:
\[\sum\limits_{K^{\prime}=0}^{M-1}c_{K^{\prime}}H_{KK^{\prime}}=E\sum\limits_{K ^{\prime}=0}^{M-1}c_{K^{\prime}}O_{KK^{\prime}}, \tag{6.25}\]
with \(H_{KK^{\prime}}=\langle\Psi_{K}|H|\Psi_{K^{\prime}}\rangle\) and \(O_{KK^{\prime}}=\langle\Psi_{K}|\Psi_{K^{\prime}}\rangle\) the overlap matrix. Such equations can be solved in a two-step process by first diagonalizing the overlap matrix prior to the Hamiltonian diagonalization. Potentially, in \(\mathcal{H}_{M}\), \(M\) approximate eigenstates can be obtained, which makes the method quite attractive.
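The classical post-processing of Eq. (6.25) is a small generalized eigenvalue problem. Below is a hedged sketch with random (toy) subspace vectors, using scipy's generalized symmetric eigensolver; only the procedure of building \(H_{KK^{\prime}}\), \(O_{KK^{\prime}}\) and diagonalizing is the point.

```python
# QSE post-processing sketch: build H_KK' and the overlap O_KK' from a set of
# non-orthogonal states, then solve H c = E O c classically. Toy 6-dim problem.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
H_full = rng.normal(size=(6, 6))
H_full = (H_full + H_full.T) / 2                   # toy Hermitian Hamiltonian
basis = [v / np.linalg.norm(v) for v in rng.normal(size=(3, 6))]   # |Psi_K>, M = 3

H_sub = np.array([[u @ H_full @ v for v in basis] for u in basis])  # H_KK'
O_sub = np.array([[u @ v for v in basis] for u in basis])           # O_KK'

ritz_values, _ = eigh(H_sub, O_sub)                # generalized eigenproblem, Eq. (6.25)
print(ritz_values)                                 # approximate eigenvalues in H_M
print(np.linalg.eigvalsh(H_full)[:3])              # exact lowest eigenvalues, for reference
```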
On classical computers, a typical choice for the states is the Krylov basis, where \(|\Psi_{k+1}\rangle=H|\Psi_{k}\rangle\). Great effort is currently devoted to the possibility of extending the Krylov space technique to quantum computers. The strategy is then to compute the matrix elements of the two matrices \(O\) and \(H\) in the reduced space using the quantum computer; the generalized eigenvalue problem is subsequently solved via classical methods [217]. The possibility of obtaining the strict equivalent of the Krylov basis using derivatives of the generating function \(F(t)\) introduced below in Eq. (6.28) was scrutinized in Ref. [86]. This analysis was made using the fact that \(H_{KK^{\prime}}=\langle\Psi_{0}|H^{K+K^{\prime}+1}|\Psi_{0}\rangle\) and that \(F(t)\) is the generating function of the Hamiltonian moments. However, this method is exceptionally susceptible to numerical noise.
Alternatively, one can generate the states using unitary transformations of the seed state such that \(|\Psi_{K}\rangle=U_{K}|\Psi_{0}\rangle\). This approach is particularly well adapted to quantum computing, where circuits are automatically unitary. In the _Quantum Krylov_ technique, the Hamiltonian propagator itself is used, such that \(U_{K}\equiv e^{-iH\tau_{K}}\), where a set of times \(\{\tau_{K}\}_{K=0,\ldots,M}\) has been assumed with the convention \(\tau_{0}=0\). The possibility of using Krylov-inspired techniques on quantum computers has more generally attracted much attention in recent years [216; 217; 218; 219; 220; 221; 222; 223; 224; 225; 226] (see also the survey [227]).
#### 6.3.2 QPE-inspired quantum algorithms
Different methods inspired by the QPE algorithm have been proposed to reduce the quantum resources in ancilla qubits or the number of operations in the quantum circuit. For instance, the methods we will discuss subsequently use only one ancilla qubit and, contrary to the QPE algorithm, do not require the inverse quantum Fourier Transform. Ref. [17] discusses a comprehensive list of these methods.
As an alternative to the standard phase estimation, Kitaev's algorithm [228] and the iterative QPE algorithm based on the semiclassical quantum Fourier transform [229; 230] (see also [231; 232]) were proposed to find the eigenvalue of a single eigenstate. More recently, further progress has been made with the _Rodeo_ algorithm [233] that appears as a practical tool in the NISQ context [234; 235]. We briefly describe below how these iterative techniques can be implemented.
We consider again an initial state \(|\Psi\rangle\) that decomposes onto the Hamiltonian eigenstates \(\{|\alpha\rangle\}\) (associated to
Figure 6.3: Illustration of QSE philosophy where the eigenvalue problem is considered in a subspace with increasing dimension.
a set of eigenvalues \(\{E_{\alpha}\}\)) as \(|\Psi\rangle=\sum_{\alpha}c_{\alpha}|\alpha\rangle\). Provided that the initial state is prepared on a quantum register, the circuit depicted in Fig. 6.4 is applied, and the ensemble of ancilla qubits is measured. The parameter \(E\) appearing as a scaling factor in the phase rotations shown in Fig. 6.4 can be freely varied. Given that there are no entangling gates between the ancilla wires in Fig. 6.4, it is possible to see this circuit as a consecutive series of measurements on a single qubit, similar to the iterative QPE procedure [230]. To be more specific, assuming \(n_{a}\) indirect measurements, a set of times \((\tau_{1},\ldots,\tau_{n_{a}})\) are considered and the controlled operation of the \(j^{\rm th}\) measurement is made using \(U(\tau_{j})=e^{-iH\tau_{j}}\) with \(H\) the Hamiltonian. It can then be shown [233; 234] that the probability to obtain only the \(|0\rangle\) state in all of the consecutive \(n_{a}\) measurements of the ancilla qubits is:
\[p_{0^{n_{a}}}\left(E,\{\tau_{i}\}\right)=\sum_{\alpha}|c_{\alpha}|^{2}\prod_{ i=1}^{n_{a}}\cos^{2}\left(\left(E_{\alpha}-E\right)\frac{\tau_{i}}{2}\right). \tag{6.26}\]
As the number of repetitions \(n_{a}\) increases, the above function of \(E\) peaks around the \(E_{\alpha}\) values. The flexibility in choosing the \(\{\tau_{i}\}\) values can be further used to improve the convergence. Below, we discuss two main options:
(i) _Fixed times prescription:_ We can assume that we have \(\tau_{i}=\tau/2^{i-1}\) and \(\tau=\frac{\pi\,2^{2n_{a}-2}}{|E_{\rm up}-E_{\rm low}|}\), where \(E_{\rm up}\) (resp. \(E_{\rm low}\)) is an upper bound (resp. lower bound) on the spectrum of the Hamiltonian. An example of the resulting probability \(p_{0^{n_{a}}}\) given by Eq. (6.26) at various \(E\) is shown in Fig. 6.5. From the positions of the peaks in the distribution, as well as their amplitudes, we can extract approximate eigenenergies and the weights \(|c_{\alpha}|^{2}\) of the associated eigenstates in the decomposition of \(|\Psi\rangle\).

(ii) _Rodeo prescription:_ The key idea behind the Rodeo method is to assume a Gaussian statistical ensemble of times \(\{\tau_{i}\}\) with an adjustable Gaussian width \(\sigma\). Averaging over the statistical ensemble gives the probability:
\[p_{0^{n_{a}}}\left(E\right)=\sum_{\alpha}|c_{\alpha}|^{2}\left[\frac{1+e^{- \left(E_{\alpha}-E\right)^{2}\sigma^{2}/2}}{2}\right]^{n_{a}}, \tag{6.27}\]
that is also strongly peaked around the eigenenergies. An illustration of the Rodeo prescription is also given in Fig. 6.5.
The Rodeo method has two advantages compared to the fixed-times approach. First, the probabilities are flattened away from the eigenenergies, which helps to identify peaks. Second, the extra parameter \(\sigma\) can be used as a resolution to rapidly scan a given energy range (see [234]).
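As a quick numerical illustration of Eq. (6.27), with toy eigenvalues, weights, and Gaussian width (all assumptions), one can scan the success probability over \(E\) and look for the energy windows where it rises above the flat background:

```python
# Rodeo prescription sketch: evaluate p_{0^na}(E) of Eq. (6.27) for a toy
# two-level decomposition and locate the peaks around the eigenenergies.
import numpy as np

E_alpha = np.array([-1.2, 0.8])          # toy eigenvalues
weights = np.array([0.7, 0.3])           # toy |c_alpha|^2
sigma, n_a = 4.0, 3                      # Gaussian time width, number of measurements

def p_success(E):
    factors = (1 + np.exp(-((E_alpha - E) ** 2) * sigma ** 2 / 2)) / 2
    return np.sum(weights * factors ** n_a)

scan = np.linspace(-2.0, 2.0, 801)
probs = np.array([p_success(E) for E in scan])
background = np.sum(weights) / 2 ** n_a            # flat background of Eq. (6.27)
print(scan[probs > 2 * background])                # energy windows around E_alpha
```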
_Accelerated-VQE_ Finally, let us mention a proposal to take the "best of both worlds" of QPE and VQE by interpolating between these two regimes [236]. The idea is to tune an interpolation parameter \(\alpha\in[0,1[\) to achieve an optimal trade-off between measurement variance and circuit depth.
#### 6.3.3 Response and Green's function methods
Starting from the decomposition (5.1), a direct method on a classical computer to get both the amplitude
Figure 6.4: Circuit used for the Rodeo algorithm [233]. Given that there are no entangling gates between the ancillary qubits, it is possible to use a procedure where only one is used and measured multiple times.
Figure 6.5: Illustration of the iterative methods using the fixed times (black line) and Rodeo (blue line) prescriptions discussed in the text. The red bars indicate the (exact) decomposition of the initial wave function in the eigenbasis of the Hamiltonian \(\{|\alpha\rangle\}\), i.e., the points \((|c_{\alpha}|^{2},E_{\alpha})\) from \(|\psi\rangle=\sum_{\alpha}c_{\alpha}|\alpha\rangle\). Here the system is assumed to have two eigenvalues, and \(n_{a}=3\) is used. Note that in the Rodeo case, each eigenstate \(k\) contributes to a flat background proportional to \(|c_{\alpha}|^{2}/2^{n_{a}}\) (see Eq. (6.27)). Contrary to the fixed times case, this background being flat cannot be misinterpreted as an eigenstate contribution.
and the eigenvalues \(E_{\alpha}\) would be to compute the function:
\[F(t)=\langle\Psi|e^{-itH}|\Psi\rangle, \tag{6.28}\]
and perform its classical Fourier transform, leading to
\[\widetilde{F}(E)\propto\sum_{\alpha}|c_{\alpha}|^{2}\delta(E-E_{\alpha}). \tag{6.29}\]
The function \(F(t)\) is called a _generating function_ for reasons that will become apparent hereafter. In contrast, \(\widetilde{F}(E)\) is named the _response function_, in analogy to the response of a system to an external field. Two essential conditions are necessary to extract the energies accurately from this technique. The first one is the possibility of computing the propagator entering Eq. (6.28). For complex systems like many-body systems, quantum computers seem appropriate platforms. For these reasons, a hybrid method where (6.28) is estimated on a quantum device while the Fourier transform is performed classically has been advocated in Ref. [86] (see also the discussion in [237]). One clear advantage is that the real and imaginary parts of \(F\) at a given time \(t\) can be obtained using standard techniques with a single Hadamard-like test, as pictured in Table 4. An illustration of such a function was given in Ref. [86] for superfluid systems and the Hubbard model.
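The following classical emulation (toy Hamiltonian and state, numpy only) mimics the procedure of Eqs. (6.28)-(6.29): sample \(F(t)\) over a time grid, Fourier transform it, and compare the position of the dominant peak with the exact spectrum. On hardware, each sampled value of \(F(t)\) would instead be measured with a Hadamard-like test.

```python
# Hybrid response-function sketch: F(t) = <Psi|exp(-iHt)|Psi>, then a classical
# Fourier transform whose peaks sit at the E_alpha with heights ~ |c_alpha|^2.
import numpy as np

rng = np.random.default_rng(1)
H = rng.normal(size=(4, 4))
H = (H + H.T) / 2                                   # toy Hermitian Hamiltonian
psi = rng.normal(size=4)
psi /= np.linalg.norm(psi)

evals, evecs = np.linalg.eigh(H)
c2 = np.abs(evecs.T @ psi) ** 2                     # exact weights |c_alpha|^2

times = np.linspace(0.0, 200.0, 4096)
F = np.array([np.sum(c2 * np.exp(-1j * evals * t)) for t in times])

spectrum = np.abs(np.fft.fft(F)) / len(times)
energies = -2 * np.pi * np.fft.fftfreq(len(times), d=times[1] - times[0])
print(energies[np.argmax(spectrum)])                # position of the dominant peak
print(evals, np.round(c2, 2))                       # exact E_alpha and weights for comparison
```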
A second constraint is that the energy resolution achieved in Eq. (6.29) will strongly depend on the maximal time \(\tau_{\rm max}\) over which \(F\) is known due to the Heisenberg uncertainty relation between time and energy. Such a long-time evolution requirement prevents using the response function technique in the NISQ period.
Along the same lines, with sufficiently performant quantum computers, one can also imagine accessing Green's functions in many-body systems without approximation. For instance, the one-body Green's function matrix elements can be defined as [184]:
\[G_{ij}(t,t^{\prime})=\langle\Psi(0)|{\rm T}\left[a_{j}^{\dagger}(t)a_{i}(t^{ \prime})\right]|\Psi(0)\rangle. \tag{6.30}\]
Here, \(|\Psi(0)\rangle\) is the initial state, which we suppose normalized. T is the time-ordering operator, and we use the Heisenberg representation, i.e., \(a_{i}^{\dagger}(t)=U^{\dagger}(t)a_{i}^{\dagger}U(t)\), with \(U(t)=e^{-iHt}\). Provided that the propagator can be efficiently implemented on a digital quantum platform, the Green's function matrix elements can be obtained using, for instance, a circuit similar to the one shown in the bottom part of Table 4. The possibility of computing Green's functions on quantum computers is now being explored [238; 239; 240; 241; 242].
## 7 Entanglement and quantum entropy
One promise of quantum computing is the possibility to construct quantum states that include complex internal correlations between particles. Hence, a question of utmost importance underpinning the design of quantum circuits is their ability to generate entanglement beyond classical correlations. Here, we describe some figures of merit to measure the level of entanglement exhibited by a state and how to connect this degree of entanglement to requirements on the depth of a state-preparation circuit or the complexity and expressive power of a quantum ansatz.
### Basic aspects of entanglement and some measures of it
Entanglement is a branch of quantum information theory and a vast subject of research in its own right [9; 243]. This subsection briefly introduces how to measure entanglement between two systems, starting from the concept of von Neumann entropy.
#### 7.1.1 Measures of entanglement
The entanglement degree is relative to a partition of a quantum system into subsystems, denoted by \(A\) and \(B\). It measures how far the state of the entire system \(\{A+B\}\) is from being factorized into a product of the states of its subparts \(A\) and \(B\).
_Von Neumann entanglement entropy_ Let us assume that the total system is described by a density matrix \(\rho_{AB}\) (see section 8 for more details about density matrices); the densities of the two subsystems can be obtained by performing partial traces:
\[\rho_{A}={\rm Tr}_{B}(\rho_{AB}),\ \ \rho_{B}={\rm Tr}_{A}(\rho_{AB}), \tag{7.1}\]
where \(\rho_{A}\) (resp. \(\rho_{B}\)) is the density of the system \(A\) (resp. \(B\)). If the two subsystems are not entangled, then we have the simple property:
\[\rho_{AB}=\rho_{A}\otimes\rho_{B}. \tag{7.2}\]
One key aspect of quantum computing is the possibility to control the degree of entanglement between two subsets of the complete qubit register. Entanglement is a specific feature of quantum mechanics that does not exist in classical mechanics. Quantum algorithms, as opposed to classical algorithms, generally use entanglement as a tool. This is, for instance, the case of most algorithms discussed in section 5. In the many-body
context, when one qubit represents one orbital, the possibility of a given ansatz to produce entanglement between qubits should also be linked to the onset of correlation between particles. Therefore, the possibility of generating and controlling entanglement is essential. A possible measure of the entanglement between two subsystems is based on the von Neumann entropy, defined for a given density \(\rho_{x}\) as:
\[S_{x}=-\text{Tr}(\rho_{x}\log_{2}\rho_{x}). \tag{7.3}\]
Using \(x=AB\), \(A\), or \(B\), we obtain three entropies \(S_{AB}\), \(S_{A}\), and \(S_{B}\) associated with the total system or with either subsystem (\(S_{A}\) and \(S_{B}\) are called bipartition entropies). These entropies are real positive numbers. They can quantify the complexity, disorder or entanglement in a system. An interesting property of the entanglement entropy is the so-called subadditivity condition [244]:
\[S_{AB}\leq S_{A}+S_{B}, \tag{7.4}\]
where the equality holds if and only if Eq. (7.2) is verified, i.e., when the two subsystems are not entangled.
One can also define the _entanglement entropy_\(S_{\text{max}}\) of a system as the maximum bipartition entropy over all the possible bipartitions of the system.
_Mutual information_ Another measure of entanglement is given by the so-called _mutual information_ \(M_{AB}\), defined as:
\[M_{AB}=S_{A}+S_{B}-S_{AB}. \tag{7.5}\]
This quantity is the crux of the algorithm developed in Ref. [170] to limit circuit depth by adapting the qubit order to the chip's topology according to the leading correlations among qubit pairs (there, \(A\) is chosen to be the Hilbert space of individual qubits \(A=\{i\}\) or qubit pairs \(A=\{i,j\}\)). It is also used in different fields of physics and chemistry to characterize the entanglement between particles (see, for instance, [245; 246; 247; 248]).
#### 7.1.2 Schmidt decomposition
Here, we restrict ourselves to the specific case where the total system is a pure state \(|\Psi\rangle\), which is the case for all ansatze discussed in section 6.2. In this case, we have \(\rho_{AB}=|\Psi\rangle\langle\Psi|\) and \(S_{AB}=0\). Splitting the system into two subsystems and introducing the two bases \(\{|\alpha\rangle\}_{\alpha=1,\mathcal{N}_{A}}\) and \(\{|\beta\rangle\}_{\beta=1,\mathcal{N}_{B}}\) of subsystem \(A\) and \(B\) respectively, one can decompose the total state as:
\[|\Psi\rangle=\sum_{\alpha,\beta}c_{\alpha\beta}\,|\alpha\rangle\otimes|\beta \rangle\,. \tag{7.6}\]
One can then interpret \(c_{\alpha\beta}\) as the matrix elements of an \(\mathcal{N}_{A}\times\mathcal{N}_{B}\) matrix \(\mathcal{C}\) and use the Singular Value Decomposition (SVD) to rewrite it as
\[c_{\alpha\beta}=\sum_{k=1}^{\chi}U_{\alpha k}s_{k}V_{k\beta}^{\dagger} \tag{7.7}\]
with \(s_{k}>0\) and \(\chi\leq\min(\mathcal{N}_{A},\mathcal{N}_{B})\). The normalization of the state \(|\Psi\rangle\) ensures that \(\sum\limits_{k=1}^{\chi}s_{k}^{2}=1\). \(\chi\), the number of nonzero singular values, is called, in this context, the _Schmidt rank_. The \(s_{k}\) are the Schmidt coefficients. They define the _entanglement spectrum_ of the state, from which the entropy of each subsystem can be obtained. For a total pure state, we have:
\[S_{A}=S_{B}=-\sum_{k=1}^{\chi}s_{k}^{2}\log_{2}(s_{k}^{2}). \tag{7.8}\]
Such a decomposition is useful to provide upper limits to the subsystems' entropies. For instance, in the case of a factorized state (Eq. (7.2)), there is only one coefficient \(s_{k}\) in the Schmidt decomposition, therefore \(\chi=1\) and \(S_{A}=S_{B}=0\).
Perhaps more importantly, the upper bound on the entanglement entropy at a fixed number of Schmidt coefficients \(\chi\) corresponds to a flat singular value spectrum (\(s_{k}=1/\sqrt{\chi}\) for all \(k\)). In this case, \(S_{A/B}=\log_{2}(\chi)\). Thus, in general,
\[S_{A/B}\leq\log_{2}(\chi)\leq\log_{2}\left[\min(\mathcal{N}_{A},\mathcal{N}_{ B})\right]. \tag{7.9}\]
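A short numpy sketch of the Schmidt machinery of Eqs. (7.6)-(7.9), applied to a random four-qubit pure state (an illustrative choice): reshape the amplitudes into \(c_{\alpha\beta}\), take an SVD, and evaluate the bipartition entropy and its bound.

```python
# Bipartition entropy from the Schmidt decomposition: singular values of the
# reshaped amplitude matrix give the entanglement spectrum, Eqs. (7.7)-(7.8).
import numpy as np

rng = np.random.default_rng(2)
psi = rng.normal(size=16) + 1j * rng.normal(size=16)
psi /= np.linalg.norm(psi)                          # random 4-qubit pure state

def bipartition_entropy(state, dim_A):
    c = state.reshape(dim_A, -1)                    # c_{alpha, beta}
    s = np.linalg.svd(c, compute_uv=False)
    s2 = s[s > 1e-12] ** 2                          # Schmidt spectrum, sums to 1
    return -np.sum(s2 * np.log2(s2))

S = bipartition_entropy(psi, dim_A=4)               # split into 2 qubits + 2 qubits
print(S, "<=", np.log2(4))                          # bound of Eq. (7.9): log2(chi) <= 2
```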
### Gaussian qubit states
In section 6.2, we discussed the case of uncorrelated ansatze like HF states or, more generally, Gaussian states. Disregarding the extra complexity induced by the Pauli principle, we consider here such a state. More precisely, taking inspiration from a many-body density obtained usually for a set of non-interacting particles at thermal equilibrium, we consider a qubit register whose density matrix is given by:
\[\rho=\frac{1}{Z}\exp\left(-\sum_{i=0}^{n-1}\alpha_{i}Q_{i}^{+}Q_{i}^{-}\right), \tag{7.10}\]
where \(Z\) is a normalization factor ensuring that \(\text{Tr}(\rho)=1\). Here \(Q_{i}^{+}\) (resp. \(Q_{i}^{-}\)) is the raising (resp. lowering) operator acting on qubit \(i\). Using a technique similar to the one used in the Fock space to treat non-interacting fermions in the grand canonical ensemble, we deduce that the density can be rewritten as:
\[\rho=\bigotimes_{i=0}^{n-1}\left[(1-p_{i})|0_{i}\rangle\langle 0_{i}|+p_{i}|1_{i} \rangle\langle 1_{i}|\right], \tag{7.11}\]
with \(p_{i}=(1+e^{\alpha_{i}})^{-1}\). The mixed-state equivalent of the HF pure state case is obtained when all the \(p_{i}\) are either equal to \(0\) or \(1\).
We then consider that the total register is separated into two sets of qubits as depicted schematically in Fig. 7.1-a forming the subsystems \(A\) and \(B\) discussed previously. We then denote by \(S_{(i_{1},\ldots,i_{k})}\) the entropy associated to the subsystem containing the qubits \((i_{1},\ldots,i_{k})\). Because of the tensor structure of the total density, Eq. (7.11), this entropy verifies:
\[S_{(i_{1},\ldots,i_{k})}=\sum_{m=1,k}S_{(i_{m})}, \tag{7.12}\]
where \(S_{(i)}\) denotes the entropy of a subsystem formed by the single qubit \(i\). This entropy is given by:
\[S_{(i)}=-\left[p_{i}\ln p_{i}+(1-p_{i})\ln(1-p_{i})\right]. \tag{7.13}\]
Eq. (7.12) implies that the mutual information \(M_{AB}\) of any partition of the total register is zero independently of the number of qubits or which qubits are included in each subsystem. Said differently, there is no entanglement when a density like (7.10) is considered.
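A small numerical check of this statement, with illustrative occupation probabilities \(p_{i}\): the von Neumann entropy of the product density of Eq. (7.11) equals the sum of the single-qubit entropies of Eq. (7.13), so the mutual information vanishes.

```python
# Product density of Eq. (7.11): entropies are additive and M_AB = 0.
import numpy as np

def von_neumann(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return -np.sum(w * np.log(w))                  # natural log, as in Eq. (7.13)

p = [0.1, 0.6, 0.9]                                # toy occupation probabilities p_i
rho = np.array([[1.0]])
for pi in p:
    rho = np.kron(rho, np.diag([1 - pi, pi]))      # Eq. (7.11)

S_total = von_neumann(rho)
S_singles = sum(von_neumann(np.diag([1 - pi, pi])) for pi in p)
print(S_total, S_singles)                          # equal: no entanglement in the register
```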
### Understanding entanglement generation with the Matrix Product State representation
Information flow along a circuit can be described in terms of causal 'light cones' relating a local action on a subset of qubits to its effects on other qubits at a later stage. Lieb-Robinson bounds limit the speed at which quantum correlations, i.e., entanglement, can be generated [249]. When translated into the language of digital quantum circuits, these bounds prescribe that a certain depth is required to reach a certain amount of entanglement. A natural framework to better understand this is the Matrix Product State (MPS) representation [250; 41] and the associated quantum circuits [251] (see section 6.2.9).
#### 7.3.1 Constructing the MPS representation of any state \(|\Psi\rangle\)
In this section, we briefly review a standard derivation (see also [41]) of the MPS representation starting from an arbitrary pure state, to shed light on the link between the MPS representation and the entanglement entropy.
Let us consider a general state of \(n\) qubits \(|\Psi\rangle=\sum_{\sigma_{i}=0,1}c_{\sigma_{1}\ldots\sigma_{n}}|\sigma_{1}, \ldots,\sigma_{n}\rangle\). The complex amplitudes \(c_{\sigma_{1}\ldots\sigma_{n}}\), understood as elements of a tensor of rank \(2^{n}\), can be written as a MPS as given by Eq. (6.24). The proof briefly recalled below uses a strategy schematically represented in Fig. 7.1-b. It corresponds to an iterative set of separations of the full register in two subsystems together with applications of singular value decompositions (SVDs). This proof can be summarized as follows:
(i) Consider \(c_{\sigma_{1},\ldots\sigma_{n}}\) as a \(2\times 2^{n-1}\) matrix \(c_{\sigma_{1},(\sigma_{2}\cdots\sigma_{n})}\) and perform a SVD of it:
\[c_{\sigma_{1},(\sigma_{2}\ldots\sigma_{n})}=\sum_{a_{1}=0}^{r_{1}-1}U_{\sigma_ {1}a_{1}}s_{a_{1}}V_{a_{1},(\sigma_{2}\ldots\sigma_{n})}^{\dagger}, \tag{7.14}\]
where the \(s_{a_{1}}\) are the nonzero singular values, whose number, i.e., the Schmidt rank, is denoted by \(r_{1}\). It verifies \(r_{1}\leq 2\). One can then introduce the notation \([A^{(1)}]^{\sigma_{1}}_{1,a_{1}}=U_{\sigma_{1},a_{1}}\) and absorb the \(s_{a_{1}}\) in \(V\) to give:
\[c_{\sigma_{1}\cdots\sigma_{n}}\equiv\sum_{a_{1}}[A^{(1)}]^{\sigma_{1}}_{1,a_{ 1}}G_{(a_{1},\sigma_{2}),(\sigma_{3},\ldots,\sigma_{n})}. \tag{7.15}\]
(ii) The matrix \(G\) has the dimension \((2r_{1})\times 2^{n-2}\). We can then redo an SVD on the matrix \(G\), giving a number \(r_{2}\) of nonzero singular values with \(r_{2}\leq\min(2r_{1},2^{n-2})\). The process is then iterated until the amplitudes get rewritten as contractions over a _tensor train_ comprising \(n\) tensors with rank one (vectors) or two (matrices):
\[c_{\sigma_{1}\cdots\sigma_{n}} \equiv \left([A^{(1)}]^{\sigma_{1}}\right)\left([A^{(2)}]^{\sigma_{2}} \right)\cdots\left([A^{(n)}]^{\sigma_{n}}\right) \tag{7.16}\] \[= \sum_{\begin{subarray}{c}\{a_{i}=0,\\ \ldots,r_{i}-1\}\end{subarray}}[A^{(1)}]^{\sigma_{1}}_{1,a_{1}}[A^{(2)}]^{ \sigma_{2}}_{a_{1},a_{2}}\cdots[A^{(n)}]^{\sigma_{n}}_{a_{n-1}}.\]
The indices not summed over (the \(\sigma_{j}\)) are referred to as the _physical indices_. Here, each can take two different values. On the other hand, internal indices that are summed over (the \(a_{i}\)) correspond to so-called _virtual indices_. The different ranks \(\{r_{i}\}\) verify \(r_{i+1}\leq\min(2r_{i},2^{n-i-1})\leq 2^{n/2}\) and \(\chi=\max_{i}(r_{i})\) is nothing but the bond dimension (BD) discussed in section 6.2.9.
Figure 7.1: (a) Schematic illustration of a separation of a qubit register into two subsystems where a set of \(k\) qubits \((i_{1},\cdots i_{k})\) from a subsystem \(A\), while \(B\) contains all other qubits of the total register (A+B). (b) schematic view of how a general tensor is decomposed to give an MPS form.
A corollary of the above demonstration is the following inequality:
\[2^{S_{\rm max}}\leq\chi\leq 2^{n/2}, \tag{7.17}\]
that is a consequence of Eq. (7.9).
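The iterative SVD construction just described can be written down in a few lines; the sketch below (random four-qubit state, tensors stored as (left, physical, right), all illustrative) reproduces the bond-dimension bound of Eq. (7.17).

```python
# MPS from successive SVDs, Eqs. (7.14)-(7.16): peel off one physical index at a
# time, keep the nonzero singular values, and absorb them into the remainder.
import numpy as np

rng = np.random.default_rng(3)
psi = rng.normal(size=16) + 1j * rng.normal(size=16)
psi /= np.linalg.norm(psi)                          # random 4-qubit state

def to_mps(state, n_qubits, tol=1e-12):
    tensors, rest, chi_left = [], state.reshape(1, -1), 1
    for _ in range(n_qubits - 1):
        m = rest.reshape(chi_left * 2, -1)          # group (a_{k-1}, sigma_k) vs the rest
        u, s, vh = np.linalg.svd(m, full_matrices=False)
        keep = s > tol                              # Schmidt rank r_k
        u, s, vh = u[:, keep], s[keep], vh[keep]
        tensors.append(u.reshape(chi_left, 2, -1))  # tensor A^{(k)}
        rest, chi_left = np.diag(s) @ vh, len(s)
    tensors.append(rest.reshape(chi_left, 2, 1))    # last tensor A^{(n)}
    return tensors

mps = to_mps(psi, 4)
print([t.shape for t in mps])  # internal dimensions bounded by 2^{n/2} = 4, cf. Eq. (7.17)
```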
#### 7.3.2 Entanglement in various systems
In some systems, such as the ground states of gapped, local Hamiltonians, the bond dimension scales favorably with system size \(n\) by virtue of the _area law_ [252]: the bipartition entropy increases as the area \(\propto n^{d-1}\) of the bipartition (\(d\) refers to the dimension) rather than the volumes \(\propto n^{d}\) of the subsystems. For such systems, the MPS representation thus offers a tractable way to store the wave function. Conversely, other systems require exponentially large BDs, and it may be advantageous to turn to a quantum computer to represent them, for instance, for ground states of 2D local Hamiltonians. Another example is the time-evolving state of a quenched many-body problem, which typically displays a ballistic growth of entanglement with time \(t\), \(S\propto t\): then, \(\chi\) needs to scale exponentially with \(t\).
Due to their properties, MPS can be used to simulate quantum computers that generate weakly entangled states [253] or that are plagued with a finite fidelity [254; 255]. MPS are also increasingly used to study quantum chemical systems despite the nonlocal character of the Coulomb interaction tensor [44].
Let us also mention that the entanglement entropy, and thus the size of the MPS representation, is heavily basis-dependent. This dependence is illustrated in Figure 7.2 where the ground state entanglement entropy of the half-filled Hubbard dimer is plotted as a function of the ratio \(U/t\), which is a measure of correlations in the system. In the original basis in which the Hubbard Hamiltonian is written (denoted here as the site-spin basis), the entanglement entropy is maximal (and saturates the bound) at \(U/t=0\) and decreases to a non-vanishing asymptotic value as \(U/t\to\infty\). Conversely, in the Fourier-transformed basis, the entanglement entropy vanishes at \(U/t=0\) and increases monotonically to the asymptotic value as \(U/t\to\infty\). This asymptotic value can be understood from the form of the ground state, which tends to the superposition \(\frac{1}{\sqrt{2}}\left(\left|\uparrow\downarrow\right\rangle+\left|\downarrow \uparrow\right\rangle\right)\) as \(U/t\) increases.
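As a quick numerical check of the asymptotic value quoted above, the following NumPy sketch (our own illustration, treating each site as an effective two-level system spanned by \(\left|\uparrow\right\rangle\) and \(\left|\downarrow\right\rangle\) once charge fluctuations are frozen out) computes the bipartition entropy from the Schmidt values and recovers \(\ln 2\) for the superposition \(\frac{1}{\sqrt{2}}\left(\left|\uparrow\downarrow\right\rangle+\left|\downarrow\uparrow\right\rangle\right)\).

```python
import numpy as np

def bipartition_entropy(psi, dim_a, dim_b):
    """Von Neumann entropy (natural log) of subsystem A for a pure state of A+B."""
    s = np.linalg.svd(np.asarray(psi).reshape(dim_a, dim_b), compute_uv=False)
    p = s**2
    p = p[p > 1e-15]
    return float(-np.sum(p * np.log(p)))

# Large-U/t limit: the ground state tends to (|up,down> + |down,up>)/sqrt(2).
psi = np.zeros(4)
psi[1] = psi[2] = 1 / np.sqrt(2)          # basis ordering: |uu>, |ud>, |du>, |dd>
print(bipartition_entropy(psi, 2, 2), np.log(2))   # both are ln 2 ~ 0.693
```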
#### 7.3.3 Generating an MPS with a quantum computer
Above a specific entanglement entropy and correspondingly a certain MPS bond dimension, Matrix Product States become impractical to store on a classical computer. This subsection explains how MPS can be generated using a quantum computer.
The MPS is entirely characterized by the set of tensors \([A^{(i)}]^{\sigma_{i}}_{a_{i-1},a_{i}}\). In Fig. 7.3, we show a simple circuit to create an MPS with a uniform \(\chi=2\). MPS with such a bond dimension are, for instance, Greenberger-Horne-Zeilinger (GHZ) states and so-called \(\left|W\right\rangle\) states [256; 257].
The MPS example shows that the bond dimension \(\chi\) controls the entanglement entropy and prescribes a certain depth for the quantum circuit preparing the MPS on a linearly-connected chip. Indeed, applying a two-qubit gate on qubits with local BD \(\chi_{k}\) yields an MPS with local BD \(\chi^{\prime}_{k}\leq 2\chi_{k}\). This result is illustrated in Figure 7.4 with tensor network formalism. As a consequence, to prepare an MPS with BD \(2^{n}\), one has to resort to a circuit of depth \(n\).
The possibility of designing complex trial states and controlling the degree of entanglement is an active field of research today. The tensor network discussed here is beneficial to understand the link between the gate structure used in a circuit and the achieved complexity
Figure 7.2: Entanglement entropy displayed by the ground state of the half-filled (\(\mu=U/2\)) Hubbard dimer, i.e., with two doubly degenerate sites, as a function of the ratio \(U/t\) of the Hamiltonian defined in Eq. (2.2). In the site-spin basis, the entanglement entropy saturates the upper bound at \(U/t=0\). At high \(U/t\), local charge fluctuations are suppressed, pinning the entanglement entropy to a nonzero asymptotic value. Turning to the reciprocal Fourier basis – the diagonalization basis of the quadratic/single-particle part of the Hamiltonian – one sees that the ground state exhibits no entanglement at \(U/t=0\), with an entropy that increases monotonically to the asymptotic value. This fact illustrates the strong dependence of the entanglement entropy on the single-particle basis used. Note that here, we have calculated \(S\) using the natural logarithm \(\ln\) rather than \(\log_{2}\).
in entangling particles in many-body systems. The capability of layered ansatze to encompass some physical Hamiltonian ground states, as well as the BD of the converged wavefunction they yield, was studied in, e.g., Ref. [259].
## 8 Noise in quantum processors
Quantum computers are imperfect and noisy. Performing calculations with quantum devices today means being able to accommodate these imperfections. Enormous efforts are being made today to understand/correct the different sources of noise. In the meantime, methods are developed to obtain acceptable results despite the various noise sources. Here, we present a brief discussion on (i) how imperfect quantum computing can be understood and might affect the evolution of a quantum system and on (ii) some methods that are used today to, at least partially, get rid of the effects of noise. The discussion below is not explicitly dedicated to application in the many-body sector but applies to any quantum computing problem.
We refer the reader to, e.g., [260] for a more in-depth review of decoherence, and to [261] for mathematical aspects of noisy quantum computations.
### Decoherence in NISQ processors
Imperfections on current quantum processors can be broken down into two categories: coherent and incoherent errors. Coherent errors are systematic errors like calibration errors. For instance, if the qubit's frequency is not known precisely (say it is \(\omega_{0}+\epsilon\) instead of \(\omega_{0}\)), executing a \(z\)-rotation gate as described in section 3.1.2 with a drive frequency \(\omega_{c}=\omega_{0}\) will result in an over \(z\)-rotation of angle \(\epsilon t\). Coherent errors can thus be described as additional unwanted unitary operations. In theory, they are reversible since a unitary operation \(U\) can be undone by applying the hermitian conjugate operator \(U^{\dagger}\).
Incoherent errors, on the other hand, are stochastic. They come from the uncertainty on the quantum processor's state brought by its interaction with the outside world, often called the environment. In principle, they cannot be undone and are thus irreversible. The only way to avoid decoherence induced by the environment is to isolate as much as possible the quantum computer from the rest of the world.
In this section, we focus on describing incoherent errors and their modeling in analog and digital quantum processors.
#### 8.1.1 Describing the state of a noisy quantum computer: the density matrix
Thus far, we have described the state of a quantum processor, whether analog or digital, by its wavefunction \(\ket{\Psi}\). Gates and measurements have been introduced as acting on this object.
In noisy computers, unwanted interactions with the environment lead to a loss of information on the system's state. To capture this uncertainty, the state of the quantum system can no longer be described as a single wavefunction \(\ket{\Psi}\), but as a statistical mixture of wavefunctions: the system is said to be in states \(\{\ket{\Psi_{i}}\}_{i}\) with probabilities \(\{p_{i}\}_{i}\). Thus, the average of an observable is no longer \(\langle O\rangle=\bra{\Psi}O\ket{\Psi}\) but \(\langle O\rangle=\sum_{i}p_{i}\bra{\Psi_{i}}O\ket{\Psi_{i}}\). A convenient object to manipulate this uncertain (or _mixed_) state is the so-called density matrix \(\rho\):
\[\rho=\sum_{i}p_{i}\ket{\Psi_{i}}\bra{\Psi_{i}}. \tag{8.1}\]
Figure 7.3: Illustration of a one-layer set of quantum gates used to create an MPS circuit (adapted from [251]). The ensemble of gates \(G^{[i]}\) has been constructed by truncating the SVD decomposition of the tensors in Eq. (7.16) following the method described in [258]. This MPS has \(\chi=2\). Higher \(\chi\) can be constructed by repeating the sequence as a set of layers.
Figure 7.4: Schematic representation of the effect of a two-qubit gate on the associated local bond dimension of an MPS. The tensor network’s bonds (horizontal edges, standing for the summation over virtual indices) and legs (vertical edges, representing physical indices) are represented by their dimensionality. After contraction over the internal indices, an MPS form is retrieved with an SVD.
This object completely describes the state of a noisy quantum computer. For instance, one can check that the expectation value \(\left\langle O\right\rangle\) given above can be recovered as \(\mathrm{Tr}[\rho O]\).
The density matrix has important properties: it is Hermitian, positive semidefinite, and has unit trace [244]. These properties ensure it can describe a statistical mixture. In the absence of noise, the state of the quantum processor becomes deterministic: \(\rho\) is given by \(\rho=\left|\Psi\right\rangle\left\langle\Psi\right|\). The state is called "pure", and the Schrodinger equation describes its evolution. For a given \(\rho\), one can tell whether it corresponds to a pure state or a mixed state by looking at the rank of the operator (rank one is a pure state) or at a quantity called purity, \(\mathcal{P}=\mathrm{Tr}\rho^{2}\). The state is pure if \(\mathcal{P}=1\). Otherwise, \(\mathcal{P}<1\).
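These statements are easy to verify numerically. The short NumPy sketch below (our own illustration) computes \(\langle Z\rangle=\mathrm{Tr}[\rho Z]\) and the purity \(\mathrm{Tr}\rho^{2}\) for a pure superposition and for the corresponding fully dephased mixture.

```python
import numpy as np

Z = np.diag([1.0, -1.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)

rho_pure = np.outer(plus, plus.conj())      # |+><+|, a pure state
rho_mixed = 0.5 * np.diag([1.0, 1.0])       # equal classical mixture of |0> and |1>

for name, rho in [("pure", rho_pure), ("mixed", rho_mixed)]:
    expval = np.trace(rho @ Z).real          # <Z> = Tr[rho Z]
    purity = np.trace(rho @ rho).real        # Tr[rho^2]: 1 for the pure state, 1/2 for the mixture
    print(name, expval, purity)
```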
Let us now describe how (possibly noisy) operations act on a noisy processor's state \(\rho\).
#### 8.1.2 Describing noise in analog processors: Lindblad master equation
Schrodinger's equation describes the temporal evolution of perfect analog processors. In theory, one could describe the temporal evolution of noisy analog processors by describing the state of the processor and of the environment as a single wavefunction \(\left|\Psi_{\mathrm{tot}}\right\rangle\). Its evolution would be driven by a total Hamiltonian \(H_{\mathrm{tot}}=H+H_{\mathrm{env}}+H_{\mathrm{coupling}}\) (where \(H_{\mathrm{env}}\) is the Hamiltonian of the environment and \(H_{\mathrm{coupling}}\) that of the coupling between the processor and the environment). One could then recover, e.g., average values of the processor's observables by computing \(\left\langle\Psi_{\mathrm{tot}}\right|O\left|\Psi_{\mathrm{tot}}\right\rangle\), or, equivalently, \(\mathrm{Tr}[\rho O]\) with \(\rho\) defined by "eliminating" the environmental degrees of freedom via a partial trace operation. This operation is denoted as \(\rho=\mathrm{Tr}_{\mathrm{env}}\left|\Psi_{\mathrm{tot}}\right\rangle\left\langle \Psi_{\mathrm{tot}}\right|\).
However, this strategy is often impractical because the environment generically comprises many degrees of freedom that (i) one cannot describe individually and (ii) one cannot solve the corresponding Schrodinger equation because of the huge size of the total Hilbert space. One thus looks for time-evolution equations that directly focus on the minimal description of the noisy quantum processor, namely the reduced density matrix of the processor, \(\rho\) (instead of \(\left|\Psi_{\mathrm{tot}}\right\rangle\)). Such equations go under the name of "master equations". One of them--the so-called Lindblad equation [244] (also known as Gorini-Kossakowski-Sudarshan-Lindblad equation)--is of particular interest since it guarantees that the time evolution of the density matrix will preserve the essential properties of \(\rho\), namely its unit trace and its positive semidefinite character. It reads:
\[i\hbar\frac{d\rho}{dt}=\left[H(t),\rho\right]-\frac{i}{2}\sum_{m}\left[\left\{ L_{m}^{\dagger}L_{m},\rho\right\}-2L_{m}\rho L_{m}^{\dagger}\right]. \tag{8.2}\]
Here, the \(L_{m}\) operators are known as Lindblad or "jump" operators. They are responsible for decoherence. In the absence of these operators, the system follows a unitary evolution (and the equation is called the Liouville - von Neumann equation). In the presence of these operators, the density matrix evolution becomes non-unitary with dissipation induced by the second term in the right-hand side of Eq. (8.2).
For instance, for a one-qubit system with idle qubits (\(H=0\)) and \(L=\sqrt{\gamma_{\varphi}/2}Z\), the density matrix evolves as
\[\rho(t)=\left[\begin{array}{cc}\rho_{00}(t=0)&\rho_{01}(t=0)e^{-\gamma_{\varphi}t}\\ \rho_{01}(t=0)^{*}e^{-\gamma_{\varphi}t}&1-\rho_{00}(t=0)\end{array}\right]. \tag{8.3}\]
The off-diagonal elements of \(\rho\) (sometimes called "coherences") become negligibly small with a characteristic "dephasing" time \(T_{\varphi}=1/\gamma_{\varphi}\). For \(t\gg T_{\varphi}\), the state of the quantum system becomes \(\rho\approx\rho_{00}|0\rangle\langle 0|+\rho_{11}|1\rangle\langle 1|\). If one starts from a superposed pure state \(|\psi\rangle=(|0\rangle+|1\rangle)/\sqrt{2}\), i.e., \(\rho_{ij}(t=0)=1/2\) for all \((i,j)\), one ends up in state \(\rho=1/2|0\rangle\langle 0|+1/2|1\rangle\langle 1|\). In other words, under this dephasing noise, we went from a system in a (quantum) state \(0\)_AND_\(1\) to a (classical-like) state \(0\)_OR_\(1\).
Other Lindblad operators lead to different types of noise; for instance, \(L=\sqrt{\gamma_{1}}Q^{-}\) leads to a kind of noise called relaxation (or "amplitude damping") noise, which causes the qubit to lose energy to its environment by "relaxing" to its "ground state" \(\rho=|0\rangle\langle 0|\):
\[\rho(t)=\left[\begin{array}{cc}1-\rho_{11}(t=0)e^{-\gamma_{1}t}&\rho_{01}(t= 0)e^{-\gamma_{1}t/2}\\ \rho_{10}(t=0)e^{-\gamma_{1}t/2}&\rho_{11}(t=0)e^{-\gamma_{1}t}\end{array} \right]. \tag{8.4}\]
The characteristic time is \(T_{1}=1/\gamma_{1}\).
Putting these two noise models together yields the time evolution:
\[\rho(t)=\left[\begin{array}{cc}1-\rho_{11}(t=0)e^{-t/T_{1}}&\rho_{01}(t=0)e^ {-t/T_{2}}\\ \rho_{10}(t=0)e^{-t/T_{2}}&\rho_{11}(t=0)e^{-t/T_{1}}\end{array}\right] \tag{8.5}\]
with the characteristic times:
\[\frac{1}{T_{1}} = \gamma_{1}, \tag{8.6}\] \[\frac{1}{T_{2}} = \gamma_{\varphi}+\frac{\gamma_{1}}{2}=\frac{1}{T_{\varphi}}+\frac{ 1}{2T_{1}}. \tag{8.7}\]
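The decay laws above can be recovered by integrating Eq. (8.2) directly. Below is a rough first-order (Euler) NumPy sketch for a single idle qubit with both dephasing and relaxation jump operators; the rates, time step, and final time are arbitrary illustrative choices, and the printed values are compared with the analytical decays of Eq. (8.5).

```python
import numpy as np

Z = np.diag([1.0, -1.0])
sigma_minus = np.array([[0.0, 1.0], [0.0, 0.0]])   # lowering operator |0><1|

gamma_phi, gamma_1 = 0.2, 0.1
L_ops = [np.sqrt(gamma_phi / 2) * Z, np.sqrt(gamma_1) * sigma_minus]

def lindblad_rhs(rho, H, L_ops):
    """Right-hand side of Eq. (8.2) with hbar = 1."""
    drho = -1j * (H @ rho - rho @ H)
    for L in L_ops:
        drho += L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)
    return drho

H = np.zeros((2, 2))                               # idle qubit
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus).astype(complex)         # start from (|0>+|1>)/sqrt(2)

t, dt = 5.0, 1e-3
for _ in range(int(t / dt)):                       # crude Euler integration
    rho = rho + dt * lindblad_rhs(rho, H, L_ops)

T1, T2 = 1 / gamma_1, 1 / (gamma_phi + gamma_1 / 2)
print(abs(rho[0, 1]), 0.5 * np.exp(-t / T2))       # off-diagonal decay, cf. Eq. (8.5)
print(rho[1, 1].real, 0.5 * np.exp(-t / T1))       # excited-state population decay
```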
These times can be measured experimentally on real hardware by conducting Rabi experiments (for \(T_{1}\)) and Ramsey experiments (for \(T_{2}\)) (see e.g [262]). In real hardware, the \(t\)-dependence of the off-diagonal term is
generally not as simple as an exponential decay because noise is usually not white (contrary to the assumptions leading to the Lindblad equation) [263].
The effect of dephasing and relaxation noise is illustrated in Fig. 3.2: the red trajectory represents the evolution of \(\rho(t)\) under a Rabi and detuning drive and Lindblad jump operators of the dephasing and relaxation type. Relaxation pushes states towards the North pole (since it tends to relax states to \(\ket{0}\)), while dephasing pushes states towards the vertical axis of the sphere (it destroys superposed states, which sit on the equator of the sphere). These effects are visible in the figure, where the red trajectory is deformed towards the vertical axis and the North pole of the Bloch sphere.
In practice, these two coherence times are handy to crudely assess the number of gates that can be executed on a given hardware platform. Since the total execution time \(\tau_{\text{run}}\), proportional to the circuit depth times the gate time \(\tau_{\text{gate}}\), must be much shorter than the coherence time \(T\), the allowed depth is \(\ll T/\tau_{\text{gate}}\). Thus a rough quality factor for a quantum algorithm is the ratio \(T/\tau_{\text{gate}}\) (as opposed to the sole coherence time).
As already mentioned, the Lindblad master equation is itself an approximate evolution equation. It assumes that the coupling between the environment and the processor is weak and that the environment has no memory effect, a property called Markovianity. In other words, it can only describe "white" noise, i.e., noise without temporal correlations. This description may not be sufficient for some architectures. A prominent example is superconducting qubits, where dephasing noise is known to be "pink", i.e., its power spectral density decays as \(1/f\)[264] (instead of being constant in frequency for white noise). Other more complex equations can be used to describe dissipative and decoherence effects, particularly non-Markovian effects [265; 244; 266]. Such effects can be incorporated at the price of a significant increase in the numerical effort, a modification of the nature of the stochastic jumps, or both (see, for instance, [267; 268; 269; 270]), and are in general not incorporated to describe noisy qubits.
#### 8.1.3 Describing noise in digital processors: quantum channels.
_Noisy gates_ In digital quantum processors, the time evolution of the quantum state is specified by a sequence of gates. One usually does not have direct access to the underlying Hamiltonian \(H(t)\): for each gate, the Hamiltonian is tuned by the hardware maker to reach a target unitary operator \(U\). Because of this discrete description of the time evolution, the Lindblad equation introduced in the previous subsection is not the most convenient way to study the time evolution of the quantum state.
The most straightforward way to translate the perfect unitary evolution of the wavefunction \(\ket{\Psi}\) induced by quantum gates into a noisy evolution is to describe each operation (gate) as a transformation of the density matrix \(\rho\) introduced in subsection 8.1.1. Owing to the linear nature of the Schrodinger equation, this transformation--that we shall call \(\mathcal{E}\)--is linear. It must preserve the critical properties of \(\rho\), namely its unit trace (\(\mathcal{E}\) is said to be "trace-preserving" (TP)) and positive semidefinite character (\(\mathcal{E}\) is then said to be "positive"). The mapping must also be such that any extension \(\mathcal{E}\otimes I\) to a larger space is positive, a property called "complete positivity". Thus, a noisy quantum gate is a completely positive, trace-preserving (CPTP) map acting on density matrices. It is also called a quantum channel.
Quantum channels have several equivalent representations that are used in different contexts. A widespread representation is the Kraus, or operator-sum representation [271; 272]:
\[\mathcal{E}(\rho)=\sum_{k=1}^{K}E_{k}\rho E_{k}^{\dagger}. \tag{8.8}\]
The \(E_{k}\) operators are called Kraus operators. \(K\) is called the Kraus rank. Trace preservation imposes \(\sum_{k}E_{k}^{\dagger}E_{k}=I\). The \(K=1\) case corresponds to a unitary evolution since, in this case, \(E_{1}^{\dagger}E_{1}=I\) and the density matrix transforms as \(\rho\to E_{1}\rho E_{1}^{\dagger}\), i.e., a pure state \(\ket{\Psi}\) is mapped to a pure state \(E_{1}\ket{\Psi}\).
Alternative representations that can be used include the Pauli transfer matrix (PTM, [273]), the matrix representation of the linear map written on the basis of Pauli matrices. The matrix representation is sometimes called the superoperator (or \(\mathcal{S}\)-matrix) representation when expressed on the canonical matrix basis. One can also mention the \(\chi\)-matrix (or process) representation [274], the Choi-Jamiolkovski representation [275; 276], and the Stinespring dilation [277]. Graphical representations of these equivalent variants are given in [278].
The time-dependent approach presented in section 8.1.2 to describe the effect of noise and the one presented here are related. Assuming a certain density matrix at time \(t\), denoted by \(\rho(t)\), the Lindblad equation evolves it in the presence of noise. Said differently, through the solution of the Lindblad equation, for a given time \(\mathrm{d}t>0\), we obtain \(\rho(t+\mathrm{d}t)\). For a given \(\mathrm{d}t\), one can introduce a set of Kraus operators \(\{E_{k}(\mathrm{d}t)\}\) that can be related to the Hamiltonian and Lindblad operators (see, e.g., [279]). A schematic view of the connection
between the Lindblad and Kraus techniques is given in Fig. 8.1.
In a quantum computer, each noisy quantum gate is entirely described by its Kraus operators (or any other representation of the quantum channel). This description also includes "idling noise", namely the noise that qubits incur when left idle between two gate applications: idling noise merely corresponds to a "noisy identity" map. In their simplest form, quantum channels act only on the qubits operated on by the gate at stake. However, crosstalk effects--the fact that the gate acts on other qubits than the intended ones--can, in principle, be taken into account by extending the channel's support. Finally, let us note that non-Markovian effects (temporal correlations) are not captured by such a discrete description of noise: the quantum channel that comes after a given noisy gate is not modified by the preceding quantum channels.
#### Noisy circuits
In all generality, a noisy quantum circuit is an \(n\)-qubit quantum channel \(\mathcal{E}\) that turns an initial state \(\boldsymbol{\rho}_{\text{ini}}\) into a final state \(\boldsymbol{\rho}_{\text{f}}=\mathcal{E}(\boldsymbol{\rho}_{\text{ini}})\). In practice, as illustrated in Fig. 8.2, one can approximate this "global" channel by a sequence of local channels (\(\mathcal{E}^{\text{H}}\), \(\mathcal{E}^{\text{CNOT}}\), etc. in the figure) acting on an initial state that can be approximated as a product state: \(\boldsymbol{\rho}_{\text{ini}}=\rho_{0}\otimes\rho_{1}\otimes\rho_{2}\)[280]. If a gate is known to suffer from crosstalk (like the \(X\) gate in the figure), one can take this into account by assuming that the corresponding channel acts on more qubits than expected from the ideal gate. Noise also affects "idle" qubits: this is illustrated by the \(\mathcal{E}^{\text{I}}\) boxes in Fig. 8.2. Typically, if the "idling noise" is of dephasing and amplitude damping type, then the action of these CPTP maps is defined by the expression of Eq. (8.5). Equivalently, this corresponds to the following Kraus operators:
\[E_{0}^{(\text{PD})} =\left[\begin{array}{cc}1&0\\ 0&\sqrt{1-p_{(\text{PD})}}\end{array}\right],\quad E_{1}^{(\text{PD})}=\left[\begin{array}{cc}0&0\\ 0&\sqrt{p_{(\text{PD})}}\end{array}\right], \tag{8.9}\] \[E_{0}^{(\text{AD})} =\left[\begin{array}{cc}1&0\\ 0&\sqrt{1-p_{(\text{AD})}}\end{array}\right],\quad E_{1}^{(\text{AD})}=\left[\begin{array}{cc}0&\sqrt{p_{(\text{AD})}}\\ 0&0\end{array}\right] \tag{8.10}\]
with the pure dephasing and amplitude damping probabilities \(p_{(\text{PD})}(\tau)=1-e^{-2\tau/T_{\varphi}}\) and \(p_{(\text{AD})}(\tau)=1-e^{-\tau/T_{1}}\) (in a Markovian/white noise approximation).
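To make the channel picture concrete, the following NumPy sketch (our own illustration) builds the Kraus operators of Eqs. (8.9)-(8.10), checks the trace-preservation condition \(\sum_{k}E_{k}^{\dagger}E_{k}=I\), and applies one step of idling noise to a superposition state.

```python
import numpy as np

def kraus_pd(p):
    """Pure-dephasing Kraus operators, Eq. (8.9)."""
    return [np.diag([1.0, np.sqrt(1 - p)]), np.diag([0.0, np.sqrt(p)])]

def kraus_ad(p):
    """Amplitude-damping Kraus operators, Eq. (8.10)."""
    return [np.diag([1.0, np.sqrt(1 - p)]),
            np.array([[0.0, np.sqrt(p)], [0.0, 0.0]])]

def apply_channel(rho, kraus):
    """Operator-sum representation, Eq. (8.8)."""
    return sum(E @ rho @ E.conj().T for E in kraus)

T_phi, T_1, tau = 50.0, 100.0, 1.0            # idling time tau between gates (arbitrary units)
p_pd = 1 - np.exp(-2 * tau / T_phi)
p_ad = 1 - np.exp(-tau / T_1)

for kraus in (kraus_pd(p_pd), kraus_ad(p_ad)):
    assert np.allclose(sum(E.conj().T @ E for E in kraus), np.eye(2))  # trace preservation

plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus)
rho = apply_channel(apply_channel(rho, kraus_pd(p_pd)), kraus_ad(p_ad))
print(rho)   # off-diagonals shrink by exp(-tau/T_2), populations relax towards |0><0|
```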
The "local" Kraus operators corresponding to the local quantum channels can be determined by so-called quantum process tomography methods (a "process" is another name for transforming the density matrix). They are methods for experimentally characterizing the quantum channel by measuring the output distribution of a noisy gate for a well-chosen set of inputs. Since these inputs are prepared using a priori unknown noisy gates, one has to resort to self-consistent schemes to solve this chicken-and-egg problem. Such schemes go under the broad name of gate-set tomography (GST [281, 282, 283]).
In the absence of tomography, one can also resort to generic quantum channels to study the effect of noise on the execution of quantum circuits. Such channels include the amplitude damping (or relaxation) mentioned above and pure-dephasing channels, as well as
Figure 8.1: Two equivalent ways (via the Lindblad equation [solid blue arrow] or Kraus operators [solid black arrow]) to describe the evolution of the density matrix from time \(t\) to time \(t+dt\), compared to unitary evolution (dashed green arrow).
Figure 8.2: Schematic representation of a noisy circuit. Instead of a global quantum channel acting on all qubits of the initial state, followed by a global POVM, one can (approximately) break down the noisy evolution as a succession of more or less local quantum channels applied on a factorized initial state followed by local two-outcome POVMs.
the depolarizing channel and the bit-flip channel [9]. For instance, the depolarizing channel is defined by the expression:
\[\mathcal{E}(\rho)=(1-p)\rho+p\frac{I}{2^{n}}, \tag{8.11}\]
where \(n\) is the number of qubits. It leaves the density matrix unchanged with probability \(1-p\) and turns it into the "maximally mixed state" \(I/2^{n}\) with probability \(p\).
_Noisy measurements_ From a mathematical perspective, measurements are so-called positive operator-valued measures (POVM), defined as a set of so-called POVM elements \(\{F_{i}\}\), which are positive semi-definite matrices summing to identity (\(\sum_{i}F_{i}=I\)) and such that the probability of getting the outcome \(i\) is given by Born's rule,
\[P(i)=\mathrm{Tr}\left[\rho F_{i}\right]. \tag{8.12}\]
In the description of perfect quantum computers (section 3.1.2), we introduced the measurement of observable "\(Z\)". In general, the measurement of an observable \(O\) and the corresponding POVM is given by the decomposition \(O=\sum_{i}o_{i}F_{i}\). For instance, for \(O=Z\), we have \(F_{0}=|0\rangle\langle 0|\), \(F_{1}=|1\rangle\langle 1|\) and \(o_{0}=1\), \(o_{1}=-1\). \(Z\) is a particular example of two-outcome POVM. Generally, a noisy two-outcome POVM is completely determined by a matrix \(F_{0}\) (the other given by \(F_{1}=I-F_{0}\)).
Typically, one can suppose that the final measurements on each qubit are independent, and thus, since they are also two-outcome, completely determined by \(\{F_{0}^{(k)}\}_{k=0\ldots n-1}\) (see Fig. 8.2).
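The sketch below (our own toy readout-error model, not taken from the text) illustrates a noisy two-outcome POVM: \(F_{0}\) deviates from the ideal projector \(|0\rangle\langle 0|\) by hypothetical assignment-error rates, and the outcome probabilities follow Born's rule, Eq. (8.12).

```python
import numpy as np

# Ideal Z measurement: F0 = |0><0|, F1 = |1><1|.
# Toy readout-error model: outcome 0 is reported with probability 1-e0 when the
# qubit is in |0>, and with probability e1 when it is in |1>.
e0, e1 = 0.02, 0.05                      # hypothetical assignment-error rates
F0 = np.diag([1 - e0, e1])
F1 = np.eye(2) - F0                      # POVM elements sum to the identity

plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus)

p0 = np.trace(rho @ F0).real             # Born's rule, Eq. (8.12)
p1 = np.trace(rho @ F1).real
print(p0, p1, p0 + p1)                   # probabilities sum to 1
```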
#### 8.1.4 Decoherence and fidelity
Noise in quantum circuits has a dramatic influence on the fidelity \(F\) of the output states. \(F\) measures the similarity of the state \(\rho\) that is actually output by the (noisy) processor with the state \(|\Psi\rangle\) that would have been output by a perfect computer, \(F=\langle\Psi|\rho|\Psi\rangle\).
A heuristic law relates the average error rate \(\varepsilon_{k}\) of individual operations (gates, measurements...) to the final fidelity [6]:
\[F=\prod_{k=1}^{N_{\mathrm{ops}}}(1-\epsilon_{k})\approx\exp(-\epsilon N_{ \mathrm{ops}}), \tag{8.13}\]
with \(N_{\mathrm{ops}}\) the total number of operations, and we have assumed an identical error rate in deriving the approximate scaling.
In other words, the output fidelity falls exponentially with the individual error rate and the number of operations. This law is also heuristically observed when the errors come from compression algorithms in random circuits [254; 255]. It is exact in the case of depolarizing noise (see Eq. (8.11)). For other noise models or assumptions, more complex inequalities relate individual error characteristics and total errors (see, e.g., [284]).
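A back-of-the-envelope application of Eq. (8.13), with illustrative numbers of our choosing:

```python
import numpy as np

eps = 1e-3                     # illustrative average error per operation
for n_ops in (100, 1_000, 10_000):
    f_exact = (1 - eps) ** n_ops
    f_approx = np.exp(-eps * n_ops)
    print(n_ops, f_exact, f_approx)
# Keeping F above 1/2 over 10^4 operations would require eps of order 7e-5.
```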
This exponential decay puts strong constraints on quantum processors' capability to outperform classical processors without quantum error correction.
### Quantum Error Mitigation
In the absence of an error-correcting scheme (see subsection 8.3), one can try to limit the effect of the incoherent errors accumulated during the execution of the circuit on the estimation of observables: this is the scope of _error mitigation_. Error mitigation does not require more physical qubits but instead trades possibly large sampling overheads for enhanced accuracy. We review a few methods here and refer the reader to [285] for a more extensive review.
#### 8.2.1 Post-selection and purification
In most applications, the output state (or a related observable) respects some mathematical properties. For instance, using the JWT technique, for a problem where particle number is conserved, the number of 1s measured is constant and equal to the particle number. Discarding measured states that do not respect the symmetries enforced by the circuit (in the example given, sampled states that have a number of 1s different from the total number of particles) provides a straightforward error mitigation scheme.
In addition, it is sometimes possible to map a noisy quantity to the pure one it represents (or at least a close approximation), a procedure referred to as _purification_. Purification can be based on so-called fermionic \(N\)-representability conditions [286]. For instance, a (well-conditioned) noisy density matrix \(\rho\) can be mapped to a pure density matrix (satisfying the idempotency criterion \(\rho^{2}=\rho\)) through repeated application of the McWeeny "purification" polynomial \(P_{\mathrm{MW}}(\rho)=3\rho^{2}-2\rho^{3}\)[287]. However, as the number of qubits increases, full density matrix tomography becomes cumbersome. Part of this complexity can be bypassed by considering the marginals of the density matrix, namely the 1- and 2-RDM that contain all the information on one- and two-body observables. Then, the methods can also
be tweaked by approximate \(N\)-representability conditions. This process requires, however, more advanced schemes than McWeeny purification. A notable exception is the preparation of a Slater determinant, e.g., within the Hartree-Fock procedure, where an idempotent 1-RDM is expected: McWeeny purification applies to the 1-RDM, providing dramatic increases in the accuracy [186]. However, the Hartree-Fock procedure is trivial on a classical computer, and the idempotency of the 1-RDM breaks down as soon as a non-Slater state is targeted.
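A minimal NumPy sketch of the McWeeny iteration mentioned above (our own illustration): starting from an idempotent 1-RDM perturbed by Hermitian noise, repeated application of \(P_{\mathrm{MW}}(\rho)=3\rho^{2}-2\rho^{3}\) drives the matrix back towards idempotency.

```python
import numpy as np

rng = np.random.default_rng(0)

# An idempotent "1-RDM" (projector onto 2 occupied orbitals out of 4), plus Hermitian noise.
U, _ = np.linalg.qr(rng.normal(size=(4, 4)))
D_exact = U @ np.diag([1.0, 1.0, 0.0, 0.0]) @ U.T
noise = 0.05 * rng.normal(size=(4, 4)); noise = (noise + noise.T) / 2
D = D_exact + noise

for it in range(6):
    idem_err = np.linalg.norm(D @ D - D)          # distance from idempotency
    print(it, idem_err)
    D = 3 * D @ D - 2 * D @ D @ D                 # McWeeny purification step
```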
#### 8.2.2 Zero-noise extrapolation (ZNE)
Within ZNE, the departure of the observable as measured \(\langle O\rangle_{\rm meas}\) from its noise-free counterpart \(\langle O\rangle_{\rm perfect}\) is assumed to depend on a single parameter, an error rate \(\epsilon_{\rm phys}\). Assuming some ansatz for the precise form of how these two are related, one can infer an estimation of \(\langle O\rangle_{\rm perfect}\) from a set of measurements corresponding to different effective error rates \(\epsilon=f(\epsilon_{\rm phys},r)\) where \(r\) is a tunable parameter.
A ZNE-specific challenge is to find a way to explore different error rates, which depend on the noise processes at play. Typically, the noise to be mitigated is the one stemming from the two-qubit gate of the set, say \(G\), and the 'rescaling' of the error rate is obtained by inserting decompositions of the identity under the form \(GG^{\dagger}\) after each occurrence of \(G\)[288]. This process does not change the state encoded by the circuit. However, it makes it more error-prone: under the assumption that a depolarizing channel can model the two-qubit gate errors, \(r\) insertions correspond to inflating the (two-qubit gate) error rate from its physical value \(\epsilon_{\rm phys}^{(2)}\) to \(\epsilon(r)=(2r+1)\epsilon_{\rm phys}^{(2)}\), and a noise-free observable can subsequently be inferred by extrapolating to the \(r=-1/2\) regime, see Figure 8.3. Alternatively, one can resort to pulse stretches rather than identity insertions to increase the noise picked up along the execution of the circuit [289]: the only underlying assumption is that the noise is time-invariant.
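The extrapolation step itself is elementary. In the sketch below (our own illustration with synthetic "measured" values), a linear ansatz in the scale factor \(2r+1\) is fitted to a few noisy values and evaluated at \(r=-1/2\), i.e., at zero effective noise.

```python
import numpy as np

# Synthetic "measurements" of <O> at noise scale factors lam = 2r+1 (r identity insertions).
# The noise-free value is 1.0 and each unit of noise shrinks the signal (toy linear model).
rs = np.array([0, 1, 2])
lams = 2 * rs + 1
o_meas = 1.0 - 0.08 * lams + 0.003 * np.random.default_rng(1).normal(size=lams.size)

slope, intercept = np.polyfit(lams, o_meas, 1)    # linear ansatz in lam
o_zne = intercept                                  # lam = 0 corresponds to r = -1/2
print(o_meas, o_zne)                               # extrapolated value is close to 1.0
```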
#### 8.2.3 Clifford data regression
Clifford data regression (CDR) [290] is a learning-based method (and is thus sometimes referred to as Learning-Based Error Mitigation [291]) where an ansatz is trained to map noisy values to noise-free ones. It applies only to digital quantum computers.
For instance, one can look for a relation of the form
\[\langle O\rangle_{\rm perfect}=a\langle O\rangle_{\rm noisy}+b \tag{8.14}\]
by fitting on a set of tuples \((\langle O\rangle_{\rm noisy}^{\mathcal{C}_{j}},\langle O\rangle_{\rm perfect}^{\mathcal{C}_{j}})\). The training set \(\{\mathcal{C}_{j}\}\) comprises circuits that are easy to simulate classically. In the original method, near-Clifford circuits were used to this end, and we will stick to this example in what follows. Alternatively, one can summon another class of easily-simulable circuits to study
Figure 8.4: Principle of Clifford data regression illustrated with a linear ansatz for interpolation. A number \(K\) (here \(K=2\)) of non-Clifford gates from the original circuit are replaced by Clifford gates (in light purple) to obtain circuits that can be simulated classically. A linear ansatz linking noisy observable measurements to noiseless values is then trained on the set of circuits obtained so. A noise-free observable can finally be inferred by interpolating from the noisy measurement result obtained on the original circuit.
Figure 8.3: Principle of zero-noise extrapolation illustrated with a linear ansatz for inference. Occurrences of two-qubit gates \(G\) are followed by a number \(r\) of resolutions of the identity \(I=GG^{\dagger}\) to scale the noise to a factor \((2r+1)\). A noiseless observable value can be inferred from the noisy observables measured on the original circuit and the circuit with \(r=1\) by linearly extrapolating to \(r=-1/2\).
fermionic systems: gaussian circuits [292]. To ensure the predictive character of Equation (8.14) (namely, that coefficients \(a\) and \(b\) obtained by fitting over the training set give good predictions when applied to the value \(\langle O\rangle_{\text{noisy}}^{\mathcal{C}}\) measured for the circuit \(\mathcal{C}\) of interest), the training set is obtained by replacing some of the non-Clifford gates in the original circuit with Clifford gates. Assume a universal gate set made of single-qubit rotations and the CNOT gate; this could be done by compiling the circuit and replacing the \(R_{z}(\theta)\) gates (which are Clifford only for \(\theta_{n}=n\pi/2\), \(n\in\{0,1,2,3\}\), because they then correspond to the phase gate to the power of \(n\), \(S^{n}\)) by some \(R_{z}(\theta_{n})\). The number of gates that are replaced acts as a refinement parameter. The non-Clifford gates to replace and their Clifford replacements are chosen according to a distance criterion. Alternatively, a Markov Chain Monte Carlo (MCMC) technique can be employed.
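The regression step of Eq. (8.14) is a simple least-squares fit, as sketched below with synthetic training data standing in for the (noisy, exact) observable pairs obtained from the near-Clifford training circuits; all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic training data: exact <O> for near-Clifford circuits and the corresponding
# noisy measurements (generated here from a hidden linear noise model).
o_exact_train = rng.uniform(-1.0, 1.0, size=20)
o_noisy_train = 0.7 * o_exact_train + 0.05 + 0.01 * rng.normal(size=20)

# Fit <O>_perfect = a <O>_noisy + b  (Eq. (8.14)) by least squares.
A = np.vstack([o_noisy_train, np.ones_like(o_noisy_train)]).T
(a, b), *_ = np.linalg.lstsq(A, o_exact_train, rcond=None)

o_noisy_target = 0.42                     # noisy measurement on the circuit of interest
print(a * o_noisy_target + b)             # mitigated estimate of the target observable
```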
A significant obstacle in successfully implementing CDR is that there is no known recipe for designing the training set optimally. A method dubbed _variable noise CDR_ (vnCDR) was proposed, which mixes ZNE and CDR features. An element of the vnCDR training set is defined by both a circuit and a noise strength. The scheme consists in guiding the ZNE with CDR, removing the need for precise knowledge of the noise strength. Note that CDR and ZNE, along with a third technique not reviewed here (Virtual Distillation [293], which employs more qubits than are needed to store the state), can be subsumed and combined in a unified framework [294].
#### 8.2.4 Quasiprobability method
The Quasi Probability Error Mitigation (QPEM) method, introduced in [295], originates from the so-called Quasi Probability Decomposition (QPD) of a perfect quantum channel \(\mathcal{E}^{\text{perfect}}\) onto a set of the noisy quantum channels \(\{\mathcal{E}_{k}\}\) that are implemented by the hardware:
\[\mathcal{E}^{\text{perfect}}(\rho)=\sum_{k\in\text{available ops}}q_{k}\mathcal{E }_{k}(\rho). \tag{8.15}\]
The set of coefficients \(\{q_{k}\}\) denote the "quasiprobabilities": trace preservation ensures \(\sum_{k}q_{k}=1\), but the \(q_{k}\) may take negative values. The "negativity" of the channel is defined as \(\eta=-\sum_{k,q_{k}<0}q_{k}\).
Measuring the expectation value of an observable \(O\) output by channel \(\mathcal{E}_{k}\) picked with probability \(\frac{|q_{k}|}{\sum_{k}|q_{k}|}\) thus provides an unbiased estimator of \(\langle O\rangle\equiv\text{Tr}(O\mathcal{E}(\rho))\) as \(C\langle\text{sgn}(q_{k})\text{Tr}(O\mathcal{E}_{k}(\rho))\rangle_{k}\). Here we have defined \(C=\sum_{k}|q_{k}|=1+2\eta\). This factor measures the sampling overhead incurred by the QPEM procedure: to maintain a given variance, one needs \(O(C^{2})\) more shots to evaluate \(\langle O\rangle\) with QPEM than would be required if one were able to implement the quantum channel \(\mathcal{E}\) perfectly.
Usually, the quasiprobability decomposition is obtained at the level of individual gates since performing the decomposition at the circuit level would be exponentially costly. The resulting \(C\)-factor will be the product of the individual ones: \(C_{\text{tot}}=\prod_{l=1}^{N_{g}}C_{l}\). Since, on the other hand, \(C_{l}=1+2\eta_{l}\approx e^{2\eta_{l}}\) (assuming weak negativity), we see that the method incurs a cost exponential in \(\eta_{\text{tot}}=\sum_{l=1}^{N_{g}}\eta_{l}\approx N_{g}\eta\) if \(\eta\) is uniform. This fact is illustrated in Fig. 8.5.
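As a toy single-channel illustration (our own construction, not from the text), the sketch below decomposes the perfect identity onto a depolarized identity followed by (assumed ideal) Pauli corrections, samples channels with probability \(|q_{k}|/C\), and recovers the noise-free expectation value from the signed, \(C\)-rescaled average.

```python
import numpy as np

rng = np.random.default_rng(3)
I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Y = np.array([[0., -1j], [1j, 0.]])
Z = np.diag([1., -1.])

p = 0.1                                        # depolarizing error of the only available "identity"
def depolarize(rho):
    return (1 - p) * rho + p * np.trace(rho) * I2 / 2

# Quasiprobability decomposition of the perfect identity onto {depolarized identity
# followed by a Pauli correction}: sum_k q_k P_k depolarize(rho) P_k = rho,
# with q0 > 1 and the three other coefficients negative.
q0 = (3 / (1 - p) + 1) / 4
qs = np.array([q0, (1 - q0) / 3, (1 - q0) / 3, (1 - q0) / 3])
paulis = [I2, X, Y, Z]
C = np.abs(qs).sum()                           # sampling-overhead factor C = 1 + 2*eta

plus = np.array([1., 1.]) / np.sqrt(2)
rho = np.outer(plus, plus).astype(complex)
O = X                                          # noise-free expectation value is <+|X|+> = 1

vals = np.array([np.trace(P @ depolarize(rho) @ P.conj().T @ O).real for P in paulis])
ks = rng.choice(4, size=100_000, p=np.abs(qs) / C)
estimate = C * np.mean(np.sign(qs)[ks] * vals[ks])
print(vals[0], estimate)                       # unmitigated value ~0.9 vs mitigated ~1.0
```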
The sampling overhead can be reduced by reintroducing some bias in the estimator of \(\langle O\rangle\), using approximate QPDs [296; 297].
Similarly to the ZNE, it requires a good knowledge of the noise processes at work in the hardware. The quantum channels (in the form, e.g., of Pauli transfer matrices) for each operation can be obtained via so-called _gate set tomography_[283; 298] as done in Ref. [299]. This process assumes that noise is both local (meaning that crosstalk between qubits can be neglected) and Markovian (time-invariant); see discussion in section 8.1.3.
Figure 8.5: Principle of quasiprobability error mitigation. Each perfect gate (represented, e.g., as a Pauli transfer matrix [PTM]) is decomposed onto available (noisy) operations (see Eq. (8.15)). The perfect observable \(\langle O\rangle_{\text{perfect}}\) is then sampled, with a sampling overhead of \(e^{2\eta_{\text{tot}}}\) compared to the noisy observable, but with no bias.
### Quantum error correction (QEC) and fault tolerance (FT) in a nutshell
#### 8.3.1 Quantum error correction
In analogy to classical computers, quantum computers can benefit from error correction by using redundancy, namely by encoding the information of one (quantum) bit, called a "logical qubit", into several physical qubits (see [300] for a recent reference). Encoded states live in a subspace \(\mathcal{C}\) (called codespace) of the physical Hilbert space designed in such a way that errors (described by a quantum channel \(\mathcal{E}\), see subsection 8.1.1 above) can be detected and then corrected using a recovery operation \(\mathcal{R}\) such that \(\mathcal{R}\circ\mathcal{E}(\rho)=\rho\), with \(\rho\in\mathcal{C}\). The encoding (or code) \(\mathcal{C}\) is chosen based on the error model \(\mathcal{E}\). The following necessary and sufficient conditions, known as QEC or Knill-Laflamme conditions [301], ensure the existence of a recovery operation:
\[P_{\mathcal{C}}E_{k}^{\dagger}E_{l}P_{\mathcal{C}}=\beta_{k}\delta_{kl}P_{ \mathcal{C}},\ \ \forall k,l \tag{8.16}\]
with \(\{E_{k}\}\) the Kraus operators associated with \(\mathcal{E}\), \(P_{\mathcal{C}}\) the projector onto the codespace, and \(\beta_{k}>0\).
The principle of QEC is illustrated in Fig. 8.6. A state \(|\psi_{L}\rangle\) of the codespace undergoes an error map \(\mathcal{E}\) and thus becomes a mixed state \(\rho=\mathcal{E}(|\psi_{L}\rangle\langle\psi_{L}|)=\sum_{k}E_{k}|\psi_{L} \rangle\langle\psi_{L}|E_{k}^{\dagger}\). In other words, there is uncertainty as to which error \(E_{k}\) occurred. Measurements of so-called "syndromes" project the state into one of the error code spaces and also allow us to determine in which error code space the state was projected. The design of these measurements is subtle due to the wave function collapse: one wants to learn information about the error that occurred _without learning information on the quantum state that was corrupted_ (lest information on the data qubit is lost). Experimentally, this is achieved by measuring ancilla qubits entangled with the "data" qubits.
As a last step, thanks to the syndrome information, the proper recovery operation is applied to recover the initial state \(|\psi_{L}\rangle\). Finding the recovery operation given a syndrome can be a complex (classical) computational task. Designing good heuristics for finding the best recovery operation is an active research topic for advanced codes.
A consequence of the QEC conditions is that the recovery operation can correct any error that is a linear combination of the Kraus operators that satisfy the QEC condition. This fact implies that a code and recovery built to protect against one-qubit Pauli noise is enough for correcting any one-qubit noise (since its Kraus operators can be decomposed on the Pauli basis). This procedure is known as the "digitization" of errors. Well-known codes include so-called stabilizer codes [302], which generalize classical linear codes and are particularly well-suited for one-qubit Pauli errors.
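As a concrete, classically checkable instance of Eq. (8.16), the following NumPy sketch verifies the Knill-Laflamme conditions for the three-qubit repetition code against single bit-flip errors; it is our own illustration of the condition, not a full error-correction simulation.

```python
import numpy as np

I2 = np.eye(2); X = np.array([[0., 1.], [1., 0.]])

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Three-qubit bit-flip (repetition) code: |0_L> = |000>, |1_L> = |111>.
zero_L = np.zeros(8); zero_L[0] = 1
one_L = np.zeros(8); one_L[7] = 1
P = np.outer(zero_L, zero_L) + np.outer(one_L, one_L)   # projector onto the codespace

# Error set: identity and a single bit flip on each qubit.
errors = [kron(I2, I2, I2), kron(X, I2, I2), kron(I2, X, I2), kron(I2, I2, X)]

# Knill-Laflamme condition, Eq. (8.16): P E_k^dag E_l P must be proportional to
# delta_{kl} P, which guarantees a recovery operation exists for these errors.
for k, Ek in enumerate(errors):
    for l, El in enumerate(errors):
        M = P @ Ek.conj().T @ El @ P
        expected = P if k == l else np.zeros_like(P)
        assert np.allclose(M, expected)
print("Knill-Laflamme conditions satisfied for single bit-flip errors")
```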
#### 8.3.2 Fault tolerance
While QEC is meant to preserve quantum information, fault tolerance (FT) is the ability to perform quantum circuits without propagating errors.
_The influence of errors during recovery_ One simple context where errors need to be considered is the recovery operation. If recovery were made with perfect gates, a code able to correct a number \(t\) of errors would have an error per gate probability of \(p_{1}=c\epsilon^{t+1}\) after correction, for an error per gate before correction of \(p_{0}=\epsilon\). Thus, one could reach arbitrarily small error rates by choosing a large enough \(t\). In practice, however, recovery is made with noisy gates. If recovery involves \(O(t^{\alpha})\) gates, then the error probability becomes \(\tilde{p}_{1}=c(t^{\alpha}\epsilon)^{t+1}\). This function is not monotonic in \(t\): at a certain point, a larger \(t\) means that recovery brings more errors than it corrects. There is thus an optimal \(t\). At the optimal \(t\), one finds a minimal error probability \(\tilde{p}_{1}^{\text{min}}(\epsilon,\alpha)\). This probability is an increasing function of the physical error rate \(\epsilon\). To ensure that no error occurs for a circuit of length \(N_{g}\), one must choose \(N_{g}\tilde{p}_{1}^{\text{min}}(\epsilon,\alpha)<1\). This in turn requires \(\epsilon<\epsilon_{0}\), with
\[\epsilon_{0}\propto\frac{1}{\log(cN_{g})^{\alpha}}. \tag{8.17}\]
The rate \(\epsilon_{0}\) is much more favorable than that one would have obtained in the absence of error correction, namely \(\epsilon_{0}\propto 1/N_{g}\).
Figure 8.6: Schematic view of the principle behind the quantum error correction (see text for more details).
Concatenation and the threshold theorem [303; 304]
In the reasoning above, with a fixed error \(\epsilon\), one still reaches a limit in terms of the circuit length one can execute without error. One way to solve this issue is concatenation, a method where the code is replicated several times, similar to a kind of renormalization group flow or fractal structure. Using \(L\) nested levels of encoding, the error probability (with \(t=1\)) becomes \(p_{L}=(c\epsilon)^{2^{L}}/c\). To reach an accuracy \(\epsilon\) for a circuit with \(N_{g}\) gates, i.e., an accuracy per gate \(\epsilon/N_{g}\), we need to use \(L\) such that \(p_{L}\leq\epsilon/N_{g}\). Such an \(L\) exists provided \(\epsilon\leq p_{\text{th}}\equiv 1/c\). Summarizing: after \(L\) levels of concatenation, starting from a physical error rate \(\epsilon\), one can achieve an error rate of
\[p_{L}=p_{\text{th}}\left(\frac{\epsilon}{p_{\text{th}}}\right)^{2^{L}}. \tag{8.18}\]
Provided the physical error rate is below a threshold value \(p_{\text{th}}\), the error rate after concatenation is reduced doubly exponentially. This fact is illustrated in Fig. 8.7. At the same time, the circuit length is increased exponentially (we have a length \(N_{g}^{L}\) after concatenation).
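Plugging illustrative numbers into Eq. (8.18) (with the threshold of Fig. 8.7 and a physical error rate a factor of 10 below it, both choices of ours) makes the doubly exponential suppression explicit:

```python
p_th = 0.1                           # assumed threshold, as in Fig. 8.7
eps = 0.01                           # physical error rate, a factor 10 below threshold
for L in range(5):
    p_L = p_th * (eps / p_th) ** (2 ** L)
    print(L, p_L)                    # 1e-2, 1e-3, 1e-5, 1e-9, 1e-17
```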
_Surface and color codes_ In practice, concatenation often requires long-range two-qubit gates, which are unavailable in current and near-term hardware. This issue is addressed by, e.g., surface codes [305; 306; 307], and color codes [308; 309], which are subclasses of stabilizer codes with good locality properties and are thus more suitable for actual hardware. A threshold theorem also holds for these codes, except one does not scale the number of concatenation levels but the size of the lattice in which the codes live. The error rate scales exponentially with the lattice size (instead of the double exponential of concatenated codes) for the surface code. In contrast, the circuit length scales linearly with the size (instead of exponentially).
First implementations of QEC with logical error rates close or slightly below the threshold have been demonstrated recently with surface codes on superconducting qubits [310; 311], and color codes on trapped ions [312]. Interestingly, new types of superconductor-based hardware implementations are being developed specifically to perform more robust and/or economical quantum error correction [313].
## 9 Conclusions
Many-body systems are among the hardest problems to solve with classical computers. Their defining property is an exponential difficulty that can appear in many guises: the sheer size of the relevant portion of the Hilbert space, the Monte-Carlo sign problem, or a bond dimension exponential in the entanglement.
In the last century, scientists have accumulated deep expertise in solving this problem in many areas of physics and chemistry on classical computers despite the aforementioned exponential wall: a plethora of heuristic methods to gain insights into the exotic physics of these systems has been designed. These methods usually resort to the most advanced numerical techniques and are thus a difficult target for nascent quantum processors. However, there are some regimes where the complexity of the problems at stake still prevents these classical methods from uncovering the critical physical mechanisms at play.
Quantum computers can be regarded as promising complementary tools to tackle such problems in difficult regimes. Indeed, as illustrated in this review, quantum processors are physical many-body systems, contrary to classical computers. Therefore, at least on paper, they appear as an ideal tool for understanding many-body phenomena. We presented several methods proposed in recent years to leverage this many-body nature with many different computational paradigms. In theory, these methods offer interesting solutions to the exponential wall. Yet, contrary to classical computers, today's quantum processors must also reckon with decoherence. This hurdle is also generically, in the absence of error correction, of exponential nature. Therefore, the practical gain of using quantum processors needs to be carefully assessed by factoring in the advantages of exploiting inherent many-body phenomena together with the corresponding weaknesses.
Figure 8.7: Logical error as a function of physical error in a concatenated code for a threshold \(p_{\text{th}}=0.1\).
Today's efforts have not yet converged to an example of a many-body problem that can be solved more efficiently with a quantum processor's (maybe partial) help. However, steady experimental and theoretical progress gives reasonable hope that such examples will emerge. More importantly, the fact that natural many-body systems can exhibit large-scale entangled states (like high-temperature superconductivity, superfluidity, etc.) despite decoherence is a quite robust indication that quantum processors--that is, synthetic many-body systems--can also be engineered to generate, and therefore gain insights into, phenomena with large-scale entanglement.
While the possibility of performing accurate enough computations with quantum computers or proving quantum advantage is still uncertain [111], there is no doubt that this domain is progressing very fast both in terms of technology and in terms of algorithms.
###### Acknowledgements.
This project has received financial support from the CNRS through the 80Prime program and the AIQI-IN2P3 project. This work is part of HQI initiative (www.hqi.fr) and is supported by France 2030 under the French National Research Agency award number "ANR-22-PNCQ-0002". This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 951821. This project has received funding from the European High-Performance Computing Joint Undertaking (JU) under grant agreement No 101018180.
|
2307.07676 | Computing SEQ-IC-LCS of Labeled Graphs | We consider labeled directed graphs where each vertex is labeled with a
non-empty string. Such labeled graphs are also known as non-linear texts in the
literature. In this paper, we introduce a new problem of comparing two given
labeled graphs, called the SEQ-IC-LCS problem on labeled graphs. The goal of
SEQ-IC-LCS is to compute the length of the longest common subsequence (LCS)
$Z$ of two target labeled graphs $G_1 = (V_1, E_1)$ and $G_2 = (V_2, E_2)$ that
includes some string in the constraint labeled graph $G_3 = (V_3, E_3)$ as its
subsequence. Firstly, we consider the case where $G_1$, $G_2$ and $G_3$ are all
acyclic, and present algorithms for computing their SEQ-IC-LCS in
$O(|E_1||E_2||E_3|)$ time and $O(|V_1||V_2||V_3|)$ space. Secondly, we consider
the case where $G_1$ and $G_2$ can be cyclic and $G_3$ is acyclic, and present
algorithms for computing their SEQ-IC-LCS in $O(|E_1||E_2||E_3| +
|V_1||V_2||V_3|\log|\Sigma|)$ time and $O(|V_1||V_2||V_3|)$ space, where
$\Sigma$ is the alphabet. | Yuki Yonemoto, Yuto Nakashima, Shunsuke Inenaga | 2023-07-15T01:23:56Z | http://arxiv.org/abs/2307.07676v1 | # Computing SEQ-IC-LCS of Labeled Graphs
###### Abstract
We consider labeled directed graphs where each vertex is labeled with a non-empty string. Such labeled graphs are also known as non-linear texts in the literature. In this paper, we introduce a new problem of comparing two given labeled graphs, called the SEQ-IC-LCS problem on labeled graphs. The goal of SEQ-IC-LCS is to compute the length of the longest common subsequence (LCS) \(Z\) of two target labeled graphs \(G_{1}=(V_{1},E_{1})\) and \(G_{2}=(V_{2},E_{2})\) that includes some string in the constraint labeled graph \(G_{3}=(V_{3},E_{3})\) as its subsequence. Firstly, we consider the case where \(G_{1}\), \(G_{2}\) and \(G_{3}\) are all acyclic, and present algorithms for computing their SEQ-IC-LCS in \(O(|E_{1}||E_{2}||E_{3}|)\) time and \(O(|V_{1}||V_{2}||V_{3}|)\) space. Secondly, we consider the case where \(G_{1}\) and \(G_{2}\) can be cyclic and \(G_{3}\) is acyclic, and present algorithms for computing their SEQ-IC-LCS in \(O(|E_{1}||E_{2}||E_{3}|+|V_{1}||V_{2}||V_{3}|\log|\Sigma|)\) time and \(O(|V_{1}||V_{2}||V_{3}|)\) space, where \(\Sigma\) is the alphabet.
## 1 Introduction
We consider _labeled (directed) graphs_ where each vertex is labeled with a non-empty string. Such labeled graphs are also known as _non-linear texts_ or _hypertexts_ in the literature. Labeled graphs are a natural generalization of usual (unary-path) strings, which can also be regarded as a compact representation of a set of strings. After introduced by the Database community [13], labeled graphs were then considered by the string matching community [21, 23, 2, 22, 16, 17, 10]. Recently, graph representations of large-scale string sets appear in the real-world applications including graph databases [3] and pan-genomics [14]. For instance, _elastic degenerate strings_[18, 4, 8, 19, 7], which recently gain attention with bioinformatics background, can be regarded as a special case of labeled graphs. In the best case, a single labeled graph can represent exponentially many strings. Thus, efficient string algorithms that directly work on labeled graphs without expansion are of significance both in theory and in practice.
Shimohira et al. [24] introduced the problem of computing the _longest common subsequence_ (_LCS_) of two given labeled graphs, which, to our knowledge, the first and the only known similarity measure of labeled graphs. Since we can easily convert any labeled graph with string labels to an equivalent labeled graph with single character labels (see Figure 1), in what follows, we evaluate the size of a labeled graph by the number of vertices and edges in the (converted) graph. Given two labeled graphs \(G_{1}=(V_{1},E_{1})\) and \(G_{2}=(V_{2},E_{2})\), Shimohira et al. [24] showed how to solve the LCS problem on labeled graphs in \(O(|E_{1}||E_{2}|)\) time and
\(O(|V_{1}||V_{2}|)\) space when both \(G_{1}\) and \(G_{2}\) are acyclic, and in \(O(|E_{1}||E_{2}|+|V_{1}||V_{2}|\log|\Sigma|)\) time and \(O(|V_{1}||V_{2}|)\) space when \(G_{1}\) and \(G_{2}\) can be cyclic, where \(\Sigma\) is the alphabet. It is noteworthy that their solution is almost optimal since the quadratic \(O((|A||B|)^{1-\epsilon})\)-time conditional lower bound [1, 9] with any constant \(\epsilon>0\) for the LCS problem on two strings \(A,B\) also applies to the LCS problem on labeled graphs.
The _constrained LCS problems_ on strings, which were first proposed by Tsai [25] and then extensively studied in the literature [25, 12, 6, 11, 15, 27, 28], use a third input string \(P\) which introduces a-priori knowledge of the user into the solution string \(Z\) to be output. The task here is to compute the longest common subsequence \(Z\) of two target strings \(A\) and \(B\) that meets the condition w.r.t. \(P\), such that
**STR-IC-LCS:**: \(Z\) includes (contains) \(P\) as substring;
**STR-EC-LCS:**: \(Z\) excludes (does not contain) \(P\) as substring;
**SEQ-IC-LCS:**: \(Z\) includes (contains) \(P\) as subsequence;
**SEQ-EC-LCS:**: \(Z\) excludes (does not contain) \(P\) as subsequence.
While STR-IC-LCS can be solved in \(O(|A||B|)\) time [15], the state-of-the-art solutions to STR-EC-LCS and SEQ-IC/EC-LCS run in \(O(|A||B||P|)\) time [12, 6, 11, 27].
In this paper, we consider the SEQ-IC-LCS problems on labeled graphs, where the inputs are two target labeled graphs \(G_{1}=(V_{1},E_{1})\) and \(G_{2}=(V_{2},E_{2})\), and a constraint text \(G_{3}=(V_{3},E_{3})\), and the output is (the length of) a longest common subsequence \(Z\) of \(G_{1}\) and \(G_{2}\) such that \(Z\) includes as subsequence some string that is represented by \(G_{3}\). Firstly, we consider the case where \(G_{1}\), \(G_{2}\) and \(G_{3}\) are all acyclic, and present algorithms for computing their SEQ-IC-LCS in \(O(|E_{1}||E_{2}||E_{3}|)\) time and \(O(|V_{1}||V_{2}||V_{3}|)\) space. Secondly, we consider the case where \(G_{1}\) and \(G_{2}\) can be cyclic and \(G_{3}\) is acyclic, and present algorithms for computing their SEQ-IC-LCS in \(O(|E_{1}||E_{2}||E_{3}|+|V_{1}||V_{2}||V_{3}|\log|\Sigma|)\) time and \(O(|V_{1}||V_{2}||V_{3}|)\) space, where \(\Sigma\) is the alphabet. The time complexities of our algorithms and related work are summarized in Table 1. Our algorithms for solving SEQ-IC-LCS on labeled graphs are based on the solutions to SEQ-IC-LCS of usual strings proposed by Chin et al. [12]. We emphasize that a faster \(o(|E_{1}||E_{2}||E_{3}|)\)-time solution to the SEQ-IC-LCS problems implies a major improvement over the SEQ-IC-LCS problems for strings whose best known solutions require cubic time.
A related work is the _regular language constrained sequence alignment_ (_RLCSA_) problem [5] for two input strings \(A\) and \(B\) in which the constraint is given as an NFA. It is known that this problem can be solved in \(O(|A||B||V|^{3}/\log|V|)\) time [20], where \(|V|\) denotes the number of states in the NFA.
## 2 Preliminaries
### Strings and Graphs
Let \(\Sigma\) be an alphabet. An element of \(\Sigma^{*}\) is called a _string_. The _length_ of a string \(w\) is denoted by \(|w|\). The _empty string_, denoted by \(\varepsilon\), is a string of length \(0\). Let \(\Sigma^{+}=\Sigma^{*}\setminus\{\varepsilon\}\). For a string \(w=xyz\) with \(x,y,z\in\Sigma^{*}\), strings \(x\), \(y\), and \(z\) are called a _prefix_, _substring_, and _suffix_ of string \(w\), respectively. The \(i\)th character of a string \(w\) is denoted by \(w[i]\) for \(1\leq i\leq|w|\), and the substring of \(w\) that begins at position \(i\) and ends at position \(j\) is denoted by \(w[i..j]\) for \(1\leq i\leq j\leq|w|\). For convenience, let \(w[i..j]=\varepsilon\) for \(i>j\). A string \(u\) is a
_subsequence_ of another string \(w\) if \(u=\varepsilon\) or there exists a sequence of integers \(i_{1},\ldots,i_{|u|}\) such that \(1\leq i_{1}<\cdots<i_{|u|}\leq|w|\) and \(u=w[i_{1}]\cdots w[i_{|u|}]\).
A _directed graph_\(G\) is an ordered pair \((V,E)\) of the set \(V\) of _vertices_ and the set \(E\subseteq V\times V\) of _edges_. The _in-degree_ of a vertex \(v\) is denoted by \(\textsf{in\_deg}(v)=|\{u\mid(u,v)\in E\}|\). A _path_ in a (directed) graph \(G=(V,E)\) is a sequence \(v_{0},\ldots,v_{k}\) of vertices such that \((v_{i-1},v_{i})\in E\) for every \(i=1,\ldots,k\). A path \(\pi=v_{0},\ldots,v_{k}\) in graph \(G\) is said to be _left-maximal_ if its left-end vertex \(v_{0}\) has no in-coming edges, and \(\pi\) is said to be _right-maximal_ if its right-end vertex \(v_{k}\) has no out-going edges. A path \(\pi\) is said to be _maximal_ if \(\pi\) is both left-maximal and right-maximal. For any vertex \(v\in V\), let \(\textsf{P}(v)\) denote the set of all paths ending at vertex \(v\), and \(\textsf{LMP}(v)\) denote the set of left-maximal paths ending at \(v\). The set of all paths in \(G=(V,E)\) is denoted by \(\textsf{P}(G)=\{\textsf{P}(v)\mid v\in V\}\). Let \(\textsf{MP}(G)\) denote the set of maximal paths in \(G\).
### Longest Common Subsequence (LCS) of Strings
The _longest common subsequence_ (LCS) problem for two given strings \(A\) and \(B\) is to compute (the length of) the longest string \(Z\) that is a subsequences of both \(A\) and \(B\). It is well-known that LCS can be solved in \(O(|A||B|)\) time by using the following recurrence [26]:
\[C_{i,j}=\left\{\begin{array}{ll}0&\mbox{if $i=0$ or $j=0$;}\\ 1+C_{i-1,j-1}&\mbox{if $i,j>0$ and $A[i]=B[j]$;}\\ \max(C_{i-1,j},C_{i,j-1})&\mbox{if $i,j>0$ and $A[i]\neq B[j]$,}\end{array}\right.\]
where \(C_{i,j}\) is the LCS length of \(A[1..i]\) and \(B[1..j]\).
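For reference, the recurrence translates directly into an \(O(|A||B|)\)-time dynamic program. The following Python sketch (our own, with illustrative input strings) fills the table \(C\) row by row.

```python
def lcs_length(A, B):
    """Length of a longest common subsequence of strings A and B."""
    n, m = len(A), len(B)
    C = [[0] * (m + 1) for _ in range(n + 1)]     # C[i][j] = LCS length of A[1..i], B[1..j]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if A[i - 1] == B[j - 1]:              # 0-indexed A[i-1] is A[i] in the recurrence
                C[i][j] = 1 + C[i - 1][j - 1]
            else:
                C[i][j] = max(C[i - 1][j], C[i][j - 1])
    return C[n][m]

print(lcs_length("abcbdab", "bdcaba"))   # 4, e.g. "bcba"
```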
### SEQ-IC-LCS of Strings
Let \(A\), \(B\), and \(P\) be strings. A string \(Z\) is said to be an _SEQ-IC-LCS_ of two target strings \(A\) and \(B\) _including_ the pattern \(P\) if \(Z\) is a longest string such that \(P\) is a subsequence of
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline problem & text-1 & text-2 & text-3 & time complexity \\ \hline \hline \multirow{3}{*}{LCS} & string & string & - & \(O(|E_{1}||E_{2}|)\)[26] \\ \cline{2-5} & DAG & DAG & - & \(O(|E_{1}||E_{2}|)\)[24] \\ \cline{2-5} & graph & graph & - & \(O(|E_{1}||E_{2}|+|V_{1}||V_{2}|\log|\Sigma|)\)[24] \\ \hline \hline \multirow{3}{*}{SEQ-IC-LCS} & string & string & string & \(O(|E_{1}||E_{2}||E_{3}|)\)[12, 6] \\ \cline{2-5} & DAG & DAG & DAG & \(O(|E_{1}||E_{2}||E_{3}|)\)[this work] \\ \cline{2-5} & graph & graph & DAG & \(O(|E_{1}||E_{2}||E_{3}|+|V_{1}||V_{2}||V_{3}|\log|\Sigma|)\)[this work] \\ \hline \hline SEQ-EC-LCS & string & string & string & \(O(|E_{1}||E_{2}||E_{3}|)\)[11] \\ \hline \hline STR-IC-LCS & string & string & - & \(O(|E_{1}||E_{2}|)\)[15] \\ \hline \hline STR-EC-LCS & string & string & - & \(O(|E_{1}||E_{2}|)\)[27] \\ \hline \hline RLCSA & string & string & NFA & \(O(|E_{1}||E_{2}||V_{3}|^{3}/\log|V_{3}|)\)[20] \\ \hline \end{tabular}
\end{table}
Table 1: Time complexities of algorithms for labeled graph/usual string comparisons, for inputs text-1 \(G_{1}=(V_{1},E_{1})\), text-2 \(G_{2}=(V_{2},E_{2})\), and text-3 \(G_{3}=(V_{3},E_{3})\). Here, a string input of length \(n\) is regarded as a unary path graph \(G=(V,E)\) with \(|E|=n\).
\(Z\) and that \(Z\) is a common subsequence of \(A\) and \(B\). Chin et al. [12] solved this problem in \(O(|A||B||P|)\) time by using the following recurrence:
\[C_{i,j,k}=\begin{cases}0&\text{if $k=0$ and $(i=0$ or $j=0$)};\\ -\infty&\text{if $k\neq 0$ and $(i=0$ or $j=0$)};\\ C_{i-1,j-1,k-1}+1&\text{if $i,j,k>0$ and $A[i]=B[j]=P[k]$};\\ C_{i-1,j-1,k}+1&\text{if $i,j>0$ and $A[i]=B[j]\neq P[k]$};\\ \max(C_{i-1,j,k},C_{i,j-1,k})&\text{if $i,j>0$ and $A[i]\neq B[j]$},\end{cases} \tag{1}\]
where \(C_{i,j,k}\) is the SEQ-IC-LCS length of \(A[1..i]\), \(B[1..j]\), and \(P[1..k]\).
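A direct transcription of this recurrence into Python reads as follows. This is a hypothetical sketch for illustration only (the function name and the small test are ours), not the implementation of [12].

```python
NEG_INF = float("-inf")

def seq_ic_lcs_length(A: str, B: str, P: str):
    # C[i][j][k]: SEQ-IC-LCS length of A[1..i] and B[1..j] with constraint P[1..k].
    n, m, p = len(A), len(B), len(P)
    C = [[[0 if k == 0 else NEG_INF for k in range(p + 1)]
          for _ in range(m + 1)] for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            for k in range(p + 1):
                if A[i - 1] == B[j - 1]:
                    if k > 0 and A[i - 1] == P[k - 1]:      # A[i] = B[j] = P[k]
                        C[i][j][k] = C[i - 1][j - 1][k - 1] + 1
                    else:                                   # A[i] = B[j] != P[k]
                        C[i][j][k] = C[i - 1][j - 1][k] + 1
                else:                                       # A[i] != B[j]
                    C[i][j][k] = max(C[i - 1][j][k], C[i][j - 1][k])
    return C[n][m][p]

assert seq_ic_lcs_length("abcde", "ace", "e") == 3          # "ace" contains "e"
```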
### Labeled Graphs
A _labeled graph_ is a directed graph with vertices labeled by strings, namely, it is a directed graph \(G=(V,E,L)\) where \(V\) is the set of vertices, \(E\) is the set of edges, and \(L:V\to\Sigma^{+}\) is a labeling function that maps nodes \(v\in V\) to non-empty strings \(L(v)\in\Sigma^{+}\). For a path \(\pi=v_{0},\ldots,v_{k}\in\mathsf{P}(G)\), let \(L(\pi)\) denote the string spelled out by \(\pi\), namely \(L(\pi)=L(v_{0})\cdots L(v_{k})\). The size \(|G|\) of a labeled graph \(G=(V,E,L)\) is \(|V|+|E|+\sum_{v\in V}|L(v)|\). Let \(\mathsf{Subseq}(G)=\{\mathsf{Subseq}(L(\pi))\mid\pi\in\mathsf{P}(G)\}\) denote the set of subsequences of a labeled graph \(G=(V,E,L)\). For a set \(P\in\mathsf{P}(G)\) of paths in \(G\), let \(L(P)=\{L(\pi)\mid\pi\in P\}\) denote the set of string labels for the paths in \(P\).
For a labeled graph \(G=(V,E,L)\), consider an "atomic" labeled graph \(G^{\prime}=(V^{\prime},E^{\prime},L^{\prime})\) such that \(L^{\prime}:V^{\prime}\to\Sigma\),
\[V^{\prime} = \{v_{i,j}\mid L^{\prime}(v_{i,j})=L(v_{i})[j],v_{i}\in V,1\leq j \leq|L(v_{i})|\},\text{ and }\] \[E^{\prime} = \{(v_{i,|L(v_{i})|},v_{k,1})\mid(v_{i},v_{k})\in E\}\cup\{(v_{i, j},v_{i,j+1})\mid v_{i}\in V,1\leq j<|L(v_{i})|\},\]
that is, \(G^{\prime}\) is a labeled graph with each vertex being labeled by a single character, which represents the same set of strings as \(G\). An example is shown in Figure 1. Since \(|V^{\prime}|=\sum_{v\in V}|L(v)|\), \(|E^{\prime}|=|E|+\sum_{v\in V}(|L(v)|-1)\), and \(\sum_{v^{\prime}\in V^{\prime}}|L(v^{\prime})|=\sum_{v\in V}|L(v)|\), we have \(|G^{\prime}|=O(|G|)\). We remark that given \(G\), we can easily construct \(G^{\prime}\) in \(O(|G|)\) time. Observe that \(\mathsf{Subseq}(G)=\mathsf{Subseq}(G^{\prime})\) also holds.
In the sequel we only consider atomic labeled graphs where each vertex is labeled with a single character.
### LCS of Acyclic Labeled Graphs
The problem of computing the length of longest common subsequence of two input acyclic labeled graphs is formalized by Shimohira et al. [24] as follows.
**Problem 1** (Longest common subsequence problem for acyclic labeled graphs).:
**Input:** Labeled graphs \(G_{1}=(V_{1},E_{1},L_{1})\) and \(G_{2}=(V_{2},E_{2},L_{2})\).
**Output:** The length of a longest string in \(\mathsf{Subseq}(G_{1})\cap\mathsf{Subseq}(G_{2})\).
This problem can be solved in \(O(|E_{1}||E_{2}|)\) time and \(O(|V_{1}||V_{2}|)\) space by sorting \(G_{1}\) and \(G_{2}\) topologically and using the following recurrence:
\[\begin{split} C^{\prime}_{i,j}=\\ &\left\{\begin{array}{ll}1\!+\!\max(\{C^{\prime}_{k,\ell}\mid(v _{1,k},v_{1,i})\!\in\!E_{1},(v_{2,\ell},v_{2,j})\!\in\!E_{2}\}\cup\{0\})& \mbox{if }L_{1}(v_{1,i})\!=\!L_{2}(v_{2,j});\\ \max\!\left(\begin{array}{l}\{C^{\prime}_{k,j}\mid(v_{1,k},v_{1,i})\!\in\!E _{1}\}\cup\\ \{C^{\prime}_{i,\ell}\mid(v_{2,\ell},v_{2,j})\!\in\!E_{2}\}\cup\{0\}\end{array} \right)&\mbox{otherwise},\end{array}\right.\end{split} \tag{2}\]
where \(v_{1,i}\) and \(v_{2,j}\) are respectively the \(i\)th and \(j\)th vertices of \(G_{1}\) and \(G_{2}\) in topological order, for \(1\leq i\leq|V_{1}|\) and \(1\leq j\leq|V_{2}|\), and \(C^{\prime}_{i,j}\) is the length of a longest string in \(\mathsf{Subseq}(L_{1}(\mathsf{P}(v_{1,i})))\cap\mathsf{Subseq}(L_{2}(\mathsf{P}(v_{2,j})))\).
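As a rough illustration of recurrence (2), the following hypothetical Python sketch (ours, not from [24]) computes the LCS length of two atomic labeled DAGs; the data layout and the use of the standard-library `graphlib` module (Python 3.9+) are our own choices.

```python
from collections import defaultdict
from graphlib import TopologicalSorter

def lcs_dags(V1, E1, L1, V2, E2, L2):
    # E*: sets of directed edges (u, v); L*: dicts mapping a vertex to its character.
    def prep(V, E):
        preds = {v: set() for v in V}
        for u, v in E:
            preds[v].add(u)
        return list(TopologicalSorter(preds).static_order()), preds

    order1, pred1 = prep(V1, E1)
    order2, pred2 = prep(V2, E2)
    C = defaultdict(int)                                   # C[v1, v2], cf. recurrence (2)
    for v1 in order1:
        for v2 in order2:
            if L1[v1] == L2[v2]:
                C[v1, v2] = 1 + max([C[x, y] for x in pred1[v1] for y in pred2[v2]],
                                    default=0)
            else:
                C[v1, v2] = max([C[x, v2] for x in pred1[v1]] +
                                [C[v1, y] for y in pred2[v2]], default=0)
    return max(C.values(), default=0)                      # maximum over all vertex pairs
```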
### LCS of Cyclic Labeled Graphs
Here we consider a generalized version of Problem 1 where the input labeled graphs \(G_{1}\) and/or \(G_{2}\) can be cyclic. In this problem, the output is \(\infty\) if there is a string \(s\in\mathsf{Subseq}(G_{1})\cap\mathsf{Subseq}(G_{2})\) such that \(|s|=\infty\), and otherwise it is the length of a longest string in \(\mathsf{Subseq}(G_{1})\cap\mathsf{Subseq}(G_{2})\). Shimohira et al. [24] proposed an \(O(|E_{1}||E_{2}|+|V_{1}||V_{2}|\log|\Sigma|)\) time and \(O(|V_{1}||V_{2}|)\) space algorithm solving this problem. Their algorithm judges whether the output is \(\infty\) by using a balanced tree, and computes the length of the solution by using Equation (2) and the balanced tree if the output is not \(\infty\).
## 3 The SEQ-IC-LCS Problem for Labeled Graphs
In this paper, we tackle the problem of computing the SEQ-IC-LCS length of three labeled graphs, which is formalized as follows:
**Problem 2** (SEQ-IC-LCS problem for labeled graphs).:
**Input:** Labeled graphs \(G_{1}=(V_{1},E_{1},L_{1})\), \(G_{2}=(V_{2},E_{2},L_{2})\), and \(G_{3}=(V_{3},E_{3},L_{3})\).
**Output:** The length of a longest string in the set
\(\{z\mid\exists\ q\in L_{3}(\mathsf{MP}(G_{3}))\mbox{ such that }q\in\mathsf{ Subseq}(z)\mbox{ and }z\in\mathsf{Subseq}(G_{1})\cap\mathsf{ Subseq}(G_{2})\}\).
Intuitively, Problem 2 asks to compute a longest string \(z\) such that \(z\) is a subsequence occurring in both \(G_{1}\) and \(G_{2}\) and that there exists a string \(q\) which corresponds to a maximal path of \(G_{3}\) and is a subsequence of \(z\).
For a concrete example, see the labeled graphs \(G_{1}\), \(G_{2}\) and \(G_{3}\) of Figure 2. String cdba is a common subsequence of \(G_{1}\) and \(G_{2}\) that contains, as a subsequence, the maximal-path string ba \(\in L_{3}(\mathsf{MP}(G_{3}))\). Since cdba is a longest such string, we output the SEQ-IC-LCS length \(|\mathsf{cdba}|=4\) as the solution to this instance.
In the sequel, Section 4 presents our solution to the case where all input labeled graphs are acyclic, and Section 5 presents our solution to the case where \(G_{1}\) and/or \(G_{2}\) can be cyclic and \(G_{3}\) is acyclic.
## 4 Computing SEQ-IC-LCS of Acyclic Labeled Graphs
In this section, we present our algorithm which solves Problem 2 in the case where all of \(G_{1}\), \(G_{2}\) and \(G_{3}\) are acyclic. The following is our result:
**Theorem 1**.: _Problem 2 where input labeled graphs \(G_{1}\), \(G_{2}\) and \(G_{3}\) are all acyclic is solvable in \(O(|E_{1}||E_{2}||E_{3}|)\) time and \(O(|V_{1}||V_{2}||V_{3}|)\) space._
Proof.: We perform topological sort to the vertices of \(G_{1}\), \(G_{2}\), and \(G_{3}\) in \(O(|E_{1}|+|E_{2}|+|E_{3}|)\) time and \(O(|V_{1}|+|V_{2}|+|V_{3}|)\) space. For \(1\leq i\leq|V_{1}|\), \(1\leq j\leq|V_{2}|\), and \(1\leq k\leq|V_{3}|\), let \(v_{1,i}\), \(v_{2,j}\), \(v_{3,k}\) denote the \(i\)th, \(j\)th, and \(k\)th vertices in \(G_{1}\), \(G_{2}\), and \(G_{3}\) in topological order, respectively. Let
\[\mathsf{S}_{\mathrm{IC}}(v_{1,i},v_{2,j},v_{3,k})=\left\{z\left|\begin{array} []{l}\exists q\in L_{3}(\mathsf{LMP}(v_{3,k}))\mbox{ such that }q\in\mathsf{ Subseq}(z)\\ \mbox{ and }z\in\mathsf{Subseq}(L_{1}(\mathsf{P}(v_{1,i})))\cap\mathsf{ Subseq}(L_{2}(\mathsf{P}(v_{2,j})))\end{array}\right.\right\}\]
be the set of candidates of SEQ-IC-LCS strings for the maximal induced graphs of \(G_{1}\), \(G_{2}\), and \(G_{3}\) whose sinks are \(v_{1,i}\), \(v_{2,j}\), and \(v_{3,k}\), respectively. Let \(D_{i,j,k}\) denote the length of a longest string in \(\mathsf{S}_{\mathrm{IC}}(v_{1,i},v_{2,j},v_{3,k})\). The solution to Problem 2 (the SEQ-IC-LCS length) is the maximum value of \(D_{i,j,k}\) for which \(v_{3,k}\) does not have out-going edges (i.e. \(v_{3,k}\) is the end of a maximal path in \(G_{3}\)).
When \(k=0\), the problem is equivalent to Problem 1 of computing the LCS of acyclic labeled graphs. In what follows, we show how to compute \(D_{i,j,k}\) for \(k>0\):
1. If \(L_{1}(v_{1,i})=L_{2}(v_{2,j})=L_{3}(v_{3,k})\), there are three cases to consider: 1. If \(v_{1,i}\) does not have in-coming edges or \(v_{2,j}\) does not have in-coming edges, and if \(v_{3,k}\) does not have in-coming edges (i.e., \(\mathsf{in\_deg}(v_{1,i})=\mathsf{in\_deg}(v_{3,k})=0\), or \(\mathsf{in\_deg}(v_{2,j})=\mathsf{in\_deg}(v_{3,k})=0\)), then clearly \(D_{i,j,k}=1\). 2. If \(v_{1,i}\) does not have in-coming edges or \(v_{2,j}\) does not have in-coming edges, and if \(v_{3,k}\) has some in-coming edge(s) (i.e., \(\mathsf{in\_deg}(v_{1,i})=0\) and \(\mathsf{in\_deg}(v_{3,k})\geq 1\), or \(\mathsf{in\_deg}(v_{2,j})=0\) and \(\mathsf{in\_deg}(v_{3,k})\geq 1\)), then clearly \(D_{i,j,k}=-\infty\). 3. If both \(v_{1,i}\) and \(v_{2,j}\) have some in-coming edge(s) and \(v_{3,k}\) does not have in-coming edges (i.e., \(\mathsf{in\_deg}(v_{1,i})\geq 1\), \(\mathsf{in\_deg}(v_{2,j})\geq 1\), and \(\mathsf{in\_deg}(v_{3,k})=0\)), then let \(v_{1,x}\) and \(v_{2,y}\) be any nodes s.t. \((v_{1,x},v_{1,i})\in E_{1}\), and \((v_{2,y},v_{2,j})\in E_{2}\), respectively. Let \(s\) be a longest string in \(\mathsf{Subseq}(L_{1}(\mathsf{P}(v_{1,i})))\cap\mathsf{Subseq}(L_{2}(\mathsf{P} (v_{2,j})))\). Assume on the contrary that there exists a string \(t\in\mathsf{Subseq}(L_{1}(\mathsf{P}(v_{1,x})))\cap\mathsf{Subseq}(L_{2}( \mathsf{P}(v_{2,y})))\) such that \(|t|>|s|-1\). This contradicts that \(s\) is a longest common subsequence of \(L_{1}(\mathsf{P}(v_{1,i}))\) and \(L_{2}(\mathsf{P}(v_{2,j}))\), since \(L_{1}(v_{1,i})=L_{2}(v_{2,j})\). Hence \(|t|\leq|s|-1\). If \(v_{1,x}\) and \(v_{2,y}\) are vertices satisfying \(C^{\prime}_{x,y,0}=|s|-1\), then \(C^{\prime}_{i,j,k}=C^{\prime}_{x,y,0}+1\). Note that such nodes \(v_{1,x}\) and \(v_{2,y}\) always exist.
. Otherwise (all \(v_{1,i}\), \(v_{2,j}\), and \(v_{3,k}\) have some in-coming edge(s)), let \(v_{1,x}\), \(v_{2,y}\) and \(v_{3,z}\) be any nodes s.t. \((v_{1,x},v_{1,i})\in E_{1}\), \((v_{2,y},v_{2,j})\in E_{2}\) and \((v_{3,z},v_{3,k})\in E_{3}\), respectively. Let \(s\) be a longest string in \(\mathsf{S}_{\text{IC}}(v_{1,i},v_{2,j},v_{3,k})\). Assume on the contrary that there exists a string \(t\in\mathsf{S}_{\text{IC}}(v_{1,x},v_{2,y},v_{3,z})\) such that \(|t|>|s|-1\). This contradicts that \(s\) is a SEQ-IC-LCS of \(L_{1}(\mathsf{P}(v_{1,i}))\), \(L_{2}(\mathsf{P}(v_{2,j}))\) and \(L_{3}(\mathsf{LMP}(v_{3,k}))\), since \(L_{1}(v_{1,i})=L_{2}(v_{2,j})=L_{3}(v_{3,k})\). Hence \(|t|\leq|s|-1\). If \(v_{1,x}\), \(v_{2,y}\) and \(v_{3,z}\) are vertices satisfying \(D_{x,y,z}=|s|-1\), then \(D_{i,j,k}=D_{x,y,z}+1\). Note that such nodes \(v_{1,x}\), \(v_{2,y}\) and \(v_{3,z}\) always exist.
2. If \(L_{1}(v_{1,i})=L_{2}(v_{2,j})\neq L_{3}(v_{3,k})\), there are two cases to consider: 1. If \(v_{1,i}\) does not have in-coming edges or \(v_{2,j}\) does not have-incoming edges (i.e., \(\mathsf{in\_deg}(v_{1,i})=0\) or \(\mathsf{in\_deg}(v_{2,j})=0\)), then clearly \(D_{i,j,k}\) does not exist and let \(D_{i,j,k}=-\infty\). 2. Otherwise (both \(v_{1,i}\) and \(v_{2,j}\) have in-coming edge(s)), let \(v_{1,x}\) and \(v_{2,y}\) be any nodes s.t. \((v_{1,x},v_{1,i})\in E_{1}\) and \((v_{2,y},v_{2,j})\in E_{2}\), respectively. Let \(s\) be a longest string in \(\mathsf{S}_{\text{IC}}(v_{1,i},v_{2,j},v_{3,k})\). Assume on the contrary that there exists a string \(t\in\mathsf{S}_{\text{IC}}(v_{1,x},v_{2,y},v_{3,k})\) such that \(|t|>|s|-1\). This contradicts that \(s\) is a SEQ-IC-LCS of \(L_{1}(\mathsf{P}(v_{1,i}))\), \(L_{2}(\mathsf{P}(v_{2,j}))\) and \(L_{3}(\mathsf{LMP}(v_{3,k}))\), since \(L_{1}(v_{1,i})=L_{2}(v_{2,j})\). Hence \(|t|\leq|s|-1\). If \(v_{1,x}\), \(v_{2,y}\) and \(v_{3,k}\) are vertices satisfying \(D_{x,y,k}=|s|-1\), then \(D_{i,j,k}=D_{x,y,k}+1\). Note that such nodes \(v_{1,x}\), \(v_{2,y}\) and \(v_{3,k}\) always exist.
3. If \(L_{1}(v_{1,i})\neq L_{2}(v_{2,j})\), there are two cases to consider: 1. If \(v_{1,i}\) does not have in-coming edges and \(v_{2,j}\) does not have in-coming edges (i.e., \(\mathsf{in\_deg}(v_{1,i})=\mathsf{in\_deg}(v_{2,j})=0\)), then clearly \(D_{i,j,k}\) does not exist and let \(D_{i,j,k}=-\infty\). 2. Otherwise (\(v_{1,i}\) has some in-coming edge(s) or \(v_{2,j}\) has some in-coming edge(s)), let \(v_{1,x}\) and \(v_{2,y}\) be any nodes such that \((v_{1,x},v_{1,i})\in E_{1}\) and \((v_{2,y},v_{2,j})\in E_{2}\), respectively. Let \(s\) be a \(\mathsf{S}_{\text{IC}}(v_{1,i},v_{2,j},v_{3,k})\). Assume on the contrary that there exists a string \(t\in\mathsf{S}_{\text{IC}}(v_{1,i},v_{2,j},v_{3,k})\) such that \(|t|>|s|\). This contradicts that \(s\) is a SEQ-IC-LCS of \(L_{1}(\mathsf{P}(v_{1,i}))\), \(L_{2}(\mathsf{P}(v_{2,j}))\) and \(L_{3}(\mathsf{LMP}(v_{3,k}))\), since \(\mathsf{S}_{\text{IC}}(v_{1,x},v_{2,y},v_{3,k})\subseteq\mathsf{S}_{\text{IC} }(v_{1,i},v_{2,j},v_{3,k})\). Hence \(|t|\leq|s|\). If \(v_{1,x}\) is a vertex satisfying \(D_{x,j,k}=|z|\), then \(D_{i,j,k}=D_{x,j,k}\). Similarly, if \(v_{2,y}\) is a vertex satisfying \(D_{i,y,k}=|s|\), then \(D_{i,j,k}=D_{i,y,k}\). Note that such node \(v_{1,x}\) or \(v_{2,y}\) always exists.
Consequently we obtain the following recurrence:
\[D_{i,j,k}=\left\{\begin{array}{ll}\text{Recurrence in Equation (2)}&\text{if $k=0$;}\\ 1+\max\left(\left\{D_{x,y,z}\,\middle|\,\begin{array}{l}(v_{1,x},v_{1,i})\in E_{1},\\ (v_{2,y},v_{2,j})\in E_{2},\\ (v_{3,z},v_{3,k})\in E_{3}\text{ or }z=0\end{array}\right\}\cup\{\gamma\}\right)&\text{if $k>0$ and $L_{1}(v_{1,i})=L_{2}(v_{2,j})=L_{3}(v_{3,k})$;}\\ \max\left(\left\{1+D_{x,y,k}\,\middle|\,\begin{array}{l}(v_{1,x},v_{1,i})\in E_{1},\\ (v_{2,y},v_{2,j})\in E_{2}\end{array}\right\}\cup\{-\infty\}\right)&\text{if $k>0$ and $L_{1}(v_{1,i})=L_{2}(v_{2,j})\neq L_{3}(v_{3,k})$;}\\ \max\left(\left\{D_{x,j,k}\mid(v_{1,x},v_{1,i})\in E_{1}\right\}\cup\left\{D_{i,y,k}\mid(v_{2,y},v_{2,j})\in E_{2}\right\}\cup\{-\infty\}\right)&\text{if $k>0$ and $L_{1}(v_{1,i})\neq L_{2}(v_{2,j})$,}\end{array}\right. \tag{3}\]
where \(\gamma=0\) if (\(\mathsf{in\_deg}(v_{1,i})=0\) or \(\mathsf{in\_deg}(v_{2,j})=0\)) and \(\mathsf{in\_deg}(v_{3,k})=0\), and \(\gamma=-\infty\) otherwise, with the convention \(1+(-\infty)=-\infty\).
* Fourth, let us analyze the time cost for computing \[M^{\prime\prime}_{i,j,k}=\max\{D_{x,j,k},D_{i,y,k}\mid(v_{1,x},v_{1,i})\in E_{1}, (v_{2,y},v_{2,j})\in E_{2}\}\] in the fourth case of the recurrence for all \(i,j,k\). For each fixed \((v_{1,x},v_{1,i})\in E_{1}\), we refer the value of \(D_{x,j,k}\) for all \(1\leq j\leq|V_{2}|\) and all \(1\leq k\leq|V_{3}|\) in \(O(|V_{2}||V_{3}|)\) time. Similarly, for each fixed \((v_{2,y},v_{2,j})\in E_{2}\), we refer the value of \(D_{i,y,k}\) for all \(1\leq i\leq|V_{1}|\) and all \(1\leq k\leq|V_{3}|\) in \(O(|V_{1}||V_{3}|)\) time. Therefore, the total time cost for computing \(M^{\prime\prime}_{i,j,k}\) for all \(i,j,k\) is \(O(|V_{3}|(|V_{2}||E_{1}|+|V_{1}||E_{2}|))\subseteq O(|E_{1}||E_{2}||E_{3}|)\).
Thus the total time complexity is \(O(|E_{1}||E_{2}||E_{3}|)\).
An example of computing \(D_{i,j,k}\) using dynamic programming is shown in Figure 2. We remark that the recurrence in Equation (3) is a natural generalization of the recurrence in Equation (1) for computing the SEQ-IC-LCS length of two given strings.
Algorithm 1 in Appendix A shows a pseudo-code of our algorithm which solves Problem 2 in the case where all \(G_{1}\), \(G_{2}\) and \(G_{3}\) are acyclic.
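For readers who prefer runnable code, the following hypothetical Python sketch (ours; it is not Algorithm 1 of Appendix A) implements the dynamic programming of Theorem 1 as we read the case analysis above, using \(z=0\) only when \(v_{3,k}\) has no in-coming edges; the key `None` plays the role of \(k=0\).

```python
from graphlib import TopologicalSorter

NEG_INF = float("-inf")

def seq_ic_lcs_dags(V1, E1, L1, V2, E2, L2, V3, E3, L3):
    # Atomic labeled DAGs: E* are sets of edges (u, v), L* map a vertex to a character.
    def prep(V, E):
        preds = {v: set() for v in V}
        for u, v in E:
            preds[v].add(u)
        return list(TopologicalSorter(preds).static_order()), preds

    o1, p1 = prep(V1, E1)
    o2, p2 = prep(V2, E2)
    o3, p3 = prep(V3, E3)
    sinks3 = set(V3) - {u for (u, v) in E3}       # vertices of G3 without out-going edges

    D = {}                                        # D[v1, v2, k]; k is None (= 0) or a vertex of G3
    for v1 in o1:
        for v2 in o2:
            # k = 0: plain LCS of the two labeled graphs, recurrence (2)
            if L1[v1] == L2[v2]:
                D[v1, v2, None] = 1 + max([D[x, y, None] for x in p1[v1] for y in p2[v2]],
                                          default=0)
            else:
                D[v1, v2, None] = max([D[x, v2, None] for x in p1[v1]] +
                                      [D[v1, y, None] for y in p2[v2]], default=0)
            for v3 in o3:
                zs = p3[v3] if p3[v3] else {None}  # predecessors of v3, or the empty constraint
                if L1[v1] == L2[v2] == L3[v3]:
                    gamma = 0 if (not p1[v1] or not p2[v2]) and not p3[v3] else NEG_INF
                    D[v1, v2, v3] = 1 + max([D[x, y, z] for x in p1[v1]
                                             for y in p2[v2] for z in zs], default=gamma)
                elif L1[v1] == L2[v2]:
                    D[v1, v2, v3] = max([1 + D[x, y, v3] for x in p1[v1]
                                         for y in p2[v2]], default=NEG_INF)
                else:
                    D[v1, v2, v3] = max([D[x, v2, v3] for x in p1[v1]] +
                                        [D[v1, y, v3] for y in p2[v2]], default=NEG_INF)
    return max([D[v1, v2, v3] for v1 in V1 for v2 in V2 for v3 in sinks3],
               default=NEG_INF)
```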
## 5 Computing SEQ-IC-LCS of Cyclic Labeled Graphs
In this section, we present an algorithm to solve Problem 2 in the case where \(G_{1}\) and/or \(G_{2}\) can be cyclic and \(G_{3}\) is acyclic. We output \(\infty\) if the set of output candidates in Problem 2 contains a string of infinite length, and output the (finite) SEQ-IC-LCS length otherwise.
To deal with cyclic graphs, we follow the approach by Shimohira et al. [24] which transforms a cyclic labeled graph \(G=(V,E,L)\) into an acyclic labeled graph \(\hat{G}=(\hat{V},\hat{E},\hat{L})\) based on the strongly connected components.
For each vertex \(v\in V\), let \([v]\) denote the set of vertices that belong to the same strongly connected component as \(v\). Formally, \(\hat{G}=(\hat{V},\hat{E},\hat{L})\) is defined by
\[\hat{V} = \{[v]\mid v\in V\},\] \[\hat{E} = \{([v],[u])\mid[v]\neq[u],(\hat{v},\hat{u})\in E\text{ for some }\hat{v}\in[v],\,\hat{u}\in[u]\}\cup\{([v],[v])\mid|[v]|\geq 2\},\]
and \(\hat{L}([v])=\{L(v)\mid v\in[v]\}\subseteq\Sigma\). We regard each \([v]\) as a single vertex that is contracted from vertices in \([v]\). Observe that \(\textsf{Subseq}(\hat{G})=\textsf{Subseq}(G)\). An example of transformed acyclic labeled graphs is shown in Figure 3.
It is possible that a vertex \(\hat{v}\in\hat{V}\) in the transformed graph \(\hat{G}\) has a self-loop. We regard that a self-loop \((\hat{v},\hat{v})\) is also an in-coming edge of vertex \(\hat{v}\). We say that vertex \(\hat{v}\) does not have in-coming edges _at all_, if \(\hat{v}\) does not have in-coming edges from _any_ vertex in \(\hat{V}\) (including \(\hat{v}\)).
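The transformation itself is standard. Assuming the networkx library is available, a small sketch of our own (for illustration only) could look as follows: it condenses the strongly connected components, attaches the character set \(\hat{L}([v])\) to each contracted vertex, and adds a self-loop to every component with two or more members, following the definition above.

```python
import networkx as nx

def to_acyclic(G, label):
    # G: nx.DiGraph; label maps each vertex of G to a single character (atomic labeled graph).
    C = nx.condensation(G)                      # DAG of strongly connected components
    H = nx.DiGraph()
    hat_label = {}
    for i in C.nodes:
        members = C.nodes[i]["members"]         # the set [v] contracted into node i
        H.add_node(i)
        hat_label[i] = {label[v] for v in members}
        if len(members) >= 2:                   # cyclic component: keep a self-loop
            H.add_edge(i, i)
    H.add_edges_from(C.edges)
    return H, hat_label
```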
Our main result of this section follows:
**Theorem 2**.: _Problem 2, where input labeled graphs \(G_{1}\) and \(G_{2}\) can be cyclic and \(G_{3}\) is acyclic, is solvable in \(O(|E_{1}||E_{2}||E_{3}|+|V_{1}||V_{2}||V_{3}|\log|\Sigma|)\) time and \(O(|V_{1}||V_{2}||V_{3}|)\) space._
Proof.: We first transform cyclic labeled graphs \(G_{1}\) and \(G_{2}\) into corresponding acyclic labeled graphs \(\hat{G}_{1}\) and \(\hat{G}_{2}\), as described previously. For \(1\leq i\leq|\hat{V}_{1}|\) and \(1\leq j\leq|\hat{V}_{2}|\), let \(\hat{v}_{1,i}\) and \(\hat{v}_{2,j}\) respectively denote the \(i\)th and \(j\)th vertices in \(\hat{G}_{1}\) and \(\hat{G}_{2}\) in topological order. Let \(v_{3,k}\) denote the \(k\)-th vertex in topological ordering in \(G_{3}\) for \(1\leq k\leq|V_{3}|\).
Let
\[\hat{\mathsf{S}}_{\mathrm{IC}}(\hat{v}_{1,i},\hat{v}_{2,j},v_{3,k})=\left\{z\ \middle|\begin{array}{l}\exists q\in L_{3}(\mathsf{MP}(v_{3,k}))\text{ such that }q\in\mathsf{Subseq}(z)\\ \text{ and }z\in\mathsf{Subseq}(\hat{L}_{1}(\mathsf{P}(\hat{v}_{1,i})))\cap \mathsf{Subseq}(\hat{L}_{2}(\mathsf{P}(\hat{v}_{2,j})))\end{array}\right\}.\]
Let \(\hat{D}_{i,j,k}\) denote the length of a longest string in \(\hat{\mathsf{S}}_{\mathrm{IC}}(\hat{v}_{1,i},\hat{v}_{2,j},v_{3,k})\). For convenience, we let \(\hat{D}_{i,j,k}=-\infty\) if \(\hat{\mathsf{S}}_{\mathrm{IC}}(\hat{v}_{1,i},\hat{v}_{2,j},v_{3,k})=\emptyset\). The solution to Problem 2 (the SEQ-IC-LCS length) is the maximum value of \(\hat{D}_{i,j,k}\) for which \(v_{3,k}\) has no out-going edges (i.e. \(v_{3,k}\) is the end of a maximal path in \(G_{3}\)).
\(\hat{D}_{i,j,k}\) can be computed as follows:
1. If both \(\hat{v}_{1,i}\) and \(\hat{v}_{2,j}\) are cyclic vertices (i.e. \(|[\hat{v}_{1,i}]|\geq 2\) and \(|[\hat{v}_{2,j}]|\geq 2\)), then remark that both \(\hat{v}_{1,i}\) and \(\hat{v}_{2,j}\) have some self-loop(s). There are four cases to consider: 1. If \(k=0\), there are two cases to consider: 1. If \(\hat{L}_{1}(\hat{v}_{1,i})\cap\hat{L}_{2}(\hat{v}_{2,j})\neq\emptyset\), then clearly \(\hat{D}_{i,j,k}=\infty\).
Figure 2: Example of dynamic programming table \(D\) for computing the SEQ-IC-LCS length of acyclic labeled graphs \(G_{1}\), \(G_{2}\) and \(G_{3}\). Each vertex is annotated with its topological order. In this example, \(v_{3,2}\) and \(v_{3,4}\) with \(k\in\{2,4\}\) in \(G_{3}\) are vertices with no out-going edges. The maximum value of \(D_{i,j,k}\) with \(k\in\{2,4\}\) is \(D_{6,6,2}=4\), and the corresponding SEQ-IC-LCS is cdba of length \(4\).
Otherwise, there are two cases to consider: 1. If the in-coming edges of \(\hat{v}_{1,i}\) are \(\hat{v}_{2,j}\) only self-loops, then clearly \(\hat{D}_{i,j,k}=0\). 2. Otherwise (\(\hat{v}_{1,i}\) has some in-coming edge(s) other than self-loops, or \(\hat{v}_{2,j}\) has some in-coming edge(s) other than self-loops), let \(\hat{v}_{1,x}\) and \(\hat{v}_{2,y}\) be any nodes such that \((\hat{v}_{1,x},\hat{v}_{1,i})\in\hat{E}_{1}\) and \((\hat{v}_{2,y},\hat{v}_{2,j})\in\hat{E}_{2}\), respectively. Let \(s\) be a longest string in the set \(\mathsf{Subseq}(\hat{L}_{1}(\mathsf{LMP}(\hat{v}_{1,i})))\cap\mathsf{Subseq}( \hat{L}_{2}(\mathsf{LMP}(\hat{v}_{2,j})))\). Assume on the contrary that there is a string \(t\in\mathsf{Subseq}(\hat{L}_{1}(\mathsf{LMP}(\hat{v}_{1,x})))\cap\mathsf{Subseq }(\hat{L}_{2}(\mathsf{LMP}(\hat{v}_{2,j})))\) such that \(|t|>|s|\). This contradicts that \(s\) is a longest common subsequence of \(\hat{L}_{1}(\mathsf{LMP}(\hat{v}_{1,i}))\) and \(\hat{L}_{2}(\mathsf{LMP}(\hat{v}_{2,j}))\), since
Figure 3: Example of dynamic programming table \(\hat{D}\) for computing the SEQ-IC-LCS length of cyclic labeled graphs \(G_{1}\) and \(G_{2}\), and acyclic labeled graph \(G_{3}\). \(\hat{G}_{1}\) and \(\hat{G}_{2}\) are the labeled graphs which are transformed from \(G_{1}\) and \(G_{2}\) by grouping vertices into strongly connected components. Each vertex is annotated with its topological order. In this example, \(v_{3,2}\) and \(v_{3,4}\) with \(k\in\{2,4\}\) in \(G_{3}\) are vertices with no out-going edges. The maximum value of \(\hat{D}_{i,j,k}\) with \(k\in\{2,4\}\) is \(\hat{D}_{4,3,2}=3\), and the corresponding SEQ-IC-LCS is aab of length \(3\).
\(\mathsf{Subseq}(\hat{L}_{1}(\mathsf{LMP}(\hat{v}_{1,x})))\cap\mathsf{Subseq}( \hat{L}_{2}(\mathsf{LMP}(\hat{v}_{2,j})))\subseteq\mathsf{Subseq}(\hat{L}_{1}( \mathsf{LMP}(\hat{v}_{1,i})))\cap\mathsf{Subseq}(\hat{L}_{2}(\mathsf{LMP}(\hat{ v}_{2,j})))\). Hence \(|t|\leq|s|\). If \(\hat{v}_{1,x}\) is a vertex satisfying \(\hat{D}_{x,j,k}=|s|\), then \(\hat{D}_{i,j,k}=\hat{D}_{x,j,k}\). Similarly, if \(\hat{v}_{2,y}\) is a vertex satisfying \(\hat{D}_{i,y,k}=|s|\), then \(\hat{D}_{i,j,k}=\hat{D}_{i,y,k}\). Note that such \(\hat{v}_{1,x}\) or \(\hat{v}_{2,y}\) always exists. 2. If \(k>0\) and \(\hat{L}_{1}(\hat{v}_{1,i})\cap\hat{L}_{2}(\hat{v}_{2,j})\cap\{L_{3}(v_{3,k})\}\neq\emptyset\), there are two cases to consider: 1. If \(v_{3,k}\) has no in-coming edges, let \(\hat{v}_{1,x}\) and \(\hat{v}_{2,y}\) be any nodes such that \((\hat{v}_{1,x},\hat{v}_{1,i})\in\hat{E}_{1}\) and \((\hat{v}_{2,y},\hat{v}_{2,j})\in\hat{E}_{2}\), respectively (these edges may be self-loops). If \(\hat{D}_{x,y,0}=-\infty\) for all \(1\leq x<i\) and \(1\leq y<j\), then clearly \(\hat{D}_{i,j,k}=-\infty\). Otherwise, clearly \(\hat{D}_{i,j,k}=\infty\). 2. Otherwise (\(v_{3,k}\) has some in-coming edge(s)), let \(\hat{v}_{1,x}\), \(\hat{v}_{2,y}\) and \(v_{3,z}\) be any nodes such that \((\hat{v}_{1,x},\hat{v}_{1,i})\in\hat{E}_{1}\), \((\hat{v}_{2,y},\hat{v}_{2,j})\in\hat{E}_{2}\) and \((v_{3,z},v_{3,k})\in E_{3}\), respectively (the first two edges may be self-loops). If \(\hat{D}_{x,y,z}=-\infty\) for all \(1\leq x<i\) and \(1\leq y<j\), then clearly \(\hat{D}_{i,j,k}=-\infty\). Otherwise, \(\hat{D}_{i,j,k}=\infty\). 3. If \(k>0\) and \(\hat{L}_{1}(\hat{v}_{1,i})\cap\hat{L}_{2}(\hat{v}_{2,j})\cap\{L_{3}(v_{3,k})\}=\emptyset\) and \(\hat{L}_{1}(\hat{v}_{1,i})\cap\hat{L}_{2}(\hat{v}_{2,j})\neq\emptyset\), there are two cases to consider: 1. If the in-coming edges of \(\hat{v}_{1,i}\) are \(\hat{v}_{2,j}\) only self-loops, then clearly \(\hat{D}_{i,j,k}=-\infty\). 2. Otherwise (\(\hat{v}_{1,i}\) has some in-coming edge(s) other than self-loops), or \(\hat{v}_{2,j}\) has some in-coming edge(s) other than self-loops), let \(\hat{v}_{1,x}\) and \(\hat{v}_{2,y}\) be any nodes such that \((\hat{v}_{1,x},\hat{v}_{1,i})\in\hat{E}_{1}\) and \((\hat{v}_{2,y},\hat{v}_{2,j})\in\hat{E}_{2}\), respectively. If all \(\hat{D}_{x,y,k}=-\infty\), then clearly \(\hat{D}_{i,j,k}=-\infty\). Otherwise, clearly \(\hat{D}_{i,j,k}=\infty\). 4. If \(k>0\) and \(\hat{L}_{1}(\hat{v}_{1,i})\cap\hat{L}_{2}(\hat{v}_{2,j})=\emptyset\), there are two cases to consider: 1. If the in-coming edges of \(\hat{v}_{1,i}\) and \(\hat{v}_{2,j}\) are only self-loops, then clearly \(\hat{D}_{i,j,k}=-\infty\). 2. Otherwise (\(\hat{v}_{1,i}\) has some in-coming edge(s) other than self-loops, or \(\hat{v}_{2,j}\) has some in-coming edge(s) other than self-loops), let \(\hat{v}_{1,x}\) and \(\hat{v}_{2,y}\) be any nodes such that \((\hat{v}_{1,x},\hat{v}_{1,i})\in\hat{E}_{1}\) and \((\hat{v}_{2,y},\hat{v}_{2,j})\in\hat{E}_{2}\), respectively. If all \(\hat{D}_{x,y,k}=-\infty\), then clearly \(\hat{D}_{i,j,k}=-\infty\). Otherwise, clearly \(\hat{D}_{i,j,k}=\infty\). 4. If \(k>0\) and \(\hat{L}_{1}(\hat{v}_{1,i})\cap\hat{L}_{2}(\hat{v}_{2,j})=\emptyset\), there are two cases to consider: 1. If the in-coming edges of \(\hat{v}_{1,i}\) and \(\hat{v}_{2,j}\) are only self-loops, then clearly \(\hat{D}_{i,j,k}=-\infty\). 2. 
Otherwise (\(\hat{v}_{1,i}\) has some in-coming edge(s) other than self-loops, or \(\hat{v}_{2,j}\) has some in-coming edge(s) other than self-loops), let \(\hat{v}_{1,x}\) and \(\hat{v}_{2,y}\) be any nodes such that \((\hat{v}_{1,x},\hat{v}_{1,i})\in\hat{E}_{1}\) and \((\hat{v}_{2,y},\hat{v}_{2,j})\in\hat{E}_{2}\), respectively. If all \(\hat{v}_{1,x}\) and \(\hat{v}_{2,y}\) be any nodes such that \((\hat{v}_{1,x},\hat{v}_{1,i})\in\hat{E}_{1}\) and \((\hat{v}_{2,y},\hat{v}_{2,j})\in\hat{E}_{2}\), respectively. Let \(s\) be a vertex satisfying \(\hat{D}_{i,j,k}=|s|\), then \(\hat{D}_{i,j,k}=\hat{D}_{i,j,k}\). Similarly, if \(\hat{v}_{2,y}\) is a vertex satisfying \(\hat{D}_{i,j,k}=|s|\), then \(\hat{D}_{i,j,k}=\hat{D}_{i,j,k}\). Note that such \(\hat{v}_{1,x}\) or \(\hat{v}_{2,y}\) always exists.
2. Otherwise (\(v_{1,i}\) is not a cyclic vertex and/or \(v_{2,j}\) is not a cyclic vertex), there are four cases to consider: 1. If \(k=0\), there are two cases to consider: 1. If \(\hat{L}_{1}(\hat{v}_{1,i})\cap\hat{L}_{2}(\hat{v}_{2,j})\neq\emptyset\), there are two cases to consider: 1. If \(\hat{v}_{1,i}\) does not have in-coming edges at all or \(\hat{v}_{2,j}\) does not have in-coming edges at all, then clearly \(\hat{D}_{i,j,k}=1\). 2. Otherwise (both \(\hat{v}_{1,i}\) and \(\hat{v}_{2,j}\) have some in-coming edge(s) including self-loops), let \(\hat{v}_{1,x}\) and \(\hat{v}_{2,y}\) be any nodes such that \((\hat{v}_{1,x},\hat{v}_{1,i})\in\hat{E}_{1}\) and \((\hat{v}_{2,y},\hat{v}_{2,j})\in\hat{E}_{2}\), respectively. Let \(s\) be a longest string in the set \(\mathsf{Subseq}(\hat{L}_{1}(\mathsf{LMP}(\hat{v}_{1,x})))\cap\mathsf{Subseq}( \hat{L}_{2}(\mathsf{LMP}(\hat{v}_{2,j})))\). 3. If \(s\) is a cyclic vertex and/or \(v_{2,j}\) is not a cyclic vertex, there are four cases to consider: 1. If \(\hat{L}_{1}(\hat{v}_{1,x},\hat{v}_{1,i})\cap\hat{L}_{2}(\hat{v}_{2,j})\neq\emptyset\), there are two cases to consider: 1. If \(\hat{v}_{1,i}\) does not have in-coming edges at all or \(\hat{v}_{2,j}\) does not have in-coming edges at all, then clearly \(\hat{D}_{i,j,k}=1\). 2. Otherwise (both \(\hat{v}_{1,i}\) and \(\hat{v}_{2,j}\) have some in-coming edge(s) including self-loops), let \(\hat{v}_{1,x}\) and \(\hat{v}_{2,y}\) be any nodes such that \((\hat{v}_{1,x},\hat{v}_{1,i})\in\hat{E}_{1}\) and \((\hat{v}_{2,y},\hat{v}_{2,j})\in\hat{E}_{2}\), respectively. Let \(s\) be a longest string in the set \(\mathsf{Subseq}(\hat{L}_{1}(\mathsf{LMP}(\hat{v}_{1,x})))\)
\((\mathsf{LMP}(\hat{v}_{1,i})))\cap\mathsf{Subseq}(\hat{L}_{2}(\mathsf{LMP}(\hat{v} _{2,j})))\). Assume on the contrary that there is a string \(t\in\mathsf{Subseq}(\hat{L}_{1}(\mathsf{LMP}(\hat{v}_{1,x})))\cap\mathsf{Subseq}( \hat{L}_{2}(\mathsf{LMP}(\hat{v}_{2,y})))\) such that \(|t|>|s|-1\). This contradicts that \(s\) is a longest common subsequence of \(\hat{L}_{1}(\mathsf{LMP}(\hat{v}_{1,i}))\) and \(\hat{L}_{2}(\mathsf{LMP}(\hat{v}_{2,j}))\), since \(\hat{L}_{1}(\hat{v}_{1,i})\cap\hat{L}_{2}(\hat{v}_{2,j})\neq\emptyset\). Hence \(|t|\leq|s|-1\). If \(\hat{v}_{1,x}\) and \(\hat{v}_{2,y}\) are vertices satisfying \(\hat{D}_{x,y,k}=|s|-1\), then \(\hat{D}_{i,j,k}=\hat{D}_{x,y,k}+1\). Note that such \(\hat{v}_{1,x}\) and \(\hat{v}_{2,y}\) always exist. 2. Otherwise, then this case is the same as Case 1(a)ii.
3. If \(\hat{L}_{1}(\hat{v}_{1,i})\cap\hat{L}_{2}(\hat{v}_{2,j})\cap\{L_{3}(v_{3,k})\}\neq\emptyset\), there are three cases to consider: 1. If \(\hat{v}_{1,i}\) does not have in-coming edges at all or \(\hat{v}_{2,j}\) does not have in-coming edges at all, and if \(v_{3,k}\) does not have in-coming edges, then clearly \(\hat{D}_{i,j,k}=1\). 2. If \(\hat{v}_{1,i}\) does not have in-coming edges at all or \(\hat{v}_{2,j}\) does not have in-coming edge at all, and if \(v_{3,k}\) has some in-coming edge(s), then clearly \(\hat{D}_{i,j,k}=-\infty\). 3. If both \(\hat{v}_{1,i}\) and \(\hat{v}_{2,j}\) have some in-coming edge(s) including self-loops and \(v_{3,k}\) does not have in-coming edges, let \(\hat{v}_{1,x}\) and \(\hat{v}_{2,y}\) be any nodes such that \((\hat{v}_{1,x},\hat{v}_{1,i})\in\hat{E}_{1}\) and \((\hat{v}_{2,y},\hat{v}_{2,j})\in\hat{E}_{2}\), respectively. Let \(s\) be a longest string in the set \(\mathsf{Subseq}(\hat{L}_{1}(\mathsf{LMP}(\hat{v}_{1,i})))\cap\mathsf{Subseq}( \hat{L}_{2}(\mathsf{LMP}(\hat{v}_{2,j})))\). Assume on the contrary that there exists a string \(t\in\mathsf{Subseq}(\hat{L}_{1}(\mathsf{LMP}(\hat{v}_{1,x})))\cap\mathsf{Subseq }(\hat{L}_{2}(\mathsf{LMP}(\hat{v}_{2,y})))\) such that \(|t|>|s|-1\). This contradicts that \(s\) is a longest common subsequence of \(\hat{L}_{1}(\mathsf{LMP}(\hat{v}_{1,i}))\) and \(\hat{L}_{2}(\mathsf{LMP}(\hat{v}_{2,j}))\), since \(\hat{L}_{1}(\hat{v}_{1,i})\cap\hat{L}_{2}(\hat{v}_{2,j})\neq\emptyset\). Hence \(|t|\leq|s|-1\). If \(\hat{v}_{1,x}\) and \(\hat{v}_{2,y}\) are vertices satisfying \(\hat{D}_{x,y,0}=|s|-1\), then \(\hat{D}_{i,j,k}=\hat{D}_{x,y,0}+1\). Note that such \(\hat{v}_{1,x}\) and \(\hat{v}_{2,y}\) always exist. 4. Otherwise (all \(\hat{v}_{1,i}\), \(\hat{v}_{2,j}\), and \(\hat{v}_{3,k}\) have some in-coming edge(s) including self-loops), let \(\hat{v}_{1,x}\), \(\hat{v}_{2,y}\) and \(v_{3,z}\) be any nodes such that \((\hat{v}_{1,x},\hat{v}_{1,i})\in\hat{E}_{1}\), \((\hat{v}_{2,y},\hat{v}_{2,j})\in\hat{E}_{2}\), and \((v_{3,z},v_{3,k})\in E_{3}\), respectively. Let \(s\) be a longest string in \(\hat{\mathsf{S}}_{\mathrm{IC}}(\hat{v}_{1,i},\hat{v}_{2,j},v_{3,k})\). Assume on the contrary that there exists a string \(t\in\hat{\mathsf{S}}_{\mathrm{IC}}(\hat{v}_{1,x},\hat{v}_{2,y},v_{3,z})\) such that \(|t|>|s|-1\). This contradicts that \(s\) is a SEQ-IC-LCS of \(\hat{L}_{1}(\mathsf{LMP}(\hat{v}_{1,i}))\), \(\hat{L}_{2}(\mathsf{LMP}(\hat{v}_{2,j}))\) and \(L_{3}(\mathsf{MP}(v_{3,k}))\), since \(\hat{L}_{1}(\hat{v}_{1,i})\cap\hat{L}_{2}(\hat{v}_{2,j})\cap L_{3}(v_{3,k})\neq\emptyset\). Hence \(|t|\leq|s|-1\). If \(\hat{v}_{1,x}\), \(\hat{v}_{2,y}\) and \(v_{3,z}\) are vertices satisfying \(\hat{D}_{x,y,z}=|s|-1\), then \(\hat{D}_{i,j,k}=\hat{D}_{x,y,z}+1\). Note that such \(\hat{v}_{1,x}\), \(\hat{v}_{2,y}\) and \(v_{3,z}\) always exist.
5. If \(\hat{L}_{1}(\hat{v}_{1,i})\cap\hat{L}_{2}(\hat{v}_{2,j})\cap\{L_{3}(v_{3,k})\}=\emptyset\) and \(\hat{L}_{1}(\hat{v}_{1,i})\cap\hat{L}_{2}(\hat{v}_{2,j})\neq\emptyset\), there are two cases to consider: 1. If \(\hat{v}_{1,i}\) does not have in-coming edges at all or \(\hat{v}_{2,j}\) does not have in-coming edges at all, then clearly \(\hat{D}_{i,j,k}=-\infty\). 2. Otherwise (both \(\hat{v}_{1,i}\) and \(\hat{v}_{2,j}\) have some in-coming edges including self-loops), let \(\hat{v}_{1,x}\) and \(\hat{v}_{2,y}\) be any nodes such that \((\hat{v}_{1,x},\hat{v}_{1,i})\in\hat{E}_{1}\) and \((\hat{v}_{2,y},\hat{v}_{2,j})\in\hat{E}_{2}\), respectively. Let \(s\) be a longest string in \(\hat{\mathsf{S}}_{\mathrm{IC}}(\hat{v}_{1,i},\hat{v}_{2,j},v_{3,k})\). Assume on the contrary that there exists a string \(t\in\hat{\mathsf{S}}_{\mathrm{IC}}(\hat{v}_{1,x},\hat{v}_{2,y},v_{3,k})\) such that \(|t|>|s|-1\). This contradicts that \(s\) is a SEQ-IC-LCS of \(\hat{L}_{1}(\mathsf{LMP}(\hat{v}_{1,i}))\), \(\hat{L}_{2}(\mathsf{LMP}(\hat{v}_{2,j}))\) and \(L_{3}(\mathsf{MP}(v_{3,k}))\), since \(\hat{L}_{1}(\hat{v}_{1,i}|\cap\hat{L}_{2}(\hat{v}_{2,j})\neq\emptyset\). Hence \(|t|\leq|s|-1\). If \(\hat{v}_{1,x}\), \(\hat{v}_{2,y}\) and \(v_{3,k}\) are vertices satisfying \(\hat{D}_{x,y,k}=|s|-1\), then \(\hat{D}_{i,j,k}=\hat{D}_{x,y,k}+1\). Note that such \(\hat{v}_{1,x}\), \(\hat{v}_{2,y}\) and \(v_{3,k}\) always exist.
6. If \(\hat{L}_{1}(\hat{v}_{1,i})\cap\hat{L}_{2}(\hat{v}_{2,j})=\emptyset\), then this case is the same as Case 1d.
The above arguments lead us to the following recurrence:
\[\hat{D}_{i,j,k}=\left\{\begin{array}{ll}\delta+\max\left(\left\{\hat{D}_{x,y,k}\,\middle|\,\begin{array}{l}(\hat{v}_{1,x},\hat{v}_{1,i})\in\hat{E}_{1},\\ (\hat{v}_{2,y},\hat{v}_{2,j})\in\hat{E}_{2}\end{array}\right\}\cup\{0\}\right)&\text{if $k=0$ and $\hat{L}_{1}(\hat{v}_{1,i})\cap\hat{L}_{2}(\hat{v}_{2,j})\neq\emptyset$;}\\ \max\left(\left\{\hat{D}_{x,j,k}\mid(\hat{v}_{1,x},\hat{v}_{1,i})\in\hat{E}_{1}\right\}\cup\left\{\hat{D}_{i,y,k}\mid(\hat{v}_{2,y},\hat{v}_{2,j})\in\hat{E}_{2}\right\}\cup\{0\}\right)&\text{if $k=0$ and $\hat{L}_{1}(\hat{v}_{1,i})\cap\hat{L}_{2}(\hat{v}_{2,j})=\emptyset$;}\\ \delta+\max\left(\left\{\hat{D}_{x,y,z}\,\middle|\,\begin{array}{l}(\hat{v}_{1,x},\hat{v}_{1,i})\in\hat{E}_{1},\\ (\hat{v}_{2,y},\hat{v}_{2,j})\in\hat{E}_{2},\\ (v_{3,z},v_{3,k})\in E_{3}\text{ or }z=0\end{array}\right\}\cup\{\gamma\}\right)&\text{if $k>0$ and $\hat{L}_{1}(\hat{v}_{1,i})\cap\hat{L}_{2}(\hat{v}_{2,j})\cap\{L_{3}(v_{3,k})\}\neq\emptyset$;}\\ \max\left(\left\{\delta+\hat{D}_{x,y,k}\,\middle|\,\begin{array}{l}(\hat{v}_{1,x},\hat{v}_{1,i})\in\hat{E}_{1},\\ (\hat{v}_{2,y},\hat{v}_{2,j})\in\hat{E}_{2}\end{array}\right\}\cup\{-\infty\}\right)&\text{if $k>0$, $\hat{L}_{1}(\hat{v}_{1,i})\cap\hat{L}_{2}(\hat{v}_{2,j})\cap\{L_{3}(v_{3,k})\}=\emptyset$ and $\hat{L}_{1}(\hat{v}_{1,i})\cap\hat{L}_{2}(\hat{v}_{2,j})\neq\emptyset$;}\\ \max\left(\left\{\hat{D}_{x,j,k}\mid(\hat{v}_{1,x},\hat{v}_{1,i})\in\hat{E}_{1}\right\}\cup\left\{\hat{D}_{i,y,k}\mid(\hat{v}_{2,y},\hat{v}_{2,j})\in\hat{E}_{2}\right\}\cup\{-\infty\}\right)&\text{otherwise,}\end{array}\right.\]
where
\[\delta=\left\{\begin{array}{ll}\infty&\text{if both $\hat{v}_{1,i}$ and $\hat{v}_{2,j}$ are cyclic vertices;}\\ 1&\text{otherwise,}\end{array}\right.\]
\[\gamma=\left\{\begin{array}{ll}0&\text{if $\hat{v}_{1,i}$ does not have in-coming edges at all or $\hat{v}_{2,j}$ does not have}\\ &\text{in-coming edges at all, and $v_{3,k}$ does not have in-coming edges;}\\ -\infty&\text{otherwise.}\end{array}\right.\]
In the above recurrence, we use a convention that \(\infty+(-\infty)=-\infty\).
We perform preprocessing which transforms \(G_{1}\) and \(G_{2}\) into \(\hat{G}_{1}\) and \(\hat{G}_{2}\) in \(O(|E_{1}|+|E_{2}|)\) time with \(O(|V_{1}|+|V_{2}|)\) space, based on strongly connected components.
To examine the conditions in the above recurrence, we explicitly construct the intersection of the character labels of the given vertices \(\hat{v}_{1,i}\in\hat{V_{1}}\), \(\hat{v}_{2,j}\in\hat{V_{2}}\), and \(\hat{v}_{3,k}\in V_{3}\) by using balanced trees, as follows:
* Checking whether \(\hat{L}_{1}(\hat{v}_{1,i})\cap\hat{L}_{2}(\hat{v}_{2,j})=\emptyset\) or \(\neq\emptyset\): Let \(\Sigma_{1}\) and \(\Sigma_{2}\) be the sets of characters that appear in \(G_{1}\) and \(G_{2}\), respectively. For every node \(\hat{v}_{1,i}\in\hat{V}_{1}\) of the transformed graph \(\hat{G}_{1}\), we build a balanced tree \(\mathcal{T}_{i}\) which consists of the characters in \(\hat{L}_{1}(\hat{v}_{i})\). Since the total number of characters in the original graph \(G_{1}=(V_{1},E_{1})\) is equal to \(|V_{1}|\), we can build the balanced trees \(\mathcal{T}_{i}\) for all \(i\) in a total of \(O(|V_{1}|\log|\Sigma_{1}|)\) time and \(O(|V_{1}|)\) space. Then, for each fixed \(\hat{L}_{1}(\hat{v}_{1,i})\in\hat{V}_{1}\), by using its balanced tree, the intersection \(\hat{L}_{1}(\hat{v}_{1,i})\cap\hat{L}_{2}(\hat{v}_{2,j})\) can be computed in \(O(|V_{2}|\log|\Sigma_{1}|)\) time for all \(\hat{L}_{2}(\hat{v}_{2,j})\in V_{2}\). Therefore, \(\hat{L}_{1}(\hat{v}_{1,i})\cap\hat{L}_{2}(\hat{v}_{2,j})\) for all \(1\leq i\leq|\hat{V}_{1}|\) and \(1\leq j\leq|\hat{V}_{2}|\) can be computed in \(O(|V_{1}||V_{2}|\log|\Sigma_{1}|)\) total time.
* Checking whether \(\hat{L}_{1}(\hat{v}_{1,i})\cap\hat{L}_{2}(\hat{v}_{2,j})\cap L_{3}(v_{3,k})=\emptyset\) or \(\neq\emptyset\): While computing \(\Sigma_{i,j}=\hat{L}_{1}(\hat{v}_{1,i})\cap\hat{L}_{2}(\hat{v}_{2,j})\) in the above, we also build another balanced tree \(\mathcal{T}_{i,j}\) which consists of the characters in \(\Sigma_{i,j}\) for every \(1\leq i\leq|\hat{V}_{1}|\) and \(1\leq j\leq|\hat{V}_{2}|\). This can be done
in \(O(|V_{1}||V_{2}|\log|\Sigma_{1}|)\) total time and \(O(|V_{1}||V_{2}|)\) space. Then, for each fixed \(1\leq i\leq|\hat{V}_{1}|\) and \(1\leq j\leq|\hat{V}_{2}|\), \(\hat{L}_{1}(\hat{v}_{1,i})\cap\hat{L}_{2}(\hat{v}_{2,j})\cap L_{3}(v_{3,k})\) can be computed in a total of \(O(|V_{3}|\log|\Sigma_{i,j}|)\) time. Therefore, \(\hat{L}_{1}(\hat{v}_{1,i})\cap\hat{L}_{2}(\hat{v}_{2,j})\cap L_{3}(v_{3,k})\) for all \(1\leq i\leq|\hat{V}_{1}|\), \(1\leq j\leq|\hat{V}_{2}|\) and, \(1\leq k\leq|V_{3}|\) can be computed in \(O(|V_{1}||V_{2}||V_{3}|\log|\Sigma|)\) time.
Assuming that the above preprocessing for the conditions in the recurrence are all done, we can compute \(\hat{D}_{i,j,k}\) for all \(1\leq i\leq|\hat{V}_{1}|\), \(1\leq j\leq|\hat{V}_{2}|\) and \(1\leq k\leq|V_{3}|\) using dynamic programming of size \(O(|\hat{V}_{1}||\hat{V}_{2}||V_{3}|)\) in \(O(|\hat{E}_{1}||\hat{E}_{2}||E_{3}|)\) time, in a similar way to the acyclic case for Theorem 1.
Overall, the total time complexity is \(O(|E_{1}|+|E_{2}|+|E_{3}|+|\hat{V}_{1}||\hat{V}_{2}|\log|\Sigma_{1}|+|\hat{V}_ {1}||\hat{V}_{2}||V_{3}|\log|\Sigma|+|\hat{E}_{1}||\hat{E}_{2}||E_{3}|)\subseteq O (|E_{1}||E_{2}||E_{3}|+|V_{1}||V_{2}||V_{3}|\log|\Sigma|)\).
The total space complexity is \(O(|V_{1}||V_{2}|+|\hat{V}_{1}||\hat{V}_{2}||V_{3}|)\subseteq O(|V_{1}||V_{2}|| V_{3}|)\).
An example of computing \(\hat{D}_{i,j,k}\) using dynamic programming is shown in Figure 3.
Algorithm 2 in Appendix B shows a pseudo-code of our algorithm which solves Problem 2 in the case where \(G_{1}\) and \(G_{2}\) can be cyclic and \(G_{3}\) is acyclic.
## 6 Conclusions and Open Questions
In this paper, we introduced the new problem of computing the SEQ-IC-LCS on labeled graphs. We showed that when all the input labeled graphs are acyclic, the problem can be solved in \(O(|E_{1}||E_{2}||E_{3}|)\) time and \(O(|V_{1}||V_{2}||V_{3}|)\) space by a dynamic programming approach. Furthermore, we extended our algorithm to a more general case where the two target labeled graphs can contain cycles, and presented an efficient algorithm that runs in \(O(|E_{1}||E_{2}||E_{3}|+|V_{1}||V_{2}||V_{3}|\log|\Sigma|)\) time and \(O(|V_{1}||V_{2}||V_{3}|)\) space.
Interesting open questions are whether one can extend the framework of our methods to the other variants STR-IC/EC-LCS and SEQ-EC-LCS of the constrained LCS problems in the case of labeled graph inputs. We believe that SEQ-EC-LCS for labeled graphs can be solved by similar methods to our SEQ-IC-LCS methods, within the same bounds.
## Acknowledgments
This work was supported by JSPS KAKENHI Grant Numbers JP21K17705 (YN), JP23H04386 (YN), JP22H03551 (SI), and JP23K18466 (SI).
|
2310.04968 | Multivariate Meixner polynomials as Birth and Death polynomials | Based on the framework of Plamen Iliev, multivariate Meixner polynomials are
constructed explicitly as Birth and Death polynomials. They form the complete
set of eigenpolynomials of a birth and death process with the birth and death
rates at population $x=(x_1,\ldots,x_n)\in\mathbb{N}_0^n$ are
$B_j(x)=\bigl(\beta+\sum_{i=1}^nx_j\bigr)$ and $D_j(x)=c_j^{-1}x_j$, $0<c_j$,
$j=1,\ldots,n$, $\sum_{j=1}^nc_j<1$. The corresponding stationary distribution
is
$(\beta)_{\sum_{j=1}^nc_j}\prod_{j=1}^n(c_j^{x_j}/x_j!)(1-\sum_{j=1}^nc_j)^\beta$,
the trivial $n$-variable generalisation of the orthogonality weight of the
single variable Meixner polynomials. The polynomials, depending on $n+1$
parameters ($\{c_i\}$ and $\beta$), satisfy the difference equation with the
coefficients $B_j(x)$ and $D_j(x)$ $j=1,\ldots,n$, which is the straightforward
generalisation of the difference equation governing the single variable Meixner
polynomials. The polynomials are truncated $(n+1,2n+2)$ hypergeometric
functions of Aomoto-Gelfand. The polynomials and the derivation are very
similar to those of the multivariate Krawtchouk polynomials reported recently. | Ryu Sasaki | 2023-10-08T01:59:07Z | http://arxiv.org/abs/2310.04968v1 | # Multivariate Meixner polynomials as Birth and Death polynomials
###### Abstract
Based on the framework of Plamen Iliev, multivariate Meixner polynomials are constructed _explicitly_ as Birth and Death polynomials. They form the complete set of eigenpolynomials of a birth and death process whose birth and death rates at population \(\boldsymbol{x}=(x_{1},\ldots,x_{n})\in\mathbb{N}_{0}^{n}\) are \(B_{j}(\boldsymbol{x})=\left(\beta+\sum_{i=1}^{n}x_{i}\right)\) and \(D_{j}(\boldsymbol{x})=c_{j}^{-1}x_{j}\), \(0<c_{j}\), \(j=1,\ldots,n\), \(\sum_{j=1}^{n}c_{j}<1\). The corresponding stationary distribution is \((\beta)_{\sum_{j=1}^{n}x_{j}}\prod_{j=1}^{n}(c_{j}^{x_{j}}/x_{j}!)(1-\sum_{j=1}^{n}c_{j})^{\beta}\), the trivial \(n\)-variable generalisation of the orthogonality weight of the single variable Meixner polynomials. The polynomials, depending on \(n+1\) parameters (\(\{c_{i}\}\) and \(\beta\)), satisfy the difference equation with the coefficients \(B_{j}(\boldsymbol{x})\) and \(D_{j}(\boldsymbol{x})\), \(j=1,\ldots,n\), which is the straightforward generalisation of the difference equation governing the single variable Meixner polynomials. The polynomials are truncated \((n+1,2n+2)\) hypergeometric functions of Aomoto-Gelfand. The polynomials and the derivation are very similar to those of the multivariate Krawtchouk polynomials reported recently.
## 1 Introduction
As the second member of the discrete multivariate hypergeometric orthogonal polynomials of Askey scheme [2, 20, 23, 30], _i.e._ those satisfying second order difference equations with the nearest neighbour interactions, the multivariate Meixner polynomials are constructed _explicitly_ as the multivariate Birth and Death (BD) [5, 21, 20] polynomials. Multivariate BD processes are the nearest neighbour interactions of multi-dimensional discrete systems, the most basic type of interactions, like the well known Ising models.
The main strategy is essentially the same as that of the multivariate Krawtchouk orthogonal polynomials reported recently [30]. The basic framework, the orthogonality measure, the design of the truncated hypergeometric functions and the generating function connecting them, is provided by a lucid paper of Iliev [18], which follows the trends of many preceding works, _e.g._[8, 25]. The additional input is the reformulation of the birth and death
problems with the link to the associated self-adjoint matrices [27, 29, 28]. It is markedly different from the traditional Karlin-McGregor approach [21, 20]. Since the central motivation is the pursuit of multivariate orthogonal polynomials obeying second order difference equations, this paper is rather distant from other works on multivariate orthogonal polynomials [3, 31, 33, 24, 26, 15, 16, 11, 14, 22, 12, 13, 17, 7, 32, 4, 10].
This paper is organised as follows. Starting with a short resume of the single variable Meixner polynomials in section 2, the necessary ingredients for the construction of multivariate Meixner polynomials are listed in the logical order in section 3. In section 4, Iliev's framework for the multivariate Meixner polynomials is reproduced in my notation. The main contents are in section 5. An overview of multivariate birth and death problem is recapitulated in section 5.1. The birth and death rates are introduced in section 5.2. The orthogonality of the Meixner polynomials is proved in section 5.3. The second order difference equation of the multivariate Meixner polynomials is demonstrated by using the generating function in section 5.4. The solution of the BD problem is presented in section 5.5. The exceptional cases are remarked in section 5.6.
## 2 Single variable Meixner polynomials
Let us start with the summary of the single variable Meixner polynomials [23] with real positive parameters \(1>c>0\), \(\beta>0\),
\[P_{m}(\beta,c;x)\stackrel{{\rm def}}{{=}}{}_{2}F_{1}\Big{(} \genfrac{}{}{0.0pt}{}{-m,\,-x}{\beta}\Bigm{|}1-c^{-1}\Big{)}=\sum_{k\in\mathbb{ N}_{0}}\frac{(-m)_{k}(-x)_{k}}{(\beta)_{k}}\frac{(1-\frac{1}{c})^{k}}{k!}, \quad m\in\mathbb{N}_{0}, \tag{2.1}\]
which are obtained by the generating function \(G(\beta,c,x;t)\),
\[G(\beta,c,x;t)\stackrel{{\rm def}}{{=}}\left(1-\frac{t}{c}\right)^{x}(1-t)^{-\beta-x}=\sum_{m\in\mathbb{N}_{0}}\frac{(\beta)_{m}}{m!}P_{m}(\beta,c;x)t^{m}, \tag{2.2}\]
in which \((a)_{n}\) is the shifted factorial defined for \(a\in\mathbb{C}\) and nonnegative integer \(n\), \((a)_{0}=1\), \((a)_{n}=\prod_{k=0}^{n-1}(a+k)\), \(n\geq 1\). They satisfy the orthogonality relations with the normalised orthogonality weight \(W(\beta,c;x)\),
\[W(\beta,c;x)\stackrel{{\rm def}}{{=}}\frac{(\beta)_{x}c^{x}}{x!}(1-c)^{\beta}, \tag{2.3}\] \[\sum_{x\in\mathbb{N}_{0}}W(\beta,c;x)P_{m}(\beta,c;x)P_{m^{\prime}}(\beta,c;x)=\frac{1}{(\beta)_{m}\frac{c^{m}}{m!}}\,\delta_{m\,m^{\prime}}, \tag{2.4}\]
and the second order difference equations are
\[\widetilde{\mathcal{H}}\stackrel{{\text{def}}}{{=}}( \beta+x)(1-e^{\partial})+\frac{x}{c}(1-e^{-\partial}),\qquad\partial=\frac{d}{ dx},\quad e^{\pm\partial}f(x)=f(x\pm 1), \tag{2.5}\] \[\widetilde{\mathcal{H}}P_{m}(\beta,c;x)=\frac{1-c}{c}mP_{m}( \beta,c;x),\qquad m\in\mathbb{N}_{0}. \tag{2.6}\]
The eigenvalue is easily guessed as
\[\widetilde{\mathcal{H}}x^{m}=m\left(-1+\frac{1}{c}\right)x^{m}+\text{lower degrees},\qquad m\in\mathbb{N}_{0}.\]
It is straightforward to verify (2.6) in terms of the generating function,
\[\widetilde{\mathcal{H}}G(\beta,c,x;t)=\frac{1-c}{c}t\frac{\partial}{\partial t }G(\beta,c,x;t). \tag{2.7}\]
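As a quick numerical sanity check (not part of the original derivation), the terminating sum (2.1) can be evaluated directly and the difference equation (2.5)-(2.6) verified at sample points; the parameter values and helper names below are arbitrary choices of ours.

```python
import math

def poch(a, n):                      # shifted factorial (a)_n
    r = 1.0
    for k in range(n):
        r *= a + k
    return r

def meixner(m, beta, c, x):          # P_m(beta, c; x) from (2.1), a terminating 2F1
    return sum(poch(-m, k) * poch(-x, k) / poch(beta, k)
               * (1 - 1 / c) ** k / math.factorial(k) for k in range(m + 1))

beta, c, m, x = 1.5, 0.3, 3, 5
lhs = ((beta + x) * (meixner(m, beta, c, x) - meixner(m, beta, c, x + 1))
       + (x / c) * (meixner(m, beta, c, x) - meixner(m, beta, c, x - 1)))
rhs = (1 - c) / c * m * meixner(m, beta, c, x)
assert abs(lhs - rhs) < 1e-8         # eq. (2.6)
```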
## 3 Path to multivariate Meixner polynomials
In this paper I present the explicit forms of multivariate Meixner polynomials satisfying second order difference equations in a similar way to the single variable Meixner polynomials shown above. The necessary ingredients are
1. (normalised) orthogonality weight \(W(\mathbf{x})\)
2. generating function \(G(\mathbf{x})\)
3. general form of the polynomial \(P_{\mathbf{m}}(\mathbf{x})\)
4. proof of the orthogonality \(\sum_{\mathbf{x}\in\mathbb{N}_{0}^{n}}W(\mathbf{x})P_{\mathbf{m}}(\mathbf{x})P_{\mathbf{m}^{\prime} }(\mathbf{x})=0,\,\mathbf{m}\neq\mathbf{m}^{\prime}\)
5. second order difference operator \(\widetilde{\mathcal{H}}\)
6. eigenvalue spectrum \(\mathcal{E}(\mathbf{m})\)
7. proof of \(\widetilde{\mathcal{H}}P_{\mathbf{m}}(\mathbf{x})=\mathcal{E}(\mathbf{m})P_{\mathbf{m}}(\mathbf{x})\)
Plamen Iliev provided the first four ingredients in [18], which will be cited as I. The formulation of multivariate Birth and Death polynomials introduced by myself [30] provides the rest and the explicit forms of the multivariate Meixner polynomials are obtained.
## 4 Framework for the multivariate Meixner polynomials due to Iliev [18]
**Definition 4.1**: _The normalised orthogonality weight of the multivariate Meixner polynomials is a simple \(n\)-variable generalisation of the single variable one (2.3)_
\[W(\beta,\boldsymbol{c};\boldsymbol{x})\stackrel{{\mbox{\tiny def }}}{{=}}\frac{(\beta)_{|\boldsymbol{x}|}\boldsymbol{c}^{\boldsymbol{x}}}{ \boldsymbol{x}!}(1-|c|)^{\beta},\quad\beta>0, \tag{4.1}\]
_in which the probability parameters \(\{c_{i}>0\}\) are restricted by the summability of \(W\),_
\[\boldsymbol{x} =(x_{1},x_{2},\ldots,x_{n})\in\mathbb{N}_{0}^{n},\quad|x| \stackrel{{\mbox{\tiny def}}}{{=}}\sum_{i=1}^{n}x_{i},\quad \boldsymbol{x}!\stackrel{{\mbox{\tiny def}}}{{=}}\prod_{i=1}^{n} x_{i}!,\] \[\boldsymbol{c} =(c_{1},c_{2},\ldots,c_{n})\in\mathbb{R}_{>0}^{n},\quad|c| \stackrel{{\mbox{\tiny def}}}{{=}}\sum_{i=1}^{n}c_{i},\quad \boldsymbol{c}^{\boldsymbol{x}}\stackrel{{\mbox{\tiny def}}}{{= }}\prod_{i=1}^{n}c_{i}^{x_{i}},\] \[\sum_{\boldsymbol{x}\in\mathbb{N}_{0}^{n}}W(\beta,\boldsymbol{ c};\boldsymbol{x})=1\quad\Rightarrow 0<|c|<1. \tag{4.2}\]
First, prepare \(n^{2}\) real parameters \(\boldsymbol{u}=\{u_{i\,j}\}\), \(i,j=1,\ldots,n\) constrained by the conditions
\[\sum_{i=1}^{n}c_{i}u_{i\,j}=|c|-1,\quad j=1,\ldots,n, \tag{4.3}\] \[\sum_{i=1}^{n}c_{i}u_{i\,j}u_{i\,k}=|c|-1,\quad j\neq k,\quad j,k =1,\ldots,n. \tag{4.4}\]
By introducing another closely related \(n^{2}\) real parameters \(\boldsymbol{b}=\{b_{i\,j}\}\),
\[b_{i\,j}\stackrel{{\mbox{\tiny def}}}{{=}}1-u_{i\,j},\qquad i,j=1,\ldots,n, \tag{4.5}\]
which are Iliev's original parameters, these conditions look simpler
\[\sum_{i=1}^{n}c_{i}b_{i\,j}=1,\quad j=1,\ldots,n,\] (I.2.3a) \[\sum_{i=1}^{n}c_{i}b_{i\,j}b_{i\,k}=1,\quad j\neq k,\quad j,k=1, \ldots,n.\] (I.2.3b)
Similar conditions have been introduced by many authors for the construction of multivariate orthogonal polynomials [8, 9, 25, 10, 30]. It should be stressed that these \(n(n+1)/2\) conditions are not enough to determine \(n^{2}\) parameters \(\boldsymbol{u}\) completely.
**Definition 4.2**: _Iliev [I.(2.3b)] also introduces another set of probability parameters \(\{\bar{c}_{j}\}\), by_
\[1-|c|+\sum_{i=1}^{n}c_{i}u_{i\,j}^{2}=\frac{1-|c|}{\bar{c}_{j}},\quad j=1,\ldots,n. \tag{4.6}\]
**Definition 4.3**: _The generating function \(G\) is defined by (I.(2.6))_
\[G(\beta,\mathbf{u},\mathbf{x};\mathbf{t})\stackrel{{ \mbox{\tiny def}}}{{=}}(1-|t|)^{-\beta-|x|}\prod_{i=1}^{n}\left(1-\sum_{j=1 }^{n}b_{i\,j}t_{j}\right)^{x_{i}}, \tag{4.7}\]
_in which_
\[\mathbf{t}=(t_{1},t_{2},\ldots,t_{n})\in\mathbb{C}^{n},\quad|t| \stackrel{{\mbox{\tiny def}}}{{=}}\sum_{i=1}^{n}t_{i}.\]
**Definition 4.4**: _The polynomials \(P_{\mathbf{m}}(\mathbf{x})\) are defined by the expansion of \(G\) (I.(2.7)) around \(\mathbf{t}=\mathbf{0}\),_
\[G(\beta,\mathbf{u},\mathbf{x};\mathbf{t})=\sum_{ \mathbf{m}\in\mathbb{N}_{0}^{n}}\frac{(\beta)_{|m|}}{\mbox{\boldmath $m$}!}P_{\mathbf{m}}(\beta,\mathbf{u};\mathbf{x}) \mathbf{t}^{\mathbf{m}},\quad\mathbf{m}=(m_{1}, \ldots,m_{n})\in\mathbb{N}_{0}^{n},\quad\mathbf{t}^{\mbox{\boldmath $m$}}=\prod_{i=1}^{n}t_{i}^{m_{i}}. \tag{4.8}\]
His two main Theorems are on the orthogonality and the concrete form of \(P_{\mathbf{m}}(\mathbf{x})\).
**Theorem 4.5**: _Orthogonality relation reads_
\[\sum_{\mathbf{x}\in\mathbb{N}_{0}^{n}}W(\beta,\mathbf{c};\mathbf{x})P_{\mathbf{m}}(\beta,\mathbf{u};\mathbf{x})P_{\mathbf{m}^{\prime}}(\beta, \mathbf{u};\mathbf{x}) =\frac{1}{\bar{W}(\beta,\bar{\mathbf{c}};\mathbf{m})}\,\delta_{\mathbf{m}\,\mathbf{m}^{\prime}}, \quad\mathbf{m},\mathbf{m}^{\prime}\in\mathbb{N}_{0}^{n}, \tag{4.9}\] \[\bar{W}(\beta,\bar{\mathbf{c}};\mathbf{m}) \stackrel{{\mbox{\tiny def}}}{{=}}\frac{(\beta)_{|m|}\bar{\mathbf{c}}^{\mathbf{m}}}{\mathbf{m}!}. \tag{4.10}\]
**Theorem 4.6**: _Concrete form of the polynomial in terms of \(\mathbf{u}\) is_
\[P_{\mathbf{m}}(\beta,\mathbf{u};\mathbf{x})\stackrel{{\mbox{\tiny def}}}{{=}}\sum_{(c_{ij})\in\mathbb{M}_{n}}\frac{\prod_{i=1}^{n}(-x_{i})_{\sum_{j=1}^{n}c_{ij}}\prod_{j=1}^{n}(-m_{j})_{\sum_{i=1}^{n}c_{ij}}}{(\beta)_{\sum_{i,j}c_{ij}}}\,\frac{\prod(u_{ij})^{c_{ij}}}{\prod c_{ij}!}, \tag{4.11}\]
_in which \(\mathbb{M}_{n}\) is the set of all \(n\times n\) matrices with entries in \(\mathbb{N}_{0}\). This is the hypergeometric function of Aomoto-Gelfand [1, 6] of type \((n+1,2n+2)\) and it has almost the same form as that of the multivariate Krawtchouk polynomials reported earlier ([25].2), ([4].7), ([30].3.15), except that \(-N\) is replaced by \(\beta\)._
It should be stressed that these results constitute the framework only. In order to construct the multivariate Meixner polynomials explicitly, the \(n\times n\) matrix of the parameters \(u\) must be specified completely. Two types of examples are presented in Iliev's article [18], §5.
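For concreteness, (4.11) can be evaluated by brute force: since \((-x_{i})_{k}\) and \((-m_{j})_{k}\) vanish for \(k\) large enough, only matrices with \(c_{ij}\leq\min(x_{i},m_{j})\) can contribute. The following Python sketch (ours, for illustration) does exactly this and checks that for \(n=1\), where (4.3)-(4.4) force \(u_{11}=1-1/c_{1}\), the formula reduces to the single variable polynomial (2.1).

```python
import math
from itertools import product

def poch(a, n):                      # shifted factorial (a)_n
    r = 1.0
    for k in range(n):
        r *= a + k
    return r

def multivariate_meixner(m, beta, u, x):
    # P_m(beta, u; x) of (4.11); m, x are integer vectors, u an n x n matrix.
    n = len(x)
    total = 0.0
    ranges = [range(min(x[i], m[j]) + 1) for i in range(n) for j in range(n)]
    for flat in product(*ranges):
        C = [flat[i * n:(i + 1) * n] for i in range(n)]
        term = 1.0 / poch(beta, sum(flat))
        for i in range(n):
            term *= poch(-x[i], sum(C[i]))
        for j in range(n):
            term *= poch(-m[j], sum(C[i][j] for i in range(n)))
        for i in range(n):
            for j in range(n):
                term *= u[i][j] ** C[i][j] / math.factorial(C[i][j])
        total += term
    return total

beta, c = 1.5, 0.3
u = [[1 - 1 / c]]                                   # the n = 1 solution of (4.3)-(4.4)
ref = sum(poch(-3, k) * poch(-5, k) / poch(beta, k)
          * (1 - 1 / c) ** k / math.factorial(k) for k in range(4))
assert abs(multivariate_meixner([3], beta, u, [5]) - ref) < 1e-8
```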
## 5 Approach via Birth and Death problem
Now I present the procedure to derive the multivariate Meixner polynomials explicitly, _i.e._ to determine the parameters \(\{u_{\,i\,j}\}\) explicitly, based on the multivariate Birth and Death problem setting.
### An overview of \(n\)-variate Birth and Death problem
This is a simple recapitulation of the problem setting of multivariate Birth and Death (BD) problems reported in a previous paper [30]. For the single variable BD problem, the well-known approach by Karlin-McGregor [21, 20] is based on the three term recursion relations of generic orthogonal polynomials. My method [29] employs second order difference equations which are dual to the three term recursion relations. All of the Askey scheme polynomials of a single discrete variable provide 'exactly solvable BD problem' of one population group. For the construction of multivariate orthogonal polynomials of Askey type, I adopt the BD problem approach after the one successful example of the multivariate Krawtchouk polynomials [30].
The differential equation for the birth and death (BD) process of \(n\) groups of unlimited population \(\boldsymbol{x}=(x_{1},x_{2},\ldots,x_{n})\in\mathbb{N}_{0}^{n}\) reads
\[\frac{\partial}{\partial t}\mathcal{P}(\boldsymbol{x};t) =(L_{BD}\mathcal{P})(\boldsymbol{x};t)=\sum_{\boldsymbol{y}\in \mathbb{N}_{0}^{n}}L_{BD_{\boldsymbol{x}\,\boldsymbol{y}}}\mathcal{P}( \boldsymbol{y};t),\quad\mathcal{P}(\boldsymbol{x};t)\geq 0,\quad\sum_{ \boldsymbol{x}\in\mathbb{N}_{0}^{n}}\mathcal{P}(\boldsymbol{x};t)=1, \tag{5.1}\] \[=-\sum_{j=1}^{n}(B_{j}(\boldsymbol{x})+D_{j}(\boldsymbol{x})) \mathcal{P}(\boldsymbol{x};t) +\sum_{j=1}^{n}B_{j}(\boldsymbol{x}-\boldsymbol{e}_{j}) \mathcal{P}(\boldsymbol{x}-\boldsymbol{e}_{j};t)\] \[+\sum_{j=1}^{n}D_{j}(\boldsymbol{x}+\boldsymbol{e}_{j}) \mathcal{P}(\boldsymbol{x}+\boldsymbol{e}_{j};t), \tag{5.2}\]
in which \(\boldsymbol{e}_{j}\) is the \(j\)-th unit vector, \(j=1,\ldots,n\). The birth and death rates for \(n\) groups \(B_{j}(\boldsymbol{x})\), \(D_{j}(\boldsymbol{x})\) are all positive with the boundary conditions
\[B_{j}(\boldsymbol{x})>0,\quad D_{j}(\boldsymbol{x})>0,\quad D_{j}(\boldsymbol {x})=0\ \ \text{if}\ \ \boldsymbol{x}\in\mathbb{N}_{0}^{n}\ \ \text{and}\ \ \boldsymbol{x}-\boldsymbol{e}_{j}\notin\mathbb{N}_{0}^{n}\quad j=1,\ldots,n. \tag{5.3}\]
This is a typical example of the nearest neighbour interactions in \(n\) dimensions. The birth and death operator \(L_{BD}\), an \(\mathbb{N}_{0}^{n}\times\mathbb{N}_{0}^{n}\) matrix, can be expressed succinctly as
\[L_{BD}=-\sum_{j=1}^{n}\left[B_{j}(\boldsymbol{x})-B_{j}(\boldsymbol{x}- \boldsymbol{e}_{j})e^{-\partial_{j}}+D_{j}(\boldsymbol{x})-D_{j}(\boldsymbol {x}+\boldsymbol{e}_{j})e^{\partial_{j}}\right], \tag{5.4}\]
\[= -\sum_{j=1}^{n}\big{[}(1-e^{-\partial_{j}})B_{j}(\mathbf{x})+(1-e^{ \partial_{j}})D_{j}(\mathbf{x})\big{]}\,, \tag{5.5}\] \[= -\sum_{j=1}^{n}(1-e^{-\partial_{j}})\big{(}B_{j}(\mathbf{x})-D_{j}( \mathbf{x}+\mathbf{e}_{j})\,e^{\partial_{j}}\big{)}. \tag{5.6}\]
It is required that the system has a stationary distribution \(W(\mathbf{x})\)
\[(L_{BD}W)(\mathbf{x})=0,\quad\sum_{\mathbf{x}\in\mathbb{N}_{0}^{n}}W(\mathbf{x})=1,\quad W( \mathbf{x})>0,\quad\mathbf{x}\in\mathbb{N}_{0}^{n}, \tag{5.7}\]
which constrains \(\{B_{j}(\mathbf{x}),D_{j}(\mathbf{x})\}\) severely. The sufficient condition for the existence of the zero mode of \(L_{BD}\) (5.7), the stationary distribution \(W(\mathbf{x})>0\), reads
\[\big{(}B_{j}(\mathbf{x})-D_{j}(\mathbf{x}+\mathbf{e}_{j})e^{\partial_{j}}\big{)}W(\mathbf{x}) =0\ \Rightarrow\frac{W(\mathbf{x}+\mathbf{e}_{j})}{W(\mathbf{x})}=\frac{B_{j}(\mathbf{x})}{D_{j} (\mathbf{x}+\mathbf{e}_{j})},\quad j=1,\ldots,n, \tag{5.8}\]
together with the compatibility conditions.
\[\frac{B_{j}(\mathbf{x})}{D_{j}(\mathbf{x}+\mathbf{e}_{j})}\frac{B_{k}(\mathbf{x}+\mathbf{e}_{j})} {D_{k}(\mathbf{x}+\mathbf{e}_{j}+\mathbf{e}_{k})}=\frac{B_{k}(\mathbf{x})}{D_{k}(\mathbf{x}+\mathbf{e} _{k})}\frac{B_{j}(\mathbf{x}+\mathbf{e}_{k})}{D_{j}(\mathbf{x}+\mathbf{e}_{k}+\mathbf{e}_{j})}, \quad j,k=1,\ldots,n, \tag{5.9}\]
When satisfied, these conditions determine the entire \(W(\mathbf{x})\) starting from the origin \(W(\mathbf{0})\).
By a similarity transformation of \(L_{BD}\) in terms of \(\sqrt{W(\mathbf{x})}\), a new operator (matrix) \(\mathcal{H}\) is introduced,
\[\mathcal{H} \stackrel{{\rm def}}{{=}}-\big{(}\sqrt{W(\mathbf{x})} \big{)}^{-1}L_{BD}\sqrt{W(\mathbf{x})}. \tag{5.10}\] \[= \sum_{j=1}^{n}\left[B_{j}(\mathbf{x})+D_{j}(\mathbf{x})-\sqrt{B_{j}(\mathbf{ x})D_{j}(\mathbf{x}+\mathbf{e}_{j})}\,e^{\partial_{j}}-\sqrt{B_{j}(\mathbf{x}-\mathbf{e}_{j})D_{j}( \mathbf{x})}\,e^{-\partial_{j}}\right],\] (5.11) \[\mathcal{H}_{\mathbf{x}\,\mathbf{y}} = \sum_{j=1}^{n}\left[\left(B_{j}(\mathbf{x})+D_{j}(\mathbf{x})\right)\delta _{\mathbf{x}\,\mathbf{y}}-\sqrt{B_{j}(\mathbf{x})D_{j}(\mathbf{x}+\mathbf{e}_{j})}\,\delta_{\mathbf{x }+\mathbf{e}_{j}\,\mathbf{y}}\right.\] (5.12) \[\left.-\sqrt{B_{j}(\mathbf{x}-\mathbf{e}_{j})D_{j}(\mathbf{x})}\,\delta_{\bm {x}-\mathbf{e}_{j}\,\mathbf{y}}\right].\]
The operator \(\mathcal{H}\) is a _positive semi-definite real symmetric matrix_ as is clear by the following factorisation,
\[\mathcal{H}=\sum_{j=1}^{n}\mathcal{A}_{j}(\mathbf{x})^{T}\mathcal{A}_ {j}(\mathbf{x}),\qquad\mathcal{H}_{\mathbf{x}\,\mathbf{y}}=\mathcal{H}_{\mathbf{y}\,\mathbf{x}}, \tag{5.13}\] \[\mathcal{A}_{j}(\mathbf{x})\stackrel{{\rm def}}{{=}} \sqrt{B_{j}(\mathbf{x})}-e^{\partial_{j}}\sqrt{D_{j}(\mathbf{x})},\ \mathcal{A}_{j}(\mathbf{x})^{T}=\sqrt{B_{j}(\mathbf{x})}-\sqrt{D_{j}(\mathbf{x})}\,e^{- \partial_{j}},\ j=1,\ldots,n. \tag{5.14}\]
As \(W(\mathbf{x})\) is the zero mode of \(L_{BD}\) (5.7), \(\sqrt{W(\mathbf{x})}\) is the zero mode of \(\mathcal{A}_{j}(\mathbf{x})\) and \(\mathcal{H}\)
\[\mathcal{A}_{j}(\mathbf{x})\sqrt{W(\mathbf{x})}=0,\quad j=1,\ldots,n\quad \Longrightarrow\mathcal{H}\sqrt{W(\mathbf{x})}=0. \tag{5.15}\]
Another operator \(\widetilde{\mathcal{H}}\) is introduced by a similarity transformation of \(\mathcal{H}\) in terms of the square root of the stationary distribution \(W(\mathbf{x})\),
\[\widetilde{\mathcal{H}} \stackrel{{\text{def}}}{{=}}\big{(}\sqrt{W(\mathbf{x}) }\big{)}^{-1}\mathcal{H}\sqrt{W(\mathbf{x})}. \tag{5.16}\] \[=\sum_{j=1}^{n}\big{[}B_{j}(\mathbf{x})\big{(}1-e^{\partial_{j}} \big{)}+D_{j}(\mathbf{x})\big{(}1-e^{-\partial_{j}}\big{)}\big{]}\,, \tag{5.17}\]
which provides the difference equations for the possible multivariate orthogonal polynomials of Askey type. The trivial fact that a constant is the zero mode of \(\widetilde{\mathcal{H}}\) is worth mentioning
\[\widetilde{\mathcal{H}}\,1=0, \tag{5.18}\]
which corresponds to (5.15).
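All of these objects become finite matrices once the lattice is truncated, so the relations above can be checked numerically. The following is a minimal Python sketch; the constant-birth/linear-death rates used in it are an illustrative assumption (not the Meixner rates of the next subsection), chosen because they obviously satisfy the compatibility conditions (5.9). The zero-mode relations (5.7), (5.15) and (5.18) are tested only on interior lattice points, where the truncation has no effect.

```python
import numpy as np
from itertools import product
from math import factorial

# Illustrative rates (an assumption): constant birth B_j = b_j, linear death D_j = d_j x_j.
n, N = 2, 6
b = np.array([0.7, 1.3])
d = np.array([1.0, 0.8])
B = lambda x, j: b[j]
D = lambda x, j: d[j]*x[j]

states = list(product(range(N + 1), repeat=n))
idx = {s: k for k, s in enumerate(states)}
S = len(states)

L = np.zeros((S, S))                                   # truncated L_BD, cf. (5.2)
for x in states:
    k = idx[x]
    for j in range(n):
        L[k, k] -= B(x, j) + D(x, j)
        xm = tuple(x[i] - (i == j) for i in range(n))
        if xm in idx:
            L[k, idx[xm]] += B(xm, j)                  # gain from x - e_j
        xp = tuple(x[i] + (i == j) for i in range(n))
        if xp in idx:
            L[k, idx[xp]] += D(xp, j)                  # gain from x + e_j

# stationary distribution from the ratio condition (5.8): a product of Poissons here
W = np.array([np.prod([(b[j]/d[j])**x[j]/factorial(x[j]) for j in range(n)]) for x in states])
W /= W.sum()
sqW = np.sqrt(W)

H  = np.diag(1/sqW) @ (-L) @ np.diag(sqW)              # (5.10)
Ht = np.diag(1/W)   @ (-L) @ np.diag(W)                # (5.16)

interior = [idx[x] for x in states if max(x) < N]      # rows unaffected by the truncation
print("asymmetry of H           :", np.abs(H - H.T).max())                       # ~ 0, (5.13)
print("(L_BD W)    on interior  :", np.abs((L @ W)[interior]).max())             # ~ 0, (5.7)
print("(H sqrt(W)) on interior  :", np.abs((H @ sqW)[interior]).max())           # ~ 0, (5.15)
print("(H~ 1)      on interior  :", np.abs((Ht @ np.ones(S))[interior]).max())   # ~ 0, (5.18)
```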
### BD rates for Meixner
**Definition 5.1**: **BD rates for Meixner** _The following Birth and Death rates are adopted for the \(n\)-variate Meixner polynomials,_
\[B_{j}(\mathbf{x})\stackrel{{\text{def}}}{{=}}\beta+|x|,\qquad D_{j}( \mathbf{x})\stackrel{{\text{def}}}{{=}}c_{j}^{-1}x_{j},\quad c_{j}>0, \qquad j=1,\ldots,n,\quad\beta>0, \tag{5.19}\]
_in which the parameters \(\{c_{j}\}\), \(j=1,\ldots,n\) are assumed to be generic._
They trivially satisfy the compatibility conditions (5.9) for \(j,k=1,\ldots,n\),
\[\frac{B_{j}(\mathbf{x})}{D_{j}(\mathbf{x}+\mathbf{e}_{j})}\frac{B_{k}(\mathbf{x}+\mathbf{e}_{j})}{D_{k}(\mathbf{x}+\mathbf{e}_{j}+\mathbf{e}_{k})}=\frac{c_{j}c_{k}(\beta+|x|)(\beta+|x|+1)}{(x_{j}+1)(x_{k}+1)}=\frac{B_{k}(\mathbf{x})}{D_{k}(\mathbf{x}+\mathbf{e}_{k})}\frac{B_{j}(\mathbf{x}+\mathbf{e}_{k})}{D_{j}(\mathbf{x}+\mathbf{e}_{k}+\mathbf{e}_{j})}.\]
They lead to the stationary distribution introduced in Definition 4.1
\[W(\beta,\mathbf{c};\mathbf{x})\stackrel{{\text{def}}}{{=}}\frac{(\beta)_ {|x|}\mathbf{c}^{\mathbf{x}}}{\mathbf{x}!}(1-|c|)^{\beta},\quad\beta>0, \tag{4.1}\]
as the sufficient conditions (5.8) are easily verified
\[\big{(}(\beta+|x|)-c_{j}^{-1}(x_{j}+1)e^{\partial_{j}}\big{)}\,W(\beta,\mathbf{c}; \mathbf{x})=\frac{(\beta)_{|x|+1}\mathbf{c}^{\mathbf{x}}}{\mathbf{x}!}(1-|c|)^{\beta}-\frac{( \beta)_{|x|+1}\mathbf{c}^{\mathbf{x}}}{\mathbf{x}!}(1-|c|)^{\beta}=0.\]
The summability of \(W(\beta,\mathbf{c};\mathbf{x})\) due to the summation formula
\[\sum_{n=0}^{\infty}\frac{(\gamma)_{n}}{n!}z^{n}={}_{1}F_{0}\Big{(}\genfrac{}{}{0.0pt}{}{\gamma}{-}\Bigm{|}z\Big{)}=(1-z)^{-\gamma} \tag{5.20}\]
limits the parameter ranges
\[0<\sum_{j=1}^{n}c_{j}\equiv|c|<1. \tag{5.21}\]
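The summability and the ratio condition (5.8) are easy to confirm numerically. A minimal Python sketch (the values of \(n\), \(\beta\), \(\{c_{j}\}\) and the truncation are illustrative assumptions):

```python
import numpy as np
from itertools import product
from math import factorial

n, beta, N = 2, 1.5, 60
c = np.array([0.25, 0.35])                    # 0 < |c| < 1, cf. (5.21)

def poch(a, k):                               # Pochhammer symbol (a)_k
    out = 1.0
    for i in range(k):
        out *= a + i
    return out

def W(x):                                     # stationary distribution (4.1)
    val = poch(beta, sum(x))*(1 - c.sum())**beta
    for j in range(n):
        val *= c[j]**x[j]/factorial(x[j])
    return val

pts = list(product(range(N + 1), repeat=n))
print("sum of W over the truncated lattice:", sum(W(x) for x in pts))   # -> 1

x = (3, 5)                                    # check (5.8) with the rates (5.19)
for j in range(n):
    xp = tuple(x[i] + (i == j) for i in range(n))
    print(j, W(xp)/W(x), (beta + sum(x))*c[j]/(x[j] + 1))               # the two columns agree
```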
**Definition 5.2**: **operator \(\widetilde{\mathcal{H}}\) for \(n\)-variate Meixner** _takes a very simple form_
\[\widetilde{\mathcal{H}}=(\beta+|x|)\sum_{j=1}^{n}(1-e^{\partial_{j}})+\sum_{j= 1}^{n}c_{j}^{-1}x_{j}(1-e^{-\partial_{j}}). \tag{5.22}\]
It is easy to see that the set of \(n\)-variate polynomials of maximal degree \(M\)
\[V_{M}(\mathbf{x})\stackrel{{\mathrm{def}}}{{=}}\mathrm{Span}\{\mathbf{x }^{\mathbf{m}}|0\leq|m|\leq M\},\quad|m|\stackrel{{\mathrm{def}}}{{=} }\sum_{j=1}^{n}m_{j}, \tag{5.23}\]
is invariant under \(\widetilde{\mathcal{H}}\)
\[\widetilde{\mathcal{H}}V_{M}(\mathbf{x})\subseteq V_{M}(\mathbf{x}), \tag{5.24}\]
and \(\widetilde{\mathcal{H}}\) has eigenpolynomials in each \(V_{M}(\mathbf{x})\). For generic values of the parameters \(\{c_{j}\}\), these polynomials are orthogonal with each other due to the real symmetry of the operator \(\mathcal{H}\) (5.13).
### Meixner polynomials are orthogonal with each other
It is easy to determine \(n\) degree 1 eigenpolynomials of \(\widetilde{\mathcal{H}}\) (5.22) with unknown coefficients \(\{a_{i}\}\) and unit constant part,
\[P_{|m|=1}(\mathbf{x})=1+\sum_{i=1}^{n}a_{i}x_{i},\quad\widetilde{ \mathcal{H}}P_{|m|=1}(\mathbf{x})=\lambda P_{|m|=1}(\mathbf{x}), \tag{5.25}\] \[\Rightarrow-(\beta+|x|)\sum_{i=1}^{n}a_{i}+\sum_{i=1}^{n}c_{i}^{ -1}a_{i}x_{i}=\lambda\Big{(}1+\sum_{i=1}^{n}a_{i}x_{i}\Big{)}.\]
By equating the coefficients of \(x_{i}\) and 1, the eigenvalue equations for \(\{a_{i}\}\) are obtained,
\[-\sum_{k=1}^{n}a_{k}+c_{i}^{-1}a_{i} =\lambda a_{i},\qquad-\beta\sum_{k=1}^{n}a_{k}=\lambda, \tag{5.26}\] \[\implies a_{i}=\frac{1}{\beta}\frac{\lambda}{\lambda-c_{i}^{-1}},\quad i=1,\dots,n. \tag{5.27}\]
Here \(\lambda\) is a root of the degree \(n\) characteristic polynomial \(\mathcal{F}(\lambda)\) of an \(n\times n\) matrix \(F(\boldsymbol{c})\) depending on \(\{c_{i}\}\),
\[0=\mathcal{F}(\lambda)\stackrel{{\rm def}}{{=}}\text{Det}\big{(} \lambda I_{n}-F(\boldsymbol{c})\big{)},\quad F(\boldsymbol{c})_{i\,j} \stackrel{{\rm def}}{{=}}-1+c_{i}^{-1}\delta_{i\,j}. \tag{5.28}\]
For each eigenvalue \(\lambda_{j}\), which is positive by construction, the unknown coefficients \(\{a_{i}\}\) are determined,
\[a_{i,j}=\frac{\lambda_{j}}{\beta(\lambda_{j}-c_{i}^{-1})},\quad i,j=1,\ldots,n,\]
and they satisfy the relation
\[\sum_{i=1}^{n}\frac{1}{\lambda_{j}-c_{i}^{-1}}\equiv\sum_{i=1}^{n}\frac{c_{i}} {c_{i}\lambda_{j}-1}=-1,\quad j=1,\ldots,n. \tag{5.29}\]
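Relation (5.29), the positivity of the \(\lambda_{j}\) and the parameters \(u_{i\,j}\) constructed from them below are easy to confirm numerically. A minimal Python sketch (the values of \(n\) and \(\{c_{i}\}\) are illustrative assumptions):

```python
import numpy as np

n = 3
c = np.array([0.15, 0.25, 0.35])                      # distinct, 0 < |c| < 1

F = np.diag(1.0/c) - np.ones((n, n))                  # F(c)_{ij} = -1 + c_i^{-1} delta_{ij}, (5.28)
lam = np.linalg.eigvalsh(F)
print("eigenvalues lambda_j:", lam)                   # all positive
for j in range(n):
    print("sum_i 1/(lambda_j - 1/c_i) =", np.sum(1.0/(lam[j] - 1.0/c)))   # -> -1, (5.29)

u = lam[None, :]/(lam[None, :] - (1.0/c)[:, None])    # u_{ij} = lambda_j/(lambda_j - 1/c_i)
print("sum_i c_i u_{ij}:", c @ u)                     # each entry -> |c| - 1, cf. (4.3)
```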
Let us tentatively identify the above \(j\)-th solution as the \(\boldsymbol{m}=\boldsymbol{e}_{j}\) solution
\[P_{\boldsymbol{e}_{j}}(\boldsymbol{x})=1+\frac{1}{\beta}\sum_{i=1}^{n}\frac{ \lambda_{j}}{\lambda_{j}-c_{i}^{-1}}x_{i},\quad j=1,\ldots,n. \tag{5.30}\]
By comparing these polynomials with the general hypergeometric functions [1, 6] in **Theorem 4.6**
\[P_{\boldsymbol{m}}(\beta,\boldsymbol{u};\boldsymbol{x})\stackrel{{\rm def}}{{=}}\sum_{(c_{ij})\in\mathbb{M}_{n}}\frac{\prod\limits_{i=1}^{n}(-x_{i})_{\sum_{j=1}^{n}c_{ij}}\,\prod\limits_{j=1}^{n}(-m_{j})_{\sum_{i=1}^{n}c_{ij}}}{(\beta)_{\sum_{i,j}c_{ij}}}\ \frac{\prod(u_{ij})^{c_{ij}}}{\prod c_{ij}!}, \tag{4.11}\]
the system parameters \(\{u_{\,ij}\}\) are completely identified
\[P_{\boldsymbol{e}_{j}}(\beta,\boldsymbol{u};\boldsymbol{x})=1+\frac{1}{\beta}\sum_{i=1}^{n}u_{i\,j}x_{i},\qquad u_{i\,j}=\frac{\lambda_{j}}{\lambda_{j}-c_{i}^{-1}},\quad i,j=1,\ldots,n. \tag{5.31}\]
The \(n+1\) eigenvectors \(\sqrt{W(\beta,\boldsymbol{c};\boldsymbol{x})}\), \(\{\sqrt{W(\beta,\boldsymbol{c};\boldsymbol{x})}P_{\boldsymbol{e}_{j}}(\beta, \boldsymbol{u};\boldsymbol{x})\}\), \(j=1,\ldots,n\) of the real symmetric matrix \(\mathcal{H}\) (5.11) are orthogonal with each other for generic parameters \(\{c_{i}\}\),
\[\sum_{\boldsymbol{x}\in\mathbb{N}_{0}^{n}}W(\beta,\boldsymbol{c };\boldsymbol{x})P_{\boldsymbol{e}_{j}}(\beta,\boldsymbol{u};\boldsymbol{x}) =0, j=1,\ldots,n, \tag{5.32}\] \[\sum_{\boldsymbol{x}\in\mathbb{N}_{0}^{n}}W(\beta,\boldsymbol{c };\boldsymbol{x})P_{\boldsymbol{e}_{j}}(\beta,\boldsymbol{u};\boldsymbol{x}) P_{\boldsymbol{e}_{k}}(\beta,\boldsymbol{u};\boldsymbol{x}) =0, j\neq k,\quad j,k=1,\ldots,n. \tag{5.33}\]
By using the summation formulas
\[\sum_{\boldsymbol{x}\in\mathbb{N}_{0}^{n}}W(\beta,\boldsymbol{c };\boldsymbol{x})x_{j} =\frac{c_{j}\beta}{1-|c|}, j=1,\ldots,n,\]
\[\sum_{\mathbf{x}\in\mathbb{N}_{0}^{n}}W(\beta,\mathbf{c};\mathbf{x})x_{j}x_{k}=\frac{\beta( \beta+1)c_{j}c_{k}}{(1-|c|)^{2}}+\frac{\beta c_{j}}{1-|c|}\,\delta_{j\,k},\quad j,k=1,\ldots,n,\]
they are reduced to the same expressions as the constraining conditions (4.3) and (4.4) of the \(n^{2}\) real parameters \(\mathbf{u}=\{u_{i\,j}\}\), \(i,j=1,\ldots,n\) in **Definition 4.1**,
\[\sum_{i=1}^{n}c_{i}u_{i\,j}=|c|-1,\quad j=1,\ldots,n, \tag{4.3}\] \[\sum_{i=1}^{n}c_{i}u_{i\,j}u_{i\,k}=|c|-1,\quad j\neq k,\quad j,k =1,\ldots,n. \tag{4.4}\]
The norm of \(P_{\mathbf{e}_{j}}(\beta,\mathbf{u};\mathbf{x})\) can be calculated similarly
\[\sum_{\mathbf{x}\in\mathbb{N}_{0}^{n}}W(\beta,\mathbf{c};\mathbf{x})P_{\mathbf{e}_{j}}(\beta, \mathbf{u};\mathbf{x})^{2}=\frac{1}{\beta}\frac{1-|c|+\sum_{i=1}^{n}c_{i}u_{i\,j}^{2} }{1-|c|}=\frac{1}{\beta\bar{c}_{j}},\qquad j=1,\ldots,n. \tag{4.6}\]
These lead to the following
**Theorem 5.3**: **Orthogonality of \(P_{\mathbf{m}}(\beta,\mathbf{u};\mathbf{x})\)**
_The explicitly assembled parameters \(\{u_{i\,j}=\lambda_{j}/(\lambda_{j}-c_{i}^{-1})\}\), \(i,j=1,\ldots,n\) (5.31) satisfy all the constraints required for the definition of the generating function \(G(\beta,\mathbf{u},\mathbf{x};\mathbf{t})\) (4.7) in_ **Definition 4.3**_._ **Definition 4.4** _and_ **Theorem 4.5, 4.6** _assure that the polynomials \(P_{\mathbf{m}}(\beta,\mathbf{u};\mathbf{x})\) (4.11) with the above \(\mathbf{u}=\{u_{i\,j}\}\) satisfy the orthogonality relation_
\[\sum_{\mathbf{x}\in\mathbb{N}_{0}^{n}}W(\beta,\mathbf{c};\mathbf{x})P_{\mathbf{m }}(\beta,\mathbf{u};\mathbf{x})P_{\mathbf{m}^{\prime}}(\beta,\mathbf{u};\mathbf{x}) =\frac{1}{\bar{W}(\beta,\bar{\mathbf{c}};\mathbf{m})}\,\delta_{\mathbf{m}\, \mathbf{m}^{\prime}},\quad\mathbf{m},\mathbf{m}^{\prime}\in\mathbb{N}_{0}^{n}, \tag{4.9}\] \[\bar{W}(\beta,\bar{\mathbf{c}};\mathbf{m}) =\frac{(\beta)_{|m|}\bar{\mathbf{c}}^{\mathbf{m}}}{\mathbf{m}!}, \tag{4.10}\]
_with the \(\{\bar{c}_{j}>0\}\), \(j=1,\ldots,n\) determined explicitly by (4.6)._
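The content of the theorem can be tested numerically for the degree-one polynomials by truncating the sums over \(\mathbb{N}_{0}^{n}\). A minimal Python sketch (parameter values and the truncation are illustrative assumptions):

```python
import numpy as np
from itertools import product
from math import factorial

n, beta, N = 2, 1.5, 50
c = np.array([0.2, 0.3])                               # distinct, 0 < |c| < 1

F = np.diag(1.0/c) - np.ones((n, n))
lam = np.linalg.eigvalsh(F)
u = lam[None, :]/(lam[None, :] - (1.0/c)[:, None])     # u_{ij}, (5.31)

def poch(a, k):                                        # Pochhammer symbol (a)_k
    out = 1.0
    for i in range(k):
        out *= a + i
    return out

def W(x):                                              # weight (4.1)
    val = poch(beta, sum(x))*(1 - c.sum())**beta
    for j in range(n):
        val *= c[j]**x[j]/factorial(x[j])
    return val

def P1(j, x):                                          # degree-one polynomial (5.31)
    return 1.0 + sum(u[i, j]*x[i] for i in range(n))/beta

pts = list(product(range(N + 1), repeat=n))
w = np.array([W(x) for x in pts])
P = np.array([[P1(j, x) for x in pts] for j in range(n)])

print("<1, P_ej>_W    :", [w @ P[j] for j in range(n)])     # -> 0, (5.32)
print("<P_e1, P_e2>_W :", w @ (P[0]*P[1]))                  # -> 0, (5.33)
for j in range(n):
    norm = w @ P[j]**2                                      # = 1/(beta cbar_j), (4.6)
    print(f"norm of P_e{j+1}:", norm, " -> cbar_j:", 1.0/(beta*norm))
```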
### All \(\{P_{\mathbf{m}}(\beta,\mathbf{u};\mathbf{x})\}\) are eigenvectors of \(\widetilde{\mathcal{H}}\)
Verifying that all higher degree ones \(\{P_{\mathbf{m}}(\beta,\mathbf{u};\mathbf{x})\}\) (4.11) are also the eigenpolynomials of \(\widetilde{\mathcal{H}}\) (5.22) is the next task. The explicit forms of the eigenvalues \(\mathcal{E}(\mathbf{m})\) are necessary. Since \(P_{\mathbf{m}}(\beta,\mathbf{u};\mathbf{x})\) has the form
\[P_{\mathbf{m}}(\beta,\mathbf{u};\mathbf{x})=1+\frac{1}{\beta}\sum_{i,j=1}^{n}x_{i}m_{j}u_{ i\,j}+\text{higher degrees}, \tag{5.34}\]
\(\widetilde{\mathcal{H}}\) acting on the higher degrees produces only the terms of linear and higher degrees. The only constant part of \(\widetilde{\mathcal{H}}P_{\boldsymbol{m}}(\beta,\boldsymbol{u};\boldsymbol{x})\) comes from \(\beta\sum_{j=1}^{n}(1-e^{\partial_{j}})\) acting on the linear part,
\[\beta\sum_{k=1}^{n}(1-e^{\partial_{k}})\left\{\frac{1}{\beta}\sum_{i,j=1}^{n}x _{i}m_{j}u_{i\,j}\right\}=-\sum_{i\,j}m_{j}u_{i\,j}=-\sum_{j=1}^{n}m_{j}\lambda _{j}\sum_{i=1}^{n}\frac{1}{\lambda_{j}-c_{i}^{-1}}=\sum_{j=1}^{n}m_{j}\lambda_ {j},\]
in which (5.29) is used. After applying \((1-e^{\partial_{i}})\), all higher degree terms vanish at the origin \(\boldsymbol{x}=\boldsymbol{0}\) as they consist of terms like \((x_{i})_{k}(x_{j})_{l}\), \(k+l\geq 2\), the typical structure of the hypergeometric functions. This leads to the following
**Proposition 5.4**: _If \(P_{\boldsymbol{m}}(\beta,\boldsymbol{u};\boldsymbol{x})\) (4.11) is an eigenpolynomial of \(\widetilde{\mathcal{H}}\) (5.22), it has a linear spectrum_
\[\mathcal{E}(\boldsymbol{m})\stackrel{{\text{def}}}{{=}}\sum_{j=1 }^{n}m_{j}\lambda_{j}.\]
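For the degree-one polynomials the linear spectrum can be checked directly by applying the difference operator (5.22) pointwise. A minimal Python sketch (parameter values are illustrative assumptions):

```python
import numpy as np

n, beta = 3, 2.0
c = np.array([0.1, 0.2, 0.3])

F = np.diag(1.0/c) - np.ones((n, n))
lam = np.linalg.eigvalsh(F)
u = lam[None, :]/(lam[None, :] - (1.0/c)[:, None])     # u_{ij}, (5.31)

def P1(j, x):                                          # P_{e_j}(x), (5.31)
    return 1.0 + np.dot(u[:, j], x)/beta

def Htilde(f, x):                                      # difference operator (5.22) at lattice point x
    x = np.asarray(x, dtype=float)
    val = 0.0
    for i in range(n):
        e = np.zeros(n); e[i] = 1.0
        val += (beta + x.sum())*(f(x) - f(x + e))
        val += (x[i]/c[i])*(f(x) - f(x - e))
    return val

rng = np.random.default_rng(1)
for _ in range(3):
    x = rng.integers(0, 8, size=n)
    for j in range(n):
        res = Htilde(lambda y: P1(j, y), x) - lam[j]*P1(j, x)
        print(f"m = e_{j+1}, x = {x}: residual = {res:.2e}")        # -> 0, cf. (5.35)
```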
The remaining task is to demonstrate
\[\widetilde{\mathcal{H}}P_{\boldsymbol{m}}(\beta,\boldsymbol{u};\boldsymbol{x}) =\Bigl{(}\sum_{k=1}^{n}m_{k}\lambda_{k}\Bigr{)}P_{\boldsymbol{m}}(\beta, \boldsymbol{u};\boldsymbol{x}),\qquad\boldsymbol{x},\boldsymbol{m}\in\mathbb{ N}_{0}^{n}. \tag{5.35}\]
After the example of the single variable Meixner polynomials in §2, the following \(n\)-variable generalisation of the generating function formula (2.7)
\[\widetilde{\mathcal{H}}G(\beta,\boldsymbol{u},\boldsymbol{x}; \boldsymbol{t}) =\left(\sum_{k=1}^{n}\lambda_{k}t_{k}\frac{\partial}{\partial t_{k}} \right)G(\beta,\boldsymbol{u},\boldsymbol{x};\boldsymbol{t}), \tag{5.36}\] \[G(\beta,\boldsymbol{u},\boldsymbol{x};\boldsymbol{t})\stackrel{{ \text{def}}}{{=}}(1-|t|)^{-\beta-|x|}\prod_{i=1}^{n}\left(1-\sum_{j=1}^{n}b _{i\,j}t_{j}\right)^{x_{i}}, \tag{4.7}\]
will lead to (5.35) above.
The action of the two parts of \(\widetilde{\mathcal{H}}\)
\[(\beta+|x|)\sum_{i=1}^{n}(1-e^{\partial_{i}}),\qquad\sum_{i=1}^{n}c_{i}^{-1} x_{i}(1-e^{-\partial_{i}}),\]
on \(G(\beta,\boldsymbol{u},\boldsymbol{x};\boldsymbol{t})\) is evaluated separately. The first part gives
\[(\beta+|x|)\sum_{i=1}^{n}(1-e^{\partial_{i}})G(\beta,\boldsymbol{ u},\boldsymbol{x};\boldsymbol{t})\] \[\qquad=(\beta+|x|)\sum_{i=1}^{n}\left(G(\beta,\boldsymbol{u}, \boldsymbol{x};\boldsymbol{t})-\frac{1-\sum_{j=1}^{n}b_{i\,j}t_{j}}{1-|t|}G( \beta,\boldsymbol{u},\boldsymbol{x};\boldsymbol{t})\right)\]
\[=(\beta+|x|)\frac{1}{1-|t|}\sum_{i,j=1}^{n}(b_{i\,j}-1)t_{j}G(\beta, \mathbf{u},\mathbf{x};\mathbf{t})\] \[=(\beta+|x|)\sum_{j=1}^{n}\frac{\lambda_{j}t_{j}}{1-|t|}G(\beta, \mathbf{u},\mathbf{x};\mathbf{t}),\] ( \[*\] )
in which (4.5), (5.31) and (5.29) are used to obtain
\[\sum_{i=1}^{n}\sum_{j=1}^{n}(b_{i\,j}-1)t_{j}=-\sum_{i=1}^{n}\sum_{j=1}^{n}u_{i \,j}t_{j}=-\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{\lambda_{j}}{\lambda_{j}-c_{i}^{- 1}}t_{j}=\sum_{j=1}^{n}\lambda_{j}t_{j}.\]
The second part gives
\[\sum_{i=1}^{n}c_{i}^{-1}x_{i}(1-e^{-\partial_{i}})G(\beta,\mathbf{u},\mathbf{x};\mathbf{t})\] \[=\sum_{i=1}^{n}c_{i}^{-1}x_{i}\left(G(\beta,\mathbf{u},\mathbf{x};\mathbf{t})-\frac{1-|t|}{1-\sum_{j=1}^{n}b_{i\,j}t_{j}}G(\beta,\mathbf{u},\mathbf{x};\mathbf{t})\right)\] \[=\sum_{i=1}^{n}c_{i}^{-1}x_{i}\frac{|t|-\sum_{j=1}^{n}b_{i\,j}t_{j}}{1-\sum_{j=1}^{n}b_{i\,j}t_{j}}G(\beta,\mathbf{u},\mathbf{x};\mathbf{t})\] \[=\sum_{i,j=1}^{n}\frac{c_{i}^{-1}x_{i}u_{i\,j}t_{j}}{1-\sum_{k=1}^{n}b_{i\,k}t_{k}}G(\beta,\mathbf{u},\mathbf{x};\mathbf{t}).\] ( \[**\] )
Now the r.h.s. of (5.36) reads
\[(\beta+|x|)\sum_{k=1}^{n}\frac{\lambda_{k}t_{k}}{1-|t|}G(\beta,\mathbf{u},\mathbf{x};\mathbf{t})-\sum_{i,k=1}^{n}\frac{ \lambda_{k}t_{k}x_{i}b_{i\,k}}{1-\sum_{j=1}^{n}b_{i\,j}t_{j}}G(\beta,\mathbf{u},\mathbf{x};\mathbf{t}).\]
Here the following equality holds
\[\lambda_{k}x_{i}b_{i\,k}=\lambda_{k}x_{i}\frac{-c_{i}^{-1}}{\lambda_{k}-c_{i}^{-1}}=-c_{i}^{-1}x_{i}\frac{\lambda_{k}}{\lambda_{k}-c_{i}^{-1}}=-c_{i}^{-1}x_{i}u_{i\,k}.\]
The r.h.s. is thus equal to \((*)+(**)\), which concludes the proof and leads to the following
**Theorem 5.5**: _The multivariate Meixner polynomials \(\{P_{\mathbf{m}}(\beta,\mathbf{u};\mathbf{x})\}\)_(4.11) _constitute the complete set of eigenpolynomials of the difference operator \(\widetilde{\cal H}\) (5.22) which is derived from the \(n\)-variate Birth and Death process (5.19). The orthogonality (4.9) is the consequence of the self-adjointness of \({\cal H}\) (5.12) and the genericity of the parameters._
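For small \(n\) the generating-function identity (5.36) underlying this theorem can also be verified symbolically. The following minimal sympy sketch uses illustrative numerical values of \(\beta\), \(\{c_{i}\}\), a fixed lattice point \(\boldsymbol{x}\) and a fixed evaluation point in \(\boldsymbol{t}\) (all assumptions), together with \(b_{i\,j}=1-u_{i\,j}\), which is how (4.5) enters the computation above:

```python
import sympy as sp

t1, t2 = sp.symbols('t1 t2')
t = [t1, t2]
n, beta = 2, sp.Rational(3, 2)
c = [sp.Rational(1, 5), sp.Rational(1, 3)]                     # illustrative values, |c| < 1

F = sp.Matrix(n, n, lambda i, j: -1 + (1/c[i] if i == j else 0))
lam = list(F.eigenvals())                                      # the two eigenvalues lambda_j
u = [[lam[j]/(lam[j] - 1/c[i]) for j in range(n)] for i in range(n)]
b = [[1 - u[i][j] for j in range(n)] for i in range(n)]        # b_{ij} = 1 - u_{ij}, cf. (4.5)

x = [2, 1]                                                     # a fixed lattice point (assumption)

def G(xv):                                                     # generating function (4.7)
    expr = (1 - (t1 + t2))**(-beta - sum(xv))
    for i in range(n):
        expr *= (1 - sum(b[i][j]*t[j] for j in range(n)))**xv[i]
    return expr

def shifted(i, s):                                             # x -> x + s*e_i
    return [x[k] + (s if k == i else 0) for k in range(n)]

lhs = sum((beta + sum(x))*(G(x) - G(shifted(i, +1)))
          + (x[i]/c[i])*(G(x) - G(shifted(i, -1))) for i in range(n))   # H~ acting on G, (5.22)
rhs = sum(lam[k]*t[k]*sp.diff(G(x), t[k]) for k in range(n))            # r.h.s. of (5.36)

check = (lhs - rhs).subs({t1: sp.Rational(1, 7), t2: sp.Rational(1, 11)})
print(sp.N(check, 40))                                         # -> 0 up to the working precision
```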
**Proposition 5.6**: \(\mathfrak{S}_{n}\) **Symmetry** _The Meixner polynomials \(\{P_{\mathbf{m}}(\beta,\mathbf{u};\mathbf{x})\}\)_(4.11) _are invariant under the symmetric group \(\mathfrak{S}_{n}\), due to the arbitrariness of the ordering of the \(n\) roots \(\{\lambda_{j}\}\) of the characteristic equation (5.28) in the parameters \(u_{i\,j}\) (5.31)._
### Solution of the BD problem
In terms of the norm formula (4.9), the following set of orthonormal vectors on the Hilbert space over \(\mathbb{N}_{0}^{n}\) is defined
\[\sum_{\boldsymbol{x}\in\mathbb{N}_{0}^{n}}\hat{\phi}_{\boldsymbol{m}}(\beta,\boldsymbol{u};\boldsymbol{x})\hat{\phi}_{\boldsymbol{m}^{\prime}}(\beta,\boldsymbol{u};\boldsymbol{x})=\delta_{\boldsymbol{m}\,\boldsymbol{m}^{\prime}},\qquad\qquad\boldsymbol{m},\boldsymbol{m}^{\prime}\in\mathbb{N}_{0}^{n}, \tag{5.37}\] \[\hat{\phi}_{\boldsymbol{m}}(\beta,\boldsymbol{u};\boldsymbol{x})\stackrel{{\rm def}}{{=}}\sqrt{W(\beta,\boldsymbol{c};\boldsymbol{x})}P_{\boldsymbol{m}}(\beta,\boldsymbol{u};\boldsymbol{x})\sqrt{\bar{W}(\beta,\bar{\boldsymbol{c}};\boldsymbol{m})},\qquad\boldsymbol{x},\boldsymbol{m}\in\mathbb{N}_{0}^{n},\] (5.38) \[\bar{W}(\beta,\bar{\boldsymbol{c}};\boldsymbol{m})=\frac{(\beta)_{|m|}\bar{\boldsymbol{c}}^{\boldsymbol{m}}}{\boldsymbol{m}!},\quad\sum_{\boldsymbol{m}\in\mathbb{N}_{0}^{n}}\bar{W}(\beta,\bar{\boldsymbol{c}};\boldsymbol{m})=(1-|\bar{c}|)^{-\beta}. \tag{5.39}\]
They define an orthogonal matrix \(\mathcal{S}\) on \(\mathbb{N}_{0}^{n}\),
\[\mathcal{S}_{\boldsymbol{x}\,\boldsymbol{m}} \stackrel{{\rm def}}{{=}}\hat{\phi}_{\boldsymbol{m}}( \beta,\boldsymbol{u};\boldsymbol{x}),\] \[\left(\mathcal{S}^{T}\mathcal{S}\right)_{\boldsymbol{m}\, \boldsymbol{m}^{\prime}}=\sum_{\boldsymbol{x}\in\mathbb{N}_{0}^{n}}\bigl{(} \mathcal{S}^{T}\bigr{)}_{\boldsymbol{m}\,\boldsymbol{x}}\mathcal{S}_{ \boldsymbol{x}\,\boldsymbol{m}^{\prime}}=\sum_{\boldsymbol{x}\in\mathbb{N}_{0}^ {n}}\hat{\phi}_{\boldsymbol{m}}(\beta,\boldsymbol{u};\boldsymbol{x})\hat{\phi} _{\boldsymbol{m}^{\prime}}(\beta,\boldsymbol{u};\boldsymbol{x})=\delta_{ \boldsymbol{m}\,\boldsymbol{m}^{\prime}},\] \[\Rightarrow\delta_{\boldsymbol{x}\,\boldsymbol{y}}=\bigl{(} \mathcal{S}\mathcal{S}^{T}\bigr{)}_{\boldsymbol{x}\,\boldsymbol{y}}=\sum_{ \boldsymbol{m}\in\mathbb{N}_{0}^{n}}\mathcal{S}_{\boldsymbol{x}\,\boldsymbol{ m}}\bigl{(}\mathcal{S}^{T}\bigr{)}_{\boldsymbol{m}\,\boldsymbol{y}}=\sum_{ \boldsymbol{m}\in\mathbb{N}_{0}^{n}}\hat{\phi}_{\boldsymbol{m}}(\beta, \boldsymbol{u};\boldsymbol{x})\hat{\phi}_{\boldsymbol{m}}(\beta,\boldsymbol{u };\boldsymbol{y}). \tag{5.40}\]
This means that \(\hat{\phi}_{\boldsymbol{m}}(\beta,\boldsymbol{u};\boldsymbol{x})\) defines dual polynomials in \(\boldsymbol{m}\) indexed by \(\boldsymbol{x}\). The discussion of dual polynomials is essentially the same as that for the multivariate Krawtchouk polynomials in [30] and it will not be repeated here.
In terms of the orthonormal eigenvectors \(\{\hat{\phi}_{\boldsymbol{m}}(\beta,\boldsymbol{u};\boldsymbol{x})\}\) of \(\mathcal{H}\) (5.11) and \(L_{BD}\) (5.4) the transition probability \(\mathcal{T}(\boldsymbol{x},\boldsymbol{y};t)\) of the Birth and Death equation (5.1) is expressed neatly. The transition probability \(\mathcal{T}(\boldsymbol{x},\boldsymbol{y};t)\) is the solution of the BD equation (5.1) with the initial condition
\[\mathcal{P}(\boldsymbol{x};0)=\delta_{\boldsymbol{x}\,\boldsymbol{y}},\]
\[\mathcal{T}(\boldsymbol{x},\boldsymbol{y};t)=\hat{\phi}_{\boldsymbol{0}}( \beta,\boldsymbol{u};\boldsymbol{x})\hat{\phi}_{\boldsymbol{0}}(\beta, \boldsymbol{u};\boldsymbol{y})^{-1}\sum_{\boldsymbol{m}\in\mathbb{N}_{0}^{n}}e ^{-\mathcal{E}(\boldsymbol{m})t}\hat{\phi}_{\boldsymbol{m}}(\beta,\boldsymbol {u};\boldsymbol{x})\hat{\phi}_{\boldsymbol{m}}(\beta,\boldsymbol{u}; \boldsymbol{y}),\quad t>0. \tag{5.41}\]
It is trivial to verify the initial condition by (5.40). By using
\[\hat{\phi}_{\boldsymbol{0}}(\beta,\boldsymbol{u};\boldsymbol{x})=\sqrt{W(\beta,\boldsymbol{c};\boldsymbol{x})}\sqrt{\bar{W}(\beta,\bar{\boldsymbol{c}};\boldsymbol{0})}=\sqrt{W(\beta,\boldsymbol{c};\boldsymbol{x})},\]
(5.41) is reduced to
\[\mathcal{T}(\boldsymbol{x},\boldsymbol{y};t)=W(\beta,\boldsymbol{c};\boldsymbol{x})\sum_{\boldsymbol{m}\in\mathbb{N}_{0}^{n}}\bar{W}(\beta,\bar{\boldsymbol{c}};\boldsymbol{m})\,e^{-\mathcal{E}(\boldsymbol{m})t}P_{\boldsymbol{m}}(\beta,\boldsymbol{u};\boldsymbol{x})P_{\boldsymbol{m}}(\beta,\boldsymbol{u};\boldsymbol{y}),\quad t>0,\]
\[\frac{\partial}{\partial t}\mathcal{T}(\mathbf{x},\mathbf{y};t)=-W(\beta,\mathbf{c};\mathbf{x})\sum_{\mathbf{m}\in\mathbb{N}_{0}^{n}}\bar{W}(\beta,\bar{\mathbf{c}};\mathbf{m})\,\mathcal{E}(\mathbf{m})\,e^{-\mathcal{E}(\mathbf{m})t}P_{\mathbf{m}}(\beta,\mathbf{u};\mathbf{x})P_{\mathbf{m}}(\beta,\mathbf{u};\mathbf{y}),\quad t>0.\]
Since \(P_{\mathbf{m}}(\beta,\mathbf{u};\mathbf{x})W(\beta,\mathbf{c};\mathbf{x})\) is the eigenvector of \(L_{BD}\) with the eigenvalue \(-\mathcal{E}(\mathbf{m})\),
\[L_{BD}P_{\mathbf{m}}(\beta,\mathbf{u};\mathbf{x})W(\beta,\mathbf{c};\mathbf{x})=-\mathcal{E}(\bm {m})P_{\mathbf{m}}(\beta,\mathbf{u};\mathbf{x})W(\beta,\mathbf{c};\mathbf{x}),\]
the BD equation is satisfied
\[\frac{\partial}{\partial t}\mathcal{T}(\mathbf{x},\mathbf{y};t)=L_{BD}\mathcal{T}( \mathbf{x},\mathbf{y};t),\quad t>0.\]
The transition probability (5.41) provides a simple example of the Chapman-Kolmogorov equation in \(\mathbb{N}_{0}^{n}\),
\[\mathcal{T}(\mathbf{x},\mathbf{y};t+t^{\prime})=\sum_{\mathbf{z}\in\mathbb{N}_{0}^{n}} \mathcal{T}(\mathbf{x},\mathbf{z};t)\mathcal{T}(\mathbf{z},\mathbf{y};t^{\prime}). \tag{5.42}\]
The r.h.s. is
\[\sum_{\mathbf{z}\in\mathbb{N}_{0}^{n}}\hat{\phi}_{\mathbf{0}}(\beta,\mathbf{u };\mathbf{x})\hat{\phi}_{\mathbf{0}}(\beta,\mathbf{u};\mathbf{z})^{-1}\sum_{\mathbf{m}\in\mathbb{N }_{0}^{n}}\,e^{-\mathcal{E}(\mathbf{m})t}\hat{\phi}_{\mathbf{m}}(\beta,\mathbf{u};\mathbf{x}) \hat{\phi}_{\mathbf{m}}(\beta,\mathbf{u};\mathbf{z})\] \[\qquad\times\hat{\phi}_{\mathbf{0}}(\beta,\mathbf{u};\mathbf{z})\hat{\phi}_{ \mathbf{0}}(\beta,\mathbf{u};\mathbf{y})^{-1}\sum_{\mathbf{m}^{\prime}\in\mathbb{N}_{0}^{n}} \,e^{-\mathcal{E}(\mathbf{m}^{\prime})t^{\prime}}\hat{\phi}_{\mathbf{m}^{\prime}}( \beta,\mathbf{u};\mathbf{z})\hat{\phi}_{\mathbf{m}^{\prime}}(\beta,\mathbf{u};\mathbf{y})\] \[=\sum_{\mathbf{z}\in\mathbb{N}_{0}^{n}}\hat{\phi}_{\mathbf{0}}(\beta,\bm {u};\mathbf{x})\hat{\phi}_{\mathbf{0}}(\beta,\mathbf{u};\mathbf{y})^{-1}\sum_{\mathbf{m}\in\mathbb{ N}_{0}^{n}}\,e^{-\mathcal{E}(\mathbf{m})t}\hat{\phi}_{\mathbf{m}}(\beta,\mathbf{u};\mathbf{x}) \hat{\phi}_{\mathbf{m}}(\beta,\mathbf{u};\mathbf{z})\] \[\qquad\times\sum_{\mathbf{m}^{\prime}\in\mathbb{N}_{0}^{n}}\,e^{- \mathcal{E}(\mathbf{m}^{\prime})t^{\prime}}\hat{\phi}_{\mathbf{m}^{\prime}}(\beta,\bm {u};\mathbf{z})\hat{\phi}_{\mathbf{m}^{\prime}}(\beta,\mathbf{u};\mathbf{y}).\]
The summation over \(\mathbf{z}\), \(\sum_{\mathbf{z}\in\mathbb{N}_{0}^{n}}\hat{\phi}_{\mathbf{m}}(\beta,\mathbf{u};\mathbf{z}) \hat{\phi}_{\mathbf{m}^{\prime}}(\beta,\mathbf{u};\mathbf{z})=\delta_{\mathbf{m}\,\mathbf{m}^{ \prime}}\), gives
\[r.h.s. =\hat{\phi}_{\mathbf{0}}(\beta,\mathbf{u};\mathbf{x})\hat{\phi}_{\mathbf{0}}( \beta,\mathbf{u};\mathbf{y})^{-1}\sum_{\mathbf{m}\in\mathbb{N}_{0}^{n}}\,e^{-\mathcal{E}( \mathbf{m})(t+t^{\prime})}\hat{\phi}_{\mathbf{m}}(\beta,\mathbf{u};\mathbf{x})\hat{\phi}_{\bm {m}}(\beta,\mathbf{u};\mathbf{y})\] \[=l.h.s..\]
### Exceptional Cases
So far the parameter values are assumed to be generic. But obviously at certain parameter settings, the above hypergeometric formula (4.11) for the polynomials \(\{P_{\mathbf{m}}(\beta,\mathbf{u};\mathbf{x})\}\) could go wrong. By construction, the eigenvalues \(\{\lambda_{i}\}\) are positive and \(\{c_{i}\}\) are also positive. Therefore, if the situation \(\lambda_{j}=c_{i}^{-1}\) happens at some parameter setting, it leads to the breakdown of the generic theory as \(u_{ij}=\frac{\lambda_{j}}{\lambda_{j}-c_{i}^{-1}}\) (5.31) is ill-defined.
#### 5.6.1 \(n=2\) Case
The situation is most clearly seen when \(n=2\). In this case the two eigenvalues are the roots of the quadratic equation
\[\lambda^{2}+(2-c_{1}^{-1}-c_{2}^{-1})\lambda+(1-c_{1}^{-1})(1-c_{2 }^{-1})-1=0,\] \[\lambda_{1}=\frac{1}{2}(-2+c_{1}^{-1}+c_{2}^{-1}-\Delta),\quad \lambda_{2}=\frac{1}{2}(-2+c_{1}^{-1}+c_{2}^{-1}+\Delta),\] \[\Delta^{2}=4+(1/c_{1}-1/c_{2})^{2}.\]
When \(c_{1}=c_{2}=c\), the eigenvalues are rational,
\[c_{1}=c_{2}=c\implies\lambda_{1}=-2+1/c>0,\quad\lambda_{2}=1/c,\]
and the singular situation occurs. That is, the general formula (4.11) fails. In this case, the degree 1 solution of \(\widetilde{\mathcal{H}}\) (5.22) corresponding to the eigenvalue \(\lambda_{2}=1/c\) is
\[P_{\boldsymbol{e}_{1}}(\boldsymbol{x})=\text{const}\times\big{(}x_{1}-x_{2}\big{)}.\]
That is, the constant part vanishes and the assumption that degree one solutions have unit constant part (5.25) simply fails. The situation is similar for general \(n\), as stated by the following
**Theorem 5.7**: _When some of the parameters \(\{c_{i}\}\) coincide, the hypergeometric formula (4.11) for the \(n\)-variate Meixner polynomials does not apply. But the solutions of the difference equations \(\widetilde{\mathcal{H}}\) (5.22) still constitute the \(n\)-variate orthogonal polynomials._
This is rather easy to see. If \(c_{j}=c_{k}=c\), the matrix \(c^{-1}I_{n}-F(\boldsymbol{c})\) (5.28) has \((1,1,\ldots,1)^{T}\) as both its \(j\)-th and its \(k\)-th column, thus the characteristic polynomial \(\mathcal{F}(\lambda)\) vanishes at \(\lambda=c^{-1}\). In this case it is easy to show
\[\widetilde{\mathcal{H}}(x_{j}-x_{k})=\frac{1}{c}(x_{j}-x_{k}).\]
When there exist \(k\) identical \(c_{i}\)'s, \(\mathcal{F}(\lambda)\) has a factor \((\lambda-c_{i}^{-1})^{k-1}\).
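This degenerate behaviour is easy to verify symbolically for \(n=2\). A minimal sympy sketch (the value of \(c\) is an illustrative assumption):

```python
import sympy as sp

c = sp.Rational(1, 5)                                # c_1 = c_2 = c < 1/2, so |c| < 1
lam = sp.symbols('lam')
F = sp.Matrix([[-1 + 1/c, -1], [-1, -1 + 1/c]])      # F(c) of (5.28) with coinciding parameters
print(sp.factor((lam*sp.eye(2) - F).det()))          # (lam - 3)*(lam - 5): the roots 1/c - 2 and 1/c

# x1 - x2 is an eigenpolynomial of H~ (5.22) with eigenvalue 1/c
beta, x1, x2 = sp.symbols('beta x1 x2')
f = lambda a, b: a - b
Ht = (beta + x1 + x2)*((f(x1, x2) - f(x1 + 1, x2)) + (f(x1, x2) - f(x1, x2 + 1))) \
     + (x1/c)*(f(x1, x2) - f(x1 - 1, x2)) + (x2/c)*(f(x1, x2) - f(x1, x2 - 1))
print(sp.simplify(Ht - (1/c)*f(x1, x2)))             # -> 0
```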
**Theorem 5.8**: **Distinct parameters \(\{c_{j}\}\) are necessary**
_All the parameters \(\{c_{j}\}\) must be distinct for the hypergeometric formula (4.11) for the \(n\)-variate Meixner polynomials to hold._
**Remark 5.9**: _It is a big challenge to derive a general formula of \(n\)-variate Meixner polynomials including all these exceptional cases._ |
2304.11544 | A short note on coproducts of abelian pro-Lie groups | The notion of conditional coproduct of a family of abelian pro-Lie groups in
the category of abelian pro-Lie groups is introduced. It is shown that the
cartesian product of an arbitrary family of abelian pro-Lie groups can be
characterized by the universal property of the conditional coproduct. | Wolfgang Herfort, Karl H. Hofmann, Francesco G. Russo | 2023-04-23T05:34:22Z | http://arxiv.org/abs/2304.11544v1 | # A short note on coproducts of abelian pro-Lie groups
###### Abstract.
The notion of _conditional coproduct_ of a family of abelian pro-Lie groups in the category of abelian pro-Lie groups is introduced. It is shown that the Cartesian product of an arbitrary family of abelian pro-Lie groups can be characterized by the universal property of the _conditional coproduct_.
Key words and phrases: Pro-Lie groups, coproducts

_Mathematics Subject Classification 2020:_ 22E20; 22A05

In [1] the second author and S. Morris have provided criteria for when the (co)product of a family of locally compact abelian groups exists. For profinite abelian groups the product of any family \((A_{i})_{i\in I}\) of profinite groups exists and agrees with the cartesian product \(P:=\prod_{i\in I}A_{i}\). J. Neukirch has shown in [11] that \(P\) has a universal property resembling that of a coproduct (direct sum) in the category of (discrete) abelian groups. In the present note we present a version of his result, valid for cartesian products of the much larger family of _abelian pro-Lie groups_ (see [1, Ch. 5]). For formulating our result, we need to adapt the concepts, originally introduced for families of profinite groups by J. Neukirch in [11] (see also [1, D.3]), to the category of abelian pro-Lie groups.
**Definition 1**.: Let \((A_{j})_{\in J}\) be a family of topological groups, \(H\) a topological group, and \(\mathbb{F}=(\phi_{j})_{j\in J}\), \(\phi_{j}\colon A_{j}\to H\), a family of continuous homomorphisms. We say that \(\mathbb{F}\) is _convergent_, if for every identity neighborhood \(U\) of \(H\) the set \(J_{U}:=\{j\in J:\phi_{j}(A_{j})\not\subseteq U\}\) is finite.
_Example 2_.: For any family \((A_{j})_{j\in J}\) of topological groups let \(H=\prod_{j\in J}A_{j}\) be the cartesian product with the Tychonov topology. Let the family \(\mathbb{F}=(\tau_{j})_{j\in J}\) of natural morphisms \(\tau_{j}\colon A_{j}\to H\) be given by
\[\tau_{j}(a)=(a_{k})_{k\in J}\text{ with }a_{j}=a\text{ and }a_{k}=0\text{ otherwise.}\]
Then \(\mathbb{F}\) _is convergent_.
This follows immediately from the definition of the product topology on \(H\). The morphisms \(\tau_{j}\) are called the _natural embeddings_.
We define the _conditional coproduct_ by means of a universal property, resembling the one of the coproduct (direct sum) of abelian discrete groups:
**Definition 3**.: In a category \(\mathcal{A}\) of topological groups we call \(G\) a _conditional coproduct_ of the family \((A_{j})_{j\in J}\) of objects if there is a convergent family \(\tau_{j}\colon A_{j}\to G\), \(j\in J\) of morphisms such that for every convergent family of morphisms \(\psi_{j}\colon A_{j}\to H\), \(j\in J\) in \(\mathcal{A}\) there is a unique morphism \(\omega\colon G\to H\) such that \(\psi_{j}=\omega\circ\tau_{j}\) for all \(j\in J\). The morphisms \(\tau_{j}\) are called the _coprojections_ of the conditional coproduct.
We shall prove the following Theorem:
**Theorem 4**.: _In the category of abelian pro-Lie groups, the conditional coproduct of a family \((A_{j})_{j\in J}\) of abelian pro-Lie groups is the cartesian product \(P:=\prod_{j\in J}A_{j}\) for the canonical embeddings \(\tau_{j}\colon A_{j}\to P\)._
But first we secure the uniqueness of the conditional coproduct:
**Proposition 1**.: _If \(G\) and \(G^{\prime}\) are conditional coproducts of a family \((A_{j})_{j\in J}\) of topological groups in a category \(\mathcal{A}\) for the convergent families \(\tau_{j}\colon A_{j}\to G\) and \(\tau_{j}^{\prime}\colon A_{j}\to G^{\prime}\), \(j\in J\) of morphisms in \(\mathcal{A}\), then there is a natural isomorphism \(\lambda\colon G\to G^{\prime}\) such that \(\tau_{j}^{\prime}=\lambda\circ\tau_{j}\) for all \(j\in J\)._
Proof.: By Definition 3, since \(G\) is a conditional coproduct of the family \((A_{j})_{j\in J}\) with the coprojections \(\tau_{j}\), \(j\in J\), there is a unique morphism \(\lambda\colon G\to G^{\prime}\) such that
\[(\forall j\in J)\,\lambda\circ\tau_{j}=\tau_{j}^{\prime}\colon A_{j}\to G^{\prime}. \tag{1}\]
Likewise, since \(G^{\prime}\) is also a conditional coproduct of the family \((A_{j})_{j\in J}\) with the coprojections \(\tau_{j}^{\prime}\), \(j\in J\), there is a unique morphism \(\lambda^{\prime}\colon G^{\prime}\to G\) such that
\[(\forall j\in J)\,\lambda^{\prime}\circ\tau_{j}^{\prime}=\tau_{j}\colon A_{j} \to G. \tag{2}\]
Therefore, by (1) and (2), we have
\[(\forall j\in J)\,\tau_{j}=\lambda^{\prime}\circ\tau_{j}^{\prime}=\lambda^{ \prime}\circ\lambda\circ\tau_{j}\colon G\to G. \tag{3}\]
However, trivially we also have,
\[(\forall j\in J)\,\tau_{j}=\operatorname{id}_{G}\circ\tau_{j}\colon G\to G. \tag{4}\]
Therefore, by the uniqueness in Definition 3, from (3) and (4) we have
\[\lambda^{\prime}\circ\lambda=\operatorname{id}_{G}. \tag{5}\]
Now by exchanging the roles of \(G\) and \(G^{\prime}\) we also have
\[\lambda\circ\lambda^{\prime}=\operatorname{id}_{G^{\prime}}. \tag{6}\]
Hence by (5) and (6), \(\lambda\) is an isomorphism, which we had to show.
We note that for profinite groups the _conditional coproduct_ agrees with the _free pro-\(\mathcal{C}\) product_ for \(\mathcal{C}\) the variety of abelian profinite groups, see [12, 13].
Let \(\mathcal{A}\) be the category of topological abelian pro-Lie groups (i.e. groups which are projective limits of Lie groups: see [10, pp. 160ff. and Chapter 5]). Each pro-Lie group \(G\) has a filterbasis \(\mathcal{N}(G)\) of closed normal subgroups such that \(G/N\) is a Lie group, and \(G\cong\lim_{N\in\mathcal{N}(G)}G/N\). (See e.g. [10, p.160, Definition A.])
Recall that every locally compact abelian group is a pro-Lie group, every almost connected locally compact group is a pro-Lie group by Yamabe's Theorem. Trivially, then, every profinite group is a pro-Lie group. Every cartesian product \(P=\prod_{j\in J}A_{j}\) of pro-Lie groups \(A_{j}\) is itself a pro-Lie group.
**Lemma 5**.: _Let \(H\) be an abelian pro-Lie group and \(\mathbb{F}\) be a convergent family of morphisms \(\psi_{j}:A_{j}\to H\). Then, for each \(N\in\mathcal{N}(H)\), the set \(\{j\in J:\psi_{j}(A_{j})\not\subseteq N\}\) is finite._
Proof.: Let \(N\in\mathcal{N}(H)\). The Lie group \(H/N\) has an identity neighborhood \(V\) in which \(\{0\}\) is the only subgroup of \(H/N\). Now let \(p\colon H\to H/N\) be the quotient morphism and set \(U=p^{-1}(V)\).
Therefore \(\psi_{j}(A_{j})\not\subseteq N\) implies \(\psi_{j}(A_{j})\not\subseteq U\). However, the set of \(j\) satisfying this condition is finite by Definition 1, applied to the convergent family \(\mathbb{F}=(\psi_{j})_{j\in J}\). This completes the proof of the Lemma.
Proof of Theorem 4.: The uniqueness, up to isomorphism, of the conditional coproduct follows from Proposition 1.
Thus, according to Definition 3, we need to show that given an abelian pro-Lie group \(H\) and a convergent family of morphisms \(\psi_{j}:A_{j}\to H\) then there exists a unique morphism \(\omega:P\to H\) with \(\psi_{j}=\omega\circ\tau_{j}\) for all \(j\in J\).
We note first that every \(x\in P\) has a presentation
\[x=(\tau_{j}(a_{j}))_{j\in J} \tag{7}\]
for unique elements \(a_{j}\in A_{j}\). Denote by \(\mathcal{N}(H)\) the set of all closed subgroups of \(H\) such that \(H/N\) is a Lie group. It is a consequence of
[10, Theorem 3.27] that \(\mathcal{N}(H)\) is a filter basis of closed subgroups of \(H\) and that
\[H\cong\varprojlim_{N\in\mathcal{N}}H/N \tag{8}\]
algebraically and topologically.
Fix \(N\in\mathcal{N}(H)\) and let \(J_{N}:=\{j\in J:\psi_{j}(A_{j})\not\leq N\}\). Then, by Lemma 5, the set \(J_{N}\) is finite and, taking the presentation Eq. (7) for \(x\in P\) and \(\psi_{j}(A_{j})\leq N\) for all \(j\notin J_{N}\) into account, we obtain a well-defined morphism \(\omega_{N}:P\to H/N\) by letting
\[\omega_{N}(x):=\sum_{j\in J_{N}}\psi_{j}(a_{j})+N. \tag{9}\]
For subgroups \(M\leq N\) of \(H\), both in \(\mathcal{N}(H)\), let \(\pi_{NM}:H/M\to H/N\) denote the canonical epimorphism.
For \(M\leq N\) one obtains from Eq. (9) the compatibility relation
\[\omega_{N}=\pi_{NM}\circ\omega_{M}, \tag{10}\]
as depicted in the following diagram:
Taking the relations in Eq. (10) into account we see that the universal property of the inverse limit \(H=\varprojlim_{N\in\mathcal{N}(H)}H/N\) implies the existence of a unique continuous homomorphism \(\omega:P\to H\), which satisfies the desired relations
\[(\forall j\in J)\quad\psi_{j}=\omega\circ\tau_{j}.\qed \tag{11}\]
### Notes
A _coproduct_ of a family of objects in a category \(\mathcal{A}\) is a _product_ in the category obtained by reversing all arrows. Curiously, while products are usually considered simple concepts, coproducts are often tricky in many categories \(\mathcal{A}\) other than the category of abelian groups. Therefore, in conclusion of this note, a few general comments may be in order.
One of the early surprises is that in the familiar _category of groups_, the coproduct of \(\mathbb{Z}(2)\) and \(\mathbb{Z}(3)\) is \(\operatorname{PSL}(2,\mathbb{Z})\).
In any category \(\mathcal{A}\) with a well-introduced dual category, such as _the category of locally compact abelian groups_, the coproduct \(\coprod_{j}A_{j}\) of a
family \(A_{j}\), \(j\in J\), is naturally isomorphic to the dual \(\widehat{P}\) of \(P:=\prod_{j}\widehat{A_{j}}\), the product of its duals.
Even in special cases, such as the case of _compact_ abelian groups \(A_{j}\), the result is a complicated coproduct, since the character group of an infinite product of discrete abelian groups may be hard to deal with.
If \(\mathcal{A}\) is _the category of profinite abelian groups_, then its dual is the category \(\mathcal{T}\) of abelian torsion groups. The product in \(\mathcal{T}\) of a family of torsion groups \(T_{j}\) is the torsion group \(\operatorname{tor}(\prod_{j}T_{j})\) of the cartesian product. So by the time we arrive at the coproduct of, say, an unbounded family of cyclic groups \(A_{j}\) in \(\mathcal{A}\), we may have a complicated object \(\coprod_{j\in J}A_{j}\) in our hands.
Therefore, any special situation in which a coproduct is lucid may be welcome, even when its scope of application is restricted. An example of such a situation is our present _conditional coproduct_ in the rather large yet reasonably well-understood category of abelian pro-Lie groups (see Chapter 5 of [10]). The authors encountered such a coproduct in a study of certain locally compact abelian \(p\)-groups. Our conditional coproduct covers a somewhat restricted supply of families of morphisms which we call "convergent". Here we encounter the rather extraordinary event that for each such family _the conditional coproduct agrees with the cartesian product_. Classically, one is familiar with coproducts agreeing with products in the category of finite abelian groups which, after all, is rather special.
|
2307.08545 | Electromagnetically Induced Transparency and Optical Pumping in the
Hyperfine Paschen-Back Regime | We report spectroscopy experiments of rubidium vapor in a high magnetic field
under conditions of electromagnetically induced transparency (EIT) and optical
pumping. The 1.1 T static magnetic field decouples nuclear and electronic spins
and shifts each magnetic state via the Zeeman effect, allowing us to resolve
individual optical transitions of the D$_2$ line in a Doppler-broadened medium.
By varying the control laser power driving one leg of a spectrally isolated
$\Lambda$ system we tune the vapor from the EIT regime to conditions of
Autler-Townes line splitting. The resulting spectra conform to simple
three-level models demonstrating the effective simplification of the energetic
structure. Further, we quantify the viability of state preparation via optical
pumping on nuclear spin-forbidden transitions. We conclude that the
``cleanliness'' of this system greatly enhances the capabilities of quantum
control in hot vapor, offering advantages in a broad variety of quantum
applications plagued by spurious light-matter interaction processes, such as
atomic quantum memories for light. | Roberto Mottola, Gianni Buser, Philipp Treutlein | 2023-07-17T15:05:14Z | http://arxiv.org/abs/2307.08545v2 | # Electromagnetically induced transparency and optical pumping in the hyperfine Paschen-Back regime
###### Abstract
We report spectroscopy experiments of rubidium vapor in a high magnetic field under conditions of electromagnetically induced transparency (EIT) and optical pumping. The 1.1 T static magnetic field decouples nuclear and electronic spins and shifts each magnetic state via the Zeeman effect, allowing us to resolve individual optical transitions of the D\({}_{2}\) line in a Doppler-broadened medium. By varying the control laser power driving one leg of a spectrally isolated lambda system we tune the vapor from the EIT regime to conditions of Autler-Townes line splitting (ATS). The resulting spectra conform to simple three-level models demonstrating the effective simplification of the energetic structure. Further, we quantify the viability of state preparation via optical pumping on nuclear spin-forbidden transitions. We conclude that the "cleanliness" of this system greatly enhances the capabilities of quantum control in hot vapor, offering advantages in a broad variety of quantum applications plagued by spurious light-matter interaction processes, such as atomic quantum memories for light.
## I Introduction
Since the advent of laser cooling, experiments in AMO physics have benefited from an unprecedented degree of control over matter [1]. With natural linewidth limited atomic transitions and comparably narrow lasers, it is easily possible to lift energetic degeneracies in small fields and exert precise control over quantum states when atoms are cold. Generally speaking, the property that enables this is resolution of individual transitions, i.e. the energetic splitting between next nearest transitions is significantly greater than the transitions' linewidths. In hot vapor, on the other hand, atomic lines are inhomogeneously broadened by atomic motion, and commonly further subject to collisional broadening [2]. At room temperature, spectra in vapor cells typically have convolutional linewidths of at least 500 MHz [3]. This prevents cold-atom-level control over matter, as even the hyperfine structure of atomic excited states is often lost to line broadening [4]. On some alkali lines, polarization selection rules can yield desirable restrictions on possible light-matter interactions in hot vapor [5], but at the same time restrict the feasible operating regimes of applications such as atomic single-photon sources [6] or quantum memories [7]. By applying sufficiently high magnetic fields, however, another approach presents itself. The Zeeman effect lifts energetic degeneracy, and through further decoupling of the atom's nuclear spin states individual transitions once again become well resolved. This hyperfine Paschen-Back (HPB) regime is therefore a promising arena for atomic physics in hot vapor, as a significant hurdle to direct optical manipulation of atomic quantum states is removed.
Electromagnetically induced transparency (EIT) [8] and the related but distinct [9; 10] phenomenon of Autler-Townes splitting (ATS) [11] have been extensively studied in low and zero fields. These effects have wide ranging practical uses, for instance in precision metrology [12], quantum memories [13], and tailored photon generation [14]. Moreover, multi-level atomic structure can produce efficient non-linear effects [15]. In high magnetic fields, EIT has been investigated in ladder-schemes [16], including Rydberg based ones [17; 18], and V-scheme configurations [19], as well as in a diamond-scheme in the context of four-wave mixing [20]. Furthermore, studies of EIT in a lambda-scheme at intermediate fields (up to 170 mT for \({}^{85}\)Rb) have been reported on in [21]. Moreover, a detailed investigation of optically pumping Cs vapor in tesla-order magnetic fields, including the effect of forbidden transitions, was performed by Olsen _et al._ in [22]. Nevertheless, a thorough study of lambda-EIT/ATS in a hot, high optical depth ensemble deeply in the HPB regime is a critical prerequisite for putting this system to use in applications such as quantum memories, and has so far been outstanding.
In this article we study hot \({}^{87}\)Rb vapor in a tesla-order magnetic field, with the purpose to isolate a three-level system in the atomic energy structure of either D line. We investigate EIT and atomic polarizability on the D\({}_{2}\) line at 780 nm in this regime, characterizing the suitability of this system for quantum technological applications beyond sensing. In a concurrent article [23], we put our conclusions to the test with a quantum memory experiment in a microfabricated atomic vapor cell.
## II Hyperfine Paschen-Back regime
We consider the D lines of alkali atoms, which are frequently used to study light-matter interactions. Both the ground as well as the excited states present a hyperfine structure, which is usually only partially resolved
in simple absorption spectra, and the multiple magnetic sublevels are generally unresolved. Here we mainly focus on the \({}^{87}\)Rb D\({}_{2}\) line.
The Hamiltonian for an alkali atom in an external static magnetic field is given by
\[\hat{H}=\hat{H}_{0}+\hat{H}_{\mathrm{hfs}}+\hat{H}_{Z}. \tag{1}\]
The first term \(\hat{H}_{0}\) describes the coarse atomic structure, \(\hat{H}_{\mathrm{hfs}}=A_{\mathrm{hfs}}\,\hat{\mathbf{I}}\cdot\hat{\mathbf{J}}+ \hat{H}_{\mathrm{qp}}\) describes the hyperfine coupling, with the magnetic dipole constant \(A_{\mathrm{hfs}}\) and the electric quadrupole Hamiltonian \(\hat{H}_{\mathrm{qp}}\), which is non-zero for the \(5\,^{2}\mathrm{P}_{3/2}\) term. The last term of Eq. (1),
\[\hat{H}_{Z}=\mu_{B}\left(g_{J}\hat{\mathbf{J}}+g_{I}\hat{\mathbf{I}}\right) \cdot\mathbf{B},\]
describes the Zeeman interaction of the atom with a magnetic field. In this equation \(g_{J}\) and \(g_{I}\) are the g-factors for the total angular momentum of the electron and the nucleus, respectively.
By applying an external magnetic field the degeneracy of the hyperfine states can be lifted through the energy shift induced by the Zeeman effect. This splitting becomes larger as a function of the strength of the magnetic field \(\mathbf{B}\). For high magnetic fields, where the energy shift induced by the Zeeman interaction \(\Delta E_{\mathrm{Z}}\) becomes larger than the one caused by the hyperfine interaction \(\Delta E_{\mathrm{hfs}}\), we enter the HPB regime. Generally speaking, the condition
\[B\gg B_{0}=A_{\mathrm{hfs}}^{\mathrm{GS}}/\mu_{B} \tag{2}\]
delineates between regimes [24]. Here the ground-state hyperfine magnetic dipole constant \(A_{\mathrm{hfs}}^{\mathrm{GS}}\) is used in the condition as the ground states experience a larger hyperfine splitting compared to the excited states, ensuring that all atomic levels are in the HPB regime.
For atomic states with low principal quantum number \(n\), the Hamiltonian describing the interaction of the atoms with an external magnetic field can thus be reduced to \(\hat{H}_{Z}\) if condition (2) is fulfilled. In this limit the splitting between hyperfine levels grows linearly with \(\Delta E_{Z}=\left(g_{J}m_{J}+g_{I}m_{I}\right)\mu_{B}B\), and a change of \(1\,\mathrm{T}\) induces a frequency shift of \(\pm 14\,\mathrm{GHz}\) of the ground state sublevels.
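As a quick numerical illustration (with standard \({}^{87}\)Rb constants quoted from memory, to be treated as approximate), the crossover field \(B_{0}\) and the dominant ground-state Zeeman shifts at a field of 1.06 T can be estimated with a few lines of Python:

```python
import scipy.constants as const

h = const.h
mu_B = const.physical_constants['Bohr magneton'][0]

A_hfs_GS = h*3.417e9             # 87Rb 5S1/2 magnetic dipole constant, ~h x 3.417 GHz (approximate)
g_J = 2.0023                     # Lande factor of the 5S1/2 term (approximate)

B0 = A_hfs_GS/mu_B
print(f"B0 = A_hfs/mu_B ~ {B0*1e3:.0f} mT")                    # ~ 240 mT for 87Rb

B = 1.06                         # applied field in tesla
for m_J in (+0.5, -0.5):
    shift = g_J*m_J*mu_B*B/h     # dominant (electronic) Zeeman shift of the ground manifold
    print(f"m_J = {m_J:+.1f}: shift = {shift/1e9:+.1f} GHz")   # ~ +/-14.8 GHz at 1.06 T
```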
In the HPB regime the nuclear spin \(\mathbf{I}\) and the total angular momentum of the electron \(\mathbf{J}\) decouple. Consequently, the eigenvalue \(F\) of the total angular momentum \(\mathbf{F}=\mathbf{J}+\mathbf{I}\) of the atom and its projection \(m_{F}\) do not represent a good choice of quantum numbers to describe the system anymore. A convenient representation is in the \(\left|m_{J},m_{I}\right\rangle\) basis. For every value of \(m_{J}\) there are \(2I+1\)\(m_{I}\)-sublevels (see Fig. 1). Every state of the coupled basis can be represented as a linear combination of states in the uncoupled basis as
\[\left|F,m_{F}\right\rangle=\sum_{\begin{subarray}{c}m_{J},\\ m_{I}=m_{F}-m_{J}\end{subarray}}C_{m_{J},m_{I}}^{m_{F}}\left|m_{J},m_{I} \right\rangle.\]
At zero field, the constants \(C_{m_{J},m_{I}}^{m_{F}}\) are the Clebsch-Gordan coefficients. With increasing magnetic field one of the coefficients in the sum tends to unity while the others tend to zero. The states involved in each superposition are connected by gray lines in Fig. 1.
In Fig. 1 the energy levels of the \({}^{87}\)Rb \(5\,^{2}\mathrm{S}_{1/2}\), \(5\,^{2}\mathrm{P}_{1/2}\), and \(5\,^{2}\mathrm{P}_{3/2}\) states in the HPB regime are depicted with selected transitions. In this representation the allowed optical transitions are all vertical with \(\Delta m_{J}=0,\pm 1\) corresponding to \(\pi\) or \(\sigma^{\pm}\) polarized light, respectively. To a first approximation, the decoupling of \(\mathbf{J}\) and \(\mathbf{I}\) implies that optical transitions are only allowed between states within the same \(m_{I}\)-manifold, as the light does not couple to the nuclear spin. Together with the induced energy splittings, which are much larger than the Doppler-broadened linewidth of the vapor, a clean three-level system can be addressed.
Note that, even restricting the discussion to ground states, the required order of magnitude of magnetic field
Figure 1: Energy levels of \({}^{87}\)Rb in the HPB regime represented in the \(\left|m_{J},m_{I}\right\rangle\) basis. The light gray lines indicate the coupling between sublevels due to the hyperfine structure. The allowed transitions for one \(m_{I}\) manifold per D line are shown as solid lines. The ‘singly forbidden’ transitions arising from the residual coupling of the ground states are represented as dashed lines. The transitions coupling to the ground states involved in the superpositions with \(m_{F}=-1\) and \(m_{F}=1\) in the weak field scenario are visualized for the D\({}_{1}\) and the D\({}_{2}\) line, respectively. The energy splittings are not to scale.
strength for the Zeeman interaction to dominate varies widely. As an extreme example, consider that for the \({}^{7}\)Li D lines even the fine Paschen-Back regime, where the Zeeman interaction triumphs over the atomic fine structure, can be investigated with fields \(<1\) T [25]. In contrast, to realize such conditions in Rb, a magnetic field of 218 T would be required [26].
For any finite magnetic field, the states actually remain a superposition. Even at an applied field of 1 T, some residual coupling between \(\mathbf{J}\) and \(\mathbf{I}\) persists in the 5 \({}^{2}\)S\({}_{1/2}\) ground state (see, for instance, Ref. [27] for a precise quantification of the basis state superpositions). Indirect interaction of light with the nucleus through this residual \(\mathbf{J}\)-\(\mathbf{I}\) coupling allows transitions with \(\Delta m_{I}\neq 0\) to take place. We will refer to transitions with \(|\Delta m_{I}|=1\) as'singly forbidden'. They appear in manifolds of three in the spectrum and have a transition strength about 50-times weaker than the allowed transitions (\(\pi\)-polarization) at 1 T field strength. Following conservation of total angular momentum these transitions obey the relations \(\Delta m_{I}+\Delta m_{J}=0,\pm 1\) for \(\pi\) and \(\sigma^{\pm}\) polarized light, respectively. A complete representation of all allowed and singly forbidden transitions for the \({}^{87}\)Rb D\({}_{2}\) line can be found in the upper half of Fig. 2.
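The size of this residual admixture can be estimated by diagonalizing the ground-state Hamiltonian \(A_{\mathrm{hfs}}\,\hat{\mathbf{I}}\cdot\hat{\mathbf{J}}+\hat{H}_{Z}\) directly. The following minimal Python sketch for the \({}^{87}\)Rb \(5\,^{2}\mathrm{S}_{1/2}\) manifold (constants are quoted from memory and therefore approximate) prints, for each eigenstate, the dominant \(\left|m_{J},m_{I}\right\rangle\) component and the largest admixture amplitude of any other basis state:

```python
import numpy as np

A_hfs = 3.417e9                  # Hz, 87Rb ground-state magnetic dipole constant (approximate)
mu_B_h = 13.996e9                # Bohr magneton / h in Hz/T (approximate)
g_J, g_I = 2.0023, -0.000995     # Lande factors (approximate)
B = 1.06                         # tesla

def spin_ops(j):
    """Angular momentum matrices (Jx, Jy, Jz) for spin j, basis ordered m = j, ..., -j."""
    m = np.arange(j, -j - 1, -1)
    dim = len(m)
    Jz = np.diag(m).astype(complex)
    Jp = np.zeros((dim, dim), dtype=complex)
    for k in range(dim - 1):
        Jp[k, k + 1] = np.sqrt(j*(j + 1) - m[k + 1]*(m[k + 1] + 1))
    Jm = Jp.conj().T
    return (Jp + Jm)/2, (Jp - Jm)/2j, Jz

Ix, Iy, Iz = spin_ops(1.5)                     # nuclear spin I = 3/2
Jx, Jy, Jz = spin_ops(0.5)                     # electronic J = 1/2
idI, idJ = np.eye(4), np.eye(2)

IdotJ = sum(np.kron(Ja, Ia) for Ja, Ia in zip((Jx, Jy, Jz), (Ix, Iy, Iz)))
H = A_hfs*IdotJ + mu_B_h*B*(g_J*np.kron(Jz, idI) + g_I*np.kron(idJ, Iz))   # H/h in Hz

labels = [(mJ, mI) for mJ in (0.5, -0.5) for mI in (1.5, 0.5, -0.5, -1.5)]
evals, evecs = np.linalg.eigh(H)
for E, v in zip(evals, evecs.T):
    k = int(np.argmax(np.abs(v)))
    admix = np.sort(np.abs(v))[-2]             # largest amplitude on a "wrong" basis state
    print(f"E/h = {E/1e9:+7.2f} GHz  dominant |m_J,m_I> = {labels[k]}  admixture = {admix:.3f}")
```

The squared admixture amplitudes give a rough measure of the relative strength of the nuclear spin-forbidden lines discussed above.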
Figure 2 shows the computed spectrum of the D\({}_{2}\) transitions for a vapor cell with similar properties to the cell we used for the measurements presented below. The spectrum is computed analogously to how it is described in [28; 29]. The energy-level insets above the spectrum show the states involved in the corresponding line manifold, and are roughly ordered by transition frequency and strength.
## III Experimental apparatus
A Bruker B-E 10 electromagnet is used to generate the static, tesla-order magnetic field perpendicularly to the propagation axis of the light. In this geometry linearly horizontally (vertically) polarized light in the laboratory frame corresponds to \(\pi\) (an equal superposition of \(\sigma^{+}\) and \(\sigma^{-}\)) polarization. A microfabricated vapor
Figure 2: Computed spectrum of the \({}^{87}\)Rb D\({}_{2}\) line in a 1.06 T external magnetic field. The spectrum is simulated for a 2 mm thick cell, with 90 % enriched \({}^{87}\)Rb, 11 mbar of Ar buffer gas and an atomic temperature of 97 °C. The dipole-allowed transitions appear as quadruplets, while the singly forbidden transitions form manifolds of three. The insets in the upper half of the figure show the sublevels involved in the corresponding transition manifold. The insets are roughly arranged according to the transition frequency and OD. Solid and dashed lines illustrate allowed and singly forbidden transitions, respectively. The polarization of the transitions is color coded according to the legend.
cell with 2 mm internal thickness and a 5 mm diameter aperture is used for the experiments, see Fig. 3(a). The fabrication of this type of cells is described in [30, 31]. The cell is filled with \(\leq 90\,\%\) isotopically enriched \({}^{87}\)Rb and about 11 mbar of Ar buffer gas. To heat the cell we rely on infrared lasers and absorptive colored filter glass (Schott RG9), which is highly transmissive at the Rb D-line wavelengths. The vapor cell is sandwiched between two 2 mm thick pieces of this filter glass, as shown in Fig. 3(b). Each side is illuminated by a low-cost, multimode, telecom laser (Seminex 4PN-108). With this technique the atoms can be heated to temperatures \(>130\,\mathrm{\SIUnitSymbolCelsius}\). The atomic temperature is determined spectroscopically. This heating technique has previously been successfully implemented in vapor based magnetometers, where its efficiency was optimized to allow for low-power operation [32]. In our setup no effort was made so far to render the heating more efficient, and on the order of 1 W of optical power is required to reach operational temperatures.
The spectroscopic setup used for the measurements presented below is shown in Fig. 3(d). Widely tunable lasers (ECDL and DFB) on either D line are used as weak probes. The control light is generated by a DFB laser that is amplified with a tapered amplifier (TA). The resulting optical power gives us the option to explore a large range of Rabi frequencies. A neutral density gradient wheel is used to vary the control power. The control beam is focused to a \(1/e^{2}\)-diameter of approximately 550 um in the center of the cell.
Probe and control beams are combined on a polarizing calcite prism before the vapor cell. The overlap of the beams is ensured by coupling each of them into the same (auxiliary) single-mode fiber and regularly optimizing the alignment. A second calcite prism is used to discriminate the strong control beam from the probe by polarization. At least eight orders of magnitude of suppression can be achieved this way. Finally, the probe is detected by a photo diode, equipped with a bandpass filter to remove ambient light.
For state preparation a further DFB laser is dedicated to optical pumping. It is aligned to be counter-propagating with respect to the control beam and is coupled in through a Faraday circulator.
The frequencies of the various lasers are referenced with a wavelength meter (HighFinesse WS-7) through auxiliary fiber ports. Furthermore, a portion of the probe light is branched off directly after the laser for a Doppler-free saturation spectroscopy to yield an absolute frequency reference.
## IV Absorption spectroscopy
A simple spectroscopic measurement of the Rb vapor in an external field of 1.06 T already reveals that single transitions can be resolved in the Doppler broadened medium. The large energy splitting as well as the decoupling of **I** and **J** allow us to distinguish the hyperfine structure, even of the D\({}_{2}\) excited state, without needing to cancel the Doppler effect as in saturated absorption spectroscopy. The recorded Rb D line spectra are shown in Fig. 4. Panel (a) depicts the spectrum of the \({}^{87}\)Rb D\({}_{2}\) line. The blue trace shows the \(\pi\)-transitions, which are probed with a horizontally polarized laser. The \(\sigma^{+}\) and \(\sigma^{-}\) transitions, represented in red, are all driven by vertically polarized light. The asymmetries and incomplete transmission between the lines constituting one manifold are mostly due to residual \({}^{85}\)Rb from imperfect isotopic enrichment. In all cases the strongest singly forbidden transitions can be recognized. The full spectrum is manually stitched together from three separate measurements per polarization. In fact, the shown frequency range is larger than the tuning range achieved by a current scan of the DFB laser, and its operating temperature had to be changed to cover the whole range. Horizontally and vertically polarized probes are recorded separately. A Doppler-free saturation spectroscopy in a reference cell outside of the magnetic field serves as absolute frequency reference. This zero field spectrum covers a relatively small frequency range compared to the HPB spectra, making the frequency calibration most reliable near zero detuning. In order to account for frequency-dependent power variation, the transmission through the unheated
Figure 3: (a) Front and (b) top view of the vapor cell used in the experiments. The cell is sandwiched by two 2 mm-thick pieces of RG9 filter glass. (c) The ferromagnetic cores of the electromagnet constrain the optical and spatial access around the cell. (d) Schematic of the experimental spectroscopic setup. DFB, distributed feedback (laser); 90:10, beam sampler with the specified splitting ratio; WM, wavelength-meter; PBS, polarizing beam-splitter; ND, neutral density (filter); TA, tapered amplifier; \(\lambda/2\), half-wave plate; \(\lambda/4\), quarter-wave plate; BP, bandpass (filter); PD, photo diode. The labels \(P\) and \(C\) indicate the optical fiber connections for probe and control beams, respectively.
In order to account for frequency-dependent power variation, the transmission through the unheated vapor cell is recorded as well.
In panel (b) of Fig. 4 the spectrum of the \({}^{87}\)Rb D\({}_{1}\) line is shown for the same conditions. Here a commercial \(795\,\mathrm{nm}\) DFB laser serves as probe. It too is brought to the optical setup through the probe's fiber coupler (label \(P\) in Fig. 3(d)). Due to the reduced number of possible \(m_{J}\)-values, only four allowed transitions each for \(\sigma^{+}\) and \(\sigma^{-}\) are present in the spectrum. Also note that the frequency splitting between the two \(\pi\)-polarized quadruplets is larger compared to the D\({}_{2}\) line. This fact is due to the smaller Lande g-factor of the \(5\,^{2}\mathrm{P}_{1/2}\) term, which results in a smaller energy splitting of the excited states. As the D\({}_{1}\) line covers a narrower frequency range than the D\({}_{2}\) line, it is possible to record the entire spectrum with a single scan per polarization.
## V From EIT to ATS
We studied EIT in order to assess how well a three-level system can be isolated in the HPB regime within the more complex Rb D\({}_{2}\) level scheme. Transparency is induced in the ensemble by adding the control beam. The lambda scheme we used for this purpose is illustrated in Fig. 5(a). We choose the probe to be \(\pi\)-polarized and near resonance with the \(\left|m_{J}=\frac{1}{2},m_{I}=\frac{3}{2}\right\rangle\leftrightarrow\left|m_{ J}^{\prime}=\frac{1}{2},m_{I}^{\prime}=\frac{3}{2}\right\rangle\) transition. The control 'leg' of the lambda is consequently chosen to be the \(\sigma^{+}\) transition \(\left|m_{J}=-\frac{1}{2},m_{I}=\frac{3}{2}\right\rangle\leftrightarrow\left|m_ {J}^{\prime}=\frac{1}{2},m_{I}^{\prime}=\frac{3}{2}\right\rangle\). Figure 5(c) shows the spectrum resulting from a scan of the probe frequency around the eight allowed \(\pi\)-polarized transitions while the control frequency is kept fixed. A deep EIT feature is generated in the first absorption peak. Notably, high transparency is induced in the atomic vapor, almost reaching the transmission level through the ensemble far from resonance, and indeed it becomes unambiguously complete at higher control powers. Even in cold atoms complete transparency is only achievable in certain level-schemes [33], in particular only in lambda-schemes. In hot vapor a "clean" three-level system where the degree of control is high enough to optically address individual transitions is required to avoid masking EIT windows [34]. Furthermore, note that the ground state addressed by the control field is efficiently depleted by optical pumping, resulting in a "missing" \(\pi\) transition and in an increase in the absorption for the transition from the populated ground state.
The width of the induced transparency window, as well as the underlying physical process, varies as a function of the applied control power. At low control powers destructive interference between absorption pathways induces the transparency. In this EIT regime, the FWHM of the transparency window is given by \(\delta\omega_{\mathrm{EIT}}=|\Omega_{c}|^{2}/(2\Gamma\sqrt{2\mathrm{OD}})\)[8].
For high control powers, on the other hand, the transparency is caused by dressing of the states in the strong coupling regime. Transparency manifests as a gap between two separated absorption lines. In this ATS scenario, the width of the transparency is directly proportional to the control Rabi frequency.
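For a rough sense of scale, the short Python sketch below evaluates the EIT window width from the expression above and compares it with the ATS splitting \(\approx\Omega_{c}\); the natural linewidth and the optical depth used here are assumed example values rather than parameters extracted from this measurement.

```python
import numpy as np

# Illustrative parameters (assumptions, not fitted values from this experiment):
gamma = 2 * np.pi * 6.07e6      # Rb D2 natural linewidth Gamma (rad/s)
od = 50.0                       # assumed optical depth of the vapor

def eit_fwhm(omega_c):
    """EIT window FWHM: |Omega_c|^2 / (2 * Gamma * sqrt(2 * OD))."""
    return omega_c**2 / (2 * gamma * np.sqrt(2 * od))

def ats_splitting(omega_c):
    """In the strong-coupling (ATS) limit the splitting equals Omega_c."""
    return omega_c

for omega_c in 2 * np.pi * np.array([10e6, 100e6, 950e6]):   # rad/s
    print(f"Omega_c/2pi = {omega_c / (2 * np.pi) / 1e6:7.1f} MHz | "
          f"EIT FWHM/2pi = {eit_fwhm(omega_c) / (2 * np.pi) / 1e6:8.2f} MHz | "
          f"ATS splitting/2pi = {ats_splitting(omega_c) / (2 * np.pi) / 1e6:7.1f} MHz")
```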
Figure 6(a) shows the absorption profile of the atoms as a function of the probe detuning from the D\({}_{2}\) line center for different control powers. The absorption profile transitions smoothly from the EIT regime to ATS, as expected for a textbook three-level system. Indeed, an initially narrow transparency window evolves into two distinct and well separated peaks with increasing control power, and at the maximal power of \(496(15)\,\mathrm{mW}\), a splitting of approximately \(1\,\mathrm{GHz}\) is reached. The consistent absence of the eighth spectral line illustrates that the control field depletes the ground state it addresses even at low powers (cf. Fig. 5(c)). Furthermore, with increasing power, the control field starts pumping the adjacent ground-state levels of the \(m_{J}=-\frac{1}{2}\) manifold as well. Well into the AT regime, the splitting, defined as the difference between the two absorption maxima, equals the Rabi frequency \(\Omega_{c}\). As \(\Omega_{c}\propto\sqrt{P}\), the dashed lines in the figure fit the measured splitting with a square root function.
Figure 4: Measured spectra of the \({}^{87}\)Rb (a) D\({}_{2}\) and (b) D\({}_{1}\) line. The blue (red) traces show the spectrum for horizontally (vertically) linearly polarized light in the lab frame corresponding to \(\pi\) (\(\sigma^{+}\) and \(\sigma^{-}\)) transitions. Even though the atomic ensemble is Doppler-broadened, in the HPB regime the single transitions can be individually resolved. For both lines the strongest singly forbidden transitions, marked with arrows, can be recognized in the spectra. The arrows’ colors correspond to the trace of interest. In order to give a common sense of scale, the spectrum of a reference cell filled with natural Rb, outside of the magnetic field, is added as black trace (amplitude arbitrarily scaled). Note the difference in horizontal ranges between panels.
By varying the control frequency we can investigate distinct three-level systems and individually address them. We scan the detuning of the control field with respect to the \(\sigma^{+}\) transition coupling to the \(|m_{J}=-\frac{1}{2},m_{I}=\frac{3}{2}\rangle\) ground state over several GHz. Experimental parameters are kept as in the previous measurement, with \(485\,\mathrm{mW}\) (near maximum) control power. The data presented in Fig. 6(b) show avoided crossings, the typical signature of dressed states [11].
For zero control detuning the minimal splitting is achieved and the scenario from Fig. 6(a) is reproduced, while at larger detunings the splitting increases. In the asymptotic limit, the doublet is composed of a large peak at the bare transition frequency, and a smaller peak approximately moving along a line of unity slope in the control frequency. The amplitude of this second peak decreases with detuning. Within the plotted control detuning range, the control field becomes resonant with three out of four \(\sigma^{+}\) transitions coupling to \(m_{J}=-\frac{1}{2}\) levels, closing lambda-systems with the probe as it is scanned. Thus, three avoided crossings and the asymptotic tail of a fourth one can be identified in Fig. 6(b). If the control frequency were red detuned further, the same behavior would be expected for the probe transitions coupling to the \(m_{J}=-\frac{1}{2}\)-manifold.
Each avoided crossing is modeled as an ideal three-level system by the two branches of a hyperbola given by \(\Delta_{\pm}=\frac{1}{2}\left(\Delta_{c}\pm\sqrt{\Delta_{c}^{2}+\Omega_{c}^{2}}\right)\), represented as dashed lines in Fig. 6(b). This simple model is in good agreement with the data, illustrating that the various lambda-schemes addressed by the lasers can be treated as separate three-level systems. When both the control and probe fields are tuned to the bare transition frequencies the minimum splitting, corresponding to \(\Omega_{c}\), is achieved. In the model we set \(\Omega_{c}=2\pi\times 950\,\mathrm{MHz}\).
Figure 5: (a) Energy scheme of the total electron angular momentum levels of the \(m_{I}=\frac{3}{2}\)-manifold. The lambda scheme used for the investigation of EIT and ATS in the HPB regime is shown. The detuning of the probe and control field are labeled \(\Delta_{p}\) and \(\Delta_{c}\), respectively. (b) Level scheme showing the scheme applied to polarize the nuclear spin of the ensemble. The transitions driven by the pump lasers are drawn as straight lines, differentiating between allowed (solid) and singly forbidden (dashed) transitions. The radiative decay channels are depicted by undulated lines. (c) Spectrum of the eight allowed \(\pi\)-polarized transitions while the control beam is turned on with a CW power of \(4.5\,\mathrm{mW}\). EIT can be observed in the first peak, while the eighth peak from the absorption spectra above is not present since the control depletes the ground state it addresses. The lower (higher) frequency manifold corresponds to the transitions involving the \(m_{J}=\frac{1}{2}\)\((-\frac{1}{2})\) ground states, while the labels on top indicate the \(m_{I}\) values of the ground states involved in the respective transitions. These data correspond to the lowest control power setting in Fig. 6(a).
Figure 6: (a) Measured probe absorption as a function of probe detuning for different control powers showing the transition from EIT to the AT regime. The maximum splitting has a width of approximately \(1\,\mathrm{GHz}\). The dashed lines represent a square root fit of the induced transparency width. (b) Measurement of the probe absorption as a function of the probe detuning for different control detunings. The structure of three avoided crossings, typical signature for strong coupling, can be recognized, while the tail of a fourth anti-crossing can be seen. The dashed lines represent the hyperbolas expected for a three level system. The control detuning is defined with respect to the \(\sigma^{+}\) transition coupling to the \(|m_{J}=-\frac{1}{2},m_{I}=\frac{3}{2}\rangle\) ground state. In both plots the horizontal lines represent the sampled control powers/detunings.
Along the horizontal axis, which is calibrated with Doppler-free saturation spectroscopy, the hyperbolas are centered around the corresponding, unperturbed probe frequency. The vertical offset is chosen by maximizing the overlap of the data and the model. The model diverges from the data at low frequencies, most likely due to a nonlinear frequency scan of the laser. In fact, all available absolute frequency calibration points are close to the central region of the probe frequency scan, deteriorating the reliability of the horizontal axis at the edges of the scan range.
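A minimal numerical illustration of this dressed-state model is sketched below (not part of the analysis used for the figure); it evaluates the two branches and the resulting splitting for the quoted \(\Omega_{c}\), with the control detunings chosen arbitrarily.

```python
import numpy as np

omega_c = 2 * np.pi * 950e6          # control Rabi frequency (rad/s), as quoted above

def branches(delta_c):
    """Dressed-state branches Delta_pm = (Delta_c +- sqrt(Delta_c^2 + Omega_c^2)) / 2."""
    root = np.sqrt(delta_c**2 + omega_c**2)
    return 0.5 * (delta_c + root), 0.5 * (delta_c - root)

for dc_mhz in (-2000, -1000, 0, 1000, 2000):   # control detuning / 2pi in MHz
    dc = 2 * np.pi * dc_mhz * 1e6
    plus, minus = branches(dc)
    print(f"Delta_c/2pi = {dc_mhz:6d} MHz -> splitting/2pi = "
          f"{(plus - minus) / (2 * np.pi) / 1e6:7.1f} MHz")
# The splitting sqrt(Delta_c^2 + Omega_c^2) is minimal (= Omega_c) at Delta_c = 0.
```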
## VI Nuclear spin pumping
In the HPB regime the decoupling of \(\mathbf{J}\) and \(\mathbf{I}\) divides the atoms into \(2I+1\) separate manifolds, which are well isolated from each other. Furthermore, the induced energy shifts, mainly of the ground states, result in distinct transition frequencies for the equivalent optical transitions within different manifolds. The frequency of an incoming radiation field can thus be tuned to be resonant with and therefore address atoms within a single nuclear spin manifold.
Although a magnetic field above \(1\,\mathrm{T}\) puts the atomic ensemble well into the HPB regime by Eq. (2), some coupling of the ground state sublevels persists at such fields, with superposition probability amplitudes on the order of a few percent (see [27]). Under our conditions it is therefore possible to exploit singly forbidden transitions to partially polarize the nuclear spin of the atoms. In the dark, all ground states in atomic vapors are equally populated due to their thermal population distribution. In addition to standard optical pumping on dipole-allowed transitions within a nuclear spin manifold, atoms can be transferred between them by driving these forbidden transitions. Thus produced population imbalances increase the OD of a selected manifold for a fixed atomic temperature. This is advantageous for processes like coherent absorption, which rely on optical depth for efficiency but are disturbed by collisions [35]. With increasing field strength, the viability of this approach diminishes; however, fuller decoupling also ensures that, for whatever interaction is under study, atoms in "wrong" \(m_{I}\) manifolds effectively act as a mere background gas, which naturally mitigates the need for highly polarized ensembles with regards to the suppression of spurious optical processes.
DFB diodes, available at the Rb wavelength, and digital controllers for butterfly laser diodes constitute affordable, ready-to-use devices that allow for easy state preparation with a dedicated pump laser for each pump transition. Using dedicated pump lasers avoids involuntarily creating coherences between the optical fields, which could inadvertently prepare the atoms in dark states [36].
For a proof-of-principle test of a combined pumping scheme, as shown in Fig. 5(b), three pump lasers were utilized. The primary pump drives the same transition as the control in the previous sections (depleting the \(\ket{m_{J}=-\frac{1}{2},m_{I}=\frac{3}{2}}\) state with \(\sigma^{+}\) polarized light). A further DFB laser addresses the singly forbidden \(\sigma^{-}\) transition coupling the states \(\ket{m_{J}=\frac{1}{2},m_{I}=\frac{1}{2}}\leftrightarrow\ket{m_{J}^{\prime}=- \frac{3}{2},m_{I}^{\prime}=\frac{3}{2}}\) (corresponding to the leftmost transition in Fig. 2). This forbidden pump transfers atomic population from \(m_{I}=\frac{1}{2}\) to \(m_{I}=\frac{3}{2}\). Finally, a third pump laser drives the allowed \(\sigma^{+}\) transition coupling the states \(\ket{m_{J}=-\frac{1}{2},m_{I}=\frac{1}{2}}\leftrightarrow\ket{m_{J}^{\prime}= \frac{1}{2},m_{I}^{\prime}=\frac{1}{2}}\), which repopulates the state addressed by the forbidden pump. The two additional pump lasers are aligned under a small angle with respect to the counter-propagating primary pump by using D-shaped mirrors (not shown in the setup sketch). All three pump lasers were operated at an optical power of \(20\,\mathrm{mW}\), limited by the damage thresholds of the fiber-based, fast optical switches intended for future experiments.
Figure 7: Experimental implementation of the (partial) pumping scheme involving allowed as well as ‘singly forbidden’ transitions. The measurements were performed at an approximate applied magnetic field of \(1.06\,\mathrm{T}\). In all three panels the spectrum of the unpumped atoms is shown in blue as a reference. The background of the reference spectrum is modeled as second-order polynomial used to correct all shown traces. (a) The effect of each individual pump laser on the atoms is shown: \(\sigma^{+}\) polarized \(m_{I}=\frac{3}{2}\)-pump (red), or \(\sigma^{-}\) polarized forbidden pump (green), or \(\sigma^{+}\) polarized \(m_{I}=\frac{1}{2}\)-pump (black). (b) The effect of two lasers combined is shown. The effect of the forbidden pump and the \(m_{I}=\frac{1}{2}\)-pump are depicted in red. The green trace shows the spectrum obtained by using the \(m_{I}=\frac{3}{2}\)-pump and the forbidden pump. Turning on both allowed pumps (\(m_{I}=\frac{1}{2}\) and \(m_{I}=\frac{3}{2}\)) results in the black trace. (c) All three pump lasers are used (red), as depicted in the level-scheme in Fig.5(b).
Figure 7 shows the spectrum of the probe, which is scanned over all allowed \(\pi\) transitions, in different pumping conditions. An unpumped spectrum (blue) of the probe is added to each panel for comparison. In order to correct for frequency dependent changes of the emitted probe power, the background of the unpumped spectrum is fitted with a second-order polynomial and used to correct all traces shown in the figure.
In panel (a) the spectra for the scenarios in which only one pump laser is on are shown. The ground state addressed by the corresponding pump laser is depleted, resulting in uniformly high transmission over the full line width of the respective transitions. The combined effect of each possible pair of pumping lasers applied together is plotted in panel (b), see caption for details. Finally, panel (c) shows the spectrum obtained when all three pump lasers are on. In this scenario, as expected, we observe the strongest absorption for the \(\left|m_{J}=\frac{1}{2},m_{I}=\frac{3}{2}\right\rangle\leftrightarrow\left|m^{\prime}_{J}=\frac{1}{2},m^{\prime}_{I}=\frac{3}{2}\right\rangle\)\(\pi\)-transition, indicating that the produced OD is higher than what is possible by solely pumping within \(m_{I}=\frac{3}{2}\) (cf. red trace in Fig. 7(a)). For a hint towards limiting effects, note that in panel (c) the absorption of the \(\left|m_{J}=\frac{1}{2},m_{I}=\frac{1}{2}\right\rangle\) state, addressed by the forbidden pump, appears to be unchanged compared to the unpumped spectrum. This is also the case in the data presented in red in panel (b), where the pump lasers only address atoms with \(m_{I}=\frac{1}{2}\). This indicates the presence of non-radiative nuclear spin relaxation processes, e.g., wall collisions, which compete with nuclear spin pumping.
From the measured spectra it appears that the \(\left|m_{J}=-\frac{1}{2},m_{I}=\frac{1}{2}\right\rangle\) ground state is depleted while \(\left|m_{J}=\frac{1}{2},m_{I}=\frac{1}{2}\right\rangle\) shows no significant difference with respect to the reference curve. We thus estimate that the implemented pumping scheme transfers about 50 % of the atomic population of \(m_{I}=\frac{1}{2}\) to \(m_{I}=\frac{3}{2}\). The presented pumping scheme can be expanded by pumping each manifold into the respective \(m_{J}=\frac{1}{2}\) state and by using the forbidden transitions to transfer the atomic population towards manifolds of higher \(m_{I}\)-values. With such an expansion, under the presented experimental conditions, and assuming that each manifold can be half depleted, it should thus be possible to achieve a total atomic polarization of \(\frac{15}{32}\), despite the thorough isolation of the nuclear states from one another.
## VII Summary and Outlook
In this article we have investigated EIT and optical pumping in warm Rb vapor in the HPB regime. We have demonstrated the isolation of a "clean" three-level lambda-system in a strong external magnetic field, and explored the atomic polarizability both within and across nuclear spin manifolds in these operating conditions. Furthermore, we studied the phenomena of EIT and AT line-splitting in this scheme, reproducing spectra matching the textbook ideal in typically far more complicated hot alkali vapor. Both aspects are of particular interest for the implementation of a broad class of quantum technologies in such hot vapors. These include, for instance, optical ground-state quantum memories, as memory efficiency is a function of optical depth [37] and an ideal three-level system suppresses noise processes [38; 35] and undesirable interference [39] that take place in the presence of more energy levels. Further, bi-photon generation schemes in ensembles, such as those based on spontaneous four wave mixing, would not only be realizable at the investigated optical depths (compare e.g. [40] and references therein), but cover an atypically large range of frequencies for atomic sources by transition selection and magnetic field tuning.
In summary our investigation yields positive prospects for putting hot atomic ensembles deep in the HPB regime to work in applications that require both well isolated lambda systems and moderate to high optical depths. Indeed, in a concurrent article [23], we perform a proof-of-principle quantum memory experiment in this system demonstrating low noise for storage at the single-photon level and an internal efficiency at the theoretical limit.
###### Acknowledgements.
The authors thank Gaetano Mileti for supplying us with the microfabricated vapor cell as well as Florian Gruet, who helped us characterize the cell's contents. We acknowledge financial support from the Swiss National Science Foundation through NCCR QSIT and from the European Union through the Quantum Flagship project macQsimal.
|
2303.05596 | Distributed Design of Controllable and Robust Networks using Zero
Forcing and Graph Grammars | This paper studies the problem of designing networks that are strong
structurally controllable, and robust simultaneously. For given network
specifications, including the number of nodes $N$, the number of leaders $N_L$,
and diameter $D$, where $2 \le D \le N/N_L$, we propose graph constructions
generating strong structurally controllable networks. We also compute the
number of edges in graphs, which are maximal for improved robustness measured
by the algebraic connectivity and Kirchhoff index. For the controllability
analysis, we utilize the notion of zero forcing sets in graphs. Additionally,
we present graph grammars, which are sets of rules that agents apply in a
distributed manner to construct the graphs mentioned above. We also numerically
evaluate our methods. This work exploits the trade-off between network
controllability and robustness and generates networks satisfying multiple
design criteria. | Priyanshkumar I. Patel, Johir Suresh, Waseem Abbas | 2023-03-09T21:44:29Z | http://arxiv.org/abs/2303.05596v1 | # Distributed Design of Controllable and Robust Networks using Zero Forcing and Graph Grammars
###### Abstract
This paper studies the problem of designing networks that are strong structurally controllable, and robust simultaneously. For given network specifications, including the number of nodes \(N\), the number of leaders \(N_{L}\), and diameter \(D\), where \(2\leq D\leq N/N_{L}\), we propose graph constructions generating strong structurally controllable networks. We also compute the number of edges in graphs, which are maximal for improved robustness measured by the algebraic connectivity and Kirchhoff index. For the controllability analysis, we utilize the notion of zero forcing sets in graphs. Additionally, we present graph grammars, which are sets of rules that agents apply in a distributed manner to construct the graphs mentioned above. We also numerically evaluate our methods. This work exploits the trade-off between network controllability and robustness and generates networks satisfying multiple design criteria.
Strong structural controllability, zero forcing sets, network design, network robustness.
## I Introduction
The distributed design of networks satisfying multiple design criteria is generally a challenging problem. From a network control perspective, controllability and robustness to failures are two of the vital design attributes. Network controllability is the ability to manipulate and drive the network to desired configurations (states) due to external control signals (inputs), which are injected into the network through a subset of nodes called _leaders_ (e.g., [1]). Network robustness has many interpretations, which can be categorized as functional and structural robustness [2]. The former is related to the network's functioning in the presence of noise and perturbations and later describes the ability to preserve the network's structural attributes despite node/edge failures [3, 4]. Interestingly, these two interpretations are related to each other in the context of network control systems and can be measured through common graph metrics, such as algebraic connectivity and Kirchhoff index \(K_{f}\)[5, 6, 7].
It is well studied that network controllability and robustness can be conflicting, i.e., for a given set of network parameters, networks requiring few leaders for complete controllability might exhibit poor robustness properties [8, 9]. For instance, for a given number of nodes \(N\), path graphs require a single leader node for complete controllability; however, they have minimum robustness. Similarly, fixing \(N\) and diameter \(D\), networks with maximum robustness (as measured by the algebraic connectivity and \(K_{f}\)) are clique chains [5, 6]; however, they require many leaders \((N-D)\) for complete controllability [8]. So, an important issue is, _how can we design networks in a distributed manner such that networks can be controlled with few leaders (inputs) and exhibit high robustness simultaneously?_ This question becomes more intriguing when the network controllability is considered in the strong structural sense due to computational complexity issues (e.g., [10, 11, 12, 13]). Network controllability generally depends on the edge weights; however, edge weights are inconsequential in the case of strong structural controllability (SSC), which essentially depends on the network structure and the leader set.
In this paper, we propose distributed designs of networks that are strong structurally controllable for a given number of nodes \(N\) and leaders \(N_{L}\). At the same time, these networks are robust due to maximal edge sets. For distributed construction of such networks, we utilize graph grammars [14, 15, 16]. To ensure SSC, we use the relationship between the notion of zero forcing in graphs and SSC [17, 18]. Our proposed designs are flexible in the sense that for fixed \(N\) and \(N_{L}\), they can produce graphs with varying graph parameters such as the diameter \(D\) and robustness, as measured by the Kirchhoff index and algebraic connectivity of graphs while ensuring that graphs remain strong structurally controllable. Thus, the network constructions exploit the trade-off between network controllability and robustness. Our main contributions are summarized below:
* For given \(N\) (total number of nodes) and \(N_{L}\) (number of leaders), we construct strong structurally controllable graphs with \(N_{L}\) leaders and maximal edge sets. For SSC, we utilize the idea of zero forcing sets.
* Our designs enable generating graphs with diameter \(D\), where \(2\leq D\leq N/N_{L}\), while ensuring that each such graph has a maximal edge set and is strong structurally controllable with \(N_{L}\) leaders. Since network diameter influences its robustness, we can attain networks with various robustness. We also numerically evaluate the robustness of such graphs using algebraic connectivity and Kirchhoff index metrics.
* Furthermore, we provide distributed ways to construct the above graphs using graph grammars, a set of rules that nodes implement locally to achieve the desired network structure. Finally, we numerically evaluate the proposed schemes.
Our problem setting is similar to the one in [8], albeit with some significant differences. We use a simpler zero forcing method to analyze strong structural controllability in networks, whereas [8] utilizes graph distances for this purpose. Additionally, for given \(N\) and \(N_{L}\), the graphs generated in [8] are of fixed diameter. We provide multiple
constructions enabling graphs with different diameters and robustness. Finally, we provide distributed constructions of networks using graph grammars, which are not in [8].
The rest of the paper is organized as follows: Section II presents preliminary ideas and sets up the problem. Section III is the main section providing graph constructions for given specifications along with the controllability and robustness analysis of the constructions. Section IV provides graph grammars to construct the proposed graphs in a distributed manner. Finally, Section V concludes the paper.
## II Preliminaries
### _Notations_
We consider a _multi-agent network system_ as an _undirected graph_\(\mathcal{G}=(\mathcal{V},\mathcal{E})\). The _vertex set_\(\mathcal{V}=\{v_{1},v_{2},\ldots,v_{N}\}\) represents the agents (nodes), and the edge set \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\) represents the edges between nodes. We denote the edge between nodes \(u\) and \(v\) by an unordered pair \((u,v)\). Node \(u\) is a _neighbor_ of node \(v\) if \((u,v)\in\mathcal{E}\). The number of nodes in the neighborhood of \(u\) is the _degree_ of \(u\). The _distance_ between nodes \(u\) and \(v\), denoted by \(d(u,v)\), is the number of edges in the shortest path between \(u\) and \(v\). The _diameter_ of \(\mathcal{G}\), denoted by \(D\), is the maximum distance between any two nodes in \(\mathcal{G}\). A _path_ of length \(k\) is a sequence of nodes that form a subgraph of \(\mathcal{G}\), \(P_{k}:=<u_{0},u_{1},u_{2},\cdots,u_{k}>\), where \((u_{i},u_{i+1})\in\mathcal{E},\;\forall i\in\{0,\cdots,k-1\}\). The _leader - follower_ system associated with graph \(\mathcal{G}\) is defined by the following state-space representation:
\[\dot{x}(t)=Mx(t)+Bu(t). \tag{1}\]
Here, \(x(t)\in\Re^{n}\) is the state of the system and \(M\in\mathcal{M}(\mathcal{G})\) is a system matrix, where \(\mathcal{M}(\mathcal{G})\) is a family of symmetric matrices associated with an undirected graph \(\mathcal{G}\) defined below.
\[\begin{split}\mathcal{M}(\mathcal{G})=\{M\in\Re^{n\times n}|M=M^ {T},\text{and for }i\neq j,\\ M_{ij}\neq 0\Leftrightarrow(i,j)\in\mathcal{E}(\mathcal{G})\}.\end{split} \tag{2}\]
Note that \(\mathcal{M}(\mathcal{G})\) includes the adjacency and Laplacian matrices of \(\mathcal{G}\). In (1), \(u(t)\in\Re^{m}\) is the input signal, and \(B\in\Re^{n\times m}\) is the input matrix containing information about the leader nodes through which inputs are injected into the network. For a set of leaders labelled \(\{\ell_{1},\ell_{2},\ldots,\ell_{m}\}\subseteq\mathcal{V}\), we define the input matrix as follows.
\[B_{ij}=\begin{cases}1&\text{if }v_{i}=\ell_{j},\\ 0&\text{otherwise}.\end{cases} \tag{3}\]
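As a minimal illustration of these definitions (a sketch with an arbitrarily chosen toy graph, not code from this work), the snippet below builds the graph Laplacian as one member of \(\mathcal{M}(\mathcal{G})\) and the input matrix \(B\) of (3) for a chosen leader set.

```python
import numpy as np

# Undirected toy graph on 4 nodes (an arbitrary illustration), given as an edge list.
n = 4
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]

# The Laplacian L = D - A is one member of the family M(G): symmetric, with
# nonzero off-diagonal entries exactly on the edges of G.
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
M = np.diag(A.sum(axis=1)) - A

# Input matrix B, eq. (3): column j has a single 1 in the row of leader j.
leaders = [0, 3]                       # assumed leader nodes
B = np.zeros((n, len(leaders)))
for j, leader in enumerate(leaders):
    B[leader, j] = 1.0

print(M)
print(B)
```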
We are interested in designing networks with the above dynamics that are strong structurally controllable and maximally robust. Next, we discuss the controllability and robustness measures we utilize to evaluate our graphs.
### _Network Controllability Measure_
For the strong structural controllability analysis, we utilize the notion of _zero forcing sets_ in graphs. Considering a system defined on graph \(\mathcal{G}\), the pair \((M,B)\) is a _controllable pair_ if there exists an input \(u(t)\) that could drive the system from any initial state \(x(t_{0})\) to any final state \(x(t_{f})\) in a given time period \(t=t_{f}-t_{0}\).
**Definition** (_Strong Structural Controllability (SSC)_) A given graph \(\mathcal{G}\) with a set of leader nodes \(\{\ell_{1},\ell_{2},\ldots,\ell_{m}\}\subseteq\mathcal{V}\), and the corresponding \(B\) matrix is said to be _strong structurally controllable_ if and only if \((M,B)\) is a controllable pair \(\forall\;M\in\mathcal{M}\).
In [17], Monshizadeh et al. provides a graph-theoretic characterization of SSC in networks in terms of zero forcing in graphs explained below.
**Definition** (_Zero Forcing Process_) Consider a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) whose nodes are initially colored either black, or white. If a node \(v\in\mathcal{V}\) is black and has exactly one white neighbor \(u\), then \(v\) forces \(u\) to change its color to black. Zero forcing is a process of applying this color change rule until no black node exists with only one white neighbor.
For a given set of initial black nodes, there can be multiple ways to execute the zero forcing process; however, the set of black nodes at the end of the process will always be the same [19]. If there is a unique way of proceeding the zero forcing process in a graph \(\mathcal{G}\), we call it a _unique zero forcing process_. Moreover, the set of black nodes obtained at the end of the zero forcing process is called the _derived set_.
**Definition** (_Zero Forcing Set (ZFS)_) Consider a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) with an initial set of black nodes (leaders) \(\{\ell_{1},\ell_{2},\ldots,\ell_{m}\}\subseteq\mathcal{V}\). Let \(\mathcal{V}^{\prime}\) be the derived set at the end of the zero forcing process; then \(\{\ell_{1},\ell_{2},\ldots,\ell_{m}\}\) is a ZFS if and only if \(\mathcal{V}^{\prime}=\mathcal{V}\).
Figure 1 illustrates the idea of a ZFS.
Monshizadeh et al. [17] characterizes the minimum leader set for SSC in terms of ZFS of the network graph, showing that the network is strong structurally controllable if and only if the leader set is a ZFS. In this work, since we aim to design strong structurally controllable networks with \(N_{L}\) leaders, the leader sets will always be zero forcing sets of the corresponding graphs.
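The zero forcing process and the ZFS test translate directly into a short procedure; the Python sketch below is our own illustration (function names, data structures, and the toy example are assumptions, not artifacts of this paper).

```python
def derived_set(neighbors, leaders):
    """Apply the zero forcing color-change rule until it stalls.

    neighbors: dict mapping each node to the set of its neighbors.
    leaders:   iterable of initially black nodes.
    Returns the derived set (final set of black nodes).
    """
    black = set(leaders)
    changed = True
    while changed:
        changed = False
        for v in list(black):
            white = [u for u in neighbors[v] if u not in black]
            if len(white) == 1:          # v forces its unique white neighbor
                black.add(white[0])
                changed = True
    return black

def is_zfs(neighbors, leaders):
    """The leader set is a ZFS iff the derived set equals the whole vertex set."""
    return derived_set(neighbors, leaders) == set(neighbors)

# Toy example (an assumed small path graph): 1 - 2 - 3 - 4.
toy = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(is_zfs(toy, [1]))   # True: an end-node leader forces the whole path
print(is_zfs(toy, [2]))   # False: a middle node has two white neighbors and stalls
```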
### _Network Robustness Measures_
To analyze the robustness of the proposed graphs, we use widely used metrics, _algebraic connectivity_ and _Kirchhoff index_. _Algebraic connectivity_ of a graph \(\mathcal{G}\) (also known as the Fiedler value) is the second smallest eigenvalue of its Laplacian matrix. A higher value of algebraic connectivity indicates higher robustness.
Fig. 1: Set of nodes \(\{u_{1},u_{2},u_{3}\}\) is a ZFS as the corresponding derived set \(\mathcal{V}^{\prime}\) contains all the nodes in the graph.
_Kirchhoff index_ of a graph \(\mathcal{G}\), denoted as \(K_{f}(\mathcal{G})\), is
\[K_{f}(\mathcal{G})=N\sum_{i=2}^{N}\frac{1}{\lambda_{i}}, \tag{4}\]
where \(N\) is the total number of nodes in the graph \((\mathcal{G})\) and \(\lambda_{2}\leq\lambda_{3}\leq\cdots\lambda_{N}\) are the eigenvalues of the Laplacian of the graph \((\mathcal{G})\). Robustness and the value of Kirchhoff index of a graph are inversely related, i.e., lower Kirchhoff index implies higher robustness, and vice versa [4, 5, 7]. We note that network robustness, as measured by both of these measures, is a monotonically increasing function of edge additions [5, 6]. Thus, adding edges to a graph improves its robustness to failures and noise.
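Both robustness measures follow from the Laplacian spectrum, as the following sketch illustrates (an assumed toy graph; not code used for the numerical evaluation later in the paper).

```python
import numpy as np

def robustness_measures(A):
    """Return (algebraic connectivity, Kirchhoff index) of a connected graph.

    A: symmetric 0/1 adjacency matrix as a numpy array.
    """
    L = np.diag(A.sum(axis=1)) - A            # graph Laplacian
    eig = np.sort(np.linalg.eigvalsh(L))      # lambda_1 <= ... <= lambda_N
    n = len(eig)
    lambda2 = eig[1]                          # algebraic connectivity (Fiedler value)
    kf = n * np.sum(1.0 / eig[1:])            # Kirchhoff index, eq. (4)
    return lambda2, kf

# Example: a 4-cycle (arbitrary toy graph); its Laplacian spectrum is {0, 2, 2, 4},
# so lambda2 = 2 and Kf = 4 * (1/2 + 1/2 + 1/4) = 5.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(robustness_measures(A))
```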
## III Designing Controllable and Robust Networks
In this section, we will design strong structurally controllable and maximally robust networks for a given number of nodes \(N\) and number of leaders \(N_{L}\). We will present three designs, each with different characteristics and performances. In the next section, we provide a distributed way of constructing the proposed graphs; wherein all the nodes follow a set of local rules (_graph grammars_) to make connections with their neighbors to achieve the desired graphs.
### _Network Design 1_
We know that a graph with a single leader can only be completely controllable if the graph is a path graph with the leader node being one of the end nodes. Therefore, given \(N_{L}\) leaders and \(N=N_{L}\times D\) total nodes, where \(D\) is the diameter, we construct a graph \(\mathcal{G}_{1}\) with these specifications. For \(\mathcal{G}_{1}\), we create \(N_{L}\) path graphs, each with diameter \(D-1\), such that the end node of each path is a leader. We then make all leaders pair-wise adjacent, thus inducing a complete graph among the leaders. We describe this construction formally below. Consider the following vertex set for graph \(\mathcal{G}_{1}\):
\[V=\{\ell_{i}\}\cup\{u_{i,j}\},\]
where \(i\in\{1,2,\ldots,k\}\) and \(j\in\{1,2,\ldots,D-1\}\). Vertices with label \(\{\ell_{1},\ell_{2},\ldots,\ell_{k}\}\) are leaders and the rest \(\{u_{1,1},\ldots,u_{i,j},\ldots,u_{k,D-1}\}\) are followers. Connect the vertices in the following manner:
* All the leaders \(\ell_{i}\) have a link between them and generate a complete graph among them.
* For all \(i\in\{1,2,\ldots,k\}\), there exists a link between \(\ell_{i}\) and \(u_{i,1}\).
* For all \(i\in\{1,2,\ldots,k\}\) and \(j\in\{1,2,\ldots,D-2\}\), there is a link between \(u_{i,j}\) and \(u_{i,j+1}\).
Figure 2 illustrates the construction of Graph \(\mathcal{G}_{1}\).
The graph constructed above is strong structurally controllable with diameter \(D-1\). Note that we can add edges to the graph \(\mathcal{G}_{1}\) without affecting both, SSC of the graph and the distances between leaders and remaining nodes. Adding edges is useful to increase the graph robustness.
_Adding Maximum Edges to Graph \(\mathcal{G}_{1}\)_: Adding edges reduces Kirchhoff index and increases the algebraic connectivity, thus, improving robustness [5]. We refer the maximally robust graph constructed from \(\mathcal{G}_{1}\) as \(\bar{\mathcal{G}}_{1}\). Note that the addition of edges must not deteriorate SSC. We propose the following addition of edges for the construction of \(\bar{\mathcal{G}}_{1}\):
* Each leader \(\ell_{i}\) has an edge with \(u_{q,1}\), \(\forall q<i\), where \(i,q\in\{1,2,\ldots,k\}\).
* Similarly, each node \(u_{i,j}\) has an edge with the nodes \(u_{q,j+1}\), \(\forall q<i\), where \(i,q\in\{1,2,\ldots,k\}\) and \(j\in\{1,2,\ldots,D-2\}\).
* Also, for a fixed \(j\), all nodes \(u_{i,j}\) induce a complete graph, where \(i\in\{1,2,\ldots,k\}\) and \(j\in\{1,2,\ldots,D-1\}\).
Figure 3 illustrates the construction of \(\bar{\mathcal{G}}_{1}\) from \(\mathcal{G}_{1}\). The newly added edges are shown in blue and orange.
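The construction and its controllability check can be summarized in a short sketch (ours; the node naming and helper functions are illustrative assumptions). Nodes are arranged in \(D\) layers of size \(N_{L}\) — the leader layer followed by \(D-1\) follower layers — with a clique inside every layer and the "staircase" edges described above between consecutive layers; the zero forcing rule (repeated here so the snippet runs on its own) then confirms that the leader layer is a ZFS.

```python
from itertools import combinations

def build_maximal_design1(n_leaders, diameter):
    """Maximal-edge Design 1 graph: D layers of N_L nodes each (layer 0 = leaders)."""
    layers = [[(j, i) for i in range(1, n_leaders + 1)] for j in range(diameter)]
    nbrs = {v: set() for layer in layers for v in layer}

    def connect(u, v):
        nbrs[u].add(v)
        nbrs[v].add(u)

    for layer in layers:                       # clique inside every layer
        for u, v in combinations(layer, 2):
            connect(u, v)
    for j in range(diameter - 1):              # staircase edges between layers
        for i in range(1, n_leaders + 1):
            for q in range(1, i + 1):          # node i in layer j -> nodes q <= i in layer j+1
                connect((j, i), (j + 1, q))
    return nbrs, layers[0]                     # graph and leader set

def is_zfs(nbrs, leaders):
    black = set(leaders)
    changed = True
    while changed:
        changed = False
        for v in list(black):
            white = [u for u in nbrs[v] if u not in black]
            if len(white) == 1:
                black.add(white[0])
                changed = True
    return black == set(nbrs)

g, leaders = build_maximal_design1(n_leaders=3, diameter=4)   # N = 12, N_L = 3
edges = sum(len(s) for s in g.values()) // 2
print(edges, is_zfs(g, leaders))   # 30 edges, True; 30 = N_L * (N - (N_L + 1)/2)
```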
**Lemma III.1**: _The leader set \(\{\ell_{1},\ell_{2},\cdots,\ell_{N_{L}}\}\) is a ZFS of \(\bar{\mathcal{G}}_{1}\) (described above) with \(N\) nodes and \(D\) diameter._
Proof: We observe that in \(\bar{\mathcal{G}}_{1}\), all leaders, except \(\ell_{1}\), have more than one white neighbor in their neighborhoods. So, only \(\ell_{1}\) can initiate the zero forcing process. Leader \(\ell_{2}\), adjacent to \(\ell_{1}\), has exactly two white neighbors, one of which, \(u_{1,1}\), it shares with \(\ell_{1}\). Consequently, the third leader has exactly three white neighbors and shares one white neighbor each with \(\ell_{1}\) and \(\ell_{2}\). This continues for all \(N_{L}\) leaders.
As discussed, \(\ell_{1}\) starts the zero forcing process by coloring its only white neighbor, \(u_{1,1}\). As a result, \(\ell_{2}\) is now left with only one white neighbor, \(u_{2,1}\), and thus \(\ell_{2}\) colors it. Similarly, \(\ell_{3}\) is now left with only one white neighbor that is \(u_{3,1}\) because the other two white neighbors \(u_{2,1}\) and \(u_{1,1}\), which it had in common with \(\ell_{2}\) and \(\ell_{1}\), respectively, are now colored. This continues until all the nodes in \(u_{i,1}\) are colored. Note that there exists a complete graph between all the followers in \(u_{i,1}\), therefore, \(u_{1,1}\) will only be able to color \(u_{1,2}\) once all the other nodes in \(u_{i,1}\) are colored. We also know that the nodes in \(u_{i,1}\) and in \(u_{i,2}\) have similar connections between them as \(\ell_{i}\) and \(u_{i,1}\). So, it follows from
the above discussion that the process continues until all the nodes are colored, implying that the given leader set is a ZFS of \(\bar{\mathcal{G}}_{1}\), which is the desired claim.
**Lemma III.2**: _For fixed number of nodes \(N\) and diameter \(D\), a graph of construction \(\vec{\mathcal{G}}_{1}\) has maximal edges, i.e., by adding any additional edge the leader set \(\{\ell_{1},\ell_{2},\cdots,\ell_{N_{L}}\}\) will no longer be a ZFS of \(\vec{\mathcal{G}}_{1}\)._
Let us show that the above statement holds for a subgraph \(\vec{\mathcal{G}}_{1}^{\prime}\) containing only the leader set and the first set of followers \(u_{i,1}\) and all the edges between them. As mentioned in Lemma III.1 the zero forcing process in \(\vec{\mathcal{G}}_{1}\) is unique and it propagates from first follower \(u_{1,1}\) to all the nodes in \(u_{i,1}\) till \(u_{k,1}\), in this particular order. Now, considering \(\vec{\mathcal{G}}_{1}^{\prime}\), we observe that adding any edge would disturb this zero forcing process because an additional edge would result in some leader having more than one white neighbor at a particular time step. This results in the leader set not being a ZFS of the subgraph \(\vec{\mathcal{G}}_{1}^{\prime}\).
Next, we assume that the above argument is true for all nodes in \(\vec{\mathcal{G}}_{1}\) until the set of nodes in \(u_{i,D-2}\). This means that all the nodes in \(u_{i,D-2}\) are colored. Since the nodes in \(u_{i,D-1}\) are further ahead in the zero forcing process than nodes in \(u_{i,D-2}\), we can say that nodes in \(u_{i,D-2}\) are not dependent on nodes in \(u_{i,D-1}\) for getting colored. Furthermore, we notice that edge set between \(u_{i,D-2}\) and \(u_{i,D-1}\) is same as that between the leader set and \(u_{i,1}\). We use the same reasoning as above to show that we cannot add any other edge between these two node sets without disrupting the zero forcing process, implying that not all the nodes will get colored. Therefore, by induction, adding any extra edge in \(\vec{\mathcal{G}}_{1}\) would result in the leader set not being a ZFS, which is the desired claim.
**Remark III.3**: _Graph \(\vec{\mathcal{G}}_{1}\) is same (isomorphic) as the graph produced in [8]. We call the graph constructed in [8] as \(\vec{\mathcal{G}}_{PMI}\). It is interesting to note that even though we constructed \(\vec{\mathcal{G}}_{1}\) using the zero forcing method, we arrive at the same result, whereas [8] uses the distance-based approach in their design._
In \(\vec{\mathcal{G}}_{1}\), the diameter is \(N/N_{L}\), i.e., by changing the total number of nodes \(N\) and the number of leaders \(N_{L}\), the diameter varies. So, the interesting question is, _can we design graphs with improved robustness while constraining/fixing the diameter without deteriorating controllability?_ In the following subsection, we answer this by designing strong structurally controllable graphs \(\vec{\mathcal{G}}_{2}\) with diameter \(D=2\), \(N\) total nodes, \(N_{L}\geq 2\) leaders, and improved robustness.
### _Network Design 2_
We construct a maximally robust graph \(\vec{\mathcal{G}}_{2}\) by fixing \(N_{L}\geq 2\) and adding \(N_{F}=(N-N_{L})\) number of other nodes (followers) one-by-one to the graph. This design is different from \(\vec{\mathcal{G}}_{1}\) in that the maximum distance between any two nodes is two. Next, we explain the construction of \(\vec{\mathcal{G}}_{2}\). Consider the following vertex set.
\[V=\{\ell_{i}\}\cup\{u_{j}\},\]
where \(i\in\{1,2,\ldots,k\}\) and \(j\in\{1,2,\ldots,m\}\), where \(k=N_{L}\) and \(m=N_{F}\). Vertices labeled \(\{\ell_{1},\ell_{2},\ldots,\ell_{k}\}\) are leaders and \(\{u_{1},u_{2},\ldots,u_{m}\}\) are followers.
We connect the vertices as follows,
* Leader \(\ell_{1}\) and followers \(\{u_{1},u_{2},\ldots,u_{m}\}\) are connected through a path graph starting from \(\ell_{1}\).
* All the leaders \(\ell_{i}\) are connected with all the nodes \(u_{j}\), where \(i\in\{2,\ldots,k\}\) and \(j\in\{1,\ldots,m\}\).
* All the leaders are pair-wise adjacent, inducing a complete graph among them (as in \(\mathcal{G}_{1}\)); since the leaders are initially colored, these edges do not affect the zero forcing process.
Figure 4 illustrates the construction of \(\vec{\mathcal{G}}_{2}\).
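A corresponding sketch for this design is given below (again, the helper names are our own assumptions): \(\ell_{1}\) heads a path through the followers, the remaining leaders are adjacent to every follower, and the leaders are pair-wise adjacent.

```python
from itertools import combinations

def build_design2(n_nodes, n_leaders):
    """Design 2 graph: diameter-2 construction with N_L leaders."""
    leaders = [("l", i) for i in range(1, n_leaders + 1)]
    followers = [("u", j) for j in range(1, n_nodes - n_leaders + 1)]
    nbrs = {v: set() for v in leaders + followers}

    def connect(u, v):
        nbrs[u].add(v)
        nbrs[v].add(u)

    for u, v in combinations(leaders, 2):            # complete graph among leaders
        connect(u, v)
    chain = [leaders[0]] + followers                 # path: l1 - u1 - u2 - ... - um
    for u, v in zip(chain, chain[1:]):
        connect(u, v)
    for leader in leaders[1:]:                       # l2..lk adjacent to every follower
        for f in followers:
            connect(leader, f)
    return nbrs, leaders

g, leaders = build_design2(n_nodes=12, n_leaders=3)
print(sum(len(s) for s in g.values()) // 2)          # 30 edges, same count as Design 1 above
```

Running the zero forcing check from the earlier sketch on this graph returns true for the leader set, consistent with Lemma III.4.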
**Lemma III.4**: _For a graph \(\vec{\mathcal{G}}_{2}\) (as described above) with \(N\) number of nodes, the proposed leader set \(\{\ell_{1},\ell_{2},\cdots,\ell_{N_{L}}\}\) is a ZFS._
From the construction of \(\vec{\mathcal{G}}_{2}\), we observe that all leaders, except \(\ell_{1}\), are pair-wise adjacent to all the followers, \(u_{j}\)\(\forall j\). This means that except \(\ell_{1}\), all leaders will generally have more than one white neighbor. Since \(\ell_{1}\) has only one white neighbor \(u_{1}\), it will start the zero forcing process by coloring \(u_{1}\).
Next, the rest of the leaders will still have multiple white neighbors in their neighborhoods. However, \(u_{1}\) has only one white neighbor \(u_{2}\). That will allow \(u_{1}\) to color \(u_{2}\). Subsequently, \(u_{2}\) will also have only one white neighbor \(u_{3}\). This stands true for rest of the follower nodes in \(u_{j}\), where \(1\leq j\leq(N-N_{L})\). We note that as a result of this unique zero forcing process, the entire graph gets colored, implying that the leader set is a ZFS of \(\vec{\mathcal{G}}_{2}\).
**Lemma III.5**: _For fixed number of nodes \(N\) and given leader set \(\{\ell_{1},\ell_{2},\cdots,\ell_{N_{L}}\}\) the graph generated using the construction \(\vec{\mathcal{G}}_{2}\) has maximal edge set, i.e., by adding any additional edge the leader set \(\{\ell_{1},\ell_{2},\cdots,\ell_{N_{L}}\}\) will no longer be a ZFS of \(\vec{\mathcal{G}}_{2}\)._
As shown in Lemma III.4, the Graph \(\vec{\mathcal{G}}_{2}\) has a unique zero forcing process. There are two types of edges that we can add to \(\vec{\mathcal{G}}_{2}\). They include,
* leader to follower (non-leader) edges, i.e., (\(\ell_{1},u_{j}\)), where \(j\neq 1\), and
* follower to follower edges, i.e., (\(u_{j},u_{j^{\prime}}\)), where \(j\neq j^{\prime}\).
Both categories of edges belong to a unique zero forcing process. Adding an edge between any two non-adjacent nodes in this unique zero forcing process will not allow the preceding node to continue the zero forcing process since it will now have more than one white neighbor. This means that the addition of any edge other than the ones already existing in \(\vec{\mathcal{G}}_{2}\) will result in a graph for which the given leader set is not a ZFS.
It is interesting to note that the above two constructions, \(\vec{\mathcal{G}}_{1}\) and \(\vec{\mathcal{G}}_{2}\), generate the same number of edges for the same parameters (\(N\) and \(N_{L}\)). The number of edges in \(\vec{\mathcal{G}}_{1}\) is,
\[E_{\vec{\mathcal{G}}_{1}}=\underbrace{D\times\frac{N_{L}\times(N_{L}-1)}{2}}_{E_{1} }+\underbrace{(D-1)\times\frac{N_{L}\times(N_{L}+1)}{2}}_{E_{2}}. \tag{5}\]
There are \(D\) cliques in \(\vec{\mathcal{G}}_{1}\), each of size \(N_{L}\). \(E_{1}\) is the total number of edges in these cliques, and \(E_{2}\) is the number of remaining edges in \(\vec{\mathcal{G}}_{1}\). Similarly, the number of edges in \(\vec{\mathcal{G}}_{2}\) is given by,
\[E_{\vec{\mathcal{G}}_{2}} =\underbrace{(N-N_{L})\times\ (N_{L}-1)}_{E_{3}}+\underbrace{N-N_{L}}_{E_{4}} \tag{6}\] \[+\underbrace{\frac{N_{L}\times(N_{L}-1)}{2}}_{E_{5}}.\]
Here, \(E_{3}\) is the number of edges between \(N_{L}-1\) leaders and \((N-N_{L})\) followers, \(E_{4}\) is the number of edges in the path induced by leader \(\ell_{1}\) and followers, and \(E_{5}\) is the number of edges in the complete graph induced by the leader nodes. Now, simplifying (5) and (6) gives
\[E_{\vec{\mathcal{G}}_{1}}=E_{\vec{\mathcal{G}}_{2}}=N_{L}\times\left(N-\frac{ (N_{L}+1)}{2}\right). \tag{7}\]
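The identity can also be checked numerically; the snippet below (illustrative only) evaluates (5), (6), and (7) over a range of parameters with \(N=N_{L}\times D\) and confirms that they coincide.

```python
def edges_design1(n_leaders, diameter):                       # eq. (5)
    return (diameter * n_leaders * (n_leaders - 1) // 2
            + (diameter - 1) * n_leaders * (n_leaders + 1) // 2)

def edges_design2(n_nodes, n_leaders):                        # eq. (6)
    return ((n_nodes - n_leaders) * (n_leaders - 1)
            + (n_nodes - n_leaders)
            + n_leaders * (n_leaders - 1) // 2)

def edges_closed_form(n_nodes, n_leaders):                    # eq. (7), kept integer-valued
    return n_leaders * (2 * n_nodes - n_leaders - 1) // 2

for n_leaders in range(2, 8):
    for diameter in range(2, 8):
        n_nodes = n_leaders * diameter
        assert (edges_design1(n_leaders, diameter)
                == edges_design2(n_nodes, n_leaders)
                == edges_closed_form(n_nodes, n_leaders))
print("eq. (5) = eq. (6) = eq. (7) for all tested parameters")
```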
As discussed previously, the diameter of \(\vec{\mathcal{G}}_{1}\) depends on \(N_{L}\) and \(N\), whereas the diameter of \(\vec{\mathcal{G}}_{2}\) is constant regardless of \(N_{L}\) and \(N\). So, next, we explore a graph construction that combines \(\vec{\mathcal{G}}_{1}\) and \(\vec{\mathcal{G}}_{2}\) and affords the option of choosing the diameter \(D\) of the graph. Some applications might require particular diameter values, and by combining \(\vec{\mathcal{G}}_{1}\) and \(\vec{\mathcal{G}}_{2}\), we can have a graph where the diameter \(D\) is also a design parameter. We will see in Subsection III-D that in many cases \(\vec{\mathcal{G}}_{2}\) provides higher robustness than \(\vec{\mathcal{G}}_{1}\) while requiring the same number of leaders to achieve SSC. So, it is advantageous to have \(\vec{\mathcal{G}}_{1}\) combined with \(\vec{\mathcal{G}}_{2}\) for achieving higher robustness while meeting a design requirement in terms of the diameter.
### _Network Design 3 (Combining Designs 1 and 2)_
In this section, we construct a graph that is a combination of \(\vec{\mathcal{G}}_{1}\) and \(\vec{\mathcal{G}}_{2}\), meaning that some of the nodes follow the construction rules of \(\vec{\mathcal{G}}_{1}\) and the rest follow the construction rules of \(\vec{\mathcal{G}}_{2}\) (as discussed in the previous subsections). We define the construction with three different parameters: \(N\) (total number of nodes), \(N_{L}\) (number of leaders), and \(D\) (diameter of the graph). For these given parameters, we construct a graph \(\vec{\mathcal{G}}_{3}\) that is strong structurally controllable, maximally robust, and has the same number of edges as \(\vec{\mathcal{G}}_{1}\) or \(\vec{\mathcal{G}}_{2}\) for the same \(N\) and \(N_{L}\). Let \(\bar{\mathcal{V}}=\bar{\mathcal{V}}_{1}\cup\bar{\mathcal{V}}_{2}\) be the set of all the nodes in \(\vec{\mathcal{G}}_{3}\), where \(\bar{\mathcal{V}}_{1}\) and \(\bar{\mathcal{V}}_{2}\) are the subsets of nodes that follow the construction of \(\vec{\mathcal{G}}_{1}\) and \(\vec{\mathcal{G}}_{2}\), respectively. Then, the graph \(\vec{\mathcal{G}}_{3}\) is constructed as follows:
* The leader set \(\{\ell_{1},\ell_{2},\cdots,\ell_{N_{L}}\}\subset\bar{\mathcal{V}}_{1}\).
* Let the end nodes of the construction of \(\vec{\mathcal{G}}_{1}\), i.e., \(u_{i,D-2}\), where \(1\leq i\leq N_{L}\), be the pseudo-leaders of \(\vec{\mathcal{G}}_{2}\).
* Let \(u_{i,D-2}=\bar{\mathcal{V}}_{1}\cap\bar{\mathcal{V}}_{2}\), where \(1\leq i\leq N_{L}\), then the total number of nodes become \(|\bar{\mathcal{V}}|=|\bar{\mathcal{V}}_{1}|+|\bar{\mathcal{V}}_{2}|-(\bar{ \mathcal{V}}_{1}\cap\bar{\mathcal{V}}_{2})\).
* The first pseudo-leader of \(\vec{\mathcal{G}}_{2}\), i.e.,\(u_{1,D-2}\), belongs to the same zero forcing path as of the first leader (\(\ell_{1}\)) in \(\vec{\mathcal{G}}_{1}\).
* The edges between nodes \(x\) and \(y\), where \(x\in\{u_{i,D-2},\ \forall i\}\) and \(y\in\{u_{i,D-2},\ \forall i\}\), are according to the construction of \(\vec{\mathcal{G}}_{1}\). Similarly, the edges between nodes in \(\{u_{i,D-2},\ \forall i\}\) and \(\{v_{j}\}\), where \(1\leq j\leq(|\bar{\mathcal{V}}_{2}|-N_{L})\), are according to the construction of \(\vec{\mathcal{G}}_{2}\).
Figure 5 illustrates two examples of the construction of \(\vec{\mathcal{G}}_{3}\) for \(N=12\) and \(N_{L}=3\). The diameters of graphs in (a) and (b) are 3 and 4, respectively.
We make the following observations from the examples:
* Both graphs have the same number of edges, which is also equal to the number of edges in graphs generated according to \(\vec{\mathcal{G}}_{1}\) and \(\vec{\mathcal{G}}_{2}\) for the same \(N\) and \(N_{L}\).
* Both graphs are strong structurally controllable with the given leader sets.
* In general, changing \(|\bar{\mathcal{V}}_{1}|\) and \(|\bar{\mathcal{V}}_{2}|\) will result in graphs ranging from diameter \(D=2\) to diameter of \(\vec{\mathcal{G}}_{1}\) for the same \(N\) and \(N_{L}\), i.e., \[2=D(\vec{\mathcal{G}}_{2})\leq D(\vec{\mathcal{G}}_{3})\leq D(\vec{\mathcal{ G}}_{1})=N/N_{L}.\] (8) Thus, if \(|\bar{\mathcal{V}}_{1}|=N_{L}\), \(\vec{\mathcal{G}}_{3}=\vec{\mathcal{G}}_{2}\), and similarly, if \(|\bar{\mathcal{V}}_{2}|=N_{L}\), \(\vec{\mathcal{G}}_{3}=\vec{\mathcal{G}}_{1}\).
### _Numerical Evaluation and Robustness Analysis_
Here, we numerically evaluate the performance of graphs \(\vec{\mathcal{G}}_{1}\), \(\vec{\mathcal{G}}_{2}\), \(\vec{\mathcal{G}}_{3}\), in terms of robustness and controllability for \(N=60\). First, we analyze the algebraic connectivity of the proposed graphs while varying the number of leaders \(N_{L}\). From Figure 5(a), we observe that the algebraic connectivity of \(\vec{\mathcal{G}}_{2}\) is higher than \(\vec{\mathcal{G}}_{1}\) for any given \(N_{L}\). This implies that robustness of \(\vec{\mathcal{G}}_{2}\), as measured by the algebraic connectivity, is always better than \(\vec{\mathcal{G}}_{1}\). Also, the algebraic connectivity of \(\vec{\mathcal{G}}_{3}\) always lies between \(\vec{\mathcal{G}}_{2}\) and \(\vec{\mathcal{G}}_{1}\).
Figure 5(b) plots the robustness performance of the proposed graphs in terms of the Kirchhoff index. Interestingly, for a lower number of leaders, the Kirchhoff index of \(\vec{\mathcal{G}}_{2}\) is significantly lower (indicating improved robustness) than that of \(\vec{\mathcal{G}}_{1}\). However, for a higher number of leaders, this trend changes, and \(\vec{\mathcal{G}}_{1}\) has a lower Kirchhoff index than \(\vec{\mathcal{G}}_{2}\), though the difference between the values remains relatively small.
Fig. 5: Examples of network design 3 (\(\vec{\mathcal{G}}_{3}\)).
Finally, as discussed in the previous subsection, with \(\vec{\mathcal{G}}_{3}\) we can generate graphs whose diameters are lower bounded by \(\vec{\mathcal{G}}_{2}\) and upper bounded by \(\vec{\mathcal{G}}_{1}\). Similarly, the algebraic connectivity and Kirchhoff index for \(\vec{\mathcal{G}}_{3}\) are also bounded by those of \(\vec{\mathcal{G}}_{1}\) and \(\vec{\mathcal{G}}_{2}\). This is shown in Figure 5(a) and Figure 5(b) as a gradient between robustness values of \(\vec{\mathcal{G}}_{1}\) and \(\vec{\mathcal{G}}_{2}\). This reveals an important design trade-off, i.e., for a specific diameter requirement, we can utilize the design \(\vec{\mathcal{G}}_{3}\) while affording more robustness than \(\vec{\mathcal{G}}_{1}\) for the same \(N\) and \(N_{L}\).
## IV Distributed construction using graph grammars
This section provides a distributed way of constructing the proposed graphs using graph grammars. Graph grammars are a set of local rules determining interactions between nodes to eventually achieve the desired graph [14, 15, 16]. In this method of construction, we provide each node with a label and define a set of rules that dictate how the nodes interact with each other. Moreover, the rules describe how a subset of nodes can create or remove edges amongst them and update their labels accordingly. We define a set of rules \(\mathcal{R}=\{r_{0},r_{1},\ldots,r_{n}\}\) through which nodes modify connections with other nodes and update their labels. A rule \(r_{i}\) of the form \(G_{k}\rightharpoonup G_{k+1}\) is applicable to a subgraph \(G_{k}\) representing the state of the subsystem at time step \(k\). After applying appropriate rules, the new subgraph \(G_{k+1}\) is obtained from \(G_{k}\). The graph grammars for \(\vec{\mathcal{G}}_{1}\) and \(\vec{\mathcal{G}}_{2}\) are denoted as \(\mathcal{R}_{1}\) and \(\mathcal{R}_{2}\), respectively. We note that all the nodes are initially labeled \(\alpha\) except a seed node, which is labeled \(S_{1}\). Here, we classify graph grammars into two parts,
* \(\Pi_{1}\)\(\rightarrow\) Rules that create the edges required for zero forcing and also create a complete graph among the leaders.
* \(\Pi_{2}\)\(\rightarrow\) Rules that maximize the edge set without compromising SSC.
It is important to note that even though the grammars are split into two parts, they can function concurrently with each other. Application of a specific rule only depends on the availability of nodes suitably labelled to apply the rule. Next, we present the graph grammars and also demonstrate their application through examples.
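To make the rule-application mechanism concrete, the sketch below simulates a single label-rewriting rule of the kind used in \(\mathcal{R}_{1}\) (modeled on the seeding rule \(r_{0}\) given next); the data structures, the stopping condition, and the handling of the final seed label are illustrative assumptions rather than the grammar itself.

```python
def apply_r0(labels, edges, n_leaders):
    """One application of the seeding rule: S_i + alpha -> new edge, labels L_i and S_{i+1}."""
    seed = next((v for v, lab in labels.items() if lab[0] == "S"), None)
    free = next((v for v, lab in labels.items() if lab == "alpha"), None)
    if seed is None or free is None:
        return False
    i = labels[seed][1]
    if i >= n_leaders:                      # only the recovered 1 <= i < N_L branch is modeled
        return False
    edges.add(frozenset((seed, free)))      # the rule creates the edge
    labels[seed] = ("L", i)                 # the seed becomes leader l_i
    labels[free] = ("S", i + 1)             # the new node carries the seed label onward
    return True

# Ten nodes; node 0 is the seed S_1, every other node starts as alpha.
labels = {v: "alpha" for v in range(10)}
labels[0] = ("S", 1)
edges = set()
while apply_r0(labels, edges, n_leaders=3):
    pass
print(sorted(lab for lab in labels.values() if lab != "alpha"))
print(len(edges), "edges created")
# In the full grammar, further rules (the i = N_L case of r0 and the Pi_2 rules)
# would relabel the last seed and add the remaining edges of the target graph.
```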
### _Graph Grammars 1 (\(\mathcal{R}_{1}\))_
In this subsection we provide the distributed rules \(\mathcal{R}_{1}\) that create graph \(\vec{\mathcal{G}}_{1}\) (as discussed in Section III-A) for given number of nodes \(N\) and leaders \(N_{L}\).
\(\Pi_{1}\) :
\((r_{0})\): \(S_{i}\;\;\alpha\;\;\rightharpoonup\;\;L_{i}-S_{i+1}\), for \(1\leq i<N_{L}\)

|
2305.00760 | Breaks and Code Quality: Investigating the Impact of Forgetting on
Software Development. A Registered Report | Developers interrupting their participation in a project might slowly forget
critical information about the code, such as its intended purpose, structure,
the impact of external dependencies, and the approach used for implementation.
Forgetting the implementation details can have detrimental effects on software
maintenance, comprehension, knowledge sharing, and developer productivity,
resulting in bugs, and other issues that can negatively influence the software
development process. Therefore, it is crucial to ensure that developers have a
clear understanding of the codebase and can work efficiently and effectively
even after long interruptions. This registered report proposes an empirical
study aimed at investigating the impact of the developer's activity breaks
duration and different code quality properties. In particular, we aim at
understanding if the amount of activity in a project impact the code quality,
and if developers with different activity profiles show different impacts on
code quality. The results might be useful to understand if it is beneficial to
promote the practice of developing multiple projects in parallel, or if it is
more beneficial to reduce the number of projects each developer contributes. | Dario Amoroso d'Aragona, Luca Pascarella, Andrea Janes, Valentina Lenarduzzi, Rafael Penaloza, Davide Taibi | 2023-05-01T10:33:17Z | http://arxiv.org/abs/2305.00760v3 | # Breaks and Code Quality: Investigating the Impact of Forgetting on Software Development
###### Abstract
Developers interrupting their participation in a project might slowly forget critical information about the code, such as its intended purpose, structure, the impact of external dependencies, and the approach used for implementation. Forgetting the implementation details can have detrimental effects on software maintenance, comprehension, knowledge sharing, and developer productivity, resulting in bugs, and other issues that can negatively influence the software development process. Therefore, it is crucial to ensure that developers have a clear understanding of the codebase and can work efficiently and effectively even after long interruptions. This registered report proposes an empirical study aimed at investigating the impact of the developer's activity break duration on different code quality properties. In particular, we aim at understanding if the amount of activity in a project impacts the code quality, and if developers with different activity profiles show different impacts on code quality. The results might be useful to understand if it is beneficial to promote the practice of developing multiple projects in parallel, or if it is more beneficial to reduce the number of projects each developer contributes to.
Forgetting curve, Code Quality, Empirical Software Engineering
## I Introduction
When developers are not working for a long time on the same project, they might forget some details about the source code, including the purpose of some lines of code, the code structure, the effect of external dependencies, or the followed implementation strategy. This can hinder software maintenance, comprehension, and developer productivity [1], with possible consequences in terms of bugs and other issues in software development [2, 3].
The Ebbinghaus curve is a well-known model for describing a) forgetting as a function of time and b) retaining as a function of repeated learning [4, 5]. Applied to software development (see Figure 1), we hypothesize that when time elapses and a developer does not repeatedly work on a project, he or she might forget some details and might be more prone to introducing mistakes. This was also observed by [6], where they reported that "several subjects each noted that he or she has to work with code [continuously] otherwise I forget after a while [1 month]".
Many studies analyze the activity of _learning_. However, the countermeasure to forgetting is not learning but _remembering_. Learning and remembering require different strategies, as learning from scratch (to remember) may be considered an inefficient effort and perceived as boring.
To investigate the phenomenon of code forgetting in more detail and to be able to develop countermeasures, in this registered report, we want to study whether we can observe a relationship between _interruptions during participation in a project_ (assuming that these interruptions cause forgetting) and a degradation of _source code quality_. Concretely, we operationalize participation as the "observable, performed activities on the source code repository" (e.g., commits, pull requests, etc.), interruptions as "the time that occurred between one activity and the next, performed by the same developer in the same project", and degradation of source code quality as "a worsening change in source code metrics".
**Paper Structure:** Section II describes the empirical study design and Sect. III presents the data collection and analysis protocols. Section IV outlines the execution plan and Sect. V identifies the threats to validity. Section VI discusses related work, and Sect. VII concludes the paper.
Fig. 1: Illustration of the forgetting curve [4], extended with examples that exemplify its application to software development.
## II Empirical Study Design
In this section, we describe our empirical study reporting the goal and research questions, the context, data collection, and data analysis. We designed our study based on the guidelines defined by Wohlin et al. [7]. In Figure 2, we describe the entire process we will adopt to answer our RQs.
We split our investigation into two different approaches, hereinafter called "Iterations". In Iteration 1, we aim at understanding whether, for a developer in general, the time that elapses between his/her activities correlates negatively with the code quality of the new contribution (considered at project and also at module level) when the developer gets back to the code (since it ignores the personal characteristics of the individual developer, we call this the Naive model).
In Iteration 2, we will study if the relationship between interruptions and source code quality degradation can be better explained if the degree of contribution of a developer is also taken into consideration (Advanced model). We assume that primary contributors (authored more than 50% of the code [8]) forget at a slower pace than secondary contributors.
### _Goal, Research Questions, Metrics, and Hypothesis_
We formalized the goal of this study according to the GQM approach [9] as _Investigate_ interruptions of development activities _for the purpose of_ evaluation _with respect to the_ impact of their length on source code quality _from the point of view of_ developers _in the context of_ open-source software.
To measure source code quality, we will consider readability and quality metrics, see Sect. III-A.
Based on the aforementioned goal, we defined two Research Questions (RQ).
**RQ\({}_{\mathbf{1}}\).**_How strongly is the developer activity break duration correlated with a degradation of code quality metrics?_
As "activity break" we will consider the time that occurred between the previous activity in the project and the next activity performed by the same developer in the same package. We consider all activities, which we are able to measure and where we assume that knowledge of the code is required:
* Commits
* Opening/closing/reviewing/commenting pull requests
* Opening/closing/commenting issues
We will collect different **metrics**.
_Readability Metrics._ We will measure the readability by using the eight readability metrics defined by Scalabrino et al. [10], which are based on textual properties of the source code, described in Table III. Several studies highlighted that textual features are significant descriptors in the evaluation of code comprehension and, therefore, are meaningful indicators of the overall readability level of source code [11, 12, 13]. Moreover, Scalabrino et al. [10] demonstrated that their newly-defined metrics are indeed a proxy of the actual readability perceived by developers. In other words, the considered metrics are suitable to quantitatively assess the readability of source code and are qualitatively perceived as relevant by practitioners.
_Anti-Patterns and Code Smells._ We will consider the Code Smells defined by Fowler [14] and the anti-patterns defined by Brown [15] (Table I).
_Software Metrics and Technical Debt detected by SonarQube_. We will include software metrics computed by SonarQube as well as the information related to the Technical Debt. SonarQube includes the three categories of issues (Code Smells, Bugs, and Security Vulnerabilities) and the three Technical Debt types (Squale Index, Reliability Remediation Effort, and Security Remediation Effort). We must notice that the Code Smells detected by SonarQube are not the ones defined by Fowler [14] (Table II).
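As an illustration only, measures like these could be retrieved for projects that are already analysed on SonarCloud through its public Web API; in the following sketch the project key and the chosen metric keys are placeholders, not part of the study design:

```python
import requests

# Hypothetical project key; the study would iterate over the selected projects.
params = {
    "component": "org.example:demo-project",
    "metricKeys": "code_smells,bugs,vulnerabilities,sqale_index",
}
resp = requests.get("https://sonarcloud.io/api/measures/component", params=params)
resp.raise_for_status()

# Each measure is returned as a {"metric": ..., "value": ...} entry.
for measure in resp.json()["component"]["measures"]:
    print(measure["metric"], measure["value"])
```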
We **hypothesize** that the antipatterns, code smells, and SonarQube metrics (Table I and Table II) are directly related to activity break duration (H\({}_{1.1}\)). Instead, the readability metrics (Table III) are (mostly) inversely related to activity break duration (H\({}_{1.2}\)). In Table I, Table II, and Table III, we report if we expect an increase or a decrease of the relative metric in the rightmost column.
**RQ\({}_{\mathbf{2}}\).**_How strongly is the developer activity break duration correlated with the degradation of code quality metrics for classes of developers created according to their participation in a given project?_
Fig. 2: Empirical Study Design Process
In this RQ we aim at understanding if developers with similar activity profiles (e.g. the super active, active, average, inactive, and super inactive) have a different impact on code quality.
As for **metrics**, we will consider the same ones adopted for RQ\({}_{1}\) but applied to clusters of developers with similar activity profiles. To cluster the developers according to their behavior in the project we will follow the same approach used by Calefato et al. [16], thus we will calculate for each developer the Truck Factor [8].
Compared with RQ\({}_{1}\), when clustering developers based on their median activity break duration, we hypothesize stronger correlations between the antipatterns, code smells, and SonarQube metrics (Table I and Table II) and activity break duration (H\({}_{2.1}\)). The same behavior is expected for the readability metrics (Table III) with a stronger inversely proportional correlation with the activity break duration (H\({}_{2.2}\)).
### _Context_
We will use projects included in available datasets (e.g., Technical Debt Dataset [17] version 2.0, Pandora [18]) that fulfill our criteria: developed in Java, older than three years, more than 500 commits and 100 classes, and usage of an issue tracking system with at least 100 issues reported. In addition to capturing and depicting reality, we are interested in projects that already use SonarCloud in their development process: running SonarCloud retroactively can lead to inaccurate results, because we would be analyzing problems that the developers were not aware of. Finally, we are interested in projects that can be considered mature. In case the available datasets do not contain the required information, we will consider extending them or creating a new one.
### _Verifiability and Replicability_
To allow verifiability and replicability, we will make all the raw data available in our online appendix, including the different scripts we will use in the paper.
## III Data Collection and Data Analysis
### _Data collection_
To answer our RQs, we will find the projects that fulfill our criteria and we will collect different software metrics. In particular, for this analysis, we aim to extract the proxy metrics described in Section II-A to estimate, for example, the correlation between the code complexity and the developer's cognitive perception of the code complexity as previously done by Arisholm _et al._[19] for the Line-of-Code (LOC) proxy metric or as in the case of Nagappan and Ball [20] regarding code churn. It is worth noticing that some of them could be already included in the selected dataset, while others must be evaluated project-wise.
### _Calculate project behavior towards code quality_
For each project and for each commit, we will compute the _delta_ (\(\Delta\)) of the aforementioned metrics between that commit and the commit immediately before, to establish whether there was an increase (\(\Delta>0\)), a decrease (\(\Delta<0\)), or no variation in the metric values (\(\Delta=0\)) caused by the actions carried out by the developer. The interpretation of the results depends on the specific metrics.
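A minimal sketch of this delta computation, assuming the absolute metric values have already been exported to a table (the file name, the grouping key, and the metric columns are placeholders), could look as follows:

```python
import pandas as pd

# Hypothetical export: one row per (commit, package) with absolute metric values.
metrics = ["readability", "code_smells", "sqale_index"]
df = pd.read_csv("commit_metrics.csv", parse_dates=["timestamp"])

# Order commits chronologically within each package, then diff each metric with
# the commit immediately before to obtain the deltas.
df = df.sort_values(["package", "timestamp"])
deltas = df.groupby("package")[metrics].diff().add_prefix("delta_")

# delta > 0: increase, delta < 0: decrease, delta == 0: no variation.
result = pd.concat([df[["commit", "package", "timestamp"]], deltas], axis=1)
```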
### _Extract activity breaks_
For each developer, we will extract the activity break time (in days) as defined in Section II-A. Days will be grouped along the _last_ commit of the day. This is justified by the assumption that a user committing several times in a day has not forgotten the code between those commits. Thus, activity breaks will always be positive natural numbers.
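As an illustration, the break extraction under these assumptions could be sketched as follows (the input file and column names are placeholders, and the grouping key stands for the project or package considered):

```python
import pandas as pd

# Hypothetical input: one row per measurable activity (commit, pull-request or
# issue event) with the project, developer, and timestamp.
acts = pd.read_csv("activities.csv", parse_dates=["timestamp"])

# Keep the last activity of each developer per project and day.
acts["day"] = acts["timestamp"].dt.normalize()
last = (acts.sort_values("timestamp")
            .groupby(["project", "developer", "day"], as_index=False)
            .last())

# The activity break is the number of days since the previous active day of the
# same developer in the same project.
last["break_days"] = (last.groupby(["project", "developer"])["day"]
                          .diff().dt.days)
```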
### _Iteration One_
_Correlate activity breaks with metric values._ For each developer, we will select the related commits and, for each metric, we will consider the \(\Delta\) computed between that commit and the commit immediately before (which may or may not have been made by the same developer). We will correlate the \(\Delta\) values with the activity break time.
In order to account for a non-linear forgetting rate, as justified by the Ebbinghaus curves presented before, we will compute _piecewise correlations_, based on a piecewise linear regression model [21]. In a piecewise regression model (also known as _segmented regression_) the independent variable is partitioned into a given number \(n\) of intervals, and a regression model is fit into each of the intervals to clarify its relationship to the dependent variable. We will use a linear regression using the least squares method to best fit the data on each of the segments or _bins_. A fundamental step of piecewise regression is the decision on where to separate the different segments, known as a _breakpoint_. The ideal breakpoint would maximize the difference in slopes between the regression models before and after the breakpoint. There are different strategies for finding such a breakpoint. A fast and robust approach is to group data points with a "similar" slope through a clustering method like a decision tree.
We will compute a piecewise regression model to understand the relationship between the activity break duration (independent variable) and the delta for each metric (dependent variable). In order to find the best descriptor, we will test different numbers of bins (from 3 up to a maximum of 10) and different clustering strategies, and will choose the model that presents the smallest error w.r.t. the data. In case there are too many data points with the same activity break (i.e., where the dependent variable is the same), we will group them in a representative set using centroid-based clustering to limit the cases to a pre-defined number of data points which is coherent with the remaining data. The choice of the number of centroids is made to explicitly regularise outliers in the data.
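One possible realisation of this segmentation step, using a shallow decision tree to propose the breakpoints and an ordinary least-squares line per segment, is sketched below; the function and variable names are illustrative and not part of the study protocol:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor


def piecewise_fit(breaks, deltas, n_bins=4):
    """Propose breakpoints with a shallow decision tree, then fit a
    least-squares line y = a*x + b on each resulting segment."""
    x = np.asarray(breaks, dtype=float)
    y = np.asarray(deltas, dtype=float)
    tree = DecisionTreeRegressor(max_leaf_nodes=n_bins).fit(x.reshape(-1, 1), y)
    # Internal (non-leaf) nodes carry the split thresholds, i.e. the breakpoints.
    is_split = tree.tree_.feature != -2
    breakpoints = np.sort(tree.tree_.threshold[is_split])
    edges = np.concatenate(([x.min()], breakpoints, [x.max() + 1.0]))
    models = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (x >= lo) & (x < hi)
        if mask.sum() >= 2:
            a, b = np.polyfit(x[mask], y[mask], deg=1)
            models.append((lo, hi, a, b))
    return breakpoints, models
```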
Yet, we are not interested in the regression models _per se_, but rather as a means to understand the impact of forgetting (longer activity breaks) on the quality of the code. An important piece of information is given by the activity breakpoints, which tell us the activity break lengths where the impact on the variable changes behaviors; in other words, they
can suggest the _critical_ break lengths where the consequences of forgetting the code become more obvious. To provide an adequate measure of the influence of the activity break on the metric value, the regression models will be used to compare the differences between the predictions for change in a 1-day break among the available segments.
Figure 3 depicts a dataset where the activity break time (independent variable) is partitioned into four segments, with a linear regression associated with each segment. On the left, we see the difference in the predicted \(\Delta\) at 1-day break between the model of the first bin (no forgetting observed) and each of the remaining bins. The breakpoints represent the moments where the behavior w.r.t. the break time changes.
After constructing the piecewise regression model and computing the differences in the segment behavior as described above, we will make a statistical analysis to verify whether the differences are statistically significant and whether the significance increases as the bin includes longer activity breaks, as our hypothesis suggests. We will also verify the differences between developers, following an inter-study analysis.
One can think of several confounding factors that may bias the analysis. For instance, the factors shown in Table IV--how much of the existing code was written by the developer, and how much of their previous work was modified by someone else--may greatly affect the quality of each commit, but one should not forget that there is no established method for identifying a pre-specified set of important confounders and in practice, confounding is not fully overcome [22]. To alleviate the effect of these confounders, we will use a regression detection model and an analysis of covariances (ANCOVA) [23, 24].
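A minimal ANCOVA sketch with statsmodels is shown below; the file name and the confounder columns are placeholders standing for the factors of Table IV:

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical table: one row per commit with the metric delta, the bin of the
# activity break, and two confounders (column names are placeholders).
df = pd.read_csv("deltas_with_confounders.csv")

# ANCOVA: does the break-length bin still explain the delta once the
# confounding covariates are accounted for?
model = smf.ols("delta ~ C(break_bin) + authored_ratio + foreign_changes",
                data=df).fit()
print(anova_lm(model, typ=2))
```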
### _Iteration Two_
We now describe the step(s) regarding Iteration 2.
#### III-E1 Classify developers according to their contribution in the project
Given the information about the activity breaks by each developer, as previously done by Calefato et al. [16] we will characterize different developers according to the Truck Factor [8]. Specifically, we will analyze the Truck Factor of each developer, and construct classes depending on their relative behavior: the _super active_, _active_, _average_, _inactive_, and _super inactive_ classes are formed by each of the five quintiles, respectively. That is, super active developers are the 20% with the lowest average Truck Factor value, and so on.
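A sketch of this classification step, assuming a per-developer average Truck Factor has already been computed (file and column names are placeholders), could be:

```python
import pandas as pd

# Hypothetical input: one row per developer with their average Truck Factor.
devs = pd.read_csv("developer_truck_factor.csv")

# Five quintile-based classes; following the text, the 20% of developers with
# the lowest average Truck Factor form the "super active" class, and so on.
labels = ["super active", "active", "average", "inactive", "super inactive"]
devs["profile"] = pd.qcut(devs["avg_truck_factor"], q=5, labels=labels)
```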
#### III-E2 Correlate breaks with metric values for each developer classification
Following a strategy akin to Iteration One (Section III-D), we will find a correlation between the break time and the deltas for each metric value. However, in this case, rather than focusing on the specific break time of the developer, we take as the main feature their Truck Factor [8]. In this case, we will analyze the impact of a developer's commitment in each specific metric value, depending on their relative activity in the project. To achieve this, we will compute the correlation between the average activity break time (independent variable) and the associated delta (dependent variable). The bins in this case are constructed following the developer classification breaks. The remaining analysis is made through the same strategy described in the previous section.
Fig. 3: Activity break influence measurement
## IV Execution Plan
We now explain the execution plan we scheduled according to the study design we defined in the previous sections.
### _Data Collection_
First of all, we will identify the most suitable projects dataset. We are aware that not all data will be available in the dataset, so we aim to apply the following methodology to calculate process metrics and code readability of projects where source code is developed relying on the versioning system Git and hosted on a publicly available hub like GitHub. First, for each project, we clone the online repository that includes code changes performed on all branches during the software development. Second, we parse the cloned repository with PyDriller[25], a lightweight Python framework designed to ease the mining of Git repositories. This framework simplifies the retrieval of the two versions of each file changed in a commit, i.e., the version before and after the committed change. By following this approach, we can calculate the process metrics as described in Table IV, considering the evolution of the changes applied on each file per developer. Finally, we will use the tool developed by Scalabrino et al. [10] to obtain the values of the readability metrics of these two versions of each file.
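A minimal PyDriller (2.x) sketch of this mining step is given below; the repository path is a placeholder, and the metric computation itself is only indicated by a comment:

```python
from pydriller import Repository

# Placeholder path; in the study this would be each cloned project repository.
for commit in Repository("path/to/cloned/project").traverse_commits():
    for mf in commit.modified_files:
        before = mf.source_code_before  # file content before the change (may be None)
        after = mf.source_code          # file content after the change (may be None)
        # Here the process metrics (Table IV) and the readability tool by
        # Scalabrino et al. [10] would be applied to both versions of the file.
        print(commit.hash, commit.author.name, mf.filename)
```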
### _Calculate project readability and code quality_
For each commit in the dataset, we will perform the following steps:
1) query the dataset and group the commits by package;
2) for each group, query the dataset to get the quality information described in Table I and Table II;
3) collect the information described in Table IV and Table III calculated in the previous step in Section IV-A;
4) calculate the difference for each metric to obtain the \(\Delta\) and we will store the commit identifier and the \(\Delta\) in a new database.
### _Extract activity breaks_
We need to extract the elapsed time between each commit made by each developer. We will perform the following steps:
1) query the dataset to select all the commits performed by the same developer in the same package;
2) extract all the activities performed by the same developer in the project;
3) create a single list with all the activities and commits and sort them from oldest to newest;
4) calculate the differences in terms of the number of days between each commit and activity;
5) add the number of elapsed days to the dataset created in phase three of Section IV-B.
### _Iteration 1_
In this iteration we first correlate activity breaks with metric values: we will select a number \(n\) of bins, and then for each developer: 1) compute a piecewise regression model w.r.t. the dependent variable; 2) for each segment \(i\), given by a linear equation \(y=a_{i}x+b_{i}\), predict the behaviour \(\overline{y}_{i}(1)\) at value \(x=1\); and 3) the _impact of forgetting_ at bin \(i\) is the difference between the predicted value for bin \(i\) and for bin 1: \(\overline{y}_{i}(1)-\overline{y}_{1}(1)\). Subsequently, we will study the impact of confounding factors. An ANCOVA test will be used to understand the impact of the confounding factors from Table IV on the quality of the results.
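Continuing the piecewise sketch given earlier, the impact of forgetting per bin could be computed as follows (illustrative only, reusing the hypothetical `models` list of per-segment fits):

```python
def forgetting_impact(models):
    """Difference between each segment's predicted delta at a 1-day break and
    the prediction of the first segment (no forgetting observed)."""
    predicted_at_one_day = [a * 1.0 + b for (_lo, _hi, a, b) in models]
    baseline = predicted_at_one_day[0]
    return [p - baseline for p in predicted_at_one_day]
```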
### _Iteration 2_
This iteration consists of two steps: first, we classify developers according to their behavior in terms of commit frequency. We will compute the Truck Factor [8] for each developer, the average Truck Factor, and the five quintiles. Each developer will be assigned to one of five bins according to the quintile they belong to. Then, second, we correlate breaks with metric values for each developer classification. For the developers in each quintile, we will compute a (classical) linear regression model of the dependent variable (\(\Delta\)) w.r.t. the independent variable (break time). We will analyze these five models for statistical differences.
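A sketch of the per-class models, reusing the profile labels from the classification sketch above (file and column names are placeholders), could be:

```python
import pandas as pd
from scipy.stats import linregress

# Hypothetical merged table: one row per commit with the author's activity
# profile, the activity break in days, and the metric delta.
data = pd.read_csv("deltas_with_profiles.csv")

# One (classical) linear regression of the delta on the break time per class.
for profile, group in data.groupby("profile"):
    fit = linregress(group["break_days"], group["delta"])
    print(profile, round(fit.slope, 4), round(fit.pvalue, 4))
```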
## V Threats to Validity
In this section, we discuss the threats that might affect the validity of our empirical study, following the structure suggested by Runeson and Host [26].
**Construct Validity**. The planned study uses two main constructs: interruptions during participation in a project and degradation of source code quality. Simply observing participation as source code commits might overestimate disruptions and underestimate participation, since developers may also comment on or review others' code. We, therefore, include other activities, such as pull requests. In addition, we are not able to capture other forms of interactions such as reading code. However, writing code requires a much higher level of "recall" than reading code, so we consider this aspect secondary.
To study the degradation of source code quality, we plan to use a set of validated measures from the Technical Debt Dataset and calculate additional readability [10] and source code metrics (Table IV) directly from the source code repository. Despite our efforts to minimize measurement errors, we cannot rule out the possibility of false positives or errors in the measures obtained using these tools. The selected projects did not use SonarQube or any of the metrics we are calculating during the analysis time frame. As in the vast majority of works on Fowler's code smells, the developers did not use the rules adopted in the study. Our results reflect exactly what developers would obtain using SonarQube out of the box in their project, without customizing the rule-set.
**Internal Validity**. We are aware that static analysis tools detect a non-negligible amount of false positives [27]. However, since we aim at replicating the same conditions that are commonly adopted by practitioners when using the same tools, we will not modify or remove any possible false positives, to accurately reflect the results that developers can obtain by running the same tools. We are aware that we cannot claim a direct cause-effect relationship between the commit breaks and the selected software metrics, and that the quality of the code can be influenced by other factors. We are also aware that pieces of code with different purposes (e.g., classes controlling the business logic) can be more complex than others, and consequently harder to remember.
Last, we are aware that some human factors that can play a relevant role cannot be measured (e.g., the developer's age, individual skills, etc.).
**External Validity.** We will select projects stemming from a very large set of application domains, ranging from external libraries, frameworks, and web utilities to large computational infrastructures. The application domain was not an important criterion for the selection of the projects to be analyzed, but in any case, we tried to balance the selection and pick systems from as many contexts as possible. Choosing only one or a very small number of application domains, or projects with similar age or size, could have been an indication of the non-generality of our study, as only prediction models from the selected application domain would have been chosen. Since we are considering only open-source projects, we cannot directly speculate on industrial projects. Moreover, we only considered Java projects due to the limitation of the used tools (SonarQube provides a different set of Sonar issues for each language) and the results of projects developed in different languages might not be directly comparable.
## VI Related Work
Klammer and Gueldenberg [28] performed a systematic literature review about unlearning and forgetting in organizations, distinguishing _unlearning_ (intentional) from _forgetting_ (unintentional) and examining the positive and negative consequences of both.
Averell and Heathcote [29] study the form of forgetting curves with an experiment that measures different variables over 28 days to observe forgetting and conclude that exponential forgetting curves are the best fit for their participants.
A study conducted by Kruger et al. [2] surveyed developers of software projects on file familiarity, suggesting that the forgetting curve of Ebbinghaus[4, 5] is applicable in software development and that repetitions have a strong relationship with the familiarity of source code.
Fritz et al. [6] investigated if a programmer's activity can be used to build a model of what a programmer knows about a code base; through questions to 19 Java developers about files they worked on regularly or recently, they identified a significant correlation between regularly working on a file and familiarity. LaToza and Myers [30] gathered over 300 questions asked by programmers while developing software and categorized them, indicating that developers often ask specific questions about a scenario, highlighting the importance of being familiar with the source code.
Calefato et al. [16] investigated the life-cycle of developers in Open Source projects to delineate a pattern to identify if a developer is abandoning the project or is taking a break. For this purpose, Calefato et al. developed a methodology to identify the break time of developers. In addition, to calculate the risk of degradation of the project, the authors calculated the Truck Factor [8] for each developer.
To the best of our knowledge, there are no studies that delve deeper into the phenomenon of source code forgetting by examining whether a correlation exists between a change in the quality of source code and a developer activity break.
## VII Conclusion
In this registered report, we aim to examine the correlation between a developer's activity break and various code quality attributes. Our goal is to understand 1) if breaks between activities impact code quality positively or negatively and 2) if developer activity profiles (super active, active, average, inactive, and super inactive) have a different impact on code quality.
The results of this work will enable researchers and companies to understand if it is beneficial to let developers work on multiple projects in parallel or if it is better to have them focus mainly on one project continuously. |
2304.00449 | Preservation of AD via forcings | We show that assuming $\mathsf{ZF}+\mathsf{AD}^+ +$ "$V = \mathrm{L}
\bigl(\wp (\mathbb{R})\bigr)$", any poset which increases $\Theta$ does not
preserve the truth of $\mathsf{AD}$. We also show that in $\mathsf{ZF} +
\mathsf{AD}$, any non-trivial poset on $\mathbb{R}$ does not preserve the truth
of $\mathsf{AD}$. This answers the question of Chan and Jackson. Furthermore,
we show that under the assumptions $\mathsf{ZF}+\mathsf{AD}^+ +$ "$V =
\mathrm{L} \bigl(\wp (\mathbb{R})\bigr)$" + "$\Theta \text{ is regular}$",
there is a poset on $\Theta$ which adds a new subset of $\Theta$ while
preserving the truth of $\mathsf{AD}$. This answers the question of Cunningham. | Daisuke Ikegami, Nam Trang | 2023-04-02T05:01:59Z | http://arxiv.org/abs/2304.00449v1 | # Preservation of AD via Forcings
###### Abstract.
We show that assuming \(\mathsf{ZF}+\mathsf{AD}^{+}+``V=\mathrm{L}\big{(}\wp(\mathbb{R})\big{)}"\), any poset which increases \(\Theta\) does not preserve the truth of \(\mathsf{AD}\). We also show that in \(\mathsf{ZF}+\mathsf{AD}\), any non-trivial poset on \(\mathbb{R}\) does not preserve the truth of \(\mathsf{AD}\). This answers the question of Chan and Jackson [2, Question 5.7]. Furthermore, we show that under the assumptions \(\mathsf{ZF}+\mathsf{AD}^{+}+``V=\mathrm{L}\big{(}\wp(\mathbb{R})\big{)}"+``\Theta\) is regular", there is a poset on \(\Theta\) which adds a new subset of \(\Theta\) while preserving the truth of \(\mathsf{AD}\). This answers the question of Cunningham [3, Section 5].
Key words and phrases:Axiom of Determinacy, forcing, Descriptive Set Theory 2020 Mathematics Subject Classification: 03E60,03E40,03E15 The authors would like to thank W. Hugh Woodin for generously sharing his results and insight on the topic of this paper. They are also grateful to William Chan and Steve Jackson for their work in [2] that inspired the work in Section 4 of this paper. The first author would like to thank the Japan Society for the Promotion of Science (JSPS) for its generous support through the grant with JSPS KAKENHI Grant Number 19K03604. He is also grateful to the Sumitomo Foundation for its generous support through Grant for Basic Science Research. The second author is grateful to the National Science Foundation (NSF) for its generous support through CAREER Grant DMS-1945592.
is no non-trivial elementary embedding \(j\colon V\to V\) such that \((V,\in,j)\) is a model of \(\mathsf{ZFC}\).1 Furthermore, Hamkins, Kirmayer, and Perlmutter proved that for any set generic \(G\) over \(V\), there is no non-trivial elementary embedding \(j\colon V\to V[G]\) such that \((V[G],\in,j)\) is a model of \(\mathsf{ZFC}\).
Footnote 1: Here \(V\) is the class of all sets, \(j\) in the structure \((V,\in,j)\) is considered as the interpretation of a binary predicate on the universe, and the structure \((V,\in,j)\) satisfies Comprehension and Replacement for first-order formulas with the binary predicates for \(\in\) and \(j\).
One can then ask questions such as what if the structure \((V,\in,j)\) or \((V[G],\in,j)\) is a model of \(\mathsf{ZF}\) or \(\mathsf{ZF}+\mathsf{AD}\) instead of \(\mathsf{ZFC}\). Using the method of symmetric models, Woodin proved that there are a set generic \(G\) over \(V\) and a non-trivial elementary embedding \(j\colon V\to V[G]\) such that \((V[G],\in,j)\) is a model of \(\mathsf{ZF}+\mathsf{AD}\). However, in his example, \(j\upharpoonright\mathsf{Ord}\) is the identity map, so there is no critical point of \(j\).
As far as we know, it is still open whether there are a set generic \(G\) over \(V\) and an elementary embedding \(j\colon V\to V[G]\) such that \((V[G],\in,j)\) is a model of \(\mathsf{ZF}+\mathsf{AD}\) and \(j\upharpoonright\mathsf{Ord}\) is not the identity map. We are especially interested in the case when the critical point of \(j\) is \(\omega_{1}\) in \(V\) because if the critical point of \(j\) is \(\omega_{1}\) in \(V\), then the forcing to obtain \(V[G]\) must add new reals to \(V\) and \(\mathsf{AD}\) has influence on reals and sets of reals. To obtain such a \(j\), one needs to have a poset \(\mathbb{P}\) to produce such a model \(V[G]\), and the poset \(\mathbb{P}\) must add new reals while preserve the truth of \(\mathsf{AD}\) from \(V\) to \(V[G]\). Hence we have a test question: Is there any poset which adds a new real while preserving the truth of \(\mathsf{AD}\)? This is how we got interested in the relationship between forcing and \(\mathsf{AD}\).
We still do not know if there is any poset which adds a new real while preserving the truth of \(\mathsf{AD}\). Considering this question, we have observed that many forcings adding a new real do not preserve the truth of \(\mathsf{AD}\). A typical example is Cohen forcing. It is well-known that if \(G\) is \(V\)-generic for Cohen forcing, then in \(V[G]\), the set of reals in \(V\) does not have the Baire property. In particular, \(\mathsf{AD}\) must fail in \(V[G]\). On the other hand, there are posets which add a new set while preserving the truth of \(\mathsf{AD}\). By the result of Woodin [6, Section 3], if we assume \(\mathsf{ZF}+\mathsf{AD}+``V=\mathrm{L}(\mathbb{R})"\) and let \(\kappa\) be a sufficiently big cardinal and \(\mathbb{P}\) be the poset for adding a Cohen subset of \(\kappa\) in \(\mathrm{HOD}\), the class of hereditarily ordinal definable sets, then the poset \(\mathbb{P}\) adds a new set while preserving the truth of \(\mathsf{AD}\). Actually, the poset \(\mathbb{P}\) does not add any set of reals to \(V\) in this case.
We have been wondering what kind of forcings preserve the truth of \(\mathsf{AD}\). Our intuition was that such a poset \(\mathbb{P}\) would not be able to change the structure of sets of reals drastically. This intuition was partially justified using the ordinal \(\Theta\), the supremum of ordinals which are surjective images of \(\mathbb{R}\), by the following theorem:
**Theorem 3.1**.: Assume \(\mathsf{ZF}+\mathsf{AD}^{+}+``V=\mathrm{L}\big{(}\wp(\mathbb{R})\big{)}"\). Suppose that a poset \(\mathbb{P}\) increases \(\Theta\), i.e., \(\Theta^{V}<\Theta^{V[G]}\) for any \(\mathbb{P}\)-generic filter \(G\) over \(V\). Then \(\mathsf{AD}\) fails in \(V[G]\) for any \(\mathbb{P}\)-generic filter \(G\) over \(V\).
However, the assumption \(``V=\mathrm{L}\big{(}\wp(\mathbb{R})\big{)}"\) is essential in Theorem 3.1:
**Theorem 3.2**.: It is consistent relative to \(\mathsf{ZF}+\mathsf{AD}_{\mathbb{R}}\) that \(\mathsf{ZF}+\mathsf{AD}\) holds and there is a poset \(\mathbb{P}\) increasing \(\Theta\) while preserving \(\mathsf{AD}\), i.e., for any \(\mathbb{P}\)-generic filter \(G\) over \(V\), we have \(\Theta^{V}<\Theta^{V[G]}\) and that \(\mathsf{AD}\) holds in \(V[G]\).2
Footnote 2: For an expert on determinacy, the assumption \(\mathsf{ZF}+\mathsf{AD}_{\mathbb{R}}\) is an overkill. The proof of Theorem 3.2 shows that the assumption \(\mathsf{ZF}+\mathsf{AD}^{+}+``\Theta>\Theta_{0}"\) is enough.
In particular, there is a poset which adds a new set of reals (but does not add a new real) while preserving the truth of \(\mathsf{AD}\).
After we announced Theorem 3.1 and Theorem 3.2, Chan and Jackson [2] worked on the question what kind of forcings do not preserve the truth of \(\mathsf{AD}\). They proved that assuming \(\mathsf{ZF}+\mathsf{AD}\), if a non-trivial poset \(\mathbb{P}\) is a wellorderable forcing of cardinality less than \(\Theta\), then \(\mathbb{P}\) does not preserve the truth of \(\mathsf{AD}\) ([2, Theorem 3.2]). They also proved that assuming \(\mathsf{ZF}+\mathsf{AD}+``\Theta\) is regular", if a non-trivial poset \(\mathbb{P}\) is a surjective image of \(\mathbb{R}\), then \(\mathbb{P}\) does not preserve the truth of \(\mathsf{AD}\) ([2, Theorem 5.5]). Then they asked whether \(\mathsf{ZF}+\mathsf{AD}\) only (i.e., without assuming the regularity of \(\Theta\)) implies that if a non-trivial poset \(\mathbb{P}\) is a surjective image of \(\mathbb{R}\), then \(\mathbb{P}\) does not preserve the truth of \(\mathsf{AD}\) ([2, Question 5.7]). We give a positive answer to their question:
**Theorem 4.1**.: Assume \(\mathsf{ZF}+\mathsf{AD}\). Let \(\mathbb{P}\) be any non-trivial poset which is a surjective image of \(\mathbb{R}\) and \(G\) be any \(\mathbb{P}\)-generic filter over \(V\). Then \(\mathsf{AD}\) fails in \(V[G]\).
We now turn to positive results on the question of what kind of forcings preserve the truth of \(\mathsf{AD}\). As was mentioned in a previous paragraph, by the result of Woodin [6, Section 3], if we assume \(\mathsf{ZF}+\mathsf{AD}+``V=\mathrm{L}(\mathbb{R})"\) and let \(\kappa\) be a sufficiently big cardinal and \(\mathbb{P}\) be the poset for adding a Cohen subset of \(\kappa\) in \(\mathrm{HOD}\), the class of hereditarily ordinal definable sets, then the poset \(\mathbb{P}\) adds a new set while preserving the truth of \(\mathsf{AD}\). A natural question would be how small one can take \(\kappa\) for this result. Cunningham [3] worked on this question. He proved that \(\kappa\) can be taken as any regular cardinal larger than \(\Theta^{+}\) ([3, Subsection 4.1]). Then he asked whether \(\kappa\) can be taken as \(\Theta\) ([3, Section 5]). We answer his question positively. In fact, we prove a more general theorem as follows:
**Theorem 5.1**.: Assume \(\mathsf{ZF}+\mathsf{AD}^{+}+``V=\mathrm{L}\big{(}\wp(\mathbb{R})\big{)}"\). Suppose that \(\Theta\) is regular. Then there is a poset \(\mathbb{P}\) on \(\Theta\) which adds a subset of \(\Theta\) while preserving \(\mathsf{AD}\), i.e., for any \(\mathbb{P}\)-generic filter \(G\) over \(V\), there is a subset of \(\Theta^{V}\) which belongs to \(V[G]\setminus V\) and \(\mathsf{AD}\) holds in \(V[G]\).3
Footnote 3: The proof of Theorem 5.1 shows that in both Case 1 and Case 2, the poset \(\mathbb{P}\) does not add any new set of reals to \(V\). In particular, the poset \(\mathbb{P}\) preserves the truth of \(\mathsf{AD}^{+}\) as well.
Notice that \(\mathsf{ZF}+\mathsf{AD}+``V=\mathrm{L}(\mathbb{R})"\) implies the assumptions of Theorem 5.1 including the regularity of \(\Theta\). Also, in case of \(``V=\mathrm{L}(\mathbb{R})"\), the poset \(\mathbb{P}\) is the one for adding a Cohen subset of \(\Theta\) in \(\mathrm{HOD}\) as in Case 1 in the proof of Theorem 5.1. Therefore, the arguments for Theorem 5.1 answer the question of Cunningham [3, Section 5].
We also note that Theorem 5.1 is optimal in the following two senses: In one sense, the size of the poset \(\mathbb{P}\) cannot be smaller than \(\Theta\). As was mentioned in a previous paragraph, by the result of Chan and Jackson [2, Theorem 3.2], any wellorderable forcing of cardinality less than \(\Theta\) cannot preserve the truth of \(\mathsf{AD}\). Also, by Theorem 4.1 (or by the result of Chan and Jackson [2, Theorem 5.5] in case \(\Theta\) is regular), any poset on \(\mathbb{R}\) (or a surjective image of \(\mathbb{R}\)) cannot preserve the truth of \(\mathsf{AD}\). In the other sense, unless the poset \(\mathbb{P}\) adds a new real,
the poset \(\mathbb{P}\) cannot add a new bounded subset of \(\Theta\) while preserving the truth of \(\mathsf{AD}\). This is because if \(\mathbb{P}\) does not add any real and both \(V\) and \(V[G]\) are models of \(\mathsf{AD}\), then by the Moschovakis Coding Lemma, \(V\) and \(V[G]\) have the same bounded subsets of \(\Theta\), leading to the situation that the poset \(\mathbb{P}\) cannot add a bounded subset of \(\Theta\).
After looking at Theorem 5.1, it is natural to ask whether the assumption "\(\Theta\) is regular" is essential there. We do not know the answer to this question. However, in case \(\Theta\) is singular, we have \(\mathsf{AD}_{\mathbb{R}}\) under the assumptions of Theorem 5.1. In case \(\mathsf{AD}_{\mathbb{R}}\) holds, which is Case 2 in the proof of Theorem 5.1, the poset \(\mathbb{P}\) in Theorem 5.1 is for adding a Cohen subset of \(\Theta\) in \(\mathrm{HOD}\). We show that this particular poset cannot preserve the truth of \(\mathsf{AD}\) if \(\Theta\) is singular:
**Theorem 5.2**.: Assume \(\mathsf{ZF}+\mathsf{AD}^{+}+\text{``}V=\mathrm{L}\big{(}\wp(\mathbb{R})\big{)}\)". Suppose that \(\Theta\) is singular and let \(\mathbb{P}\) be \(\mathrm{Add}(\Theta,1)\) in \(\mathrm{HOD}\), where \(\mathrm{Add}(\Theta,1)=\{p\mid p\colon\gamma\to 2\text{ for some }\gamma<\Theta\}\). Then \(\mathsf{AD}\) fails in \(V[G]\) for any \(\mathbb{P}\)-generic filter \(G\) over \(V\).
## 2. Basic definitions, theorems, and lemmas
In this section, we introduce basic definitions, theorems, and lemmas we will use in later sections of the paper. We assume that readers are familiar with the basics of forcing and descriptive set theory. For basic definitions not given in this paper, see Jech [4] and Moschovakis [9]. When we say "reals", we mean elements of the Baire space \(\omega^{\omega}\) or of the Cantor space \(2^{\omega}\).
We start with some basic definitions which will be used throughout the paper:
**Definition 2.1**.:
1. The ordinal \(\Theta\) is the supremum of ordinals which are surjective images of \(\mathbb{R}\).
2. A set \(x\) is _OD from sets_\(y_{1},\ldots,y_{n}\) if \(x\) is definable by a first-order formula with an ordinal and \(y_{1},\ldots,y_{n}\) as parameters.
3. Let \(Y\) be a set. We say a set \(x\) is _hereditarily_\(\mathrm{OD}_{Y}\) if any element of the transitive closure of \(\{x\}\) is OD from some elements of \(Y\).
4. For a set \(Y\), we write \(\mathrm{HOD}_{Y}\) for the collection of sets which are hereditarily \(\mathrm{OD}_{Y}\). When \(Y\) is the empty set, we simply write \(\mathrm{HOD}\) for \(\mathrm{HOD}_{Y}\).
**Definition 2.2**.: Let \(A\) and \(B\) be sets of reals (or subsets of the Baire space \(\omega^{\omega}\)). We say \(A\) is _Wadge reducible to \(B\)_ if there is a continuous function \(f\colon\omega^{\omega}\to\omega^{\omega}\) such that \(A=f^{-1}(B)\). When \(A\) is Wadge reducible to \(B\), we write \(A\leq_{\mathrm{W}}B\). The order \(\leq_{\mathrm{W}}\) is called the _Wadge order on sets of reals_.
**Lemma 2.3** (Wadge's Lemma).: Assume \(\mathsf{ZF}+\mathsf{AD}\). Then for any sets \(A,B\) of reals, we have either \(A\leq_{\mathrm{W}}B\) or \(B\leq_{\mathrm{W}}\omega^{\omega}\setminus A\).
Proof.: See e.g., [14, Lemma 2.1].
The following theorems will be used in Section 3.
**Theorem 2.4** (Woodin).: Assume \(\mathsf{ZF}+\mathsf{AD}^{+}+\text{``}V=\mathrm{L}\big{(}\wp(\mathbb{R})\big{)}\)". Then the model \(\mathrm{HOD}\) is of the form \(\mathrm{L}[Z]\) for some subset \(Z\) of \(\Theta\), and there are a poset \(\mathbb{Q}\) in \(\mathrm{HOD}\) and a \(\mathbb{Q}\)-generic filter \(H\) over \(\mathrm{HOD}\) such that \(\mathrm{HOD}\subseteq V\subseteq\mathrm{HOD}[H]\).
Proof.: See e.g., [13, Theorem 3.1.9].
**Theorem 2.5** (Moschovakis).: Assume \(\mathsf{ZF}+\mathsf{AD}\). Then \(\Theta\) is a limit of measurable cardinals.
Proof.: For a proof without assuming \(\mathsf{DC}_{\mathbb{R}}\), one could first prove that \(\Theta\) is a limit of strong partition cardinals under \(\mathsf{ZF}+\mathsf{AD}\) as in [7] and then verify that every strong partition cardinal is measurable under \(\mathsf{ZF}\) as in [5, 28.10 Theorem].
**Theorem 2.6** (Solovay).: Assume \(\mathsf{ZF}+\mathsf{AD}_{\mathbb{R}}\). Then for any set \(A\) of reals, there is a set \(B\) of reals which is not OD from \(A\) and any real.
Proof.: See [10, Lemma 2.2].
The following theorem will be used in Section 4:
**Theorem 2.7** (Chan and Jackson).: Assume \(\mathsf{ZF}+\mathsf{AD}\) and \(\Theta\) is regular. Then for any non-trivial poset \(\mathbb{P}\) on \(\mathbb{R}\) and any \(\mathbb{P}\)-generic filter \(G\) over \(V\), the axiom \(\mathsf{AD}\) fails in \(V[G]\).
Proof.: See [2, Theorem 5.5].
The following theorems will be used in Section 5:
**Theorem 2.8** (Moschovakis).: Assume \(\mathsf{ZF}+\mathsf{AD}\). Then for any non-zero ordinal \(\gamma<\Theta\), there is a set \(A\) of reals such that there is a surjection from \(\mathbb{R}\) to \(\wp(\gamma)\) which is OD from \(A\).
Proof.: For any surjection \(\rho\colon\mathbb{R}\to\gamma\), the arguments in [5, 28.15 Theorem] give us a surjection from \(\mathbb{R}\) to \(\wp(\gamma)\) which is OD from \(\rho\). If \(A\) is a prewellordering on \(\mathbb{R}\) of length \(\gamma\), then the surjection \(\rho\colon\mathbb{R}\to\gamma\) induced from \(A\) is clearly OD from \(A\). Hence there is a surjection from \(\mathbb{R}\) to \(\wp(\gamma)\) which is OD from \(A\), as desired.
**Theorem 2.9** (Woodin).: Assume \(\mathsf{ZF}+\mathsf{AD}^{+}+``V=\mathrm{L}\big{(}\wp(\mathbb{R})\big{)}"\). Suppose also that \(\mathsf{AD}_{\mathbb{R}}\) fails. Then there is a set \(T\) of ordinals such that \(V=\mathrm{L}(T,\mathbb{R})\).
Proof.: By the results of Woodin [7], the axiom \(\mathsf{AD}^{+}\) and the failure of \(\mathsf{AD}_{\mathbb{R}}\) imply that the set of Suslin cardinals is closed below \(\Theta\) while not cofinal in \(\Theta\). Hence there is a largest Suslin cardinal in \(\Theta\). By the result of Woodin [12, Corollary 6], the assumptions \(\mathsf{AD}^{+}\) and \(``V=\mathrm{L}\big{(}\wp(\mathbb{R})\big{)}"\) imply that the ultrapower \(V^{\mathcal{D}}/\mu\) is well-founded where \(\mathcal{D}\) is the set of Turing degrees and \(\mu\) is the Martin measure on \(\mathcal{D}\). Using the result of Woodin [7], it follows that there is a set \(T\) of ordinals such that \(\wp(\mathbb{R})\subseteq\mathrm{L}(T,\mathbb{R})\). Since we assume \(V=\mathrm{L}\big{(}\wp(\mathbb{R})\big{)}\), we have \(V=\mathrm{L}(T,\mathbb{R})\), as desired.
**Theorem 2.10** (Woodin).: Assume \(\mathsf{ZF}+\mathsf{AD}^{+}+``V=\mathrm{L}(T,\mathbb{R})"\) for some set \(T\) of ordinals. Then
1. for some subset \(Z\) of \(\Theta\), we have \(\mathrm{HOD}_{\{T\}}=\mathrm{L}[T,Z]\), and
2. for any real \(x\), we have \(\mathrm{HOD}_{\{T,x\}}=\mathrm{HOD}_{\{T\}}[x]\).
Proof.: For (1), one can argue in the same way as in [1, Corollary 7.21].
For (2), see [7].
We next introduce Vopenka algebras and their variants we will use in this paper:
**Definition 2.11**.: Let \(\gamma\) be a non-zero ordinal and \(T\) be a set of ordinals.
1. Let \(n\) be a natural number with \(n\geq 1\) and \(\mathcal{O}_{n}\) be the collection of all nonempty subsets of \((\gamma^{\omega})^{n}\) which are OD from \(T\). Fix a bijection \(\pi_{n}\colon\eta\to\mathcal{O}_{n}\) which is OD from \(T\), where \(\eta\) is some ordinal. Let \(\mathbb{Q}_{n}\) be the poset on \(\eta\) such that for each \(p,q\) in \(\mathbb{Q}_{n}\), we have \(p\leq q\) if \(\pi_{n}(p)\subseteq\pi_{n}(q)\). We call \(\mathbb{Q}_{n}\) the _Vopenka algebra for adding an element of \((\gamma^{\omega})^{n}\) in \(\mathrm{HOD}_{\{T\}}\)_.
2. For all natural numbers \(\ell\) and \(m\) with \(1\leq\ell\leq m\), let \(i_{\ell,m}\colon\mathbb{Q}_{\ell}\to\mathbb{Q}_{m}\) be the inclusion map induced from \(\pi_{\ell}\) and \(\pi_{m}\), i.e., for all \(p\in\mathbb{Q}_{\ell}\), \(\pi_{m}\big{(}i_{\ell,m}(p)\big{)}=\{x\in(\gamma^{\omega})^{m}\mid x\upharpoonright \ell\in\pi_{\ell}(p)\}\). Then each \(i_{\ell,m}\) is a complete embedding between posets. Let \(\big{(}\mathbb{Q}_{\omega},(i_{n}\colon\mathbb{Q}_{n}\to\mathbb{Q}_{\omega} \mid n<\omega)\big{)}\) be the direct limit of the system \((i_{\ell,m}\colon\mathbb{Q}_{\ell}\to\mathbb{Q}_{m}\mid 1\leq\ell\leq m<\omega)\). We call \(\mathbb{Q}_{\omega}\) the _finite support direct limit of Vopenka algebras for adding an element of \(\gamma^{\omega}\) in \(\operatorname{HOD}_{\{T\}}\)_.
The following lemmas will be useful in Section 5:
**Lemma 2.12**.: Assume \(\mathsf{ZF}+\mathsf{AD}^{+}+``V=\operatorname{L}(T,\mathbb{R})"\) for some set \(T\) of ordinals.
1. Let \(\mathbb{Q}_{1}\) be the Vopenka algebra for adding an element of \(2^{\omega}\) in \(\operatorname{HOD}_{\{T\}}\). Then the poset \(\mathbb{Q}_{1}\) is of size at most \(\Theta\) and \(\mathbb{Q}_{1}\) has the \(\Theta\)-c.c. in \(\operatorname{HOD}_{\{T\}}\).
2. Let \(\mathbb{Q}_{\omega}\) be the finite support limit of the Vopenka algebras for adding an element of \(2^{\omega}\) in \(\operatorname{HOD}_{\{T\}}\). Then \(\mathbb{Q}_{\omega}\) has the \(\Theta\)-c.c. in \(\operatorname{HOD}_{\{T\}}\).
3. (Woodin) There is a \(\mathbb{Q}_{\omega}\)-generic filter \(H\) over \(\operatorname{HOD}_{\{T\}}\) such that \(V=\operatorname{L}(T,\mathbb{R})\subseteq\operatorname{HOD}_{\{T\}}[H]\) and the set \(\mathbb{R}^{V}\) is countable in \(\operatorname{HOD}_{\{T\}}[H]\).
Proof.: For (1), we first show that the poset \(\mathbb{Q}_{1}\) is of size at most \(\Theta\) in \(\operatorname{HOD}_{\{T\}}\). Recall from Definition 2.11 that \(\mathbb{Q}_{1}\) is a poset on some ordinal \(\eta\) and \(\pi_{1}\) is a bijection from \(\eta\) to \(\mathcal{O}_{1}\) which is OD from \(T\), where \(\mathcal{O}_{1}\) is the collection of all subsets of \(2^{\omega}\) which are OD from \(T\). We will argue that the ordinal \(\eta\) is at most \(\Theta\). For each \(\alpha<\Theta\), let \(W_{\alpha}\) be the collection of sets of reals in \(\mathcal{O}_{1}\) of Wadge rank \(\alpha\). Then we have \(\mathcal{O}_{1}=\bigcup_{\alpha<\Theta}W_{\alpha}\) and each \(W_{\alpha}\) is a surjective image of \(\mathbb{R}\). Since the set \(\mathcal{O}_{1}\) is well-ordered, so is each \(W_{\alpha}\) and there is a surjection from \(\Theta\) to \(W_{\alpha}\) which is OD from \(T\). Hence there is a surjection from \(\Theta\times\Theta\) to \(\mathcal{O}_{1}\) which is OD from \(T\), and therefore the set \(\mathbb{Q}_{1}\) is of size at most \(\Theta\) in \(\operatorname{HOD}_{\{T\}}\), as desired.
We next show that the poset \(\mathbb{Q}_{1}\) has the \(\Theta\)-c.c. in \(\operatorname{HOD}_{\{T\}}\). To derive a contradiction, suppose that there is an antichain \((p_{\alpha}\mid\alpha<\Theta)\) in \(\mathbb{Q}_{1}\) in \(\operatorname{HOD}_{\{T\}}\). Then the family \(\{\pi_{1}(p_{\alpha})\mid\alpha<\Theta\}\) is a pairwise disjoint family of nonempty subsets of \(2^{\omega}\), which would easily induce a surjection from \(\mathbb{R}\) to \(\Theta\), contradicting the definition of \(\Theta\). Therefore, the poset \(\mathbb{Q}_{1}\) has the \(\Theta\)-c.c. in \(\operatorname{HOD}_{\{T\}}\), as desired.
For (2), we first note that for all natural numbers \(n\) with \(n\geq 1\), the poset \(\mathbb{Q}_{n}\) has the \(\Theta\)-c.c. in \(\operatorname{HOD}_{\{T\}}\) by the same argument as in (1). Then using the facts that \(\Theta\) is regular in \(V=\operatorname{L}(T,\mathbb{R})\) and that \(\mathbb{Q}_{\omega}\) is the direct limit of \(\mathbb{Q}_{n}\)s, it follows that the poset \(\mathbb{Q}_{\omega}\) has the \(\Theta\)-c.c. in \(\operatorname{HOD}_{\{T\}}\) as well.
For (3), one can argue in the same way as in [11, Lemma 3.4 and Lemma 3.5] by replacing \(\mathcal{M}\) with \(V\), and \(\mathcal{H}\) with \(\operatorname{HOD}_{\{T\}}\).
**Lemma 2.13**.: Assume \(\mathsf{ZFC}\). Let \(\lambda\) be a regular uncountable cardinal, \(\mathbb{P}\) be a \(<\!\!\lambda\)-closed poset, and \(\mathbb{Q}\) be a \(\lambda\)-c.c. poset. Then for any \(\mathbb{P}\)-generic filter \(G\) over \(V\), the poset \(\mathbb{Q}\) still has the \(\lambda\)-c.c. in \(V[G]\). Furthermore, if \(H\) is a \(\mathbb{Q}\)-generic filter over \(V\), then \(H\) is \(\mathbb{Q}\)-generic over \(V[G]\) as well.
Proof.: Let \(G\) be a \(\mathbb{P}\)-generic filter and \(A\) be an antichain in \(\mathbb{Q}\) in \(V[G]\). We will show that \(A\) is of size less than \(\lambda\) in \(V[G]\).
Towards a contradiction, we assume that \(A\) is of size at least \(\lambda\) in \(V[G]\).
Let \(\dot{A}\) be a \(\mathbb{P}\)-name with \(\dot{A}^{G}=A\). Let \(p\) be a condition in \(G\) with \(p\Vdash_{\mathbb{P}}``\dot{A}\) is an antichain in \(\breve{\mathbb{Q}}\) of size at least \(\breve{\lambda}"\). Using the \(<\!\!\lambda\)-closure of \(\mathbb{P}\) in \(V\), one can construct a decreasing sequence \((p_{\alpha}\mid\alpha<\lambda)\) in \(\mathbb{P}\) and a sequence \((a_{\alpha}\mid\alpha<\lambda)\) in \(\mathbb{Q}\) with the following properties:
1. \(p_{0}=p\),
2. for all \(\alpha,\beta<\lambda\) with \(\alpha\neq\beta\), we have \(a_{\alpha}\neq a_{\beta}\), and
3. for all \(\alpha<\lambda\), \(p_{\alpha}\Vdash_{\mathbb{P}}``\breve{a}_{\alpha}\in\dot{A}"\).
Since \(p\Vdash_{\mathbb{P}}``\dot{A}\) is an antichain in \(\breve{\mathbb{Q}}"\), by (1), (2), and (3) above, for all \(\alpha,\beta<\lambda\) with \(\alpha<\beta\), the condition \(p_{\beta}\) forces that \(a_{\alpha}\) and \(a_{\beta}\) are incompatible in \(\mathbb{Q}\). Therefore, the set \(B=\{a_{\alpha}\mid\alpha<\lambda\}\) is an antichain in \(\mathbb{Q}\) of size \(\lambda\) in \(V\). This contradicts the assumption that \(\mathbb{Q}\) has the \(\lambda\)-c.c. in \(V\). Therefore, the antichain \(A\) is of size less than \(\lambda\) in \(V[G]\), as desired.
Let \(H\) be a \(\mathbb{Q}\)-generic filter over \(V\). We will verify that \(H\) is \(\mathbb{Q}\)-generic over \(V[G]\) as well. Let \(A\) be a maximal antichain in \(\mathbb{Q}\) in \(V[G]\). We will see that \(H\cap A\neq\emptyset\). By the arguments in the previous paragraphs, \(A\) is of size less than \(\lambda\) in \(V[G]\). Since \(G\) is \(\mathbb{P}\)-generic over \(V\) and \(\mathbb{P}\) is \(<\!\!\lambda\)-closed in \(V\) while \(\mathbb{Q}\) is in \(V\), there is no subset of \(\mathbb{Q}\) of size less than \(\lambda\) in \(V[G]\setminus V\). Hence the antichain \(A\) is in \(V\) as well. By the genericity of \(H\) over \(V\), we have that \(H\cap A\neq\emptyset\), as desired.
**Lemma 2.14**.: Assume \(\mathsf{ZF}+\mathsf{AD}^{+}+\mathsf{AD}_{\mathbb{R}}\). Then for any set \(C\) of reals, there is an \(s\in\Theta^{\omega}\) such that \(C\) is OD from \(s\) and that \(C\) is in \(\mathrm{HOD}_{\{s\}}(\mathbb{R})\).
Proof.: Let \(C\) be any set of reals. By the result of Woodin [7], under \(\mathsf{ZF}+\mathsf{AD}^{+}+\mathsf{AD}_{\mathbb{R}}\), every set of reals is Suslin. By the result of Martin and Steel [8], every Suslin and co-Suslin set of reals is homogeneously Suslin. In particular, the complement \(2^{\omega}\setminus C\) is homogeneously Suslin witnessed by the sequence \((\mu_{u}\mid u\in 2^{<\omega})\) of measures on \(\kappa^{<\omega}\) for some \(\kappa<\Theta\). By the result of Kunen [5, 28.21 Corollary], each measure \(\mu_{u}\) is OD. Using the Moschovakis Coding Lemma and \(\mathsf{AD}_{\mathbb{R}}\), one can show that each measure \(\mu_{u}\) is definable from an ordinal below \(\Theta\). Hence there is an \(s\in\Theta^{\omega}\) such that the sequence \((\mu_{u}\mid u\in 2^{<\omega})\) is definable from \(s\). Now from the sequence \((\mu_{u}\mid u\in 2^{<\omega})\), one can construct a Martin-Solovay tree \(T\) such that \(C=\mathrm{p}[T]\). By the construction of \(T\), it follows that \(T\) is OD from \((\mu_{u}\mid u\in 2^{<\omega})\). Hence \(T\) is OD from \(s\), which easily implies that the set \(C\) is OD from \(s\) and \(C\) is in \(\mathrm{HOD}_{\{s\}}(\mathbb{R})\), as desired.
**Lemma 2.15**.: Assume \(\mathsf{ZF}+\mathsf{AD}_{\mathbb{R}}\). Let \(\gamma<\Theta\) and \(\mathbb{Q}_{1}\) be the Vopenka algebra for adding an element of \(\gamma^{\omega}\) in \(\mathrm{HOD}\). Also let \(\mathbb{Q}_{\omega}\) be the finite support limit of the Vopenka algebras for adding an element of \(\gamma^{\omega}\) in \(\mathrm{HOD}\).
1. The posets \(\mathbb{Q}_{1}\) and \(\mathbb{Q}_{\omega}\) are of size less than \(\Theta\) in \(\mathrm{HOD}\).
2. Let \(s\in\gamma^{\omega}\) and \(h_{s}=\{p\in\mathbb{Q}_{1}\mid s\in\pi_{1}(p)\}\), where \(\pi_{1}\colon\mathbb{Q}_{1}\to\mathcal{O}_{1}\) is as in Definition 2.11. Then the set \(h_{s}\) is a \(\mathbb{Q}_{1}\)-generic filter over \(\mathrm{HOD}\) such that \(\mathrm{HOD}[h_{s}]=\mathrm{HOD}_{\{s\}}\).
3. (Woodin) There is a \(\mathbb{Q}_{\omega}\)-generic filter \(H\) over \(\mathrm{HOD}\) such that the set \((\gamma^{\omega})^{V}\) is countable in \(\mathrm{HOD}[H]\).
Proof.: For (1), we first show that the poset \(\mathbb{Q}_{1}\) is of size less than \(\Theta\) in \(\mathrm{HOD}\). Recall from Definition 2.11 that \(\pi_{1}\colon\mathbb{Q}_{1}\to\mathcal{O}_{1}\) is a surjection which is OD, where \(\mathcal{O}_{1}\) is the collection of all subsets of \(\gamma^{\omega}\) which are OD. Since \(\gamma<\Theta\), by Theorem 2.8, there is a set \(A\) of reals such that there is a surjection from \(\mathbb{R}\) to \(\wp(\gamma)\) which is OD from \(A\). In particular, there is a surjection \(\sigma\colon\mathbb{R}\to\gamma^{\omega}\) which is OD from \(A\). Hence for each \(b\in\mathcal{O}_{1}\), the set \(\sigma^{-1}(b)\) of reals is OD from \(A\). Since we assume \(\mathsf{AD}_{\mathbb{R}}\), by Theorem 2.6, there is a set \(B\) of reals which is not
OD from \(A\). By Wadge's Lemma under \(\mathsf{ZF}+\mathsf{AD}\), for each \(b\in\mathcal{O}_{1}\), the set \(\sigma^{-1}(b)\) is Wadge reducible to \(B\). In particular, there is a surjection from \(\mathbb{R}\) to the family \(\{\sigma^{-1}(b)\mid b\in\mathcal{O}_{1}\}\). Hence the family \(\mathcal{O}_{1}\) is also a surjective image of \(\mathbb{R}\) and the poset \(\mathbb{Q}_{1}\) is of size less than \(\Theta\) in \(V\). Since \(\Theta\) is a cardinal in \(V\), it follows that the poset \(\mathbb{Q}_{1}\) is of size less than \(\Theta\) in HOD as well.
We next show that the poset \(\mathbb{Q}_{\omega}\) is of size less than \(\Theta\) in HOD. Let \(C=A\oplus B=\{x*y\mid x\in A\text{ and }y\in B\}\), where \(x*y(2\ell)=x(\ell)\) and \(x*y(2\ell+1)=y(\ell)\) for all \(\ell<\omega\). Then the argument in the last paragraph shows that there is a surjection from \(\mathbb{R}\) to \(\mathbb{Q}_{1}\) which is OD from \(C\). Similarly, one can argue that for each natural number \(n\) with \(n\geq 1\), there is a surjection from \(\mathbb{R}\) to \(\mathbb{Q}_{n}\) which is OD from \(C\). Since all such surjections are OD from \(C\), one can pick a sequence \((\rho_{n}\colon\mathbb{R}\to\mathcal{O}_{n}\mid n\geq 1)\) of surjections, which would readily give us a surjection from \(\mathbb{R}\) to \(\mathbb{Q}_{\omega}\). Therefore, the poset \(\mathbb{Q}_{\omega}\) is of size less than \(\Theta\) in \(V\). Since \(\Theta\) is a cardinal in \(V\), it follows that the poset \(\mathbb{Q}_{\omega}\) is of size less than \(\Theta\) in HOD as well.
For (2), for the \(\mathbb{Q}_{1}\)-genericity of \(h_{s}\) over HOD, see e.g., [4, Theorem 15.46].
We will show the equality \(\operatorname{HOD}[h_{s}]=\operatorname{HOD}_{\{s\}}\). The inclusion \(\operatorname{HOD}[h_{s}]\subseteq\operatorname{HOD}_{\{s\}}\) is easy because \(h_{s}\) is OD from \(s\) and \(h_{s}\) is a set of ordinals. We will argue that \(\operatorname{HOD}_{\{s\}}\subseteq\operatorname{HOD}[h_{s}]\). Since \(\operatorname{HOD}_{\{s\}}\) is a model of \(\mathsf{ZFC}\), it is enough to see that every set of ordinals in \(\operatorname{HOD}_{\{s\}}\) is also in \(\operatorname{HOD}[h_{s}]\). Let \(X\) be any set of ordinals in \(\operatorname{HOD}_{\{s\}}\). We will verify that \(X\) is also in \(\operatorname{HOD}[h_{s}]\). Let \(\delta\) be an ordinal such that \(X\subseteq\delta\). Since \(X\) is in \(\operatorname{HOD}_{\{s\}}\), the set \(X\) is OD from \(s\). So there is a formula \(\phi\) such that for all \(\alpha<\delta\), we have that \(\alpha\in X\) if and only if \(\phi[\alpha,s]\) holds. For each \(\alpha<\delta\), let \(b_{\alpha}=\{x\in\gamma^{\omega}\mid\phi[\alpha,x]\}\). Then each set \(b_{\alpha}\) is a subset of \(\gamma^{\omega}\) which is OD. So each \(b_{\alpha}\) is in \(\mathcal{O}_{1}\). Now we have the following equivalences: For all \(\alpha<\delta\),
\[\alpha\in X\iff\phi[\alpha,s]\iff s\in b_{\alpha}\iff\pi_{1}^{-1}(b_{\alpha} )\in h_{s}.\]
Hence \(X=\{\alpha<\delta\mid\pi_{1}^{-1}(b_{\alpha})\in h_{s}\}\). Since the sequence \(\big{(}\pi_{1}^{-1}(b_{\alpha})\in\mathbb{Q}_{1}\mid\alpha<\delta\big{)}\) is OD and \(\mathbb{Q}_{1}\) is in HOD, the sequence \(\big{(}\pi_{1}^{-1}(b_{\alpha})\in\mathbb{Q}_{1}\mid\alpha<\delta\big{)}\) belongs to HOD. Hence the set \(\{\alpha<\delta\mid\pi_{1}^{-1}(b_{\alpha})\in h_{s}\}\) is in \(\operatorname{HOD}[h_{s}]\). Therefore, the set \(X\) is in \(\operatorname{HOD}[h_{s}]\), as desired.
For (3), one can argue in the same way as in [11, Lemma 3.4 and Lemma 3.5] by replacing \(\mathbb{R}\) with \(\gamma^{\omega}\), \(\mathcal{M}\) with \(V\), and \(\mathcal{H}\) with HOD.
## 3. On forcings increasing \(\Theta\)
In this section, we prove the following theorems:
**Theorem 3.1**.: Assume \(\mathsf{ZF}+\mathsf{AD}^{+}+``V=\operatorname{L}\big{(}\wp(\mathbb{R})\big{)}\)". Suppose that a poset \(\mathbb{P}\) increases \(\Theta\), i.e., \(\Theta^{V}<\Theta^{V[G]}\) for any \(\mathbb{P}\)-generic filter \(G\) over \(V\). Then \(\mathsf{AD}\) fails in \(V[G]\) for any \(\mathbb{P}\)-generic filter \(G\) over \(V\).
**Theorem 3.2**.: It is consistent relative to \(\mathsf{ZF}+\mathsf{AD}_{\mathbb{R}}\) that \(\mathsf{ZF}+\mathsf{AD}\) holds and there is a poset \(\mathbb{P}\) increasing \(\Theta\) while preserving \(\mathsf{AD}\), i.e., for any \(\mathbb{P}\)-generic filter \(G\) over \(V\), we have \(\Theta^{V}<\Theta^{V[G]}\) and that \(\mathsf{AD}\) holds in \(V[G]\).
Proof of Theorem 3.1.: Let \(G\) be a \(\mathbb{P}\)-generic filter over \(V\). We will show that \(\mathsf{AD}\) fails in \(V[G]\). Towards a contradiction, we assume that \(\mathsf{AD}\) holds in \(V[G]\).
Since we have \(\mathsf{AD}^{+}\) and \(V=\operatorname{L}\big{(}\wp(\mathbb{R})\big{)}\), by Theorem 2.4, the model HOD is of the form \(\operatorname{L}[Z]\) for some subset \(Z\) of \(\Theta\), and there are a poset \(\mathbb{Q}\) in HOD and a \(\mathbb{Q}\)-generic filter \(H\) over
HOD such that \(\operatorname{HOD}\subseteq V\subseteq\operatorname{HOD}[H]\). In particular, \(Z^{\#}\) does not exist in HOD. Since no poset can add \(Z^{\#}\), it follows that \(Z^{\#}\) does not exist in \(V\) either.
We will argue that \(Z^{\#}\) exists in \(V[G]\), which would contradict the fact that \(Z^{\#}\) does not exist in \(V\). Since \(\mathbb{P}\) increases \(\Theta\), we have \(\Theta^{V}<\Theta^{V[G]}\). By assumption, we have \(\mathsf{AD}\) in \(V[G]\), so by Theorem 2.5, it follows that \(\Theta^{V[G]}\) is a limit of measurable cardinals in \(V[G]\). In particular, there is a measurable cardinal \(\kappa\) in \(V[G]\) such that \(\Theta^{V}<\kappa\). Let \(U\) be a \(<\!\!\kappa\)-complete nonprincipal ultrafilter on \(\kappa\) in \(V[G]\). Then letting \(M=\operatorname{L}[U,Z]\), the cardinal \(\kappa\) is measurable also in \(M\) witnessed by \(U\cap M\). Since \(M\) is a model of \(\mathsf{ZFC}\) and \(Z\) is a bounded subset of \(\kappa\) in \(M\), it follows that \(Z^{\#}\) exists in \(M\). By absoluteness of \(Z^{\#}\), we have \(Z^{\#}\) in \(V[G]\), contradicting the fact that \(Z^{\#}\) does not exist in \(V\).
Therefore, the assumption that \(\mathsf{AD}\) holds in \(V[G]\) was wrong, and \(\mathsf{AD}\) fails in \(V[G]\).
This completes the proof of Theorem 3.1.
Proof of Theorem 3.2.: We assume \(\mathsf{ZF}+\mathsf{AD}_{\mathbb{R}}\) and will show that there is an inner model \(M\) of \(\mathsf{ZF}+\mathsf{AD}\) satisfying that there is a poset \(\mathbb{P}\) increasing \(\Theta\) while preserving \(\mathsf{AD}\).
Let \(M=\operatorname{HOD}_{\mathbb{R}}\), the class of all sets hereditarily ordinal definable from some real. We will show that \(M\) is the desired inner model.
First notice that \(M\) is a model of \(\mathsf{ZF}\). Also since \(M\) contains all the reals and \(V\) satisfies \(\mathsf{AD}\), we have that \(M\) is a model of \(\mathsf{AD}\) as well. Since we have \(\mathsf{AD}_{\mathbb{R}}\) in \(V\), by Theorem 2.6, there is a set \(B\) of reals which is not definable from any ordinal and any real. Hence the set \(B\) is not in \(M\).
We will show that \(M\) satisfies that there is a poset \(\mathbb{P}\) increasing \(\Theta\) while preserving \(\mathsf{AD}\). The idea is to consider a variant of Vopenka algebra in \(M\) adding the set \(B\) to \(M\). Let \(\mathcal{O}=\{b\subseteq\wp(\mathbb{R})\mid b\text{ is nonempty and OD from some real}\}\) ordered by inclusion. Then \(\mathcal{O}\) is a poset which is OD. Let \(\eta\) be a sufficiently large ordinal and let \(\pi\colon\eta\times\mathbb{R}\to\mathcal{O}\) be a surjection which is OD such that if a set \(b\) is in \(\mathcal{O}\) and OD from a real \(x\), then there is some \(\alpha<\eta\) such that \(\pi(\alpha,x)=b\). Let \(\mathbb{P}=\pi^{-1}(\mathcal{O})\) and for \(p_{1},p_{2}\in\mathbb{P}\), we set \(p_{1}\leq p_{2}\) if \(\pi(p_{1})\subseteq\pi(p_{2})\). Then since \(\pi\) is OD, the poset \(\mathbb{P}\) is in \(M\). For an \(r\) in \(\mathbb{P}\), let \(\mathbb{P}\upharpoonright r=\{p\in\mathbb{P}\mid p\leq r\}\).
We will show that there is some \(\mathbb{P}\)-generic filter \(G\) over \(M\) such that \(\Theta^{M}<\Theta^{M[G]}\) and \(M[G]\) is a model of \(\mathsf{AD}\). This is enough to end the arguments for the theorem because then there is some \(r\in\mathbb{P}\) forcing the desired two statements for \(M[G]\) over \(M\), and the poset \(\mathbb{P}\upharpoonright r\) is the desired poset in \(M\).
Let \(H=\{b\in\mathcal{O}\mid B\in b\}\) and \(G=\pi^{-1}(H)\). We will see that \(G\) is the desired filter.
We first verify that \(G\) is \(\mathbb{P}\)-generic over \(M\). Let \(D\) be a dense subset of \(\mathbb{P}\) in \(M\). We will argue that \(G\cap D\neq\emptyset\). Let \(E=\pi[D]\) and \(b_{E}=\bigcup E\). By the definition of \(\mathbb{P}\), the set \(E\) is dense in \(\mathcal{O}\). We claim that \(b_{E}=\wp(\mathbb{R})\). Suppose not. Then since \(D\) is in \(M\) and \(\pi\) is OD, the set \(b_{E}\) is OD from some real. So \(b_{E}\) is in \(\mathcal{O}\). But then \(\wp(\mathbb{R})\setminus b_{E}\) is a nonempty set which is in \(\mathcal{O}\) incompatible with any element of \(E\), contradicting that \(E\) is dense in \(\mathcal{O}\). Hence \(b_{E}=\wp(\mathbb{R})\). Since \(B\) is in \(\wp(\mathbb{R})\), we have that \(B\) is in \(b_{E}\), so there is a \(b^{\prime}\) in \(E\) such that \(B\) is in \(b^{\prime}\). By the definition of \(H\), the condition \(b^{\prime}\) is also in \(H\). Hence \(H\cap E\neq\emptyset\). Since \(G=\pi^{-1}(H)\) and \(E=\pi[D]\), it follows that \(G\cap D\neq\emptyset\), as desired. Therefore, \(G\) is \(\mathbb{P}\)-generic over \(M\).
We next verify that \(M[G]\) is a model of \(\mathsf{AD}\). Since \(B\) is in \(V\), \(H=\{b\in\mathcal{O}\mid B\in b\}\), and \(G=\pi^{-1}(H)\), it follows that \(G\) is in \(V\) and \(M[G]\) is a submodel of \(V\). Since \(M\) contains all the reals, so does \(M[G]\). Finally, since \(V\) is a model of \(\mathsf{AD}\), it follows that \(M[G]\) is also a model of \(\mathsf{AD}\), as desired.
Finally, we verify that \(\Theta^{M}<\Theta^{M[G]}\). Since both \(M\) and \(M[G]\) are models of \(\mathsf{AD}\) containing all the reals, by the Wadge lemma under \(\mathsf{ZF}+\mathsf{AD}\), it is enough to see that there is a set of reals in \(M[G]\setminus M\). Since \(B\) is not in \(M\), it suffices to argue that \(B\) is in \(M[G]\). For each real \(x\), let \(b_{x}=\{A\in\wp(\mathbb{R})\mid x\in A\}\). Then \(b_{x}\) is \(\mathsf{OD}\) from \(x\), so \(b_{x}\) is in \(\mathcal{O}\). By the choice of \(\pi\), for each real \(x\), there is an ordinal \(\alpha\) such that \(\pi(\alpha,x)=b_{x}\). For each real \(x\), let \(\alpha_{x}\) be the least ordinal with \(\pi(\alpha_{x},x)=b_{x}\). Then since \(\pi\) and \(\mathcal{O}\) are \(\mathsf{OD}\), the sequence \((\alpha_{x}\mid x\in\mathbb{R})\) is \(\mathsf{OD}\) and is in \(M=\mathrm{HOD}_{\mathbb{R}}\). From the sequence \((\alpha_{x}\mid x\in\mathbb{R})\) and \(G\), one can compute the set \(B\) as follows: for any real \(x\),
\[x\in B\iff B\in b_{x}\iff b_{x}\in H\iff(\alpha_{x},x)\in G.\]
Therefore, the set \(B\) is in \(M[G]\), as desired.
We have verified that \(G\) is the desired filter, and this completes the proof of Theorem 3.2.
## 4. On forcings on the reals
In this section, we prove the following theorem which answers a question by Chan and Jackson [2, Question 5.7]:
**Theorem 4.1**.: Assume \(\mathsf{ZF}+\mathsf{AD}\). Let \(\mathbb{P}\) be any non-trivial poset which is a surjective image of \(\mathbb{R}\) and \(G\) be any \(\mathbb{P}\)-generic filter over \(V\). Then \(\mathsf{AD}\) fails in \(V[G]\).
Proof of Theorem 4.1.: Let \(\mathbb{P}\) be any non-trivial poset which is a surjective image of \(\mathbb{R}\) and \(G\) be any \(\mathbb{P}\)-generic filter over \(V\). We will show that \(\mathsf{AD}\) fails in \(V[G]\). Since \(\mathbb{P}\) is a surjective image of \(\mathbb{R}\), there is a poset on \(\mathbb{R}\) which is forcing equivalent to \(\mathbb{P}\). Hence we may assume \(\mathbb{P}\) is a poset on \(\mathbb{R}\).
Towards a contradiction, we assume that \(\mathsf{AD}\) holds in \(V[G]\).
**Case 1.** When the set \(\mathbb{R}^{V}\) is uncountable in \(V[G]\).
Here is the key point:
**Claim 1**.: There is a real \(r_{0}\) in \(V[G]\) such that \(\mathbb{R}^{V[G]}\subseteq\mathrm{L}(\mathbb{R}^{V},r_{0})\).
Proof of Claim 1.: Since \(V[G]\) satisfies \(\mathsf{AD}\), the set \(\mathbb{R}^{V}\) has the perfect set property in \(V[G]\). Since \(\mathbb{R}^{V}\) is uncountable in \(V[G]\), the set \(\mathbb{R}^{V}\) contains a perfect set \(C\) in \(V[G]\). Let \(r_{0}\) code a perfect tree \(T\) on \(2=\{0,1\}\) with \([T]=C\) in \(V[G]\).
We will show that \(\mathbb{R}^{V[G]}\subseteq\mathrm{L}(\mathbb{R}^{V},r_{0})\). Let \(x\) be any element of \(2^{\omega}\) in \(V[G]\). We will see that \(x\) is in \(\mathrm{L}(\mathbb{R}^{V},r_{0})\).
We say a node \(t\in T\) is _splitting in \(T\)_ if both \(t^{\frown}\langle 0\rangle\) and \(t^{\frown}\langle 1\rangle\) are in \(T\). Let \(\{t_{s}\in T\mid s\in 2^{<\omega}\}\) be the set of all splitting nodes in \(T\) such that if \(s_{1}\) is a subsequence of \(s_{2}\) in \(2^{<\omega}\), then \(t_{s_{1}}\) is a subsequence of \(t_{s_{2}}\) in \(T\). Let \(y=\bigcup\{t_{x\mid n}\mid n<\omega\}\). Then \(y\) is in \([T]\). Since \([T]=C\subseteq\mathbb{R}^{V}\), the real \(y\) is in \(\mathbb{R}^{V}\). However, for all \(n<\omega\) and \(k\in 2=\{0,1\}\),
\[x(n)=k\iff t_{(x\mid n)^{\frown}\langle k\rangle}\subseteq y.\]
Hence \(x\) can be simply computed from \(y\) and \(T\). So \(x\in\mathrm{L}[y,T]\). Since \(\mathrm{L}[y,T]\subseteq\mathrm{L}[y,r_{0}]\subseteq\mathrm{L}(\mathbb{R}^{V},r_{0})\), the real \(x\) is in \(\mathrm{L}(\mathbb{R}^{V},r_{0})\), as desired.
This completes the proof of Claim 1.
Continuing to argue in Case 1, let \(r_{0}\) be a real in \(V[G]\) such that \(\mathbb{R}^{V[G]}\subseteq\mathrm{L}(\mathbb{R}^{V},r_{0})\) as in Claim 1. Since the poset \(\mathbb{P}\) is on \(\mathbb{R}^{V}\), there is a \(\mathbb{P}\)-name \(\dot{x}\) such that \(\dot{x}^{G}=r_{0}\) and \(\dot{x}\) is coded by some set \(A\) of reals in \(V\). Then setting \(M=\mathrm{L}(\mathbb{R}^{V},\mathbb{P},A)\), we have that \(M\) is an inner model of \(V\) satisfying \(\mathsf{AD}\) and the statement "\(\Theta\) is regular". However, since \(\mathbb{R}^{V[G]}\subseteq\mathrm{L}(\mathbb{R}^{V},r_{0})\subseteq M[G] \subset V[G]\) and we assumed that \(V[G]\) satisfies \(\mathsf{AD}\), the model \(M[G]\) also satisfies \(\mathsf{AD}\), contradicting Theorem 2.7. Therefore, the assumption that \(V[G]\) satisfies \(\mathsf{AD}\) was wrong and \(\mathsf{AD}\) must fail in \(V[G]\), as desired.
This finishes the arguments for Theorem 4.1 in Case 1.
**Case 2**.: When the set \(\mathbb{R}^{V}\) is countable in \(V[G]\).
Since \(\mathbb{R}^{V}\) is countable in \(V[G]\), any ordinal \(\alpha\) below \(\Theta^{V}\) is countable in \(V[G]\) as well. Hence \(\Theta^{V}\leq\omega_{1}^{V[G]}\).
We will show that \(\Theta^{V[G]}\leq(\Theta^{+})^{V}\), which would contradict the assumption that \(\mathsf{AD}\) holds in \(V[G]\), because \(\mathsf{AD}\) in \(V[G]\) would imply that \(\Theta^{V[G]}>\omega_{2}^{V[G]}\geq(\Theta^{+})^{V}\) since \(\Theta^{V}\leq\omega_{1}^{V[G]}\).
To see that \(\Theta^{V[G]}\leq(\Theta^{+})^{V}\), let \(f\colon\mathbb{R}^{V[G]}\to(\Theta^{+})^{V}\) be any function in \(V[G]\). We will show that \(f\) is not surjective. As in the arguments in Case 1, since \(\mathbb{P}\) is on \(\mathbb{R}\), any real in \(V[G]\) can be coded by a set of reals in \(V\). Hence we may assume that \(f\colon\wp(\mathbb{R})^{V}\to(\Theta^{+})^{V}\). Also, since \(\mathbb{P}\) is on \(\mathbb{R}^{V}\), there is a function \(g\colon\wp(\mathbb{R})^{V}\times\mathbb{R}^{V}\to(\Theta^{+})^{V}\) in \(V\) such that \(\mathrm{rng}(f)\subseteq\mathrm{rng}(g)\). Therefore, it is enough to see that \(g\) is not surjective in \(V\).
We now work in \(V\). To see that \(g\) is not surjective, for each \(\alpha<\Theta\), let \(W_{\alpha}=\{B\in\wp(\mathbb{R})\mid|B|_{\mathrm{W}}=\alpha\}\), where \(|B|_{\mathrm{W}}\) is the Wadge ordinal of \(B\). Then each \(W_{\alpha}\) is a surjective image of \(\mathbb{R}\) and so is the set \(R_{\alpha}=\{g(B,x)\mid B\in W_{\alpha},x\in\mathbb{R}\}\). Hence, for every \(\alpha<\Theta\), the order type of \(R_{\alpha}\) is less than \(\Theta\), and \(\mathrm{rng}(g)=\bigcup_{\alpha<\Theta}R_{\alpha}\) is a surjective image of \(\Theta\times\Theta\). Therefore, \(\mathrm{rng}(g)\) is of cardinality at most \(\Theta\) which is smaller than \(\Theta^{+}\). Hence \(g\) is not surjective in \(V\), as desired.
This finishes the arguments for Theorem 4.1 in Case 2.
This completes the proof of Theorem 4.1.
## 5. On forcings adding a subset of \(\Theta\)
In this section, we prove the following theorems:
**Theorem 5.1**.: Assume \(\mathsf{ZF}+\mathsf{AD}^{+}+\) "\(V=\mathrm{L}\big{(}\wp(\mathbb{R})\big{)}\)". Suppose that \(\Theta\) is regular. Then there is a poset \(\mathbb{P}\) on \(\Theta\) which adds a subset of \(\Theta\) while preserving \(\mathsf{AD}\), i.e., for any \(\mathbb{P}\)-generic filter \(G\) over \(V\), there is a subset of \(\Theta^{V}\) which belongs to \(V[G]\setminus V\) and \(\mathsf{AD}\) holds in \(V[G]\).
**Theorem 5.2**.: Assume \(\mathsf{ZF}+\mathsf{AD}^{+}+\) "\(V=\mathrm{L}\big{(}\wp(\mathbb{R})\big{)}\)". Suppose that \(\Theta\) is singular and let \(\mathbb{P}\) be \(\mathrm{Add}(\Theta,1)\) in \(\mathrm{HOD}\), where \(\mathrm{Add}(\Theta,1)=\{p\mid p\colon\gamma\to 2\) for some \(\gamma<\Theta\}\). Then \(\mathsf{AD}\) fails in \(V[G]\) for any \(\mathbb{P}\)-generic filter \(G\) over \(V\).
Proof of Theorem 5.1.: Throughout the proof of the theorem, we write \(\lambda\) for \(\Theta^{V}\).
We prove the theorem by considering the two cases whether \(\mathsf{AD}_{\mathbb{R}}\) holds or not.
**Case 1**.: When \(\mathsf{AD}_{\mathbb{R}}\) fails.
Since \(\mathsf{AD}_{\mathbb{R}}\) fails while we assume \(\mathsf{AD}^{+}\) and \(V=\mathrm{L}\big{(}\wp(\mathbb{R})\big{)}\), by Theorem 2.9, there is a set \(T\) of ordinals such that \(V=\mathrm{L}(T,\mathbb{R})\). We fix such a \(T\) throughout the arguments for Case 1.
Let \(\mathbb{P}\) be \(\operatorname{Add}(\lambda,1)\) in \(\operatorname{HOD}_{\{T\}}\), where \(\operatorname{Add}(\lambda,1)=\{p\mid p\colon\gamma\to 2=\{0,1\}\text{ for some }\gamma<\lambda\}\). Since \(\mathbb{P}\) is computed in \(\operatorname{HOD}_{\{T\}}\) and \(\lambda=\Theta^{V}\) is inaccessible in \(\operatorname{HOD}_{\{T\}}\), the poset \(\mathbb{P}\) can be considered as a poset on \(\lambda\).
We will show that \(\mathbb{P}\) is the desired poset in Case 1, i.e., \(\mathbb{P}\) adds a subset of \(\lambda=\Theta^{V}\) while preserving \(\operatorname{\mathsf{AD}}\) in Case 1.
Let \(G\) be any \(\mathbb{P}\)-generic filter over \(V\). Then the function \(\bigcup G\colon\lambda\to 2\) can be considered as a subset of \(\lambda\) and by the genericity of \(G\) over \(V\), the subset is not in \(V\). Hence the poset \(\mathbb{P}\) adds a new subset of \(\lambda\) to \(V\).
We will show that \(\operatorname{\mathsf{AD}}\) holds in \(V[G]\). We start with showing that the poset \(\mathbb{P}\) does not add any bounded subset of \(\lambda\):
**Claim 1**.: For any \(\gamma<\lambda\), we have \(\wp(\gamma)^{V}=\wp(\gamma)^{V[G]}\). In particular, \(\mathbb{R}^{V}=\mathbb{R}^{V[G]}\).
Proof of Claim 1.: Let \(\gamma\) be an ordinal less than \(\lambda\) and \(f\colon\gamma\to 2\) in \(V[G]\). We will show that \(f\) is in \(V\).
Let \(\dot{f}\) be a \(\mathbb{P}\)-name with \(\dot{f}^{G}=f\). Since \(\mathbb{P}\) can be seen as a poset on \(\lambda\) and \(f\colon\gamma\to 2\), we may assume that \(\dot{f}\) is a subset of \(\lambda\times\gamma\times 2\). Since \(V=\operatorname{L}(T,\mathbb{R})\) and \(\dot{f}\) is in \(V\), we have that \(\dot{f}\) is \(\operatorname{OD}_{\{T,x\}}\) for some real \(x\). Then since \(\dot{f}\) is essentially a set of ordinals, \(\dot{f}\) is in \(\operatorname{HOD}_{\{T,x\}}\). By Theorem 2.10, we have that \(\operatorname{HOD}_{\{T,x\}}=\operatorname{HOD}_{\{T\}}[x]\) and \(\operatorname{HOD}_{\{T\}}=\operatorname{L}[T,Z]\) for some subset \(Z\) of \(\lambda\). Since \(f=\dot{f}^{G}\), it follows that \(f\) is in \(\operatorname{HOD}_{\{T,x\}}[G]=\operatorname{HOD}_{\{T\}}[x][G]\).
Let \(\mathbb{Q}_{1}\) be the Vopenka algebra for adding an element of \(2^{\omega}\) in \(\operatorname{HOD}_{\{T\}}\). Then the real \(x\) induces a \(\mathbb{Q}_{1}\)-generic filter \(h_{x}\) over \(\operatorname{HOD}_{\{T\}}\) such that \(x\in\operatorname{HOD}_{\{T\}}[h_{x}]\). Since \(G\) was chosen to be \(\mathbb{P}\)-generic over \(V\), it is also \(\mathbb{P}\)-generic over \(\operatorname{HOD}_{\{T\}}[h_{x}]\). Hence the filter \(G\times h_{x}\) is \(\mathbb{P}\times\mathbb{Q}_{1}\)-generic over \(\operatorname{HOD}_{\{T\}}\) and \(\operatorname{HOD}_{\{T\}}[x][G]\subseteq HOD_{\{T\}}[h_{x}][G]=\operatorname{HOD }_{\{T\}}[G][h_{x}]\).
Since \(\dot{f}\) is in \(\operatorname{HOD}_{\{T\}}[x]\subseteq\operatorname{HOD}_{\{T\}}[h_{x}]\), there is a \(\mathbb{Q}_{1}\)-name \(\tau\) in \(\operatorname{HOD}_{\{T\}}\) such that \(\tau^{h_{x}}=\dot{f}\). Let \(\nu\) be a sufficiently big cardinal in \(\operatorname{HOD}_{\{T\}}[G]\) and let \(N\) be \(V_{\nu}\) in \(\operatorname{HOD}_{\{T\}}[G]\). By the \(<\!\!\lambda\)-closure of \(\mathbb{P}\) in \(\operatorname{HOD}_{\{T\}}\), the ordinal \(\lambda\) is regular in \(\operatorname{HOD}_{\{T\}}[G]\). Since \(\operatorname{HOD}_{\{T\}}[G]\) is a model of \(\operatorname{\mathsf{ZFC}}\), there is an elementary substructure \(X\) of \(N\) in \(\operatorname{HOD}_{\{T\}}[G]\) such that \(\gamma+1\subseteq X\), \(X\cap\lambda\in\lambda\), \(X\) is of size less than \(\lambda\), and \(T,Z,G,\mathbb{P},\mathbb{Q}_{1},\tau\in X\). Let \(M\) be the transitive collapse of \(X\) and let \(\pi\colon M\to X\) be the inverse of the collapsing map. Then letting \(\kappa=X\cap\lambda\), we have that \(\kappa\) is the critical point of \(\pi\) and \(\pi(\kappa)=\lambda\). For any \(a\in X\), we write \(\bar{a}\) for \(\pi^{-1}(a)\), i.e., \(\pi(\bar{a})=a\).
We claim that \(M\) is in \(\operatorname{HOD}_{\{T\}}\). Let \(g=\bigcup G\). Then by the genericity of \(G\), we have \(g\colon\lambda\to 2\). Since \(G\) is simply definable from \(g\), we have \(\operatorname{HOD}_{\{T\}}[G]=\operatorname{HOD}_{\{T\}}[g]\). Recall that \(\operatorname{HOD}_{\{T\}}=\operatorname{L}[T,Z]\), so \(\operatorname{HOD}_{\{T\}}[G]=\operatorname{L}[T,Z][G]=\operatorname{L}[T,Z][g]\). Hence the model \(M\) is of the form \(\operatorname{L}_{\mu}[\bar{T},\bar{Z}][\bar{g}]\) for some \(\mu\). Since \(Z\) is a subset of \(\lambda\), we have \(\bar{Z}=Z\cap\kappa\) and hence \(\bar{Z}\in\operatorname{HOD}_{\{T\}}\). Since \(\bar{T}\) is a set of ordinals of size less than \(\lambda\) in \(\operatorname{HOD}_{\{T\}}[G]\), by the \(<\!\!\lambda\)-closure of \(\mathbb{P}\) in \(\operatorname{HOD}_{\{T\}}\), the set \(\bar{T}\) is in \(\operatorname{HOD}_{\{T\}}\). Since \(g\colon\lambda\to 2\), we have \(\bar{g}=g\upharpoonright\kappa\), which is in \(\mathbb{P}\). So \(\bar{g}\) is in \(\operatorname{HOD}_{\{T\}}\). Since \(M=\operatorname{L}_{\mu}[\bar{T},\bar{Z},\bar{g}]\), the model \(M\) is in \(\operatorname{HOD}_{\{T\}}\), as desired.
Let \(\bar{h}_{x}=\{\bar{q}\mid q\in h_{x}\cap X\}\). We claim that \(\bar{h}_{x}\) is \(\bar{\mathbb{Q}}_{1}\)-generic over \(M\). Recall that \(\mathbb{Q}_{1}\) is the Vopenka algebra for adding an element of \(2^{\omega}\) in \(\operatorname{HOD}_{\{T\}}\). By Lemma 2.12, we may assume that \(\mathbb{Q}_{1}\) is on \(\Theta^{V}=\lambda\) and \(\mathbb{Q}_{1}\) has the \(\lambda\)-c.c. in \(\operatorname{HOD}_{\{T\}}\). Let \(A\) be a maximal antichain in \(\bar{\mathbb{Q}}_{1}\) such that \(A\) is in \(M\). We will verify that \(A\cap\bar{h}_{x}\neq\emptyset\). Since \(\mathbb{P}\) is \(<\!\!\lambda\)-closed and \(\mathbb{Q}_{1}\) has the \(\lambda\)-c.c. in \(\operatorname{HOD}_{\{T\}}\), by Lemma 2.13, the poset \(\mathbb{Q}_{1}\) still has the \(\lambda\)-c.c. in \(\operatorname{HOD}_{\{T\}}[G]\). By elementarity of \(\pi\), the poset \(\bar{\mathbb{Q}}_{1}\) has the \(\kappa\)-c.c. in \(M\). In particular, the antichain \(A\) is of size
less than \(\kappa\) in \(M\). Since \(\mathbb{Q}_{1}\) is on \(\lambda\), the poset \(\bar{\mathbb{Q}}_{1}\) is on \(\kappa\) in \(M\). So the antichain \(A\) is a bounded subset of \(\kappa\). Since \(\kappa\) is the critical point of \(\pi\), we have that \(\pi(A)=A\).
By elementarity of \(\pi\), the antichain \(\pi(A)=A\) is maximal in \(\mathbb{Q}_{1}\) in \(\mathrm{HOD}_{\{T\}}[G]\). Since \(M\) is in \(\mathrm{HOD}_{\{T\}}\) and \(A\) is in \(M\), the antichain \(A\) is maximal in \(\mathbb{Q}_{1}\) in \(\mathrm{HOD}_{\{T\}}\) as well. By the genericity of \(h_{x}\) over \(\mathrm{HOD}_{\{T\}}\), the set \(A\cap h_{x}\) is nonempty. Let \(q\) be an element of \(A\cap h_{x}\). Since \(A\) is in \(M\) and \(M\) is transitive, the condition \(q\) is in \(M\). Since \(\mathbb{Q}_{1}\) is on \(\lambda\), \(\pi(\kappa)=\lambda\), and \(q\in h_{x}\cap M\), we have that \(\pi(q)=q\) and hence \(q\in\bar{h}_{x}\). Therefore, \(q\in A\cap\bar{h}_{x}\) and the set \(A\cap\bar{h}_{x}\) is nonempty, as desired.
Since the poset \(\bar{\mathbb{Q}}_{1}\) has the \(\kappa\)-c.c. in \(M\), by a standard argument, one can lift the embedding \(\pi\colon M\to N\) to an elementary embedding \(\hat{\pi}\colon M[\bar{h}_{x}]\to N[h_{x}]\) such that \(\hat{\pi}(\bar{h}_{x})=h_{x}\).
We now argue that the function \(f\) is in \(V\). It is enough to verify that \(f\) is in \(M[\bar{h}_{x}]\) because \(M\) is in \(\mathrm{HOD}_{\{T\}}\), \(\mathrm{HOD}_{\{T\}}\subseteq V\), and \(\bar{h}_{x}=\{\bar{q}\mid q\in h_{x}\cap X\}=\{q\mid q\in h_{x}\cap X\}=\bar{ \mathbb{Q}}_{1}\cap h_{x}\). Recall that \(\tau\) is a \(\mathbb{Q}_{1}\)-name in \(\mathrm{HOD}_{\{T\}}\) such that \(\tau^{h_{x}}=\dot{f}\) and that \(\dot{f}\) is a \(\mathbb{P}\)-name in \(\mathrm{HOD}_{\{T\}}[h_{x}]\) such that \(\dot{f}^{G}=f\). Since \(\tau\) is in \(X\), letting \(\dot{g}=\bar{\tau}^{\bar{h}_{x}}\) and \(g=\dot{g}^{\bar{G}}\), we have that \(\hat{\pi}(g)=f\). By elementarity of \(\hat{\pi}\), the set \(g\) is a function from \(\pi^{-1}(\gamma)\) to \(2\). We now verify that \(f=g\), which would imply that \(f\) is in \(M[\bar{h}_{x}]\) because \(g\) is in \(M[\bar{h}_{x}]\). Since \(\gamma+1\subseteq X\), we have that \(\pi\restriction(\gamma+1)=\mathrm{id}\). Hence \(\pi^{-1}(\gamma)=\gamma\) and \(g\colon\gamma\to 2\). Also, since \(\hat{\pi}(g)=f\), by elementarity of \(\hat{\pi}\), for any \(\alpha<\gamma\) and \(i<2\), we have that \(g(\alpha)=i\) if and only if \(f(\alpha)=i\). Therefore, \(f=g\), as desired.
This completes the proof of Claim 1.
By Claim 1, we know that \(\mathbb{R}^{V}=\mathbb{R}^{V[G]}\). So we simply write \(\mathbb{R}\) for \(\mathbb{R}^{V}\) or \(\mathbb{R}^{V[G]}\). Recall that we write \(\lambda\) for \(\Theta^{V}\).
We now show that \(\mathbb{P}\) does not add any set of reals to \(V\):
**Claim 2**.: The equality \(\wp(\mathbb{R})^{V}=\wp(\mathbb{R})^{V[G]}\) holds.
Proof of Claim 2.: Let \(A\) be any set of reals in \(V[G]\). We will show that \(A\) is in \(V\) as well.
We first claim that \(\lambda\) is regular in \(V[G]\) and \(\lambda=\Theta^{V[G]}\). Let \(\mathbb{Q}_{\omega}\) be the finite support direct limit of Vopenka algebras for adding an element of \(2^{\omega}\) in \(\mathrm{HOD}_{\{T\}}\). Then by Lemma 2.12, the poset \(\mathbb{Q}_{\omega}\) has the \(\lambda\)-c.c. in \(\mathrm{HOD}_{\{T\}}\) and there is a \(\mathbb{Q}_{\omega}\)-generic filter \(H\) over \(\mathrm{HOD}_{\{T\}}\) such that \(V=\mathrm{L}(T,\mathbb{R})\subseteq\mathrm{HOD}_{\{T\}}[H]\) and the set \(\mathbb{R}\) is countable in \(\mathrm{HOD}_{\{T\}}[H]\). Since \(\mathbb{P}\) is \(<\!\!\lambda\)-closed in \(\mathrm{HOD}_{\{T\}}\) and \(\mathbb{Q}_{\omega}\) has the \(\lambda\)-c.c. in \(\mathrm{HOD}_{\{T\}}\), by Lemma 2.13, the poset \(\mathbb{Q}_{\omega}\) still has the \(\lambda\)-c.c. in \(\mathrm{HOD}_{\{T\}}[G]\) and the filter \(H\) is \(\mathbb{Q}_{\omega}\)-generic over \(\mathrm{HOD}_{\{T\}}[G]\) as well. Hence \(\lambda\) is still regular uncountable in \(\mathrm{HOD}_{\{T\}}[G][H]\), the filter \(G\times H\) is \(\mathbb{P}\times\mathbb{Q}_{\omega}\)-generic over \(\mathrm{HOD}_{\{T\}}\), and \(\mathrm{HOD}_{\{T\}}[G][H]=\mathrm{HOD}_{\{T\}}[H][G]\). Therefore, \(\lambda\) is still regular uncountable in \(\mathrm{HOD}_{\{T\}}[H][G]\). Since \(V[G]\subseteq\mathrm{HOD}_{\{T\}}[H][G]\), the ordinal \(\lambda\) is regular in \(V[G]\) as well. Also, since \(\mathbb{R}\) is countable in \(\mathrm{HOD}_{\{T\}}[H]\) and \(\mathbb{R}^{V}=\mathbb{R}^{V[G]}\) while \(V[G]\subseteq\mathrm{HOD}_{\{T\}}[H][G]\), the ordinal \(\Theta^{V[G]}\) is at most \(\omega_{1}\) in \(\mathrm{HOD}_{\{T\}}[H][G]\). Since \(\lambda\) is regular uncountable in \(\mathrm{HOD}_{\{T\}}[H][G]\), we have that \(\Theta^{V[G]}\leq\lambda\). Since \(V\subseteq V[G]\) and \(\lambda=\Theta^{V}\), the inequality \(\lambda\leq\Theta^{V[G]}\) also holds. Hence \(\lambda=\Theta^{V[G]}\), as desired.
Let \(\nu\) be a sufficiently large cardinal in \(V[G]\) and let \(N\) be \(V_{\nu}\) in \(V[G]\). Since \(V=\mathrm{L}(T,\mathbb{R})\), the model \(N\) is of the form \(\mathrm{L}_{\nu}(T,\mathbb{R})[G]\). Since every element of \(N\) is definable from \(T,G\), an ordinal, and some real while \(\lambda\) is regular in \(V[G]\) and \(\lambda=\Theta^{V[G]}\), one can find an elementary substructure \(X\) of \(N\) in \(V[G]\) such that \(\mathbb{R}\subseteq X\), \(\lambda\cap X\in\lambda\), the structure \(X\) is a surjective image of \(\mathbb{R}\), and \(T,\mathbb{P},G,A\in X\). Let \(M\) be the transitive collapse of \(X\) and let \(\pi\colon M\to X\)
be the inverse of the collapsing map. Then letting \(\kappa=\lambda\cap X\), the critical point of \(\pi\) is \(\kappa\) and \(\pi(\kappa)=\lambda\). For any \(a\) in \(X\), we write \(\bar{a}\) for \(\pi^{-1}(a)\), i.e., \(\pi(\bar{a})=a\).
We will finish arguing that the set \(A\) is in \(V\). Since \(\mathbb{R}\) is contained in \(M\) and \(\pi(\bar{A})=A\), we have \(\bar{A}=A\) and the set \(A\) is in \(M\). Hence it is enough to verify that the model \(M\) is in \(V\). Recall that \(g=\bigcup G\) and \(g\colon\lambda\to 2\). Since \(G\) is simply definable from \(g\), we have that \(N=\operatorname{L}_{\nu}(T,\mathbb{R})[G]=\operatorname{L}_{\nu}(T,\mathbb{R })[g]\). Since \(N\) is of the form \(\operatorname{L}_{\nu}(T,\mathbb{R})[g]\), \(X\) is a surjective image of \(\mathbb{R}\) in \(V[G]\), and \(\lambda=\Theta^{V[G]}\), it follows that \(M\) is of the form \(\operatorname{L}_{\mu}(\bar{T},\mathbb{R})[\bar{g}]\) for some ordinal \(\mu<\lambda\). Since \(\mu<\lambda\), the set \(\bar{T}\) is a bounded subset of \(\lambda\) in \(V[G]\). By Claim 1, the set \(\bar{T}\) is in \(V\) as well. Since \(g\colon\lambda\to 2\) and \(\pi(\kappa)=\lambda\), we have that \(\bar{g}=g\upharpoonright\kappa\) and \(\bar{g}\) is in \(\mathbb{P}\). So \(\bar{g}\) is in \(V\) as well. Since \(M=\operatorname{L}_{\mu}(\bar{T},\mathbb{R})[\bar{g}]\), the model \(M\) is in \(V\), and the set \(A\) is in \(V\), as desired.
This completes the proof of Claim 2.
By Claim 1 and Claim 2, we have that \(\mathbb{R}^{V}=\mathbb{R}^{V[G]}\) and \(\wp(\mathbb{R})^{V}=\wp(\mathbb{R})^{V[G]}\). Since we assume \(\mathsf{AD}\) in \(V\), the axiom \(\mathsf{AD}\) holds in \(V[G]\) as well.
This finishes the arguments for Theorem 5.1 in Case 1 when \(\mathsf{AD}_{\mathbb{R}}\) fails.
**Case 2**.: When \(\mathsf{AD}_{\mathbb{R}}\) holds.
Recall that we write \(\lambda\) for \(\Theta^{V}\). Let \(\mathbb{P}\) be \(\operatorname{Add}(\lambda,1)\) in \(\operatorname{HOD}\), where \(\operatorname{Add}(\lambda,1)=\{p\mid p\colon\gamma\to 2=\{0,1\}\) for some \(\gamma<\lambda\}\). Since \(\mathbb{P}\) is computed in \(\operatorname{HOD}\) and \(\lambda=\Theta^{V}\) is inaccessible in \(\operatorname{HOD}\), the set \(\mathbb{P}\) can be considered as a poset on \(\lambda\). Let \(G\) be a \(\mathbb{P}\)-generic filter over \(V\). We will show that \(\mathsf{AD}\) holds in \(V[G]\).
**Claim 3**.: The forcing \(\mathbb{P}\) does not add any new set of reals, i.e., \(\wp(\mathbb{R})^{V}=\wp(\mathbb{R})^{V[G]}\).
Proof of Claim 3.: We will show that for any \(f\colon\mathbb{R}^{V}\to 2\) in \(V[G]\), the function \(f\) is also in \(V\). Since any real in \(V[G]\) can be simply coded as a subset of \(\mathbb{R}^{V}\) in \(V[G]\), this will show that \(\mathbb{R}^{V}=\mathbb{R}^{V[G]}\) and \(\wp(\mathbb{R})^{V}=\wp(\mathbb{R})^{V[G]}\) as well.
From now on, we write \(\mathbb{R}\) for \(\mathbb{R}^{V}\).
**Subclaim 1**.: For some sequence \(s\in\lambda^{\omega}\), the function \(f\) is in \(\operatorname{HOD}_{\{s\}}(\mathbb{R})[G]\).
Proof of Subclaim 1.: Let \(\dot{f}\) be a \(\mathbb{P}\)-name with \(\dot{f}^{G}=f\). Since \(\mathbb{P}\) can be considered as a poset on \(\lambda\), we may assume that \(\dot{f}\) can be considered as a subset of \(\lambda\times\mathbb{R}\times 2\). To make it simpler, we regard \(\dot{f}\) as a subset of \(\lambda\times\mathbb{R}\).
Since \(V=\operatorname{L}\bigl{(}\wp(\mathbb{R})\bigr{)}\), there is a set \(A\) of reals such that \(\dot{f}\) is \(\operatorname{OD}\) from \(A\). For each \(\alpha<\lambda\), let \(X_{\alpha}=\{x\in\mathbb{R}\mid(\alpha,x)\in\dot{f}\}\) and let \(\xi_{\alpha}\) be the least ordinal \(\xi<\lambda\) such that \(X_{\xi}=X_{\alpha}\). For \(\alpha,\beta<\lambda\), we write \(\alpha\preceq\beta\) if \(\xi_{\alpha}\leq\xi_{\beta}\). Then the structure \((\lambda,\preceq)\) is a prewellordering. Let \(\pi\colon(\lambda,\preceq)\to(\gamma,\leq)\) be the Mostowski collapsing map. For each \(\delta<\gamma\), let \(\eta_{\delta}=\min\pi^{-1}(\delta)\) and \(Y_{\delta}=X_{\eta_{\delta}}\). Set \(Y=(Y_{\delta}\mid\delta<\gamma)\).
Since \(\dot{f}\) is \(\operatorname{OD}\) from \(A\), so is \(\pi\). Also \(\pi\) is essentially a set of ordinals, so \(\pi\) is in \(\operatorname{HOD}_{\{A\}}\).
We next verify that there is a set \(B\) of reals such that \(Y\) is in \(\operatorname{L}(B,\mathbb{R})\). Since we have \(\mathsf{AD}_{\mathbb{R}}\) in Case 2, by Theorem 2.6, there is a set \(B_{0}\) of reals which is not \(\operatorname{OD}\) from \(A\) and any real. Since \(\dot{f}\) is \(\operatorname{OD}\) from \(A\), so is \(Y\). So each set \(Y_{\delta}\) of reals is \(\operatorname{OD}\) from \(A\). By the Wadge Lemma under \(\mathsf{ZF}+\mathsf{AD}\), each \(Y_{\delta}\) is \(\operatorname{Wadge}\) reducible to \(B_{0}\). In particular, there is a surjection from \(\mathbb{R}\) to \(\{Y_{\delta}\mid\delta<\gamma\}\) in \(\operatorname{L}(B_{0},\mathbb{R})\). Since the sequence \(Y=(Y_{\delta}\mid\delta<\gamma)\) is injective, there is a surjection \(\rho\colon\mathbb{R}\to\gamma\) in \(\operatorname{L}(B_{0},\mathbb{R})\) as well. Let \(B_{1}=\{x*y\mid x\in Y_{\rho(y)}\}\), where
\(x*y\,(2n)=x(n)\) and \(x*y\,(2n+1)=y(n)\) for all \(n\in\omega\). Then \(Y\) is in \(\mathrm{L}(\rho,B_{1},\mathbb{R})\). So letting \(B=B_{0}\oplus B_{1}=\{x*y\mid x\in B_{0}\text{ and }y\in B_{1}\}\), we have that \(Y\) is in \(\mathrm{L}(B,\mathbb{R})\), as desired.
We now argue that for some sequence \(s\in\lambda^{\omega}\), the \(\mathbb{P}\)-name \(\dot{f}\) is in \(\mathrm{HOD}_{\{s\}}(\mathbb{R})\). Since \(\pi\) is in \(\mathrm{HOD}_{\{A\}}\) and \(Y\) is in \(\mathrm{L}(B,\mathbb{R})\), letting \(C=A\oplus B\), we have that \(\pi\) is in \(\mathrm{HOD}_{\{C\}}\) and \(Y\) is in \(\mathrm{L}(C,\mathbb{R})\). Since we have \(\mathsf{AD}_{\mathbb{R}}\) in Case 2, by Lemma 2.14, there is an \(s\in(\Theta^{V})^{\omega}=\lambda^{\omega}\) such that \(C\) is OD from \(s\) and that \(C\) is in \(\mathrm{HOD}_{\{s\}}(\mathbb{R})\). Hence both \(\pi\) and \(Y\) are in \(\mathrm{HOD}_{\{s\}}(\mathbb{R})\). Since \(\dot{f}\) is simply definable from \(\pi\) and \(Y\), we have that \(\dot{f}\) is in \(\mathrm{HOD}_{\{s\}}(\mathbb{R})\), as desired.
Since \(f=\dot{f}^{G}\) and \(\dot{f}\) is in \(\mathrm{HOD}_{\{s\}}(\mathbb{R})\), we have that \(f\) is in \(\mathrm{HOD}_{\{s\}}(\mathbb{R})[G]\).
This completes the proof of Subclaim 1.
Since we assume that \(\lambda=\Theta^{V}\) is regular in \(V\) and \(s\in\lambda^{\omega}\cap V\), we can pick an ordinal \(\gamma<\lambda\) such that \(s\in\gamma^{\omega}\). Let \(\mathbb{Q}_{1}\) be the Vopenka algebra for adding an element of \(\gamma^{\omega}\) in \(\mathrm{HOD}\) and let \(h_{s}=\{p\in\mathbb{Q}_{1}\mid s\in\pi_{1}(p)\}\), where \(\pi_{1}\colon\mathbb{Q}_{1}\to\mathcal{O}_{1}\) is as in Definition 2.11. Then by Lemma 2.15, we have that \(h_{s}\) is a \(\mathbb{Q}_{1}\)-generic filter over \(\mathrm{HOD}\) such that \(\mathrm{HOD}[h_{s}]=\mathrm{HOD}_{\{s\}}\).
**Subclaim 2**.: The ordinal \(\lambda\) is regular in \(\mathrm{HOD}_{\{s\}}(\mathbb{R})[G]\) and there is no surjection from \(\mathbb{R}^{V}\) to \(\lambda\) in \(\mathrm{HOD}_{\{s\}}(\mathbb{R})[G]\).
Proof of Subclaim 2.: Let \(\mathbb{Q}_{\omega}\) be the finite support limit of the Vopenka algebras for adding an element of \(\gamma^{\omega}\) in \(\mathrm{HOD}\). Since \(\mathbb{Q}_{1}\) is a complete suborder of \(\mathbb{Q}_{\omega}\), by Lemma 2.15, there is a \(\mathbb{Q}_{\omega}\)-generic filter \(H\) over \(\mathrm{HOD}\) such that \(h_{s}\in\mathrm{HOD}[H]\) and that the set \((\gamma^{\omega})^{V}\) is countable in \(\mathrm{HOD}[H]\). In particular, \(\mathrm{HOD}_{\{s\}}(\mathbb{R})\subseteq\mathrm{HOD}[h_{s}](\mathbb{R}) \subseteq\mathrm{HOD}[H]\) and \(\mathbb{R}^{V}\) is countable in \(\mathrm{HOD}[H]\). Since we have \(\mathsf{AD}_{\mathbb{R}}\) in Case 2, by Lemma 2.15, the poset \(\mathbb{Q}_{\omega}\) is of size less than \(\Theta^{V}=\lambda\). Since \(\mathbb{P}\) is \(<\!\!\lambda\)-closed in \(\mathrm{HOD}\) and \(G\) is \(\mathbb{P}\)-generic over \(\mathrm{HOD}\), we have that any subset of \(\mathbb{Q}_{\omega}\) in \(\mathrm{HOD}[G]\) is also in \(\mathrm{HOD}\). Hence the filter \(H\) is \(\mathbb{Q}_{\omega}\)-generic over \(\mathrm{HOD}[G]\) as well.
We now argue that \(\lambda\) is regular in \(\mathrm{HOD}_{\{s\}}(\mathbb{R})[G]\). Since \(\mathbb{P}\) is \(<\!\!\lambda\)-closed in \(\mathrm{HOD}\) and \(G\) is \(\mathbb{P}\)-generic over \(\mathrm{HOD}\), the ordinal \(\lambda\) is still regular in \(\mathrm{HOD}[G]\). Also, since \(\mathbb{Q}_{\omega}\) is of size less than \(\lambda\) in \(\mathrm{HOD}\) and \(H\) is \(\mathbb{Q}_{\omega}\)-generic over \(\mathrm{HOD}[G]\), the ordinal \(\lambda\) is also regular in \(\mathrm{HOD}[G][H]\). Since \(\mathrm{HOD}_{\{s\}}(\mathbb{R})[G]\subseteq\mathrm{HOD}[H][G]=\mathrm{HOD} [G][H]\), the ordinal \(\lambda\) is regular in \(\mathrm{HOD}_{\{s\}}(\mathbb{R})[G]\), as desired.
We next show that there is no surjection from \(\mathbb{R}^{V}\) to \(\lambda\) in \(\mathrm{HOD}_{\{s\}}(\mathbb{R})[G]\). Since \(\mathbb{R}^{V}\) is countable in \(\mathrm{HOD}[H]\) while \(\lambda\) is regular uncountable in \(\mathrm{HOD}[H][G]\), there is no surjection from \(\mathbb{R}^{V}\) to \(\lambda\) in \(\mathrm{HOD}[H][G]\). Since \(\mathrm{HOD}_{\{s\}}(\mathbb{R})[G]\subseteq\mathrm{HOD}[H][G]\), there is no surjection from \(\mathbb{R}^{V}\) to \(\lambda\) in \(\mathrm{HOD}_{\{s\}}(\mathbb{R})[G]\), as desired.
This completes the proof of Subclaim 2.
Recall that \(\mathbb{Q}_{1}\) is the Vopenka algebra for adding an element of \(\gamma^{\omega}\) and \(h_{s}\) is the \(\mathbb{Q}_{1}\)-generic filter over \(\mathrm{HOD}\) derived from \(s\) with \(\mathrm{HOD}[h_{s}]=\mathrm{HOD}_{\{s\}}\). Since we have \(\mathsf{AD}_{\mathbb{R}}\) in Case 2, by Lemma 2.15, the poset \(\mathbb{Q}_{1}\) is of size less than \(\Theta^{V}=\lambda\) in \(\mathrm{HOD}\). So the filter \(h_{s}\) is essentially a bounded subset of \(\lambda\). Since we assume \(\mathsf{ZF}+\mathsf{AD}^{+}+``V=\mathrm{L}\big{(}\wp(\mathbb{R})\big{)}\)", by Theorem 2.4, there is a set \(Z\subseteq\Theta^{V}=\lambda\) such that \(\mathrm{HOD}=\mathrm{L}[Z]\). So the model \(\mathrm{HOD}_{\{s\}}(\mathbb{R})[G]\) is of the form \(\mathrm{L}(Z,h_{s},\mathbb{R})[G]\) where \(Z\) is a subset of \(\lambda\) and \(h_{s}\) is a bounded subset of \(\lambda\). By Subclaim 1, the function \(f\) is in the model \(\mathrm{HOD}_{\{s\}}(\mathbb{R})[G]\).
We are now ready to finish the arguments for Claim 3, which are similar to those for Claim 2. Let \(\nu\) be a sufficiently big cardinal in \(\mathrm{HOD}_{\{s\}}(\mathbb{R})[G]\) and let \(N\) be \(V_{\nu}\) in \(\mathrm{HOD}_{\{s\}}(\mathbb{R})[G]\). Let \(g=\bigcup G\). Then by the genericity of \(G\), we have \(g\colon\lambda\to 2\). Since \(G\) is simply definable from
\(g\), we also have \(\operatorname{HOD}_{\{s\}}(\mathbb{R})[G]=\operatorname{HOD}_{\{s\}}(\mathbb{R})[g]\). Since \(\operatorname{HOD}_{\{s\}}(\mathbb{R})=\operatorname{L}(Z,h_{s},\mathbb{R})\), the model \(N\) is of the form \(\operatorname{L}_{\nu}(Z,h_{s},\mathbb{R})[G]=\operatorname{L}_{\nu}(Z,h_{s}, \mathbb{R})[g]\). Since every element of \(N\) is definable from \(Z,h_{s},g\), an ordinal, and some real while \(\lambda\) is regular in \(\operatorname{HOD}_{\{s\}}(\mathbb{R})[G]\) and there is no surjection from \(\mathbb{R}\) to \(\lambda\) in \(\operatorname{HOD}_{\{s\}}(\mathbb{R})[G]\) by Subclaim 2, one can find an elementary substructure \(X\) of \(N\) in \(\operatorname{HOD}_{\{s\}}(\mathbb{R})[G]\) such that \(\mathbb{R}\subseteq X\), \(\lambda\cap X\in\lambda\), the structure \(X\) is a surjective image of \(\mathbb{R}\), and \(Z,\mathbb{P},h_{s},G,f\in X\). Let \(M\) be the transitive collapse of \(X\) and let \(\pi\colon M\to X\) be the inverse of the collapsing map. Then letting \(\kappa=\lambda\cap X\), the critical point of \(\pi\) is \(\kappa\) and \(\pi(\kappa)=\lambda\). For any \(a\) in \(X\), we write \(\bar{a}\) for \(\pi^{-1}(a)\), i.e., \(\pi(\bar{a})=a\).
We will finish arguing that the function \(f\) is in \(V\). Since \(\mathbb{R}\) is contained in \(M\) and \(\pi(\bar{f})=f\), we have \(\bar{f}=f\) and the set \(f\) is in \(M\). Hence it is enough to verify that the model \(M\) is in \(V\). Since \(N\) is of the form \(\operatorname{L}_{\nu}(Z,h_{s},\mathbb{R})[g]\), the set \(M\) is of the form \(\operatorname{L}_{\mu}(\bar{Z},\bar{h}_{s},\mathbb{R})[\bar{g}]\) for some ordinal \(\mu\). Since \(Z\) is a subset of \(\lambda\), we have \(\bar{Z}=Z\cap\kappa\), which is in \(V\). The filter \(h_{s}\) is essentially a bounded subset of \(\lambda\) and \(h_{s}\) is in \(X\). So by elementarity of \(X\), we have that \(h_{s}\subseteq X\) and it follows that \(\bar{h}_{s}=h_{s}\), which is also in \(V\). Since \(g\colon\lambda\to 2\), we have \(\bar{g}=g\upharpoonright\kappa\) and so \(\bar{g}\) is in \(\mathbb{P}\). Hence we have \(\bar{g}\in V\). Since \(M=\operatorname{L}_{\mu}(\bar{Z},\bar{h}_{s},\mathbb{R})[\bar{g}]\), the model \(M\) is in \(V\), and hence the function \(f\) is in \(V\), as desired.
This completes the proof of Claim 3.
By Claim 3, we have \(\wp(\mathbb{R})^{V[G]}=\wp(\mathbb{R})^{V}\). Since \(\mathsf{AD}\) holds in \(V\), so does in \(V[G]\), as desired. This finishes the arguments for Theorem 5.1 in Case 2 when \(\mathsf{AD}_{\mathbb{R}}\) holds.
This completes the proof of Theorem 5.1.
Proof of Theorem 5.2.: Let \(G\) be any \(\mathbb{P}\)-generic filter over \(V\). We will show that \(\mathsf{AD}\) fails in \(V[G]\). To derive a contradiction, we assume \(\mathsf{AD}\) in \(V[G]\).
Since we have \(\mathsf{ZF}+\mathsf{AD}^{+}+``V=\operatorname{L}\bigl{(}\wp(\mathbb{R})\bigr{)}\)" in \(V\), by Theorem 3.1, it is enough to show that \(\mathbb{P}\) increases \(\Theta\), i.e., \(\Theta^{V}<\Theta^{V[G]}\).
Let \(\gamma\) be the cofinality of \(\Theta\) in \(V\). Since \(\Theta\) is singular in \(V\), we have that \(\gamma<\Theta\).
We will show that there is an injection from \(\Theta^{V}\) to \(\wp(\gamma)^{V[G]}\) in \(V[G]\), which would imply \(\Theta^{V}<\Theta^{V[G]}\) as follows: Since we assumed \(\mathsf{AD}\) in \(V[G]\), by Theorem 2.8, there is a surjection from \(\mathbb{R}^{V[G]}\) to \(\wp(\gamma)^{V[G]}\) in \(V[G]\). By the existence of an injection from \(\Theta^{V}\) to \(\wp(\gamma)^{V[G]}\), there would be a surjection from \(\mathbb{R}^{V[G]}\) to \(\Theta^{V}\) in \(V[G]\). By the definition of \(\Theta^{V[G]}\), we would have that \(\Theta^{V}<\Theta^{V[G]}\), as desired.
We will construct a function \(\iota\colon\Theta^{V}\to\wp(\gamma)^{V[G]}\) in \(V[G]\) which is verified to be injective. Since \(\mathbb{P}=\operatorname{Add}(\Theta,1)\) in \(\operatorname{HOD}\) and \(G\) is \(\mathbb{P}\)-generic over \(V\), the set \(g=\bigcup G\) is a function from \(\Theta^{V}\) to \(2=\{0,1\}\). Since \(\gamma\) is the cofinality of \(\Theta\) in \(V\), we can fix a cofinal increasing sequence (\(\beta_{\alpha}\colon\alpha<\gamma\)) in \(\Theta\) in \(V\). For each \(\delta<\Theta^{V}\), let \(a_{\delta}\) be the sequence \((\beta_{\alpha}+\delta\mid\alpha<\gamma)\) in \(\Theta\) in \(V\). Now let \(\iota(\delta)=\{\alpha<\gamma\mid g\bigl{(}a_{\delta}(\alpha)\bigr{)}=1\}\). Then \(\iota(\delta)\) is a subset of \(\gamma\) for each \(\delta<\Theta^{V}\).
We will verify that the function \(\iota\colon\Theta^{V}\to\wp(\gamma)^{V[G]}\) is injective. Let \(\delta,\epsilon\) be distinct ordinals less than \(\Theta^{V}\). We will see that \(\iota(\delta)\neq\iota(\epsilon)\). First notice that the functions \(a_{\delta}\) and \(a_{\epsilon}\) are different everywhere: For all \(\alpha<\gamma\), we have \(a_{\delta}(\alpha)=\beta_{\alpha}+\delta\neq\beta_{\alpha}+\epsilon=a_{\epsilon}(\alpha)\). Now since \(a_{\delta}\) and \(a_{\epsilon}\) are different everywhere and both are in \(V\), by the genericity of \(G\), there is an \(\alpha<\gamma\) such that \(g\bigl{(}a_{\delta}(\alpha)\bigr{)}\neq g\bigl{(}a_{\epsilon}(\alpha)\bigr{)}\), and hence \(\alpha\in\iota(\delta)\bigtriangleup\iota(\epsilon)\). Therefore, we have \(\iota(\delta)\neq\iota(\epsilon)\), as desired.
This completes the proof of Theorem 5.2.
## 6. Questions
We close this paper by raising two questions.
**Question 6.1**.: Assume \(\mathsf{ZF}+\mathsf{AD}\). Let \(\mathbb{P}\) be a poset which adds a new real and let \(G\) be \(\mathbb{P}\)-generic over \(V\). Then must \(\mathsf{AD}\) fail in \(V[G]\)?
To answer 'No' to Question 6.1, one would need to find a poset which changes the structure of cardinals below \(\Theta\) drastically as follows: Woodin proved that if there is a poset which adds a new real while preserving the truth of \(\mathsf{AD}\), then the poset must collapse \(\omega_{1}\). Also, by Theorem 3.1, if such a poset exists in a model of \(\mathsf{ZF}+\mathsf{AD}^{+}+``V=\mathrm{L}\bigl{(}\wp(\mathbb{R})\bigr{)}"\), then the poset must preserve \(\Theta\). Furthermore, by the arguments for [2, Lemma 2.10] by Chan and Jackson, if a poset adds a new real while preserving the truth of \(\mathsf{AD}\), then any weak partition property of a cardinal in its generic extension cannot be witnessed by a club in the ground model. Hence, if such a poset preserves \(\Theta\) as well, then for cofinally many cardinals \(\kappa\) below \(\Theta\), the poset must shoot a club in \(\kappa\) which does not contain any club in \(\kappa\) in the ground model.
There are many things we do not know about forcings over \(\mathsf{ZF}+\mathsf{AD}\), especially when \(\Theta\) is singular. One of them is whether the assumption "\(\Theta\) is regular" in Theorem 5.1 is essential or not:
**Question 6.2**.: Assume \(\mathsf{ZF}+\mathsf{AD}^{+}+``V=\mathrm{L}\bigl{(}\wp(\mathbb{R})\bigr{)}"\). Suppose that \(\Theta\) is singular. Then is there any poset which adds a new subset of \(\Theta\) while preserving \(\mathsf{AD}\)?
|
2308.11691 | Practical Insights on Incremental Learning of New Human Physical
Activity on the Edge | Edge Machine Learning (Edge ML), which shifts computational intelligence from
cloud-based systems to edge devices, is attracting significant interest due to
its evident benefits including reduced latency, enhanced data privacy, and
decreased connectivity reliance. While these advantages are compelling, they
introduce unique challenges absent in traditional cloud-based approaches. In
this paper, we delve into the intricacies of Edge-based learning, examining the
interdependencies among: (i) constrained data storage on Edge devices, (ii)
limited computational power for training, and (iii) the number of learning
classes. Through experiments conducted using our MAGNETO system, that focused
on learning human activities via data collected from mobile sensors, we
highlight these challenges and offer valuable perspectives on Edge ML. | George Arvanitakis, Jingwei Zuo, Mthandazo Ndhlovu, Hakim Hacid | 2023-08-22T16:40:09Z | http://arxiv.org/abs/2308.11691v1 | # Practical Insights on Incremental Learning of New Human Physical Activity on the Edge
###### Abstract
Edge Machine Learning (Edge ML), which shifts computational intelligence from cloud-based systems to edge devices, is attracting significant interest due to its evident benefits including reduced latency, enhanced data privacy, and decreased connectivity reliance. While these advantages are compelling, they introduce unique challenges absent in traditional cloud-based approaches. In this paper, we delve into the intricacies of Edge-based learning, examining the interdependencies among: (i) constrained data storage on Edge devices, (ii) limited computational power for training, and (iii) the number of learning classes. Through experiments conducted using our MAGNETO system, which focuses on learning human activities via data collected from mobile sensors, we highlight these challenges and offer valuable perspectives on Edge ML.
Edge ML, Human Activity Recognition, Dynamic Class Integration, Incremental Learning
## I Introduction
The proliferation of the Internet of Things (IoT) has led to an exploding demand for network resources. Furthermore, ensuring the security and privacy of users' data is becoming paramount across a multitude of applications. The paradigm of Edge Machine Learning (Edge ML) seeks to address these challenges by pushing ML pipelines to Edge devices [1]. One of the intuitive use cases where IoT sensors intersect with ML is Human Activity Recognition (HAR). This involves leveraging sensors from everyday commercial devices to deduce a user's activities. In fact, this case epitomizes many of the characteristics and constraints of the field, underscoring the imperative for ML on the Edge.
Traditional approaches for predicting human physical activity predominantly rely on training a classifier on a predefined set of activity classes in a centralized cloud environment. User measurements captured on devices are then sent to the cloud for inference [2]. However, this centralized, cloud-based learning approach suffers from three main drawbacks: high latency due to user-cloud communication, lack of flexibility and personalization to individual users' needs, and lower privacy control, as users' raw data is consistently relayed over networks to the cloud.
In contrast with the conventional ML scenarios, where the main processing is performed on remote cloud servers even when applications run on local devices, Edge ML [3] brings the core processing tasks to the edge devices. This approach facilitates the deployment of optimized models and services directly onto user devices or the edge network, ensuring rapid real-time response, low latency, offline capability, and enhanced security and privacy.
However, moving the inference, or more ambitiously, the learning process to Edge devices introduces a host of significant challenges that stem primarily from the inherent limitations of Edge devices, including (i) Model size, which should be small enough to fit within the Edge device and still operate efficiently, (ii) Data size, which should be very limited due to the low storage capabilities of the Edge, and (iii) Energy consumption, which constrains the training process to be very efficient and avoid excessive power draw. Regarding on-device inference, recent advancements in Edge ML have been notable. Many studies [4, 5] address human activity recognition on smart devices by training ML models that are light in terms of memory and complexity. While there is interest in flexible ML models [6] that can learn new human activities, such as those using few-shot learning [7, 8], these often overlook the unique constraints of the Edge.
This paper introduces the essential elements of our contrastive learning methodology for building and continually updating ML models directly on the Edge. It demonstrates the feasibility of incrementally learning new classes on the fly directly on the Edge. Through the analysis of our results, we provide practical insights into the performance and the technical constraints that may govern this sensitive task of learning on the Edge, e.g., learning with limited observations or sequentially learning more than one task. The rest of this paper is organized as follows: Section II presents the MAGNETO system for human activity recognition at a high level. Section III discusses in more detail the different dimensions of the problem and the modeling assumptions. Section IV reviews the results and provides practical insights about learning on the Edge. Finally, we conclude and provide some future work in Section V.
## II The MAGNETO System
Smart devices, including smartphones and smartwatches, use built-in sensors to detect and predict user activities. This area has garnered substantial interest recently, leading to numerous studies and datasets like the Huawei-Sussex locomotion challenge [9] and the Transportation Mode Detection dataset [10]. Big tech firms, such as Google, Samsung, and Apple, have integrated these capabilities. Notably, most
research and applications have predominantly adopted centralized or cloud-based approaches.
MAGNETO, _sMArt sensinG for human activity rEcogniTiOn_, is an implemented system that provides human activity recognition via sensor measurements from ordinary, commercial smart devices (e.g., smartphones and smartwatches). In contrast to most of the existing literature, MAGNETO performs inference of human activity on an Edge device, using a pre-trained model, without transferring the user's data to the cloud.
Furthermore, MAGNETO is equipped with the ability to incrementally learn new activities, i.e., classes, by capturing extra user data in order to (i) re-calibrate an activity to be more accurate for the user's personal style or (ii) re-train the model to learn a custom new activity according to the user's habits, without any data exchange with the cloud. We believe that activity recognition on the Edge, combined with the capability of learning new actions tailored to personal needs, can enable a new range of healthcare, fitness, and assistant applications. In the next section, we present in detail the approach and the nuances of inference and learning on the Edge.
## III System Architecture
This section presents the mandatory components of the system architecture (Figure 1) as well as their interconnections. The overall process can be split into two phases: (i) cloud initialization, which pre-trains our model in order to avoid the usually high data and power demands of initial model construction, and then transfers to the Edge device all the functions and data that are required for inference and learning on the Edge; and (ii) inference and learning on the Edge, which performs all the actions required for inference and for learning new activities on the Edge without any exchange of the user's data with the cloud.
### _Cloud initialization_
The main components of the cloud initialization are:
**Initial data set**, \(\mathcal{D}_{o}\): These data, representing the \(K\) initial classes, come from our measurement campaign and are stored in the cloud for the model's initial training.
**The pre-processing function**, \(P(\cdot)\): This function processes raw sensor data to prepare it for the ML algorithm. In our implementation, the pre-processing function takes roughly 120 sequential measurements from 22 mobile sensors, such as accelerometers and gyroscopes, over a one-second period. It then calculates statistics such as the average, the variance, and the average/variance of the jerk for each sensor feature. In total, a set of 86 features is extracted to represent the activity.
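For illustration, a minimal sketch of such a window-level feature extractor is given below. The window length, the channel count, and the exact statistics are assumptions based on the description above; with these illustrative choices the sketch yields 88 features (4 statistics over 22 channels) rather than the exact 86-feature set used by MAGNETO.

```python
import numpy as np

# Sketch of the pre-processing function P(.): one second of raw readings
# (about 120 samples x 22 sensor channels) is summarized by simple statistics.
# The statistics below are illustrative assumptions, not the exact feature set.
def preprocess(window: np.ndarray) -> np.ndarray:
    """window: array of shape (n_samples, n_channels), e.g. (120, 22)."""
    jerk = np.diff(window, axis=0)      # first differences approximate the jerk
    feats = [
        window.mean(axis=0),            # average per channel
        window.var(axis=0),             # variance per channel
        jerk.mean(axis=0),              # average of the jerk per channel
        jerk.var(axis=0),               # variance of the jerk per channel
    ]
    return np.concatenate(feats)        # flat feature vector for the classifier

if __name__ == "__main__":
    fake_window = np.random.randn(120, 22)   # stand-in for one second of sensor data
    print(preprocess(fake_window).shape)     # (88,) with these illustrative statistics
```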
**The Initial ML Model**, \(F(\cdot|\Theta_{o})\): Using all the available data \(\mathcal{D}_{o}\) and the pre-processing function \(P(\cdot)\), a small neural network \(F(\cdot|\Theta_{o})\) of dimensions [\(1024\times 512\times 128\times 64\times 64\)] is trained on the cloud with a contrastive loss [11] and the Adam optimizer. The choice of the contrastive loss is motivated by its ease of adaptation and its ability to learn new tasks with a very limited amount of data [12].
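A PyTorch sketch of such an embedding network and a pairwise contrastive training step is shown below, assuming 86-dimensional inputs from \(P(\cdot)\). The margin, learning rate, activation, and pair construction are illustrative assumptions rather than MAGNETO's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch of the embedding network F(.|Theta) with the stated layer widths.
class Embedder(nn.Module):
    def __init__(self, in_dim: int = 86):
        super().__init__()
        dims = [in_dim, 1024, 512, 128, 64, 64]
        layers = []
        for a, b in zip(dims[:-1], dims[1:]):
            layers += [nn.Linear(a, b), nn.ReLU()]
        self.net = nn.Sequential(*layers[:-1])  # drop the ReLU after the last layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def contrastive_loss(z1, z2, same_class, margin: float = 1.0):
    # same_class: 1.0 for pairs drawn from the same activity, 0.0 otherwise.
    d = F.pairwise_distance(z1, z2)
    return (same_class * d.pow(2)
            + (1.0 - same_class) * torch.clamp(margin - d, min=0).pow(2)).mean()

# One illustrative training step on a random batch of 512 feature pairs.
model = Embedder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x1, x2 = torch.randn(512, 86), torch.randn(512, 86)
same = torch.randint(0, 2, (512,)).float()
loss = contrastive_loss(model(x1), model(x2), same)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```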
**The support set \(\mathcal{D}_{s}\)**: For Edge learning, a foundational set of observations is essential to facilitate the learning process, whether to create data pairs for a contrastive or triplet loss, or to leverage "old" data to prevent catastrophic forgetting [13]. This is termed the support set \(\mathcal{D}_{s}\). Comprising a fraction of data samples from each class, \(\mathcal{D}_{s}\) is significantly smaller than the original dataset and is a subset of it, i.e., \(|\mathcal{D}_{s}|\ll|\mathcal{D}_{o}|\) and \(\mathcal{D}_{s}\subset\mathcal{D}_{o}\). The size of the support set is pivotal to the learning process. Additionally, the exact selection of the support set can influence the quality of the learning process. However, determining the optimal makeup of the support set, while crucial, is beyond this paper's scope.
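A minimal sketch of one way to assemble such a support set, by keeping a small random fraction of samples per class, is shown below; the sampling fraction is an illustrative assumption, since the paper leaves the optimal composition of the support set open.

```python
import random
from collections import defaultdict

# Sketch of building the support set D_s as a small per-class subsample of D_o.
def build_support_set(samples, labels, fraction: float = 0.05, seed: int = 0):
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for x, y in zip(samples, labels):
        by_class[y].append(x)
    support_x, support_y = [], []
    for y, xs in by_class.items():
        k = max(1, int(len(xs) * fraction))   # keep at least one sample per class
        for x in rng.sample(xs, k):
            support_x.append(x)
            support_y.append(y)
    return support_x, support_y
```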
**Prototypes \(P_{o}\)**: Using the support set and the trained model, we calculate the prototype of each class in the latent space; these prototypes are used for inference.
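A minimal sketch of this prototype computation (one mean embedding per class over the support set) could look as follows; the tensor shapes are assumptions consistent with the network sketched above.

```python
def class_prototypes(model, support_x, support_y):
    """Return a dictionary {class label: mean latent vector over the support set}."""
    with torch.no_grad():
        z = model(support_x)                      # (N, 64) latent representations
    return {c: z[support_y == c].mean(dim=0)      # one prototype per class
            for c in support_y.unique().tolist()}
```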
At the end of the initialization phase, the cloud transfers three mandatory items to the Edge device: (i) the pre-processing function, (ii) the initial ML model, and (iii) the support set.
### _Inference and Learning on the Edge_
#### Iii-B1 Inference
Using the transferred components, the Edge device is able to infer the user's activity on the fly by reading its sensors, passing the captured measurements sequentially through the pre-processing function \(P(\cdot)\) and \(F(\cdot|\Theta_{o})\), and comparing the output representation with the class prototypes.
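A minimal sketch of this nearest-prototype inference step, reusing the helpers sketched above, might read as follows.

```python
def infer_activity(model, prototypes, window):
    """Classify one raw sensor window by its nearest class prototype."""
    with torch.no_grad():
        z = model(torch.as_tensor(preprocess(window), dtype=torch.float32))
    distances = {c: torch.norm(z - p).item() for c, p in prototypes.items()}
    return min(distances, key=distances.get)      # label of the closest prototype
```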
#### Iii-B2 Learning new activities on the Edge
The processing steps that take place on the Edge device are as follows:
* **Samples Collection \(\mathcal{D}_{n}\)**: The user records samples of a new activity not present in the initial dataset. This new annotated data is integrated into the Edge device's existing support set \(\mathcal{D}_{s}=\mathcal{D}_{s}\cup\mathcal{D}_{n}\).
* **Model Re-training \(F(\cdot|\Theta_{n})\)**: The extended support set is used to retrain the existing model, expanding the learned class count from \(K\) to \(K+1\).
* **Prototype Update \(P_{n}\)**: With the new support set and the re-trained model, the class prototypes are updated.
It is noteworthy that the process for _re-calibration_ of an existing class (tailoring it to a user's behavior) mirrors the one described above. The primary difference is that, in re-calibration, the current activity data in the support set is swapped with new data, followed by retraining the model on the same set of activities.
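Putting these steps together, a minimal sketch of the on-Edge learning routine is given below (re-calibration would simply swap the stored data of an existing class before the same re-training); the random pair-sampling strategy and the dtype handling are assumptions.

```python
def learn_new_activity(model, optimizer, support_x, support_y, new_x, new_label,
                       epochs: int = 10, pairs_per_batch: int = 512):
    """Extend the support set with a new class, briefly re-train the embedding
    with the contrastive loss, then recompute the class prototypes."""
    support_x = torch.cat([support_x, new_x])
    support_y = torch.cat([support_y,
                           torch.full((len(new_x),), new_label, dtype=support_y.dtype)])
    n = len(support_x)
    for _ in range(epochs):
        i = torch.randint(n, (pairs_per_batch,))             # random pair indices
        j = torch.randint(n, (pairs_per_batch,))
        same = (support_y[i] == support_y[j]).float()
        loss = contrastive_loss(model(support_x[i]), model(support_x[j]), same)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return support_x, support_y, class_prototypes(model, support_x, support_y)
```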
## IV Experiments and Insights
In this section, we detail the experiments performed on a real-world dataset. These experiments not only demonstrate the viability of our proposed method but also shed light on the primary constraints influencing Edge-based learning.
Fig. 1: Illustration of the proposed architecture, showing the dependencies between Cloud and Edge for model’s inference and learning
### _Experimental Set-up_
In our experiments we used our human activity dataset \(\mathcal{D}\), consisting of more than 100GB of sensor data annotated for 5 human activities _(Drive, E-scooter, Run, Still, Walk)_, collected during our data collection campaign. The dataset is split into training and test sets \(\mathcal{D}=\{\mathcal{D}_{tr},\mathcal{D}_{ts}\}\). To make both subsets fully disjoint, we ensure that there is a time distance of at least 10 seconds between the training and the test observations.
We selected accuracy and convergence speed as our performance metrics for the learning process. Convergence speed is determined by counting the number of training samples (batchwise) required for the algorithm to approach its final accuracy within a margin of \(2\%\). As mentioned before, we employed a five-layer fully connected network with a contrastive loss and Adam optimizer. Each training batch includes 512 sample pairs from the designated training set, i.e., batch size is set to 512. The experimental process is the following:
1. From \(\mathcal{D}_{tr}\), one activity is excluded, forming the _initial_ dataset \(\mathcal{D}_{o}\) with \(K=4\) activities. This dataset is used to train \(F(\cdot|\Theta_{o})\).
2. A subset of \(\mathcal{D}_{o}\) is randomly sampled for each class as a support set, \(\mathcal{D}_{s}\).
3. A part of the excluded class is used as the observations of the new class \(\mathcal{D}_{n}\). To eliminate any bias, the number of new observations equals the number of observations per class in the support set, i.e., \(|\mathcal{D}_{n}|=|\mathcal{D}_{s}|/K\).
4. Using the support set and new observations, the model \(F(\cdot|\Theta_{n})\) is re-trained.
5. _Step 2_ is repeated with varying support set sizes: \(|\mathcal{D}_{s}|=[1000,500,100]\) per class.
6. _Steps 1-5_ are reiterated five times for each class.
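For reference, the convergence-speed metric defined above can be computed from a per-batch accuracy curve with a small helper such as the following; the behaviour when the margin is never reached is an illustrative choice.

```python
import numpy as np

def convergence_speed(acc_per_batch, margin: float = 0.02):
    """Number of training batches needed to come within `margin` (here 2%)
    of the final accuracy."""
    acc = np.asarray(acc_per_batch, dtype=float)
    reached = acc >= acc[-1] - margin
    return int(np.argmax(reached)) + 1        # index of the first batch within the margin
```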
### _Performance Analysis_
In this section, we discuss the different results with a focus on three specific areas: (i) the _feasibility of incremental learning_ on the Edge, (ii) the _impact of the support set size_ on the quality of the learning, and (iii) the performance of _learning multiple classes_ sequentially.
#### Iv-B1 Learning Performance with respect to the new activity
At first, we infer the new class data in a zero-shot learning setting, without re-training the initial model \(F(\cdot|\Theta_{o})\). Without any extra training of the model, the accuracy on the new class is close to random choice (\(\approx 20\%\)). Figure 2(a) shows the embedding space of the initial model without training on the _Run_ activity. The model mainly confuses _Run_ with the _Walk_ activity. However, Figure 2(b) shows the updated embedding space after a brief re-training of just 10 epochs using 200 _Run_ samples. This brief adaptation greatly enhances the distinction of the new class. Hence, while zero-shot learning does not perform well, even a limited re-training with minimal data from the new class can yield a significant improvement in the model's performance.
To delve deeper into how learning a new class affects the performance of both the new and the existing classes, we show in Figures 2(d), 2(e), and 2(f) the model's performance on different metrics for each new class, considering \(|D_{s}|=1000\). Notably, certain new classes, such as _Still_, are learned faster than others (see Figure 2d).
Fig. 2: Performance results for single class learning on the Edge
However, a broader perspective reveals that the _overall learning performance_ of all the classes converges similarly (see Figure 2f). Moreover, it is remarkable that the learning does not demand extensive iterations to converge (\(\approx\)40 batches). This agile convergence persists across varying sizes of the support set (see Figure 2c).
This outcome clearly shows that incremental learning can be performed on the Edge despite the limited resources.
#### Iv-B2 Learning Performance Vs Support set size
The next important question concerns the storage capacity required by the Edge device to hold the support set. Since the support set is a critical component for incremental learning, with historical data needed to reconstruct or refine models, it is important to discern the bounds and potential of this parameter. Figure 2c shows the average performance across all classes, with varying sizes of the support set: \([5000,2500,500]\) total observations.
Two salient observations emerge from this: i) the size of the support set seems not to have an important effect on the _convergence speed_, with all cases converging around the \(40^{th}\) training batch; ii) there is a slight degradation in _accuracy_ (less than 2%) between the cases \(|\mathcal{D}_{s}|=5000\) and \(|\mathcal{D}_{s}|=500\). This marginal trade-off, however, offers tangible benefits: the support set size decreases by a factor of ten, leading to diminished processing demands and a lighter storage footprint. Both facets are particularly advantageous within the Edge environment, given its inherent resource constraints.
#### Iv-B3 Learning Performance Vs Multiple new classes
The last question has to do with the feasibility of sequentially learning multiple activities directly on the Edge. To provide an answer to this question, an initial dataset that includes only two activities is first used to build the starting model. After that, the model is sequentially trained over the other three, i.e., one by one. To ensure a robust analysis, we generate 100 random realizations of activity combinations. For instance, one particular sequence might involve kick-starting with an initial set of _(Drive, Walk)_, thereafter learning the _Still_ activity first, followed by _E-scooter_ and concluding with _Run_. We report the model's average performance. Figure 3 shows the aggregated accuracy from all the realizations using a support set of 1000 observations per class.
The obtained results expose interesting outcomes: First, the sequential learning of multiple classes (activities) on the Edge is feasible. This is particularly important for the Edge context, where users dynamically add new tasks. Second, there is a degradation of roughly \(30\%\) at the beginning. This can be explained by the fact that the initial model was trained on the simple task of learning 2 activities and therefore has weaker generalization capabilities. Third, as new activities are learned, some degradation persists, but its rate is slow, and the overall performance of the system remains very good. Finally, the convergence of the newly added classes seems to become slower as we move forward in the sequence.
## V Conclusion
This paper explored and demonstrated the feasibility of incremental learning on the Edge for human activity recognition. We highlighted interesting findings around the impact of different factors, including the support set size and training epochs. Furthermore, our research showcased the effects of sequentially introducing multiple new classes. As for future work, one can consider the scenario with more classes, and explore more intricate machine learning tasks on the Edge.
|
2306.16452 | Exact description of transport and non-reciprocity in monitored quantum
devices | We study non-interacting fermionic systems undergoing continuous monitoring
and driven by biased reservoirs. Averaging over the measurement outcomes, we
derive exact formulas for the particle and heat flows in the system. We show
that these currents feature competing elastic and inelastic components, which
depend non-trivially on the monitoring strength $\gamma$. We highlight that
monitor-induced inelastic processes lead to non-reciprocal currents, allowing
to extract work from measurements without active feedback control. We
illustrate our formalism with two distinct monitoring schemes providing
measurement-induced power or cooling.~Optimal performances are found for values
of the monitoring strength $\gamma$ which are hard to address with perturbative
approaches. | João Ferreira, Tony Jin, Jochen Mannhart, Thierry Giamarchi, Michele Filippone | 2023-06-28T18:00:02Z | http://arxiv.org/abs/2306.16452v1 | # Exact description of transport and non-reciprocity in monitored quantum devices
###### Abstract
We study non-interacting fermionic systems undergoing continuous monitoring and driven by biased reservoirs. Averaging over the measurement outcomes, we derive exact formulas for the particle and heat flows in the system. We show that these currents feature competing elastic and inelastic components, which depend non-trivially on the monitoring strength \(\gamma\). We highlight that monitor-induced inelastic processes lead to non-reciprocal currents, allowing to extract work from measurements without active feedback control. We illustrate our formalism with two distinct monitoring schemes providing measurement-induced power or cooling. Optimal performances are found for values of the monitoring strength \(\gamma\) which are hard to address with perturbative approaches.
_Introduction -_ Non-unitary dynamics in quantum systems stems from interactions with the environment [1; 2; 3; 4], which induce dissipation and usually suppress quantum coherence [5; 6]. Nonetheless, non-unitary evolution caused by engineered dissipation [7; 8; 9; 10; 11] or measurements [12; 13] can stabilize target quantum states, many-body correlations [14; 15; 16; 17; 18; 19; 20; 21; 22; 23] and exotic entanglement dynamics [24; 25; 26; 27; 28; 29; 30; 31].
Of particular interest are the effects of non-unitarity on quantum transport. Environment-assisted processes can drive currents in coherent systems [32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42] and the impact of losses [43; 44; 45; 46; 47; 48; 49; 50; 51] is investigated in quantum simulators [22; 37; 40; 23]. Work extraction from dissipative environments [52; 53] or active monitoring [54; 55; 56; 57; 58; 59; 60; 61] may use quantum effects at the nanoscale to break the operational limits imposed by classical thermodynamics [62].
Quantum devices are usually driven by thermodynamic baths, whose large number of degrees of freedom challenges exact numerical [63] and analytical [64; 65; 66; 67] approaches, especially to capture the long-time or stationary dynamics of monitored or open settings. Local master equation approaches, based on weak coupling assumptions, may miss interesting effects [68] or imply apparent violations of the second law of thermodynamics [69; 70; 71; 72].
In this work, we derive exact formulas for the particle and heat currents driven by continuous monitoring of a single-particle observable \(\mathcal{O}\) and biased reservoirs in free fermion systems. We exploit an exact self-consistent Born scheme for 2-point correlation functions [73; 74] and rely on a generalized Meir-Wingreen's approach [49; 75] to account for biased reservoirs. Our main result is formula (5), which offers a simple and exact tool to address novel quantum transport phenomena in coherent systems under continuous monitoring.
We provide two illustrations of our approach showing monitor-assisted non-reciprocal effects in quantum systems. We consider first the continuous monitoring of a single level (Fig. 1). Under generic assumptions, we find that monitoring triggers a non-reciprocal current between reservoirs without external bias, and thus generates power. We then show that monitoring cross-correlations between two sites (Fig. 2) enables quantum measurement cooling [76]. For both cases, we highlight non-trivial dependencies on the measurement strength \(\gamma\), showcased by peaks of performances in regimes which are not encompassed by perturbative approaches. We also stress that the measurement-based engines described here do not rely on feedback-loops or Maxwell's demons [54; 55; 56; 57; 58; 59; 60; 61].
_Derivation of monitored currents -_ For simplicity, we consider 2-terminal setups [77] described by Hamiltonians of the form \(\mathcal{H}=\mathcal{H}_{\text{res}}+\mathcal{H}_{\text{T}}+\mathcal{H}_{\text{sys}}\). Left and right (\(r=L/R\)) reservoirs are ruled by \(\mathcal{H}_{\text{res}}=\sum_{r,k}\varepsilon_{r,k}c^{\dagger}_{r,k}c_{r,k}\), where \(c_{r,k}\) annihilates fermions of the reservoir \(r\) in mode \(k\) of energy \(\varepsilon_{r,k}\). Both reservoirs are in thermal equilibrium, with chemical potential \(\mu_{r}\), temperature \(T_{r}\), and mode occupation obeying Fermi's distribution \(f_{r}(\varepsilon)=[e^{(\varepsilon-\mu_{r})/T_{r}}+1]^{-1}\). Free fermions in the system are described by \(\mathcal{H}_{\text{sys}}=\sum_{i,j}d^{\dagger}_{i}h_{ij}d_{j}\), where \(h_{ij}\) is a single-particle Hamiltonian with labels \(i,j\) referring to internal degrees of freedom (orbitals, spin...). The coupling between system and reservoirs reads \(\mathcal{H}_{\text{T}}=\sum_{r,k,i}t_{r,ki}c^{\dagger}_{r,k}d_{i}+\text{H.c.}\), where \(t_{r,ki}\) are tunnel amplitudes.
When an observable of the system \(\mathcal{O}\) is continuously monitored with strength \(\gamma\), the averaged dynamics of the system density matrix \(\rho\) obeys Lindblad's equation \(\partial_{t}\rho=-i[\mathcal{H},\rho]+\mathcal{D}[\rho]\), where (\(\hbar=e=k_{\text{B}}=1\)) [78; 79; 80]
\[\mathcal{D}[\rho]=\gamma\left(2\mathcal{O}\rho\mathcal{O}-\left\{\mathcal{O}^{2 },\rho\right\}\right)\,. \tag{1}\]
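For concreteness, the action of this dissipator on a finite-dimensional state is a direct matrix expression; the sketch below is a minimal numpy transcription of Eq. (1), where the monitored observable O and the state ρ are generic illustrative matrices.

```python
import numpy as np

def dissipator(rho: np.ndarray, O: np.ndarray, gamma: float) -> np.ndarray:
    """D[rho] = gamma (2 O rho O - {O^2, rho}) for a Hermitian monitored observable O."""
    O2 = O @ O
    return gamma * (2.0 * O @ rho @ O - O2 @ rho - rho @ O2)
```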
We are interested in the particle (\(\zeta=0\)) and heat (\(\zeta=1\)) currents flowing into a reservoir \(r\), which read
\[J^{\zeta}_{r}=i\sum_{k,i}(\varepsilon_{r,k}-\mu_{r})^{\zeta}\left[t^{*}_{r,ki} \langle d^{\dagger}_{i}c_{r,k}\rangle-t_{r,ki}\langle c^{\dagger}_{r,k}d_{i} \rangle\right]. \tag{2}\]
When single-particle observables \(\mathcal{O}=\sum_{ij}d^{\dagger}_{i}O_{ij}d_{j}\) are monitored, calculating Eq. (2) becomes a difficult task, since Eq. (1) is non-quadratic. Even though, for quadratic Hamiltonians, correlation functions obey closed systems of equations [81; 82; 83], efficient numerical calculations can
be performed only for finite systems [74; 84; 85]. We show now that analytical solutions can be obtained with infinite reservoirs thanks to the validity of the self-consistent Born scheme for 2-point correlation functions, extensively discussed in Refs. [73; 74] and in the Supplemental Material (SM) [86].
We consider the retarded, advanced, and Keldysh Green's functions: \(\mathcal{G}^{R}_{ij}(t,t^{\prime})=-i\theta(t-t^{\prime})\langle\{d_{i}(t),d_{j}^{\dagger}(t^{\prime})\}\rangle\), \(\mathcal{G}^{A}_{ij}(t,t^{\prime})=[\mathcal{G}^{R}_{ji}(t^{\prime},t)]^{*}\) and \(\mathcal{G}^{K}_{ij}(t,t^{\prime})=-i\langle[d_{i}(t),d_{j}^{\dagger}(t^{\prime})]\rangle\), that we collect in the matrix \(\mathbf{\mathcal{G}}=\left(\begin{smallmatrix}\mathcal{G}^{R}&\mathcal{G}^{K}\\ 0&\mathcal{G}^{A}\end{smallmatrix}\right)\)[87]. The matrix \(\mathbf{\mathcal{G}}\) obeys Dyson's equation \(\mathbf{\mathcal{G}}^{-1}=\mathbf{\mathcal{G}}_{0}^{-1}-\mathbf{\Sigma}\), where \(\mathbf{\mathcal{G}}_{0}\) is the Green's function of the isolated system (\(t_{r,ki}=\gamma=0\)) and \(\mathbf{\Sigma}\) is the self-energy, encoding the effects of reservoirs and monitoring. The contribution of the reservoir \(r\) to \(\mathbf{\Sigma}\) is obtained by integration of the modes \(c_{r,k}\). In frequency space, \(\mathbf{\Sigma}_{r,ij}(\omega)=\sum_{k}t_{r,ki}^{*}t_{r,kj}\mathbf{\mathcal{C}}_{r,k}(\omega)\), where \(\mathbf{\mathcal{C}}_{r,k}\) is the Green's function of the reservoir. Particle exchange with the system is described by the hybridization matrix \(\Gamma_{r}(\omega)=i[\Sigma^{R}_{r}(\omega)-\Sigma^{A}_{r}(\omega)]/2\)[88]. The Keldysh component \(\Sigma^{K}_{r}(\omega)=-2i\Gamma_{r}(\omega)\tanh[(\omega-\mu_{r})/2T_{r}]\) carries information about the equilibrium state of the reservoirs.
Monitoring contributes to the self-energy following the self-consistent Born scheme [73; 74], which involves the full Green's matrix \(\mathbf{\mathcal{G}}\), including baths and monitoring:
\[\mathbf{\Sigma}^{\gamma}_{ij}(\omega)=2\gamma\sum_{pq}O_{ip}\mathbf{\mathcal{G}}_{pq} (t,t)O_{qj}\,. \tag{3}\]
To derive the retarded and advanced components of \(\mathbf{\Sigma}^{\gamma}\), we exploit the prescription \(\mathcal{G}^{R/A}_{ij}(t,t)=\mp i\delta_{ij}/2\)[87], and obtain \([\mathcal{G}^{R/A}]^{-1}_{ij}(\omega)=\omega-h_{ij}-\sum_{r}\Sigma^{R/A}_{r,ij}(\omega)\pm i\gamma\sum_{p}O_{ip}O_{pj}\). In this expression, monitoring appears as a frequency-independent life-time \(\gamma\sum_{p}O_{ip}O_{pj}\), in analogy with single-particle gains or losses [40; 41; 47; 48; 49; 50; 51].
The difference between monitoring and losses appears in the Keldysh component of Eq. (3). Inserting \(\mathcal{G}^{K}_{ij}(t,t)=2i\langle d_{j}^{\dagger}d_{i}\rangle-i\delta_{ij}\)[89] and inverting the Dyson equation, one finds a self-consistent equation for the correlation matrix \(\mathcal{D}_{ij}=\langle d_{j}^{\dagger}d_{i}\rangle\)
\[\mathcal{D}=\int\frac{d\omega}{\pi}\mathcal{G}^{R}(\omega)\left[\sum_{r}f_{r}( \omega)\Gamma_{r}(\omega)+\gamma\mathcal{O}\mathcal{D}\mathcal{O}\right] \mathcal{G}^{A}(\omega)\,. \tag{4}\]
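In practice, Eq. (4) lends itself to a simple fixed-point iteration on a discretised frequency grid. The sketch below is a minimal illustration assuming wide-band reservoirs (frequency-independent hybridization matrices \(\Gamma_{r}\)); the grid, tolerance, and this simplifying assumption are illustrative choices, not the procedure used to obtain the analytical results of this work.

```python
import numpy as np

def solve_correlations(h, O, Gam_L, Gam_R, f_L, f_R, gamma, omega,
                       tol=1e-10, max_iter=500):
    """Fixed-point iteration for the correlation matrix D of Eq. (4).

    h, O, Gam_L, Gam_R: matrices on the system's internal indices (wide band).
    f_L, f_R: Fermi occupations of the reservoirs evaluated on the grid omega.
    """
    n = h.shape[0]
    eye = np.eye(n)
    dw = omega[1] - omega[0]
    # Retarded/advanced Green's functions; monitoring only adds the lifetime gamma*O^2
    GR = np.array([np.linalg.inv(w * eye - h + 1j * (Gam_L + Gam_R) + 1j * gamma * O @ O)
                   for w in omega])
    GA = np.conj(np.transpose(GR, (0, 2, 1)))
    D = np.zeros((n, n), dtype=complex)
    for _ in range(max_iter):
        kernel = (f_L[:, None, None] * Gam_L + f_R[:, None, None] * Gam_R
                  + gamma * O @ D @ O)
        D_new = (GR @ kernel @ GA).sum(axis=0) * dw / np.pi
        if np.max(np.abs(D_new - D)) < tol:
            return D_new
        D = D_new
    return D
```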
The solution of Eq. (4) completes the full derivation of the Green's function \(\mathbf{\mathcal{G}}\). The knowledge of \(\mathbf{\mathcal{G}}\) is sufficient to derive currents [49; 75]. After straightforward algebra, detailed in the SM [86], we find closed, exact and non-perturbative expressions for the particle and heat currents:
\[J^{\zeta}_{r}=\frac{2}{\pi}\int d\omega\,(\omega-\mu_{r})^{\zeta}\left(f_{\bar{r}}-f_{r}\right)\mathrm{tr}\left[\Gamma_{r}\mathcal{G}^{R}\Gamma_{\bar{r}}\mathcal{G}^{A}\right]\\ +\gamma\frac{2}{\pi}\int d\omega\,(\omega-\mu_{r})^{\zeta}\,\mathrm{tr}\left[\Gamma_{r}\mathcal{G}^{R}O\left(\mathcal{D}-f_{r}\mathbb{1}\right)O\mathcal{G}^{A}\right]\,. \tag{5}\]
with \(\bar{r}=R\) if \(r=L\) and vice versa. This expression is the main result of our work. It allows us to draw general conclusions on monitor-assisted transport and, combined with Eq. (4), can be directly applied to all settings described by Lindbladians of the form (1).
Equation (5) appears as a sum of two distinct terms. The first term reproduces the Landauer-Büttiker formula for currents in non-interacting systems [88; 90]. It describes the energy-preserving transfer of particles between reservoirs at energy \(\omega\) with transmission probability \(\mathcal{T}(\omega)=4\mathrm{tr}\left[\Gamma_{r}\mathcal{G}^{R}\Gamma_{\bar{r}}\mathcal{G}^{A}\right]\). As \(\mathcal{T}(\omega)\) depends on \(\mathcal{G}^{R/A}\), where measurements only contribute to reducing life-times, monitoring affects elastic transport exactly as single-particle gains or losses [40; 41; 47; 48; 49; 50; 51].
The second term in Eq. (5) is controlled by monitoring. The implicit dependence of the correlation matrix \(\mathcal{D}\) on additional energy integrals, see Eq. (4), indicates that measurements inelastically add energy to or subtract energy from particles in the system. A rough inspection of Eq. (5) shows that the inelastic contribution has a peak as a function of the observation rate \(\gamma\), interpolating between a linear growth for small \(\gamma\) and a \(\gamma^{-1}\) decay in the strong measurement limit, as \(\mathcal{G}^{R/A}\propto\gamma^{-1}\) for \(\gamma\to\infty\), see Figs. 1b-2c. The position and strength of this maximum depend on the details of the problem, but the maximum is generally expected for values of \(\gamma\) comparable to the spectral width of the system and its coupling strength to the baths. These maxima are out of reach in perturbative approaches. Importantly, the inelastic current is not directly proportional to \(f_{L}-f_{R}\), and can thus be finite even without a bias. This mechanism describes the generation of non-reciprocal currents from measurement and can be exploited for work generation.
We provide below explicit illustrations of these considerations on two different monitor-assisted devices.
_Monitored density engine -_ We first consider a monitored setting, sketched in Fig. 1, where a single level of energy \(\varepsilon_{d}\), described by the Hamiltonian \(\mathcal{H}_{\mathrm{sys}}=\varepsilon_{d}\,d^{\dagger}d\), evolves under the continuous measurement of its occupation, associated with the operator \(\mathcal{O}=n=d^{\dagger}d\). Solving Eq. (4) gives the occupation of the level
\[\langle n\rangle=\frac{\int d\omega\,\mathcal{A}(\omega)[f_{L}(\omega)P_{L}( \omega)+f_{R}(\omega)P_{R}(\omega)]}{\int d\omega\,\mathcal{A}(\omega)[P_{L}( \omega)+P_{R}(\omega)]}\,, \tag{6}\]
where \(\mathcal{A}(\omega)=-\mathrm{Im}[\mathcal{G}^{R}_{dd}(\omega)]/\pi=\frac{1}{\pi}\frac{\Gamma_{L}+\Gamma_{R}+\gamma}{|\omega-\varepsilon_{d}-\Sigma_{L}-\Sigma_{R}+i\gamma|^{2}}\) is the spectral function of the level. We have introduced the quantity \(P_{r}(\omega)=\Gamma_{r}(\omega)/[\Gamma_{L}(\omega)+\Gamma_{R}(\omega)+\gamma]\), which weights the contribution of each reservoir.
Inserting this solution into Eq. (5), we obtain the particle current \(J^{0}=J^{0}_{R}=-J^{0}_{L}\) flowing through the system
\[J^{0}=2\int d\omega\mathcal{A}\frac{\Gamma_{L}\Gamma_{R}}{\Gamma_{ L}+\Gamma_{R}+\gamma}\left(f_{L}-f_{R}\right) \tag{7}\] \[+\frac{2\gamma}{\int d\omega\mathcal{A}\left(P_{L}+P_{R}\right)} \int d\omega d\omega^{\prime}\mathcal{A}\ \mathcal{A}^{\prime}P_{L}P^{\prime}_{R}\left(f_{L}-f^{\prime}_{R}\right)\,,\]
where we omit all frequency dependency for compactness and use the shorthand notation \(f^{\prime}=f(\omega^{\prime})\).
The first term reproduces the well-known expression of the current flowing through a Breit-Wigner resonance [92; 93], with an additional suppression controlled by the monitoring rate \(\gamma\).
The inspection of the inelastic term in Eq. (7) directly shows that even without bias (\(\mu_{L,R}=\mu,\,T_{R,L}=T\)), monitoring can trigger the flow of a finite, non-reciprocal current through the system. This non-reciprocal current is finite provided that at least one of the hybridization functions \(\Gamma_{L/R}\) depends on energy, and that mirror and particle-hole symmetry are simultaneously broken [94; 95; 96]. Such conditions are satisfied when \(\Gamma_{L}\neq\Gamma_{R}\) and at least one function among \(\mathcal{A}\) or \(\Gamma_{L/R}\) is not symmetric around the chemical potential \(\mu\). The mechanism generating this current is sketched in Fig. 1a: electrons at energy \(\omega_{1}\) are emitted from one reservoir onto the level and the measurement provides the energy for the electron to exit into an empty state of the other reservoir at energy \(\omega_{2}\). The fact that the injection and emission rates depend asymmetrically on energy allows the generation of the current. The emergence of a non-reciprocal current can be also understood based on the fact that averaging over the measurement outcomes is equivalent, in this specific case, to coupling the system to an infinite-temperature bosonic bath (see SM [86]), which induces a thermoelectric flow in the system if mirror and particle-hole symmetry are broken [97; 98; 99].
Figure 1b shows that the inelastic current displays the aforementioned peak as a function of the measurement strength \(\gamma\) at zero bias \(\delta\mu=\mu_{L}-\mu_{R}=0\). For all numerical applications, we consider a minimal model where the level is coupled to two metallic reservoirs via two energy filters of energy \(\varepsilon_{L/R}\). In this case, \(\Sigma^{R}_{r}(\omega)=t^{2}/(\omega-\varepsilon_{r}+i\Delta)\), where \(t\) is the level-filter tunnel coupling and \(\Delta\) the hybridization constant of the filter with the reservoirs, see SM [86]. The resulting hybridization function \(\Gamma_{r}(\omega)=-\text{Im}\Sigma^{R}_{r}(\omega)\) is peaked around \(\varepsilon_{r}\), as sketched in Fig. 1a. We have found the maximum non-reciprocal current for \(\gamma\simeq t\) - that is, outside the weak-coupling regime \(\gamma\ll t\) - when \(\varepsilon_{d}=0\) and when mirror and particle-hole symmetry are broken by antisymmetric reservoirs with \(\varepsilon_{L}=-\varepsilon_{R}\). The peak roughly follows \(\varepsilon_{d}\) and is suppressed by finite temperatures, see inset of Fig. 1b. Similar non-reciprocal effects and peaks were also discussed, from a real-time perspective, in Refs. [38; 39].
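As a concrete illustration, the zero-bias, zero-temperature inelastic current of Eq. (7) for this two-filter model can be evaluated numerically as sketched below; the frequency grid and cutoffs are arbitrary numerical choices, and the parameter values follow the caption of Fig. 1.

```python
import numpy as np

t, Delta, eps_d = 1.0, 0.55, 0.0
eps_L, eps_R = -1.48, 1.48                    # filter energies, in units of t
omega = np.linspace(-30.0, 30.0, 120001)      # frequency grid (arbitrary range/resolution)
dw = omega[1] - omega[0]
f = (omega < 0).astype(float)                 # zero-temperature Fermi function at mu = 0

def sigma(w, eps):
    """Retarded filter self-energy t^2 / (w - eps + i*Delta)."""
    return t**2 / (w - eps + 1j * Delta)

def unbiased_current(gamma):
    SL, SR = sigma(omega, eps_L), sigma(omega, eps_R)
    GamL, GamR = -SL.imag, -SR.imag           # hybridization functions Gamma_r(w)
    Gret = 1.0 / (omega - eps_d - SL - SR + 1j * gamma)
    A = -Gret.imag / np.pi                    # spectral function of the level
    PL = GamL / (GamL + GamR + gamma)
    PR = GamR / (GamL + GamR + gamma)
    norm = np.trapz(A * (PL + PR), dx=dw)
    # The elastic term vanishes at zero bias; the double integral of Eq. (7) factorises.
    inelastic = (np.trapz(A * PL * f, dx=dw) * np.trapz(A * PR, dx=dw)
                 - np.trapz(A * PL, dx=dw) * np.trapz(A * PR * f, dx=dw))
    return 2.0 * gamma * inelastic / norm

print(unbiased_current(gamma=1.0))            # unbiased current near the peak of Fig. 1b
```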
Figure 1c shows the differential conductance \(G=\partial J^{0}/\partial\delta\mu|_{\delta\mu=0}\) for the same system. \(G\) also features elastic and inelastic contributions [100; 101], scaling differently with \(\gamma\). For small rates, the elastic term dominates, showing as many peaks as resonances in the system - three in the application of Fig. 1. Because only the central level is monitored, increasing \(\gamma\) strongly suppresses its associated resonance, while spectral weight is transferred to the filters (arrows in Fig. 1c). Consequently, for intermediate monitoring strengths \(\gamma\simeq t\), the conductance actually increases out of resonance (\(\mu\neq 0\)), before being also suppressed in the \(\gamma\gg t\) limit.
The fact that monitoring generates currents at zero bias implies that they can flow against externally imposed biases to generate work.
Figure 1: a) Monitored level of energy \(\varepsilon_{d}\), coupled to left and right reservoirs with asymmetric hybridization functions \(\Gamma_{L}(\omega)\neq\Gamma_{R}(\omega)\). The level occupation is measured with strength \(\gamma\), providing the inelastic mechanism promoting particles from energy \(\omega_{1}\) to \(\omega_{2}\) and inducing a current against the bias (arrows). The blue-shaded areas correspond to the finite-temperature Fermi distributions of the reservoirs. For all plots, we use the two-filter model discussed in the main text, with \(\varepsilon_{R}=1.48t=-\varepsilon_{L}\), \(\Delta=0.55t\), which we found to maximize the unbiased particle current at \(\mu_{L,R}=T_{L,R}=0\). b) Peaked structure of the unbiased particle current as a function of the measurement strength \(\gamma\) for varying \(\varepsilon_{d}\). Inset: The unbiased current decays monotonously for increasing temperatures (\(\gamma=1\)). c) Differential conductance \(G\) as a function of the chemical potential \(\mu\) at \(T=0\) for increasing \(\gamma\). The measurement suppresses the resonance associated to the single level and favors those from the filters, as highlighted by arrows. d) Electric power as function of a symmetric bias \(\mu_{R}-\mu_{L}\) around \(\mu=0\), for different values of \(\gamma\). Dashed lines correspond to linear response calculations.
We consider here the generated power \(\mathcal{P}=\delta\mu\cdot J^{0}\) and show the importance of non-perturbative and out-of-equilibrium effects on this quantity. In linear response, \(J^{0}\simeq J^{0}|_{\delta\mu=0}-\delta\mu\,G\), and the power has a parabolic dependence on \(\delta\mu\), with a maximum \(\mathcal{P}_{\text{max}}=J^{0}|_{\delta\mu=0}^{2}/4G\), reached at \(\delta\mu=J^{0}|_{\delta\mu=0}/2G\), and a change of sign at the stopping voltage \(\delta\mu_{\text{stop}}=J^{0}|_{\delta\mu=0}/G\). Figure 1d shows that the maximum power generation is found for monitoring of strength \(\gamma>t\), that is, out of the weak coupling regime. Moreover, we find that non-equilibrium effects associated with strongly biased reservoirs cannot be neglected. They can be exactly derived via Eq. (7), and the dashed lines in Fig. 1d clearly show that linear response greatly overestimates \(\mathcal{P}_{\text{max}}\) and \(\delta\mu_{\text{stop}}\) when \(\gamma\simeq t\).
_Quantum measurement cooling -_ We consider two independent sites \(H_{\text{sys}}=\varepsilon_{L}d_{L}^{\dagger}d_{L}+\varepsilon_{R}d_{R}^{ \dagger}d_{R}\) that are coupled via the monitoring process, \(O_{ij}=\delta_{iL}\delta_{jR}+\delta_{iR}\delta_{jL}\), see Fig. 2a. This process can be in principle realized by adding an interferometer measuring cross-correlations between the two sites [102, 103]. Also in this case, we rely on Eq. (4) to find the occupation of the levels \(\langle n_{r}\rangle=\left\langle d_{r}^{\dagger}d_{r}\right\rangle\)
\[\langle n_{r}\rangle=\frac{\int d\omega\big[f_{r}P_{r}\mathcal{A}_{r}+\big(1-\int d\omega^{\prime}P_{r}^{\prime}\mathcal{A}_{r}^{\prime}\big)f_{\bar{r}}P_{\bar{r}}\mathcal{A}_{\bar{r}}\big]}{\sum_{r^{\prime}}\int d\omega P_{r^{\prime}}\mathcal{A}_{r^{\prime}}-\prod_{r^{\prime}}\int d\omega P_{r^{\prime}}\mathcal{A}_{r^{\prime}}}\,, \tag{8}\]
with modified notation \(P_{r}=\Gamma_{r}/(\Gamma_{r}+\gamma)\) and spectral functions \(\mathcal{A}_{r}(\omega)=-\text{Im}\mathcal{G}_{rr}^{R}/\pi=\frac{1}{\pi}\frac{\Gamma_{r}+\gamma}{|\omega-\varepsilon_{r}-\Sigma_{r}+i\gamma|^{2}}\).
Because of the absence of coherent hopping between sites, \(\mathcal{G}_{LR}^{R/A}=0\) and only the inelastic component of the currents in Eq. (5) is finite, for which the knowledge of Eq. (8) is needed. We are interested in exact expressions for quantum measurement cooling (QmC) [76]; we thus consider the heat current flowing into the right reservoir
\[J_{R}^{1}=\frac{2\gamma}{\mathcal{N}}\int d\omega d\omega^{\prime }(\omega-\mu_{R})\mathcal{A}_{R}P_{R}\Big{[}\mathcal{A}_{L}^{\prime}P_{L}^{ \prime}\left(f_{L}^{\prime}-f_{R}\right)\\ +\left(1-\int d\omega^{\prime\prime}\mathcal{A}_{L}^{\prime\prime }P_{L}^{\prime\prime}\right)\mathcal{A}_{R}^{\prime}P_{R}^{\prime}\left(f_{R} ^{\prime}-f_{R}\right)\Big{]}\,, \tag{9}\]
where \(\mathcal{N}\) is the denominator appearing in Eq. (8). To gain physical insight into the requirements for QmC and the multiple processes described by Eq. (9), we first inspect the \(\gamma\to 0\) limit. To leading order in \(\gamma\), one can approximate \(P_{r}=1\) and only the first term in Eq. (9) remains. It can be cast in the compact form
\[J_{R}^{1}= 2\gamma\int d\omega\,(\omega-\mu_{R})\mathcal{A}_{R}(\omega) \big{[}\,\langle n_{L}\rangle-f_{R}(\omega)\big{]}\,. \tag{10}\]
If we further approximate the spectral function by \(\mathcal{A}_{R}(\omega)=\delta(\omega-\varepsilon_{R})\), we get \(J_{R}^{1}=2\gamma(\varepsilon_{R}-\mu_{R})(\langle n_{L}\rangle-\langle n_{R}\rangle)\). This expression makes explicit that the heat flow in the right reservoir is controlled by the position of the right level with respect to the chemical potential and the difference of occupation with respect to the left level. The condition for cooling the right reservoir is \(J_{R}^{1}<0\). In the absence of bias, such condition requires \(\mu_{R}\lessgtr\varepsilon_{R}\) and \(\varepsilon_{L}\lessgtr\varepsilon_{R}\), as sketched in Fig. 2a. Analogous conditions were found to achieve cooling by heating [104, 105], where the role of measurement is played by a third hot reservoir. The second term in Eq. (9) acts at order \(\gamma^{2}\) and describes the reinjection of heat in the right reservoir by particles hopping back and forth to the left level via the monitoring process.
In Figure 2, we explore QmC and its performance also for strong temperature biases and large values of \(\gamma\). For numerical applications, we consider \(\mu_{L/R}=0\) and take the same hybridization functions \(\Gamma_{r}(\omega)\) as in the previous section, with peaks aligned with \(\varepsilon_{r}\).
Figure 2: a) Two-level system under continuous monitoring of its cross-correlations, coupled to a left (hot) and right (cold) reservoir. For applications, we consider the same filters as in Fig. 1, aligned with the levels \(\varepsilon_{L/R}\). b) Parameter region where a reservoir at a temperature \(T_{R}=t\) can be cooled by measurement for different values of \(\gamma\) and \(\Delta=0.5t\). The range of parameters for which quantum measurement cooling is possible reduces by increasing \(\gamma\). The black dot corresponds to \(\varepsilon_{L}=10t\) and \(\varepsilon_{R}=3t\), where panels (c) and (d) are derived. c) Heat flowing into the right reservoirs for increasing temperature bias \(T_{L}>T_{R}\). Quantum measurement cooling (QmC) occurs in the colored region and below some critical temperature bias. d) Parametric plot of the coefficient of performance (COP) of QmC. Curves are obtained by varying the measurement strength \(\gamma\).
Figure 2b shows the regions where QmC occurs, in the absence of bias and for increasing monitoring strength \(\gamma\). QmC indeed occurs when \(\varepsilon_{L}\lessgtr\varepsilon_{R}\lessgtr 0\). Nonetheless, the parameter region for QmC shrinks as the monitoring strength \(\gamma\) increases, reflecting the fact that more heat is injected into both reservoirs the stronger the measurement process is. Fig. 2c shows the behavior of \(J_{R}^{1}\) for increasing temperature biases as a function of \(\gamma\). Exactly as for the non-reciprocal current discussed in the previous section (Fig. 1b), the heat current shows a peak for \(\gamma\approx t\). However, increasing the temperature bias leads to a change of sign of the heat current, signaling that the left reservoir is hot enough to heat the right one.
We conclude this study by discussing the efficiency of this process, which is characterized by the coefficient of performance, \(COP=|J_{R}^{1}/(J_{R}^{1}+J_{L}^{1})|\), which measures how much heat can be extracted from monitoring [106]. We depict the COP in Fig. 2d as a parametric plot in the rate \(\gamma\). For fixed temperatures in the reservoirs, the maximum COP is found near the critical measurement strength \(\gamma\) at which the heat flow changes sign in Fig. 2c. This monitoring strength is also of order \(t\) and is not encompassed by the weak coupling limit.
_Conclusions_ - We have derived exact and analytic expressions for the particle and heat currents flowing in a large class of monitored systems. These formulas were applied to investigate power harvesting and cooling assisted by measurements. In particular, we have found current peaks as a function of the measurement strength \(\gamma\) out of the weak-coupling limit (Figs. 1b-2c). These peaks are clear features that could favor their observation in experiments. Our results can be readily generalized to different monitored setups and pave the way to the investigation of unexplored regimes which are not captured by standard, perturbative approaches. We have shown that these regimes are important, as they manifest the best performances in terms of power generation and quantum measurement cooling.
On a more fundamental level, we have provided exact expressions for quantum transport in the presence of non-elastic effects caused by monitoring. It would be of great interest to establish in the future whether formulas like Eq. (5) also apply for interacting quantum impurity models driven out of equilibrium, and/or for systems coupled to bosonic baths at finite or even zero temperature [42; 107; 108; 109; 110].
_Acknowledgments_ - We are grateful to Christophe Berthod, Daniel Braak, Geraldine Haack, Manuel Houzet, Rafael Sanchez, Kyrylo Snizhko, Clemens Winkelmann and Robert Whitney for helpful comments and discussions. This work has been supported by the Swiss National Science Foundation under Division II under grant 200020-188687. J.S.F. and M.F. acknowledge support from the FNS/SNF Ambizione Grant No. PZ00P2_174038. M. F. acknowledges support from EPiQ ANR-22-PETQ-0007 part of Plan France 2030.
|
2304.07326 | Microscopic theory of colour in lutetium hydride | Nitrogen-doped lutetium hydride has recently been proposed as a
near-ambient-conditions superconductor. Interestingly, the sample transforms
from blue to pink to red as a function of pressure, but only the pink phase is
claimed to be superconducting. Subsequent experimental studies have failed to
reproduce the superconductivity, but have observed pressure-driven colour
changes including blue, pink, red, violet, and orange. However, discrepancies
exist among these experiments regarding the sequence and pressure at which
these colour changes occur. Given the claimed relationship between colour and
superconductivity, understanding colour changes in nitrogen-doped lutetium
hydride may hold the key to clarifying the possible superconductivity in this
compound. Here, we present a full microscopic theory of colour in lutetium
hydride, revealing that hydrogen-deficient LuH$_2$ is the only phase which
exhibits colour changes under pressure consistent with experimental reports,
with a sequence blue-violet-pink-red-orange. The concentration of hydrogen
vacancies controls the precise sequence and pressure of colour changes,
rationalising seemingly contradictory experiments. Nitrogen doping also
modifies the colour of LuH$_2$ but it plays a secondary role compared to
hydrogen vacancies. Therefore, we propose hydrogen-deficient LuH$_2$ as the key
phase for exploring the superconductivity claim in the lutetium-hydrogen
system. Finally, we find no phonon-mediated superconductivity near room
temperature in the pink phase. | Sun-Woo Kim, Lewis J. Conway, Chris J. Pickard, G. Lucian Pascut, Bartomeu Monserrat | 2023-04-14T18:01:14Z | http://arxiv.org/abs/2304.07326v2 | # Microscopic theory of colour in lutetium hydride
###### Abstract
Nitrogen-doped lutetium hydride has recently been proposed as a near-ambient conditions superconductor. Interestingly, the sample transforms from blue to pink to red as a function of pressure, but only the pink phase is superconducting. Subsequent experimental studies have failed to reproduce the superconductivity, but have confirmed the existence of pressure-driven colour changes. However, these colour changes appear in different sequences and at different pressures depending on the experiment, with observed colours including blue, pink, red, violet, and orange. Given the relationship between colour and superconductivity, understanding colour changes in nitrogen-doped lutetium hydride may hold the key to clarifying the possible superconductivity in this compound. Here, we describe a full microscopic theory of colour in lutetium hydride. We find that hydrogen-deficient LuH\({}_{2}\) is the only phase which exhibits colour changes under pressure consistent with experimental reports, with a sequence blue-violet-pink-red-orange. We also find that the concentration of hydrogen vacancies controls the precise sequence and pressure of colour changes, rationalising seemingly contradictory experiments. Nitrogen doping also modifies the colour of LuH\({}_{2}\) but it plays a secondary role compared to hydrogen vacancies. Therefore, we propose hydrogen-deficient LuH\({}_{2}\) as the key phase for exploring the superconductivity claim in the lutetium-hydrogen system. Finally, we find no phonon-mediated superconductivity near room temperature in the pink phase.
## Introduction
The proposal by Ashcroft that hydrogen-rich compounds could host high temperature phonon-mediated superconductivity under pressure [1; 2] has stimulated a profusion of theoretical proposals for high pressure superconducting hydrides [3; 4; 5; 6; 7; 8; 9] and the subsequent experimental discovery of some of these [10; 11; 12]. This new class of hydride superconductors has re-ignited the search for superconductivity at ambient conditions, and Dasenbrock-Gammon and co-workers have recently reported superconductivity in nitrogen-doped lutetium hydride with a maximum critical temperature of 294 K at a moderate pressure of 10 kbar [13]. Interestingly, superconductivity is reported to coincide with drastic colour changes in the reflectivity of the sample: increasing pressure transforms a non-superconducting blue metal to a superconducting pink metal at 3 kbar, and a further transformation to a non-superconducting red metal above 30 kbar.
This remarkable report has sparked a growing number of experimental [14; 15; 16; 17; 18; 19; 20; 21; 22; 23] and theoretical [24; 25; 26; 27; 28; 29] investigations, none of which have so far succeeded at reproducing or explaining near-ambient superconductivity. On the experimental front, measurements of resistivity and magnetic susceptibility find no superconductivity near ambient conditions. Puzzlingly, multiple studies report pressure-driven colour changes, but these include a wide range of seemingly incompatible colour sequences and pressure conditions: blue-to-pink at 3 kbar and pink-to-red at 30 kbar in the original report [13]; blue-to-pink at 22 kbar and pink-to-red at 40 kbar [14]; blue-to-violet upon contact with a diamond culet, violet-to-red at 30 kbar and red-to-orange at 120 kbar [15]; blue-to-violet at 94 kbar [15]; blue-to-violet at 120 kbar, violet-to-pink-to-red gradually between 160 kbar and 350 kbar and red persisting up to 420 kbar [16; 17]; blue-to-violet-to-pink-to-red with transition pressures differing by up to 60 kbar depending on the pressure medium used [18]; and persistent blue colour up to 65 kbar [19]. Growing evidence suggests that the colour changes are significantly affected by the initial compression procedure [15] and by the pressure medium used in the diamond anvil cell [18].
On the theoretical front there have been multiple reports of structure searches in the Lu-H binary and the Lu-H-N ternary systems [24; 25; 26; 27; 28]. Most studies only report metastable ternary structures, but Ferreira and co-workers report a ternary Lu\({}_{4}\)N\({}_{2}\)H\({}_{5}\) stable structure [28]. The roles of pressure and nitrogen doping [29] and of quantum and thermal ionic vibrations [30] in stabilising the cubic LuH\({}_{3}\) structure have also been studied. None of the predicted stable or metastable structures are found to be phonon-mediated superconductors near room temperature.
Given the association between superconductivity and colour changes in the original superconductivity report, understanding colour changes in nitrogen-doped lutetium hydride holds the key to clarifying the possible superconductivity in this compound. However, experimental reports provide an inconsistent picture regarding pressure-driven colour changes, and there are no theoretical studies yet. In this work, we provide a full microscopic theory of colour in lutetium hydride.
## Results
### LuH\({}_{2}\) under ambient conditions
Lutetium hydride under ambient conditions crystallises in the LuH\({}_{2}\) stoichiometry with cubic space group \(Fm\overline{3}m\). As shown in Fig. 1, LuH\({}_{2}\) adopts the fluorite structure with the lutetium atoms occupying the sites of an fcc lattice, and the hydrogen atoms occupying the tetrahedral interstitial sites.
LuH\({}_{2}\) is a metal whose reflectivity endows it with a blue appearance. We demonstrate the validity of our computational approach by reporting the calculated colour of LuH\({}_{2}\) at ambient conditions in Fig. 1. We show results using three distinct computational models to explore the potential role of electron correlation due to the presence of lutetium \(5d\) electrons and the potential role of strong electron-phonon coupling due to the presence of hydrogen.
The first model we consider uses semilocal density functional theory (DFT) in the generalised gradient approximation, labelled DFT in Fig. 1. This model provides a basic description of the electronic structure without a detailed treatment of electron correlation and without the inclusion of electron-phonon effects. The calculated reflectivity is large in the infrared region above 800 nm, is strongly suppressed in the red part of the visible spectrum with a calculated minimum at 710 nm, and increases gradually towards the blue part of the visible spectrum. The overall shape of the reflectivity is consistent with that observed experimentally [15] and directly leads to the blue colour of LuH\({}_{2}\).
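The colours discussed throughout this work follow from such computed reflectivity spectra (for normal incidence, the reflectivity is typically obtained from the dielectric function as \(R=|(\sqrt{\varepsilon}-1)/(\sqrt{\varepsilon}+1)|^{2}\)). One common way to map a visible-range reflectivity onto a display colour, not necessarily the exact procedure used here, is to weight it with an illuminant and the CIE 1931 colour-matching functions and convert the resulting tristimulus values to sRGB; a minimal sketch is given below, assuming the tables `lam`, `cmf`, and `d65` have been loaded from standard CIE data on a common wavelength grid.

```python
import numpy as np

def reflectivity_to_srgb(R, lam, cmf, d65):
    """Map a reflectivity spectrum R(lam) to an sRGB triple.

    lam: wavelengths in nm; cmf: (N, 3) CIE 1931 colour-matching functions;
    d65: illuminant spectrum on the same grid (all assumed pre-loaded).
    """
    XYZ = np.trapz(R[:, None] * d65[:, None] * cmf, lam, axis=0)
    XYZ /= np.trapz(d65 * cmf[:, 1], lam)            # normalise to the illuminant's luminance
    M = np.array([[ 3.2406, -1.5372, -0.4986],       # standard XYZ -> linear sRGB matrix
                  [-0.9689,  1.8758,  0.0415],
                  [ 0.0557, -0.2040,  1.0570]])
    rgb = np.clip(M @ XYZ, 0.0, 1.0)
    return np.where(rgb <= 0.0031308, 12.92 * rgb, 1.055 * rgb ** (1 / 2.4) - 0.055)
```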
Lutetium has an electronic configuration with a partially filled \(5d\) shell, which suggests that electronic correlation beyond that captured by standard DFT may contribute to the electronic properties of LuH\({}_{2}\). To explore the possible role of electron correlation, we repeat our calculations using DFT corrected with a Hubbard \(U\) term, labelled as DFT+\(U\) in Fig. 1, which captures static correlation. The reflectivity curve has a similar shape to that obtained at the DFT level, but the minimum of the reflectivity has a lower value and occurs at a slightly shorter wavelength of 690 nm. Combined with a slightly larger reflectivity in the blue part of the visible spectrum, we
obtain a slightly brighter blue colour for LuH\({}_{2}\) using the DFT+\(U\) model. These results indicate that static electron correlation arising from lutetium only plays a minor role in LuH\({}_{2}\). We have also performed dynamical mean field theory calculations that capture dynamical correlation and also find that they can be neglected (Supplementary Information). We rationalise these results by noting that \(5d\) orbitals have a large electron bandwidth spanning multiple eV (see Supplementary Information for the band structure) which implies that the spatial extent of the orbitals is large and the corresponding local correlations weak.
Hydrogen is the lightest of all elements, and as such it exhibits large quantum fluctuations that can lead to strong electron-phonon coupling. Indeed, this is the prime motivation behind the proposal that hydrogen-rich compounds could be high temperature phonon-mediated superconductors.
Figure 1: **Structure, reflectivity, and colour of LuH\({}_{2}\).****a.** Crystal structure of \(Fm\overline{3}m\) LuH\({}_{2}\). **b.** Reflectivity of LuH\({}_{2}\) calculated using semilocal density functional theory (DFT), DFT corrected with a Hubbard \(U\) term (DFT+\(U\)), and DFT including electron-phonon coupling (DFT+EPC). The experimental reflectivity is taken from Ref. [15]. **c.** Colour and photorealistic rendering of LuH\({}_{2}\) calculated using DFT, DFT+\(U\), and DFT+EPC. The photorealistic rendering is shown as LuH\({}_{2}\) surrounding a grey ball with an opening in the centre.
To explore the possible role of electron-phonon interactions in LuH\({}_{2}\), we repeat our calculations including contributions from both quantum fluctuations at 0 K and thermal fluctuations at finite temperature, labelled DFT+EPC in Fig. 1. The reflectivity curve has a similar shape to those obtained with DFT and DFT+\(U\), but exhibits a lower value in the infrared region and a larger value in the red region. We again obtain a blue colour, indicating that electron-phonon coupling does not significantly modify the reflectivity of LuH\({}_{2}\).
Overall, we find that electronic correlation and electron-phonon interactions make a small contribution, and that the main features of the reflectivity curve and the resulting blue colour of LuH\({}_{2}\) are correctly captured by semilocal DFT. Therefore, our subsequent discussion neglects electron correlation and electron-phonon interactions, but further details about these contributions are included in the Supplementary Information.
### Lutetium hydride colour changes under pressure
To build a microscopic theory of colour in lutetium hydride, we explore the pressure-driven colour changes in LuH, LuH\({}_{2}\), and LuH\({}_{3}\). The focus on LuH\({}_{x}\) compounds is motivated by the experimental reports that observe cubic phases tentatively assigned to LuH\({}_{2}\) and LuH\({}_{3}\) stoichiometries, but we also provide the pressure-driven colour changes of other stable and metastable compounds of the lutetium-hydrogen-nitrogen system in the Supplementary Information.
We show the reflectivity as a function of pressure for LuH, LuH\({}_{2}\), and LuH\({}_{3}\) in Figure 2, together with the corresponding colours. Under ambient conditions, LuH adopts the \(F\overline{4}3m\) symmetry and has a relatively constant reflectivity across the visible spectrum, with a small peak around the blue region that leads to a grey colour with a blueish hue. Upon increasing pressure, the reflectivity peak in the blue region diminishes and the reflectivity grows towards the infrared region, leading to the grey colour acquiring a magenta hue. At a pressure of 105 kbar, LuH undergoes a phase transition to a structure with \(Fm\overline{3}m\) space group that exhibits a light grey colour (reflectivity shown in Supplementary Information).
At ambient conditions LuH\({}_{2}\) adopts the cubic \(Fm\overline{3}m\) space group and has the reflectivity described in Fig. 1 and repeated in Fig. 2 with a minimum in the red part of the spectrum that leads to an overall blue colour. With increasing pressure, the reflectivity minimum shifts towards shorter wavelengths, and the reflectivity from the red part of the spectrum increases, in agreement with experiment [15]. This leads to a gradual colour change from blue to violet with increasing pressure. LuH\({}_{2}\) undergoes a structural phase transition at 732 kbar to a phase of space group
\(P4/nmm\), which has a grey colour with an orange-red hue (reflectivity shown in Supplementary Information).
LuH\({}_{3}\) adopts a \(P\overline{3}c1\) space group at ambient conditions that has a yellow-green hue (reflectivity shown in Supplementary Information). It undergoes a structural phase transition to a cubic \(Fm\overline{3}m\) phase at 258 kbar, a structure that is observed in some experiments as it can be recovered at lower pressures. The reflectivity of the cubic structure is relatively constant across the visible range of the spectrum at all pressures, leading to a grey colour.
Overall, the results depicted in Fig. 2 show that the only compound that is blue at ambient conditions is cubic LuH\({}_{2}\).
Figure 2: **Pressure dependence of the reflectivity and colour of lutetium hydrides.****a-c.** Reflectivity as a function of pressure (in kbar) for **a**\(F\overline{4}3m\) LuH, **b**\(Fm\overline{3}m\) LuH\({}_{2}\), and **c**\(Fm\overline{3}m\) LuH\({}_{3}\). **d.** Colour of LuH, LuH\({}_{2}\), and LuH\({}_{3}\) as a function of pressure. We include the colour evolution for two different structures for each compound because in all cases there is a structural phase transition at \(p^{*}\) (vertical dashed line). The thermodynamically stable phase in each pressure range is highlighted with a red box.
Additionally, LuH\({}_{2}\) undergoes a gradual colour change towards violet starting at a pressure of about 400 kbar. This blue-to-violet colour change is also observed in multiple experiments [15; 16; 17; 18], but it occurs at different pressures in different experiments, ranging from 0 to at least 190 kbar, and possibly higher, as some experiments only observe a blue phase. The experimental results are found to be strongly dependent on experimental details such as the initial compression protocol [15] and the pressure medium used [18], so our calculations provide a reference for the expected bulk colour changes in LuH\({}_{2}\).
The conclusion that only cubic LuH\({}_{2}\) is consistent with experimental observations holds when other structures in the lutetium-hydrogen-nitrogen system are considered, as described in the Supplementary Information. Therefore, we can discard all compounds other than LuH\({}_{2}\) as playing a role in the experimentally reported colour changes. In particular, we discard the cubic LuH\({}_{3}\) structure proposed by Dasenbrock-Gammon and co-workers to explain high temperature superconductivity [13] as this structure has a grey colour at all pressures.
### Hydrogen deficient LuH\({}_{2-\delta}\)
The calculated blue-to-violet colour change of LuH\({}_{2}\) under pressure is consistent with multiple experimental observations [15; 16; 17; 18] but overestimates the transition pressure. Furthermore, some experimental observations reveal additional colour changes with increasing pressure, which include pink [13; 14; 18], red [13; 14; 15; 18], and orange [15]. The pink colour is particularly important as it is associated with the superconducting phase in the original report [13]. However, our calculations show that these additional colour changes are not present in pure LuH\({}_{2}\), and we therefore explore the role of hydrogen vacancies and nitrogen doping in the colour of LuH\({}_{2}\), as multiple experiments suggest off-stoichiometric compounds are present in the samples.
We show the pressure evolution of the reflectivity and colour of hydrogen-deficient LuH\({}_{1.875}\) in Fig. 3, simulated using a \(2\times 2\times 2\) supercell with a single hydrogen vacancy. At ambient conditions, LuH\({}_{1.875}\) exhibits a reflectivity with a shape similar to that of LuH\({}_{2}\) but with the minimum occurring at a somewhat shorter wavelength of 600 nm. The resulting colour is still blue. Similar to LuH\({}_{2}\), increasing pressure leads to an overall shift of the reflectivity minimum to shorter wavelengths and to an increase in the reflectivity in the red part of the spectrum. As a result we observe a blue-to-violet colour change at a pressure of about 100 kbar, significantly lower than the corresponding colour change in pure LuH\({}_{2}\) and in the experimentally observed pressure range. Increasing pressure further leads to a gradual transition to pink (peaking at about 300 kbar), followed by red (peaking at about 500 kbar) and tending towards orange approaching 1000 kbar.
Figure 3: **Pressure dependence of the reflectivity and colour of hydrogen-deficient lutetium hydrides.****a-b.** Reflectivity as a function of pressure (in kbar) for **a** LuH\({}_{1.875}\) and **b** LuH\({}_{1.750}\). **c-d.** Colour and photorealistic rendering of **c** LuH\({}_{1.875}\) and **d** LuH\({}_{1.750}\) as a function of pressure.
Therefore, hydrogen-deficient LuH\({}_{2}\) exhibits a sequence of colour changes that includes all colours reported experimentally.
The sequence and pressure of colour changes in hydrogen-deficient LuH\({}_{2}\) are strongly dependent on the concentration of hydrogen vacancies. Figure 3 also depicts the pressure evolution of the reflectivity and colour changes of LuH\({}_{1.750}\), with a higher concentration of hydrogen vacancies simulated with a \(2\times 2\times 2\) supercell with two hydrogen vacancies. In this case, the reflectivity minimum occurs at a wavelength of 550 nm at ambient conditions, giving a pink colour. Increasing pressure suppresses the reflectivity in the blue region, turning the colour from pink towards orange at lower pressures than those necessary for LuH\({}_{1.875}\). The reflectivities and colour changes of other hydrogen vacancy concentrations are detailed in the Supplementary Information.
These results suggest that the seemingly contradictory experimental observations of colour changes in lutetium hydride are likely due to varying hydrogen vacancy concentrations in LuH\({}_{2}\). In particular, Dasenbrock-Gammon and co-workers observe superconductivity in the pink phase starting at a pressure of 3 kbar [13], significantly lower than the pressures reported in multiple subsequent experiments. Our results suggest that this difference is due to a higher concentration of hydrogen vacancies in the original work compared to subsequent studies.
The concentration of nitrogen dopants also affects the colour of LuH\({}_{2}\) (see Supplementary Information). However, the colour changes driven by nitrogen doping play only a secondary role compared to those driven by hydrogen vacancies.
Finally, we note that we have also tested the role that hydrogen vacancies and nitrogen doping have on the reflectivity and colour of cubic \(Fm\overline{3}m\) LuH\({}_{3}\), as this phase has been tentatively identified as the parent phase responsible for superconductivity [13]. Our results show that the LuH\({}_{3}\) phase retains a grey colour under doping, discarding it as the parent phase for superconductivity.
### Absence of phonon-mediated superconductivity in the lutetium-hydrogen-nitrogen system
No experimental report since the original announcement of near-ambient conditions superconductivity in the lutetium-hydrogen-nitrogen system has been able to confirm this claim. Similarly, no calculation of stable and metastable phases in the lutetium-hydrogen-nitrogen system has predicted a high superconducting critical temperature within a phonon-mediated framework.
Our results suggest that hydrogen-deficient LuH\({}_{2}\) is responsible for the colour changes observed experimentally and associated with the superconducting state. We therefore calculate the superconducting critical temperature of LuH\({}_{1.875}\) under pressure but find no room-temperature superconductivity (details in the Supplementary Information).
## Discussion
Dasenbrock-Gammon and co-workers report superconductivity in nitrogen-doped lutetium hydride over the pressure range 3-30 kbar [13]. Importantly, the pressure-driven transition to and from the superconducting phase occurs simultaneously with drastic colour changes in the sample, which is pink in the superconducting phase, compared to blue below 3 kbar and red above 30 kbar. Additionally, they attribute the superconductivity to a cubic LuH\({}_{3}\) phase with some unknown concentration of nitrogen dopants and some unknown concentration of hydrogen vacancies.
Our calculations show that the cubic LuH\({}_{3}\) phase is not consistent with the colour changes observed experimentally, and therefore LuH\({}_{3}\) cannot be responsible for the observed superconductivity. Additionally, our results suggest that the only phase that is consistent with the observed colour changes is hydrogen-deficient cubic LuH\({}_{2}\), and that the concentration of hydrogen vacancies and nitrogen dopants controls the colour at each pressure. Finally, we also show that hydrogen-deficient LuH\({}_{2}\) is unlikely to be a high temperature phonon-mediated superconductor.
## Methods
_Electronic structure calculations. -_ We perform density functional theory (DFT) calculations using the Vienna \(ab\) \(initio\) simulation package (vasp) [31; 32] implementing the projector-augmented wave method [33]. For the exchange-correlation energy, we use the generalized-gradient approximation functional of Perdew-Burke-Ernzerhof modified for solids (PBEsol) [34]. Converged results are obtained with a kinetic energy cutoff for the plane wave basis of 400 eV and a \(\mathbf{k}\)-point grid of size \(40\times 40\times 40\) for the LuH\({}_{2}\) primitive cell and commensurate grids for other cell sizes and shapes. The geometry of the structures is optimised until all forces are below 0.01 eV/Å and the pressure is below 1 kbar. We also perform select calculations using DFT corrected with a Hubbard \(U\) term, for which we use a value of \(U=3\) eV. In addition, we perform select calculations using dynamical mean field theory with the edmft code [35; 36], which implements density functional theory with embedded dynamical mean field theory (DFT+eDMFT). For the DFT part we use the wien2k code [37].
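To make this setup concrete, the following is a minimal sketch of how such a calculation could be driven through the ASE interface to vasp. Only the PBEsol functional, the 400 eV cutoff, the \(40\times 40\times 40\) grid, and the 0.01 eV/Å force threshold are taken from the text; the structure file name, smearing settings, and relaxation tags are illustrative assumptions.

```python
# Minimal sketch of the DFT setup described above, driven through ASE's VASP interface.
# Values not quoted in the text (structure file, smearing, relaxation tags) are assumptions.
from ase.io import read
from ase.calculators.vasp import Vasp

atoms = read("LuH2_primitive.cif")  # hypothetical input structure

calc = Vasp(
    xc="pbesol",          # PBEsol exchange-correlation functional
    encut=400,            # plane-wave cutoff in eV
    kpts=(40, 40, 40),    # k-point grid for the LuH2 primitive cell
    ismear=1, sigma=0.2,  # metallic smearing (assumed values)
    ibrion=2, isif=3, nsw=100,  # relax both cell and ions
    ediffg=-0.01,         # stop when all forces fall below 0.01 eV/Angstrom
)
atoms.calc = calc
energy = atoms.get_potential_energy()  # triggers the relaxation/self-consistent run
```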
_Reflectivity. -_ Our reflectivity calculations follow the methodology described in Ref. [38]. We calculate the complex dielectric function within the independent-particle approximation as implemented in vasp. In the optical limit (\(\mathbf{q}\to 0\)), the dielectric function \(\varepsilon(\mathbf{q},\omega)\) is given by the sum of an intraband Drude-like term \(\varepsilon^{\text{intra}}(\mathbf{q},\omega)\) due to the electrons at the Fermi surface and an interband term \(\varepsilon^{\rm inter}({\bf q},\omega)\) describing vertical transitions between valence and conduction bands. The explicit form of each term is given by [39; 40]:
\[\varepsilon^{\rm intra}({\bf q},\omega)=-\frac{\omega_{D}^{2}(\mathbf{\hat{q}})} {\omega(\omega+i\gamma)}, \tag{1}\]
where the independent particle approximation Drude plasma frequency is
\[\omega_{D}^{2}(\mathbf{\hat{q}})=\frac{4\pi}{V}\sum_{\bf k}\sum_{n}\left| \left\langle\psi_{n{\bf k}}\right|\mathbf{\hat{q}}\cdot{\bf v}\left|\psi_{n{ \bf k}}\right\rangle\right|^{2}\left(-\frac{\partial f_{n{\bf k}}}{\partial E _{n{\bf k}}}\right), \tag{2}\]
and
\[\varepsilon^{\rm inter}({\bf q},\omega)=1-\frac{4\pi}{V}\sum_{\bf k}\sum_{n}\sum_{n^{\prime}\neq n}\frac{\left|\left\langle\psi_{n{\bf k}}\right|\mathbf{\hat{q}}\cdot{\bf v}\left|\psi_{n^{\prime}{\bf k}}\right\rangle\right|^{2}}{(E_{n{\bf k}}-E_{n^{\prime}{\bf k}})^{2}}\frac{f_{n{\bf k}}-f_{n^{\prime}{\bf k}}}{\omega+E_{n{\bf k}}-E_{n^{\prime}{\bf k}}+i\eta}. \tag{3}\]
In these equations, \(V\) is the volume of the system, \(\left|\psi_{n{\bf k}}\right\rangle\) is an electronic state with associated energy \(E_{n{\bf k}}\) and labelled with quantum numbers \((n,{\bf k})\), \(\mathbf{\hat{q}}\cdot{\bf v}\) is the dipole operator, and \(f_{n{\bf k}}\) is the Fermi-Dirac distribution. We use the empirical broadening parameters \(\gamma=\eta=0.1\) eV. To obtain the reflectivity, we average the dielectric function and the Drude plasma frequency as
\[\varepsilon(\omega)=\frac{\varepsilon(\mathbf{\hat{x}},\omega)+ \varepsilon(\mathbf{\hat{y}},\omega)+\varepsilon(\mathbf{\hat{z}},\omega)}{3} \hskip 28.452756pt\text{and}\hskip 28.452756pt\omega_{D}^{2}=\frac{ \omega_{D}^{2}(\mathbf{\hat{x}})+\omega_{D}^{2}(\mathbf{\hat{y}})+\omega_{D} ^{2}(\mathbf{\hat{z}})}{3}. \tag{4}\]
Using the relation \(\varepsilon(\omega)=[n(\omega)+ik(\omega)]^{2}\) with the refractive index \(n(\omega)\) and the optical extinction coefficient \(k(\omega)\), we compute the reflectivity at normal incidence by assuming a vacuum-material interface as
\[R(\omega)=\frac{[n(\omega)-1]^{2}+k(\omega)^{2}}{[n(\omega)+1]^{2}+k(\omega)^ {2}}. \tag{5}\]
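As a concrete illustration of Eqs. (1), (4) and (5), the short sketch below builds the total dielectric function from a direction-averaged Drude term plus an interband contribution and converts it into the normal-incidence reflectivity. The numerical inputs (toy interband term and plasma frequency) are placeholders, not LuH\({}_{2}\) data.

```python
# Sketch of Eqs. (1), (4) and (5): Drude + interband dielectric function -> n, k -> R.
import numpy as np

def reflectivity(omega, eps_inter_avg, omega_D2_avg, gamma=0.1):
    """Normal-incidence reflectivity from the direction-averaged dielectric function.

    omega         : photon energies (eV), 1D array
    eps_inter_avg : direction-averaged interband dielectric function (complex array)
    omega_D2_avg  : direction-averaged squared Drude plasma frequency (eV^2)
    gamma         : Drude broadening (eV), 0.1 eV as in the text
    """
    eps_intra = -omega_D2_avg / (omega * (omega + 1j * gamma))   # Eq. (1)
    eps = eps_intra + eps_inter_avg                              # total, already averaged as in Eq. (4)
    n_plus_ik = np.sqrt(eps)                                     # eps = (n + i k)^2
    n, k = n_plus_ik.real, n_plus_ik.imag
    return ((n - 1.0) ** 2 + k ** 2) / ((n + 1.0) ** 2 + k ** 2)  # Eq. (5)

# Illustrative usage with placeholder inputs (not real LuH2 data):
omega = np.linspace(0.5, 4.0, 400)              # roughly the visible range, in eV
eps_inter = 2.0 + 0.5j * np.ones_like(omega)    # toy interband contribution
R = reflectivity(omega, eps_inter, omega_D2_avg=25.0)
```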
Finally, we follow the method described in Ref. [38] to obtain the colour from the reflectivity and we use the Mitsuba 3 renderer for the photorealistic rendering [41].
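The colour step itself follows Ref. [38]; purely for orientation, the sketch below shows the generic pipeline from a reflectivity spectrum to an sRGB colour. The Gaussian lobes standing in for the CIE 1931 colour-matching functions and the flat illuminant are crude assumptions, so the output is qualitative only; the XYZ-to-sRGB matrix and gamma curve are the standard ones.

```python
# Qualitative sketch: reflectivity spectrum -> CIE XYZ -> sRGB colour.
import numpy as np

def lobe(lam, mu, s_lo, s_hi):
    """Piecewise-Gaussian lobe used as a rough stand-in for a colour-matching function."""
    s = np.where(lam < mu, s_lo, s_hi)
    return np.exp(-0.5 * ((lam - mu) / s) ** 2)

def cmf(lam):
    """Crude approximation (assumption) to the CIE 1931 x, y, z matching functions."""
    x = 1.06 * lobe(lam, 599.8, 37.9, 31.0) + 0.37 * lobe(lam, 442.0, 16.0, 26.7)
    y = 1.01 * lobe(lam, 555.0, 46.9, 40.5)
    z = 1.78 * lobe(lam, 437.0, 11.8, 36.0)
    return x, y, z

def reflectivity_to_srgb(lam_nm, R):
    """Integrate R against the matching functions (flat illuminant) and convert XYZ -> sRGB."""
    x, y, z = cmf(lam_nm)
    X, Y, Z = (np.trapz(R * c, lam_nm) for c in (x, y, z))
    norm = np.trapz(y, lam_nm)            # normalise luminance; a perfect mirror maps near white
    X, Y, Z = X / norm, Y / norm, Z / norm
    M = np.array([[ 3.2406, -1.5372, -0.4986],   # standard XYZ -> linear sRGB matrix (D65)
                  [-0.9689,  1.8758,  0.0415],
                  [ 0.0557, -0.2040,  1.0570]])
    rgb = np.clip(M @ np.array([X, Y, Z]), 0.0, 1.0)
    return np.where(rgb <= 0.0031308, 12.92 * rgb, 1.055 * rgb ** (1 / 2.4) - 0.055)
```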
_Electron-phonon coupling calculations. -_ We use the finite displacement method in conjunction with nondiagonal supercells [42] to calculate the phonon frequencies \(\omega_{{\bf q}\nu}\) and eigenvectors \({\bf e}_{{\bf q}\nu}\) of a phonon mode labelled by wave vector \({\bf q}\) and branch \(\nu\). The electronic structure parameters are the same as those reported above, and we use a \(4\times 4\times 4\) coarse \({\bf q}\)-point grid to construct the matrix of force constants. Representative phonon dispersions are reported in the Supplementary Information. We evaluate the imaginary part of the dielectric function at temperature \(T\) renormalized by electron-phonon coupling using the Williams-Lax theory [43; 44]:
\[\varepsilon_{2}(\omega;T)=\frac{1}{\mathcal{Z}}\sum_{\bf s}\left\langle\Phi_{ \bf s}({\bf u})\left|\varepsilon_{2}(\omega;{\bf u})\right|\Phi_{\bf s}({\bf u })\right\rangle e^{-E_{\bf s}/k_{\rm B}T}, \tag{6}\]
where \({\cal Z}\) is the partition function, \(|\Phi_{\bf s}({\bf u})\rangle\) is a harmonic eigenstate \({\bf s}\) of energy \(E_{\bf s}\), \({\bf u}=\{u_{{\bf q}\nu}\}\) is a vector containing all atomic positions expressed in terms of normal mode amplitudes \(u_{{\bf q}\nu}\), and \(k_{\rm B}\) is Boltzmann's constant. We evaluate Eq. (6) by Monte Carlo integration accelerated with thermal lines [45]: we generate atomic configurations in which the atoms are distributed according to the harmonic nuclear wave function, with every normal mode given an amplitude of \(\left(\frac{1}{2\omega_{{\bf q}\nu}}\left[1+2n_{\rm B}(\omega_{{\bf q}\nu},T)\right]\right)^{1/2}\), where \(n_{\rm B}(\omega,T)\) is the Bose-Einstein factor. We note that the electron-phonon renormalised dielectric function includes the effects of both quantum fluctuations at \(T=0\,\)K and thermal fluctuations at finite temperature. We build the electron-phonon renormalised reflectivity using the electron-phonon renormalised dielectric function.
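A schematic version of the thermal-lines average in Eq. (6) is sketched below. The frequency array, the random sign convention, and the `epsilon2_of_displacements` callable (which would wrap a dielectric-function calculation for displaced atoms) are assumptions made for illustration; units are taken with \(\hbar=1\) and energies in eV.

```python
# Schematic thermal-lines average of epsilon_2 over displaced atomic configurations.
import numpy as np

def bose_einstein(omega, T, kB=8.617333e-5):
    """Bose-Einstein factor n_B(omega, T); omega in eV, T in K, kB in eV/K."""
    if T <= 0.0:
        return np.zeros_like(omega)
    return 1.0 / np.expm1(omega / (kB * T))

def thermal_lines_epsilon2(omega_modes, epsilon2_of_displacements, T=0.0, n_samples=10):
    """Average epsilon_2 over thermal-lines configurations.

    omega_modes               : phonon frequencies of all modes (eV), 1D array
    epsilon2_of_displacements : callable mapping normal-mode amplitudes u -> epsilon_2 spectrum
    """
    n_B = bose_einstein(omega_modes, T)
    amp = np.sqrt((1.0 + 2.0 * n_B) / (2.0 * omega_modes))  # amplitude quoted in the text
    rng = np.random.default_rng(0)
    eps2 = None
    for _ in range(n_samples):
        signs = rng.choice([-1.0, 1.0], size=omega_modes.shape)  # random sign per mode
        sample = epsilon2_of_displacements(signs * amp)
        eps2 = sample if eps2 is None else eps2 + sample
    return eps2 / n_samples
```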
###### Acknowledgements.
S.-W.K. and B.M. are supported by a UKRI Future Leaders Fellowship [MR/V023926/1]. B.M. also acknowledges support from the Gianna Angelopoulos Programme for Science, Technology, and Innovation, and from the Winton Programme for the Physics of Sustainability. G.L.P. acknowledges funding from the Ministry of Research, Innovation, and Digitalisation within Program 1-Development of National Research and Development System, Subprogram 1.2-Institutional Performance-RDI Excellence Funding Projects, under contract no. 10PFE/2021. The computational resources were provided by the Cambridge Tier-2 system operated by the University of Cambridge Research Computing Service and funded by EPSRC [EP/P020259/1], by the UK National Supercomputing Service ARCHER2, for which access was obtained via the UKCP consortium and funded by EPSRC [EP/X035891/1], and by the SCARF cluster of the STFC Scientific Computing Department.